
The Guide to Email Marketing Metrics & Reporting

 

 

Let's go back to 1978 (stick with me here). 

Gary Thuerk, a marketer at Digital Equipment Corporation, sent the first-ever marketing email to 400 recipients. It reportedly generated $13 million in sales. The inbox was born as a commercial tool, and the belief that followed has never really gone away:

Send an email. Get sales.

And look — in 1978, that was sort of true. Inboxes were empty. Email was novel, nobody else was doing it, the channel had no competition, no noise, no fatigue. Of course it worked.

Fast forward to today, and that founding myth has calcified into an industry-wide belief that has genuinely broken how most businesses approach email. Because it never got updated. The inbox went from a quiet personal space to one of the most contested, cluttered, frustrating places on the internet — and the expectation that email should still behave like a 1978 novelty has never left the building.

That belief sounds like this:

    • "We need to see email ROI."

    • "Why aren't people clicking?"

    • "Our open rate dropped — what's wrong?"

    • "Send it to more people, we'll get more results."

    • "Email should be generating leads directly."

And it's not just leadership saying these things. Marketers say this too — because for years, this is what the industry has taught. The "£42 for every £1 spent" stat (which was based on a single study with a single brand and has since been repeated by basically everyone) didn't help. It created a culture of expectation that email rarely lives up to when measured in the way people try to measure it.

This blog is about dismantling that belief properly: understanding why opens and clicks are misleading you, why email impacts far more than it gets credit for, and how to actually measure it — differently for D2C, B2C, and B2B.

Let's get into it.

 

Before you dig in, why not get access to RE:markable?

 RE:markable is the weekly email about emails. Dropping the latest email marketing news, updates, insights, free resources, upcoming masterclasses, webinars, and of course, a little inbox mischief. 

 

The performance channel myth: where it came from and why it stuck

Email became a "performance channel" for one reason: D2C and e-commerce showed it could drive immediate, trackable revenue at scale, and the entire industry took notes from the wrong people.

E-commerce brands could send a promotional email on Monday and see a revenue spike by Tuesday. Attribution was clean — someone clicked, they bought, the platform counted it. Tools like Klaviyo built their entire product philosophy around this. Abandoned basket flows, post-purchase sequences, promotional calendars — this whole infrastructure made sense in a D2C world where buying is frequent, decisions are fast, and the inbox is a shopping channel.

But then everyone else started copying it. B2B SaaS businesses. Professional services firms. Charities. Retailers selling office furniture. Manufacturers. They all inherited the D2C playbook and tried to force it onto audiences, buying cycles, and business models where it had no business being.

The result: email became something everyone expected to produce immediate, measurable, direct revenue — regardless of what they were selling, who they were selling to, or how their audience actually made decisions.

If you sell office chairs, your customers don't need a new chair every month. If you sell enterprise software, your sales cycle is six to eighteen months. If you're a professional services firm, relationships and trust are built over years. None of these businesses should be running email like it's a Tuesday flash sale.

Ask yourself:

Does your current email reporting reflect how your audience actually makes buying decisions — or does it reflect how a D2C brand would like people to behave?

 

Context is everything. The biggest flaw in how email gets evaluated is the complete absence of context. A 15% open rate for a weekly newsletter about complex B2B software is not the same as a 15% open rate for a promotional email to a D2C beauty list. They represent completely different relationships, audiences, and intent levels. Comparing them — or measuring them the same way — tells you nothing useful.

 

More emails does not mean more results — and here's why

This is one I battle with constantly, so let's address it head-on.

On paper, the logic makes sense: the more people who see your emails, the more likely some of them are to act. It's a numbers game. Cast a wider net.

But email is not social media. It is not a broadcast where algorithms decide who sees what. It's a direct channel — one where people have signed up, had an interaction with you, and landed in a highly personal environment. The inbox is not a discovery space. It's a task environment.

When you flood that environment with volume — sending more frequently, to more people, with less relevance — you don't get more results. You get:

  • Higher unsubscribe rates, because relevance has dropped

  • Increasing spam complaints, because people feel harassed

  • Deliverability degradation, because inbox providers see the negative signals

  • Diminishing engagement from your best subscribers, because you've burned their patience

  • Reporting that looks fine on the surface while the programme quietly rots underneath

The email programmes that consistently perform are not the ones sending the most. They're the ones sending with the most intent.

Key takeaway:

Volume is not a strategy. Intent is a strategy. The number of emails you send is a tactic — and it's one of the least important ones.

 

Why email opens are lying to you

Let's deal with opens first, because they're the metric most marketers lead with in reporting, most leaders understand, and most ESPs put front and centre on the dashboard.

Opens feel intuitive: if someone opened your email, they saw it. They were interested. The subject line worked. Something worked.

The reality is far messier.  

 

The technical problems

Apple Mail Privacy Protection (MPP), introduced in 2021, pre-loads email content — including tracking pixels — regardless of whether the recipient actually opens the email. For any audience with a meaningful percentage of Apple Mail users, this inflates open rates automatically. You cannot distinguish a real open from a pre-loaded pixel fire.

Outlook's Reading Pane works the other way: people can read an entire email without triggering a tracking pixel, which means real engagement gets missed and shows as zero. Some users delete emails from the Reading Pane without technically opening them — and that can still register as an open depending on how the ESP tracks it.

Security software and bots — particularly in B2B environments — scan links and pre-load content to check for threats. This is where opens and clicks both get distorted at the same time. In one audit I ran for a client, nearly half of their total clicks were bot-generated: their real human click rate was 0.8%, not the 1.5% their dashboard was reporting. That's roughly 90% inflation. Decisions were being made on completely fictitious numbers.
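To make that distortion concrete, here's a minimal Python sketch of the arithmetic. All figures are illustrative, invented in the spirit of the audit above — they are not the client's actual data:

```python
def human_click_rate(delivered, total_clicks, bot_clicks):
    """Click rate after stripping clicks attributed to security scanners and bots."""
    human_clicks = total_clicks - bot_clicks
    return human_clicks / delivered

def inflation(reported_rate, human_rate):
    """How much the reported rate overstates real human engagement."""
    return (reported_rate - human_rate) / human_rate

# Illustrative figures (invented for this example):
delivered = 100_000
total_clicks = 1_500   # dashboard click rate: 1.5%
bot_clicks = 700       # clicks traced back to link scanners

reported = total_clicks / delivered
human = human_click_rate(delivered, total_clicks, bot_clicks)
print(f"Reported: {reported:.1%}, human: {human:.1%}, "
      f"inflation: {inflation(reported, human):.1%}")
```

The point of the sketch: a minority-sounding bot share of your *list* can still be a large share of your *clicks*, because real click rates are so low to begin with.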

 

The human behaviour problems

Even if you could trust the technology perfectly — and you can't — human behaviour makes open rates unreliable as a success indicator.

  • Open-to-delete: People open emails to clear the unread badge. They've seen your subject line, felt mild curiosity, opened it, and deleted it in under two seconds. That counts as an open.

  • Passive scanning: Someone opens an email on their phone while commuting, glances at the first line, and locks their screen. The open is tracked. No meaningful engagement happened.

  • Inbox triage: Many people batch-process their inbox — opening and archiving emails as quickly as possible to get to zero. A fast open during triage is not the same as an engaged read.

  • The wrong context: Someone opens your email five minutes before a meeting, can't actually read it, and forgets to come back. Open tracked, impact zero.

An open tells you someone's email client retrieved your content. It does not tell you whether they read it, whether it created any impression, whether it influenced anything, or whether it was positive or negative.

Ask yourself:

If your ESP removed the open rate metric tomorrow, what would you measure instead? If the answer is "I don't know" — that's the problem to solve.

 

The benchmark problem

While we're here — industry benchmarks for open rates are largely useless, and I will die on this hill.

A "good" open rate benchmark is calculated by averaging data across thousands of companies, industries, audience types, list sizes, sending frequencies, and ESP configurations — and presenting the result as a number you should aspire to.

But your audience is not the industry average. Your frequency is not the industry average. Your opt-in quality is not the industry average. Your relationship with your subscribers is not the industry average.

Comparing your open rate to a benchmark is like comparing your restaurant's lunch crowd to the national average for "restaurants" — including fast food chains, Michelin-starred venues, airport cafes, and market stalls. The number means nothing without context.

Your benchmark is your own historical performance, segmented properly, measured consistently. That's it.

Key takeaway:

Stop asking "Is our open rate good?" Start asking "Is our open rate improving for the segments that matter, and what's driving the change?"

 

Why clicks are stronger — but still not the answer

Clicks are a more meaningful signal than opens. A click requires an active decision: someone read enough of the email to find the link, decided it was worth their time, and followed through. That's real engagement.

But clicks are still not the answer to measuring email performance, for several reasons.

 

The technical distortion

The same B2B security scanning that inflates opens also inflates clicks. Tools like Mimecast, Barracuda, and Proofpoint automatically follow links in emails to check them for threats before they reach the recipient. If your link looks clean, the email gets through — but a click has already been recorded.

As I mentioned: in a real audit, one client's reported click rate was 1.5%. Their actual human click rate, after removing bot activity, was 0.8%. Every decision about content effectiveness, journey optimisation, and campaign performance had been made on inflated numbers for months.

Cross-device tracking also muddies the water — someone opens on mobile, clicks later on desktop. Some ESPs count that as two separate interactions, others don't count the second at all. Attribution is inconsistent across platforms.

 

The context problem

Even genuine clicks are contextually meaningless without understanding what the click represents in the context of the business, the audience, and the customer journey.

Consider:

    • You sell premium office chairs. A customer just bought one. Your email newsletter goes out every two weeks with new product content. Why would they click? They don't need a chair. A low click rate is not a failure — it's a reflection of reality.

    • You sell B2B project management software. A prospect is three months into evaluating vendors. They open and read your nurture emails carefully. They don't click because they're not ready to act — but those emails are building the preference that leads to a demo request later. Zero clicks. Real impact.

    • You send a re-engagement email to a dormant segment. The click rate is high because only the genuinely interested people remained. It looks great on paper — but comparing it to your regular newsletter is misleading.

A click rate is not inherently good or bad. It's a signal whose meaning depends entirely on who you sent the email to, what the email was asking them to do, and where they are in their journey with you.

Ask yourself:

What does a click actually represent in the context of your business? What happens after the click — and is that what success looks like?

 

What calling clicks 'success' actually does

When organisations celebrate clicks as the primary success metric, they start optimising for clicks — not for actual outcomes. This leads to:

  • Clickbait subject lines that inflate opens but reduce trust

  • Excessive CTAs crammed into emails so something gets clicked

  • Sending to larger segments because more recipients = more potential clicks

  • Short-termism: optimising individual campaign performance instead of long-term programme health

  • Reporting theatre: making slides that look successful while the programme slowly deteriorates underneath

Clicks measure one type of action in one moment. They do not measure whether email is doing its job.

 

The billboard effect: how email impacts without an open

Here is something almost nobody talks about in email reporting, and it's one of the most important things to understand:

Email creates impact before the open.

Every delivered email — regardless of whether it's opened — is a micro-branding moment. When your email lands in someone's inbox, they see:

  • Your sender name — do they recognise it? Do they trust it?

  • Your subject line — what impression does it leave?

  • Your preheader — does it reinforce the right associations?

  • Your BIMI logo (if enabled) — visual brand consistency in the inbox

Even if they scroll past and delete it in two seconds, something has happened. Your brand has appeared in their environment. Your name has been processed. Your message — at least in headline form — has been registered.

Think of it like a billboard on a motorway. You don't click a billboard. You don't fill in a form from a billboard. You don't convert from a billboard in that moment. But over time, repeated exposure to a billboard builds familiarity, recognition, and association. When you later need what that billboard advertised, you think of them first.

Email works the same way. Especially at scale. Especially over time.

Even if only 30% of your newsletter subscribers open an email, 100% of the delivered emails still landed in an inbox. The 70% who didn't open? They still saw your name. They still processed your subject line. They're still building an association between your brand and the value you represent.

This is why consistent sending matters. Not batch-and-blasting, but showing up regularly with relevant content builds the mental availability that makes people choose you when the moment of need arrives.

It also means your email programme is almost certainly under-reported. If you're only measuring opens and clicks, you're measuring the explicit, visible engagements — and ignoring an enormous layer of influence that never produces a trackable data point.

Key takeaway:

Email influence is cumulative, not episodic. The value of your programme compounds over time through visibility, familiarity, and trust — none of which show up in your campaign dashboard.

 

The important caveat: visibility is not a licence to spam

Before anyone uses this to justify blasting their whole database every day — stop.

Awareness without engagement has a ceiling, and that ceiling is deliverability. Inbox providers track negative signals: emails deleted without opening, spam complaints, low engagement patterns across your sending history. If your emails consistently produce negative signals, your inbox placement degrades. And if your emails aren't reaching the inbox, you're not getting the billboard effect either — you're getting the spam folder effect, which is no effect at all.

The awareness argument is not "send more to more people." It's "show up consistently, with enough quality and relevance to protect your deliverability, and let the cumulative effect build over time."

 

Email is an impact channel — and that's actually a good thing

Here's the reframe that changes everything: email is not a conversion channel. It's an impact channel.

This is not a consolation prize. It's actually a more powerful and more defensible position — if you understand what it means and how to measure it.

An impact channel is one that:

    • Builds mental availability over time — keeping you present in someone's mind so they think of you when the need arises

    • Reinforces positioning — shaping how people perceive your expertise, values, and differentiation

    • Creates momentum across the journey — moving people forward through education, reassurance, and trust-building

    • Reduces friction — helping people feel confident enough to take the next step, whenever that step happens

    • Supports other channels — increasing direct traffic, brand search, content engagement, and pipeline velocity

Email's impact is often indirect. The action happens somewhere else — a Google search, a direct visit, a reply to a sales rep, a conversation at an event — but the influence started in the inbox.

When businesses only measure direct attribution from email — "they clicked the email and converted" — they are measuring a tiny fraction of email's real job. The majority of email's impact is invisible in standard reporting. Which means email almost always gets undervalued, under-resourced, and over-blamed.

That invisible impact has a name: Return on Impact (ROI²).

ROI² is the value your email programme delivers across the full customer journey — even when you can't directly track it. It shows up as momentum in the sales pipeline, lifts in brand search, direct traffic after campaigns, reply sentiment and emotional engagement, and long-term LTV increases linked to retention.

Ask any business that has paused email and watched pipeline velocity drop, direct traffic slide, or retention rates soften — that's impact becoming visible only once it's gone.

Ask yourself:

If your email programme stopped tomorrow, what would change in your business within 30 days? 90 days? That's your impact.

 

Attribution theatre: why email gets blamed and credited unfairly

Attribution theatre is when businesses pretend they can measure marketing impact cleanly, while relying on metrics and models that are either wildly incomplete, politically convenient, or both. It looks like certainty. It feels like control. It produces charts. But it's often detached from reality.

Email gets dragged into attribution theatre more than most channels for a few structural reasons.

Email is the most obvious touchpoint. It arrives in an inbox, it's timestamped, and it's easy to point at in a meeting. When someone needs a reason for a spike or a dip, "we sent an email" is a simple story.

Email is often the last visible touch before an action. Someone receives an email, then searches your brand, then visits your site directly, then converts. Last-click attribution gives the credit to the direct visit. First-click attribution might give it to a paid ad from six months ago. Email, which nudged the person back into the journey, gets nothing.

Email reporting looks deceptively clean. Open rates, click rates, "revenue attributed" — they look like answers. The problem is they're often proxies that can be distorted by Apple MPP, bot clicks, list health, and inbox placement issues.

The two unfair stories that result:

Email gets blamed when engagement dips (regardless of root cause), when revenue slows (regardless of whether email is responsible), and when someone needs a reason for a missed target. The email team becomes the scapegoat. And the response is usually more volume, more pressure, worse results, and a deepening blame cycle.

Email gets over-credited when someone sends an email and sales spike the next day — ignoring every other touchpoint, market condition, seasonal factor, and cumulative influence that contributed. This sets unrealistic expectations that collapse the moment conditions change.

Both are forms of attribution theatre. Both are damaging. And both come from not having a mature, honest measurement framework for what email actually does.

 

How to actually measure email — the framework

The goal is not perfect attribution. Perfect attribution does not exist in marketing, and chasing it is a waste of time. The goal is credible measurement — a measurement approach that reflects reality, helps you make better decisions, and holds up in a leadership conversation.

There are three layers of email measurement. Each plays a different role. Most programmes only track the first.

 

What to measure — by business type

The principles above apply universally. The specific metrics that matter most vary significantly by business model. Here's how to think about it for D2C/e-commerce, B2C, and B2B.

 

D2C and e-commerce: email as a revenue and retention engine

D2C is the closest email gets to a performance channel — buying is frequent, decisions are relatively fast, and attribution is more trackable. But even here, most programmes over-index on campaign-level revenue and under-measure programme-level health. 

Priority metrics for D2C:

Revenue per 1,000 recipients (RPM) — not total revenue attributed, but revenue per person reached. This is the metric that makes volume vs. relevance arguments easy to win.

Customer lifetime value by email cohort — do subscribers acquired or nurtured via email have higher LTV than those who aren't?

Repeat purchase rate for lifecycle-exposed customers — are customers who receive your post-purchase and retention journeys buying again more often?

Time to second purchase — email's job in D2C is often to accelerate this. Are your onboarding and product education emails shortening the gap?

Cart abandonment recovery rate — not just whether the abandoned basket email sent, but what percentage of abandoned baskets are recovered via email vs. other channels.

Winback rate — of lapsed customers who receive a re-engagement journey, what percentage reactivate? And how does that compare to those who receive no communications?

Unsubscribe rate by journey — where in the lifecycle are you losing people? This identifies friction points that aggregate, campaign-level metrics can't surface.

Deliverability by mailbox provider — Gmail, Yahoo, Apple Mail, and Outlook can behave very differently. Segment your placement data.
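Revenue per 1,000 recipients is simple arithmetic, but it's worth pinning down because it reframes the volume argument. A minimal Python sketch — the campaign figures below are hypothetical, invented purely to illustrate the comparison:

```python
def rpm(revenue, recipients):
    """Revenue per 1,000 recipients delivered."""
    return revenue / recipients * 1000

# Hypothetical comparison: full-list blast vs. engaged-segment send.
full_list = rpm(revenue=12_000, recipients=80_000)
engaged = rpm(revenue=9_500, recipients=25_000)

print(f"Full list: {full_list:.0f} per 1k recipients")     # 150 per 1k
print(f"Engaged segment: {engaged:.0f} per 1k recipients")  # 380 per 1k
```

In this invented example the full-list blast earns more in total, but the engaged segment earns more than twice as much per recipient reached — which is the per-person framing that makes the volume-versus-relevance argument easy to win.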

 

The D2C trap to avoid: over-attributing all revenue to email. The click happened to come through email, but the purchase decision may have been influenced by TikTok, a friend's recommendation, a review, and three previous emails, none of which generated a click. Report email's contribution as one layer of a multi-channel journey.

 

B2C (non-e-commerce): email as an awareness and relationship channel

B2C brands outside of pure e-commerce — gyms, subscription services, hospitality, financial services, media — often have longer consideration cycles and more complex journeys. Email here is less about immediate conversion and more about staying present, building trust, and supporting decisions that happen elsewhere. 

Priority metrics for B2C:

Brand search lift — after consistent email campaigns, do more people search for your brand organically? This is measurable and defensible.

Direct traffic correlation — is there a consistent pattern between your email cadence and direct visits to your site?

Subscriber-to-customer conversion rate over time — not from one campaign, but how many subscribers become customers within 30, 60, 90, 180 days?

Retention rates by lifecycle journey — are customers who receive your retention and loyalty emails staying longer, renewing more, or churning less?

Event or programme sign-up rate — for brands where email drives attendance, registrations, or memberships, this is a clean and meaningful metric.

Net Promoter Score for email subscribers vs. non-subscribers — do your email subscribers have higher NPS? This is a proxy for whether email is building the right relationship.

Reply rate and sentiment — people do not reply to emails unless they care. A meaningful reply rate is a strong programme health signal.

Read depth (where available) — some ESPs and tools can show how far down an email people read. Consistent scroll depth is a meaningful engagement signal. 
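Subscriber-to-customer conversion over time (the 30/60/90/180-day view above) only needs two dates per contact: sign-up and first purchase. A minimal sketch, assuming you can export those from your ESP or CRM — the data below is invented for illustration:

```python
from datetime import date

def conversion_within(subscribers, days):
    """Share of subscribers whose first purchase came within `days` of sign-up.

    `subscribers` is a list of (signup_date, first_purchase_date_or_None).
    """
    converted = sum(
        1 for signup, purchase in subscribers
        if purchase is not None and (purchase - signup).days <= days
    )
    return converted / len(subscribers)

# Invented export — four subscribers, one never purchased:
subs = [
    (date(2024, 1, 5),  date(2024, 1, 20)),   # converted in 15 days
    (date(2024, 1, 9),  date(2024, 4, 1)),    # converted in 83 days
    (date(2024, 2, 2),  None),                # never purchased
    (date(2024, 2, 14), date(2024, 9, 1)),    # converted in 200 days
]
for window in (30, 90, 180):
    print(f"{window}-day conversion: {conversion_within(subs, window):.0%}")
# 30-day: 25%, 90-day: 50%, 180-day: 50%
```

Tracked consistently by sign-up cohort, this shows whether email is actually accelerating the subscriber-to-customer journey rather than just riding alongside it.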

 

 

B2B: email as a pipeline support and trust-building channel

B2B is where the performance channel myth causes the most damage. Sales cycles are long, buying committees are complex, and 95% of your audience is out of market at any given moment. Expecting email to produce immediate, direct pipeline is like expecting a single networking event to close a six-figure deal. It's one input in a long process. 

Priority metrics for B2B:

Pipeline velocity for nurtured contacts — do leads who are actively in your email programme move through pipeline stages faster than those who aren't? This is one of the strongest arguments for email's value in B2B.

Time from first touch to demo/discovery request — is email shortening this gap? Comparing subscribers to non-subscribers on this metric is revealing.

Sales-qualified lead (SQL) rate from email-influenced contacts — what percentage of leads who have been in your email programme convert to SQL, compared to those who haven't?

Content engagement by account (ABM) — for ABM programmes, what content are target accounts engaging with? This informs sales conversations and signals where accounts are in their evaluation.

Reply rate from nurture sequences — in B2B, a reply to a nurture email is one of the highest-intent signals you can get. It should be tracked and acted on.

Webinar or event attendance from email — is email driving people into your pipeline-building activities (webinars, demos, roundtables)?

Account re-engagement rate — for accounts that have gone cold, are your re-engagement sequences successfully bringing them back into active consideration?

Email influence on closed-won deals — in your CRM, how many closed-won deals had email as a touchpoint in the journey? This is often surprisingly high — and surprisingly unmeasured. 
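Pipeline velocity for nurtured versus non-nurtured contacts needs only days-to-close per deal plus a flag for programme membership. A minimal sketch with invented CRM figures — the field name and the split are assumptions, not a real CRM schema:

```python
from statistics import median

def pipeline_velocity_days(deals):
    """Median days from first touch to closed-won across a list of deals."""
    return median(d["days_to_close"] for d in deals)

# Invented closed-won deals, split by whether the contact was in the nurture programme:
nurtured = [{"days_to_close": d} for d in (92, 110, 85, 140, 101)]
not_nurtured = [{"days_to_close": d} for d in (160, 190, 145, 210)]

print(f"Nurtured median: {pipeline_velocity_days(nurtured)} days")      # 101 days
print(f"Non-nurtured median: {pipeline_velocity_days(not_nurtured)} days")  # 175.0 days
```

The median is used deliberately: B2B deal durations are skewed, and a couple of eighteen-month outliers shouldn't swamp the comparison the way a mean would let them.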

 

The B2B conversation to have with leadership: Email in B2B is a compound channel. Its primary job is to keep you present, build trust, and support the sales conversation — not to generate immediate revenue. Measuring it with e-commerce metrics will always make it look like it's failing. The right measurement asks: are our email-influenced leads converting at a higher rate? Are nurtured accounts closing faster? Are we the brand they think of when the need arrives?

 

Ask yourself:

Does your current email reporting match your business model — or are you applying D2C metrics to a B2B or B2C programme and wondering why email always looks like it's underperforming?

 

How to build a reporting framework that leadership will actually understand

Leadership does not need a lecture on attribution models. They need risk, opportunity, clarity, and a decision framework. Here's how to structure reporting that gives them that. 

  • Step 1: Separate visibility from engagement

    Before you report on opens, clicks, or anything else, you need to know whether your emails are actually reaching people. Inbox placement is the silent killer of attribution accuracy. A programme with 70% inbox placement is not comparable to one with 95% — but most dashboards treat them identically.

    Start every report with: how much of our list can we actually reach? What percentage are we landing in the inbox for? This context changes everything else.

  • Step 2: Report by segment, not by 'the list'

    Reporting one open rate or one click rate for your entire database is like reporting one average temperature for the whole country. It smooths out reality and makes it impossible to see what's actually happening.

    Report separately for: new subscribers vs. established subscribers; intentional opt-ins vs. consequential opt-ins; engaged vs. cooling vs. dormant; customers vs. prospects; B2B account tiers or D2C customer value tiers.

  • Step 3: Show leading and lagging indicators together

    Engagement is a lagging indicator. It reflects conditions that were created weeks or months earlier. If engagement is falling, the cause is almost never the subject line — it's something upstream. Present both so leadership can see the whole picture:

    · Leading indicators: expectation alignment at sign-up, welcome journey completion rates, segmentation quality, inbox placement, list hygiene

    · Lagging indicators: engagement rates, revenue attributed, pipeline influenced, retention rates 

  • Step 4: Use controlled pilots to prove impact

    If you want to demonstrate email's value without arguing theory, run a pilot. Take a clean engaged segment. Run a structured approach — better segmentation, clearer journeys, stronger exclusion logic. Compare against a control group or a previous period. Measure over a quarter, not a single campaign.

    Pilots replace opinion with evidence. They are the fastest way to move a leadership conversation from "prove it" to "scale it."

  • Step 5: Measure the ROI of sending less

    This is the most counterintuitive but often the most persuasive move: show that reducing volume to the right people increases impact per recipient.

    · Revenue per 1,000 recipients goes up when you exclude unengaged segments

    · Complaint rates drop when you stop sending to people who don't want to hear from you

    · Inbox placement improves when negative engagement signals reduce

    · Winback rates improve when you stop burning dormant contacts with promotions

    This turns exclusion from a hygiene task into a commercial strategy. And it gives you a powerful line in leadership meetings: "We sent to fewer people. We made more per person. The programme is healthier. Here's the data." 
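Steps 4 and 5 both reduce to the same comparison: a pilot (or reduced-volume) group against a control, expressed as relative uplift. A minimal sketch with invented pilot results — the specific figures are illustrative, not from a real programme:

```python
def uplift(pilot_value, control_value):
    """Relative uplift of the pilot group over the control group."""
    return (pilot_value - control_value) / control_value

# Invented quarter-long pilot: engaged segment with tighter journeys
# vs. a held-out control that kept the old programme.
pilot_rpm, control_rpm = 410.0, 290.0        # revenue per 1,000 recipients
pilot_unsub, control_unsub = 0.0018, 0.0031  # unsubscribe rates

print(f"RPM uplift: {uplift(pilot_rpm, control_rpm):+.0%}")          # +41%
print(f"Unsubscribe change: {uplift(pilot_unsub, control_unsub):+.0%}")  # -42%
```

Measured over a quarter against a genuine control, numbers like these are what move the leadership conversation from "prove it" to "scale it" — one positive uplift alongside one falling negative signal is the "fewer people, more per person, healthier programme" story in two lines.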

     

 

Lines that tend to land well in leadership conversations:

 "Email isn't a last-click channel. It's a visibility and momentum channel."

"We're measuring impact over time, not crediting single sends for conversions."

"If we degrade deliverability, we lose inbox visibility — and that reduces total marketing impact."

"We can prove this with a controlled pilot rather than debating opinions."

"We're not trying to win attribution. We're trying to protect and grow impact." 


Free Email Marketing Health Check

Audit your entire ecosystem in under 30 minutes

Answer a set of honest questions across every area of email marketing and get a personalised score, traffic-light priorities, and clear actions and improvements you can make today. 

The honest summary

Opens and clicks are not useless. They're indicators — weak, unreliable, context-dependent indicators. Use them as one data point among many. Don't build your programme around them. Don't report them to leadership as proof of success or failure. And definitely don't make strategic decisions based on them in isolation.

The real measurement question for email is not "how many people opened it?" It's:

    • Is email keeping us present in our audience's mind over time?

    • Is email building the trust that makes people choose us when the moment arrives?

    • Is email supporting pipeline, retention, or revenue in ways that show up in business outcomes — not just campaign dashboards?

    • Is our email programme healthy enough to keep delivering that impact long-term?

When you start measuring those things — with the right metrics for your business model, reported honestly, tracked over time — email stops being a channel you have to defend and starts being one of the most valuable things in your marketing mix.

Because it always was. You just weren't measuring the right things.

 

 

Like this blog? You'll love RE:markable


Email, CRM and HubSpot Support

I help marketers and businesses globally improve, design and fix their email, CRM, and HubSpot ecosystems, from strategy through to execution.

My services include:

  • Email marketing strategy, audits, training, workshops, and consultancy

  • CRM strategy and enablement

  • Full HubSpot implementations, optimisation and onboarding through my agency

If you’re looking for experienced external support (and lots of enjoyment along the way), this is where to start.