The Email & CRM Vault

The Guide to Email Marketing Metrics & Reporting

Written by Beth O'Malley | 03/2026

 

 

Let's go back to 1978 (stick with me here). 

Gary Thuerk, a marketer at Digital Equipment Corporation, sent the first-ever marketing email to 400 recipients. It reportedly generated $13 million in sales. The inbox was born as a commercial tool, and the belief that followed has never really gone away:

Send an email. Get sales.

And look — in 1978, that was sort of true. Inboxes were empty. Email was novel, nobody else was doing it, the channel had no competition, no noise, no fatigue. Of course it worked.

Fast forward to today, and that founding myth has calcified into an industry-wide belief that has genuinely broken how most businesses approach email. Because it never got updated. The inbox went from a quiet personal space to one of the most contested, cluttered, frustrating places on the internet — and the expectation that email should still behave like a 1978 novelty has never left the building.

That belief sounds like this:

    • "We need to see email ROI."

    • "Why aren't people clicking?"

    • "Our open rate dropped — what's wrong?"

    • "Send it to more people, we'll get more results."

    • "Email should be generating leads directly."

And it's not just leadership saying these things. Marketers say this too — because for years, this is what the industry has taught. The "£42 for every £1 spent" stat (which was based on a single study with a single brand and has since been repeated by basically everyone) didn't help. It created a culture of expectation that email rarely lives up to when measured in the way people try to measure it.

This blog is about dismantling that belief properly: understanding why opens and clicks are misleading you, why email influences far more than it gets credit for, and how to actually measure it, with different approaches for D2C, B2C, and B2B.

Let's get into it.

 

 

The performance channel myth: where it came from and why it stuck

Email became a "performance channel" for one reason: D2C and e-commerce showed it could drive immediate, trackable revenue at scale, and the entire industry took notes from the wrong people.

E-commerce brands could send a promotional email on Monday and see a revenue spike by Tuesday. Attribution was clean — someone clicked, they bought, the platform counted it. Tools like Klaviyo built their entire product philosophy around this. Abandoned basket flows, post-purchase sequences, promotional calendars — this whole infrastructure made sense in a D2C world where buying is frequent, decisions are fast, and the inbox is a shopping channel.

But then everyone else started copying it. B2B SaaS businesses. Professional services firms. Charities. Retailers selling office furniture. Manufacturers. They all inherited the D2C playbook and tried to force it onto audiences, buying cycles, and business models where it had no business being.

The result: email became something everyone expected to produce immediate, measurable, direct revenue — regardless of what they were selling, who they were selling to, or how their audience actually made decisions.

If you sell office chairs, your customers don't need a new chair every month. If you sell enterprise software, your sales cycle is six to eighteen months. If you're a professional services firm, relationships and trust are built over years. None of these businesses should be running email like it's a Tuesday flash sale.

 

Context is everything. The biggest flaw in how email gets evaluated is the complete absence of context. A 15% open rate for a weekly newsletter about complex B2B software is not the same as a 15% open rate for a promotional email to a D2C beauty list. They represent completely different relationships, audiences, and intent levels. Comparing them — or measuring them the same way — tells you nothing useful.

 

More emails does not mean more results — and here's why

This is one I battle with constantly, so let's address it head-on.

On paper, the logic makes sense: the more people who see your emails, the more likely some of them are to act. It's a numbers game. Cast a wider net.

But email is not social media. It is not a broadcast where algorithms decide who sees what. It's a direct channel — one where people have signed up, had an interaction with you, and landed in a highly personal environment. The inbox is not a discovery space. It's a task environment.

When you flood that environment with volume — sending more frequently, to more people, with less relevance — you don't get more results. You get:

  • Higher unsubscribe rates, because relevance has dropped

  • Increasing spam complaints, because people feel harassed

  • Deliverability degradation, because inbox providers see the negative signals

  • Diminishing engagement from your best subscribers, because you've burned their patience

  • Reporting that looks fine on the surface while the programme quietly rots underneath

The email programmes that consistently perform are not the ones sending the most. They're the ones sending with the most intent.

 

Why email opens are lying to you

Let's deal with opens first, because they're the metric most marketers lead with in reporting, most leaders understand, and most ESPs put front and centre on the dashboard.

Opens feel intuitive: if someone opened your email, they saw it. They were interested. The subject line worked. Something worked.

The reality is far messier.  

 

The technical problems

Apple Mail Privacy Protection (MPP), introduced in 2021, pre-loads email content — including tracking pixels — regardless of whether the recipient actually opens the email. For any audience with a meaningful percentage of Apple Mail users, this inflates open rates automatically. You cannot distinguish a real open from a pre-loaded pixel fire.
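You can't recover the true open rate, but you can at least bound it. Here's a rough Python sketch of the worst-case adjustment: assume every open from an Apple Mail client could be a pre-fetch and compute the floor of your human open rate. The function name and the numbers are illustrative, not a standard industry formula.

```python
def mpp_adjusted_open_rate(delivered, tracked_opens, apple_share):
    """Worst-case 'human' open rate if every Apple Mail open were a pre-fetch.

    apple_share is the fraction of tracked opens coming from Apple Mail
    clients (most ESPs report a client breakdown). This gives you a
    floor, not the truth - MPP makes the truth unknowable.
    """
    human_opens = tracked_opens * (1 - apple_share)
    return human_opens / delivered

# Illustrative numbers: 10,000 delivered, 3,500 tracked opens,
# half of them from Apple Mail. Dashboard says 35%; the floor is 17.5%.
floor = mpp_adjusted_open_rate(10_000, 3_500, 0.5)
```

The real number sits somewhere between the floor and the dashboard figure, which is precisely why opens alone can't carry your reporting.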

Outlook's Reading Pane works the other way: people can read an entire email without triggering a tracking pixel, which means real engagement gets missed and shows as zero. Some users delete emails from the Reading Pane without technically opening them — and that can still register as an open depending on how the ESP tracks it.

Security software and bots — particularly in B2B environments — scan links and pre-load content to check for threats. This is where opens and clicks both get distorted at the same time. In one audit I ran for a client, nearly half of their total clicks were bot-generated: their real human click rate was 0.8%, not the 1.5% their dashboard was reporting. That's almost 90% inflation. Decisions were being made on completely fictitious numbers.

 

The human behaviour problems

Even if you could trust the technology perfectly — and you can't — human behaviour makes open rates unreliable as a success indicator.

  • Open-to-delete: People open emails to clear the unread badge. They've seen your subject line, felt mild curiosity, opened it, and deleted it in under two seconds. That counts as an open.

  • Passive scanning: Someone opens an email on their phone while commuting, glances at the first line, and locks their screen. The open is tracked. No meaningful engagement happened.

  • Inbox triage: Many people batch-process their inbox — opening and archiving emails as quickly as possible to get to zero. A fast open during triage is not the same as an engaged read.

  • The wrong context: Someone opens your email five minutes before a meeting, can't actually read it, and forgets to come back. Open tracked, impact zero.

An open tells you someone's email client retrieved your content. It does not tell you whether they read it, whether it created any impression, whether it influenced anything, or whether it was positive or negative.

 

The benchmark problem

While we're here — industry benchmarks for open rates are largely useless, and I will die on this hill.

A "good" open rate benchmark is calculated by averaging data across thousands of companies, industries, audience types, list sizes, sending frequencies, and ESP configurations — and presenting the result as a number you should aspire to.

But your audience is not the industry average. Your frequency is not the industry average. Your opt-in quality is not the industry average. Your relationship with your subscribers is not the industry average.

Comparing your open rate to a benchmark is like comparing your restaurant's lunch crowd to the national average for "restaurants" — including fast food chains, Michelin star venues, airport cafes, and market stalls. The number means nothing without context.

Your benchmark is your own historical performance, segmented properly, measured consistently. That's it.
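In practice, that own-history benchmark can be as simple as a rolling average over your recent sends to the same segment. A minimal sketch, where the rates and window size are hypothetical:

```python
from statistics import mean

def own_benchmark(history, window=6):
    """Rolling benchmark from your own recent sends to the SAME segment.

    history: open (or click) rates for past sends, oldest first.
    Compare each new send against this, not an industry table.
    """
    return mean(history[-window:])

# Hypothetical newsletter segment: six past open rates, then a new send.
past_rates = [0.21, 0.19, 0.22, 0.20, 0.18, 0.21]
baseline = own_benchmark(past_rates)   # roughly 0.20
delta = 0.17 - baseline                # the trend matters, not the raw number
```

Measured this way, the question stops being "is 17% good?" and becomes "why did this segment drop against its own baseline?", which is something you can actually investigate.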

 

Why clicks are stronger — but still not the answer

Clicks are a more meaningful signal than opens. A click requires an active decision: someone read enough of the email to find the link, decided it was worth their time, and followed through. That's real engagement.

But clicks are still not the answer to measuring email performance, for several reasons.

 

The technical distortion

The same B2B security scanning that inflates opens also inflates clicks. Tools like Mimecast, Barracuda, and Proofpoint automatically follow links in emails to check them for threats before they reach the recipient. If your link looks clean, the email gets through — but a click has already been recorded.

As I mentioned: in a real audit, one client's reported click rate was 1.5%. Their actual human click rate, after removing bot activity, was 0.8%. Every decision about content effectiveness, journey optimisation, and campaign performance had been made on inflated numbers for months.
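The deflation itself is simple arithmetic. This illustrative Python sketch reproduces the audit rates above; the delivered and click counts are invented to match those percentages:

```python
def human_click_rate(delivered, tracked_clicks, bot_clicks):
    """Click rate after stripping clicks from security scanners and bots."""
    return (tracked_clicks - bot_clicks) / delivered

# Counts chosen to reproduce the audit rates on a 10,000-send list.
reported = 150 / 10_000                       # 1.5% on the dashboard
actual = human_click_rate(10_000, 150, 70)    # 0.8% once bots are removed
inflation = (reported - actual) / actual      # 0.875, i.e. almost 90%
```

Identifying which clicks are bot-generated is the hard part (instant multi-link clicks, datacentre IPs, clicks within seconds of delivery are the usual tells); the correction itself is trivial once you have the count.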

Cross-device tracking also muddies the water — someone opens on mobile, clicks later on desktop. Some ESPs count that as two separate interactions, others don't count the second at all. Attribution is inconsistent across platforms.

 

The context problem

Even genuine clicks are contextually meaningless without understanding what the click represents in the context of the business, the audience, and the customer journey.

Consider:

    • You sell premium office chairs. A customer just bought one. Your email newsletter goes out every two weeks with new product content. Why would they click? They don't need a chair. A low click rate is not a failure — it's a reflection of reality.

    • You sell B2B project management software. A prospect is three months into evaluating vendors. They open and read your nurture emails carefully. They don't click because they're not ready to act — but those emails are building the preference that leads to a demo request later. Zero clicks. Real impact.

    • You send a re-engagement email to a dormant segment. The click rate is high because only the genuinely interested people remained. The click rate looks great — but if you compare it to your regular newsletter, it's misleading.

A click rate is not inherently good or bad. It's a signal whose meaning depends entirely on who you sent the email to, what the email was asking them to do, and where they are in their journey with you.

 

What calling clicks 'success' actually does

When organisations celebrate clicks as the primary success metric, they start optimising for clicks — not for actual outcomes. This leads to:

  • Clickbait subject lines that inflate opens but reduce trust

  • Excessive CTAs crammed into emails so something gets clicked

  • Sending to larger segments because more recipients = more potential clicks

  • Short-termism: optimising individual campaign performance instead of long-term programme health

  • Reporting theatre: making slides that look successful while the programme slowly deteriorates underneath

Clicks measure one type of action in one moment. They do not measure whether email is doing its job.

 

The billboard effect: how email impacts without an open

Here is something almost nobody talks about in email reporting, and it's one of the most important things to understand:

Email creates impact before the open.

Every delivered email — regardless of whether it's opened — is a micro-branding moment. When your email lands in someone's inbox, they see:

  • Your sender name — do they recognise it? Do they trust it?

  • Your subject line — what impression does it leave?

  • Your preheader — does it reinforce the right associations?

  • Your BIMI logo (if enabled) — visual brand consistency in the inbox

Even if they scroll past and delete it in two seconds, something has happened. Your brand has appeared in their environment. Your name has been processed. Your message — at least in headline form — has been registered.

Think of it like a billboard on a motorway. You don't click a billboard. You don't fill in a form from a billboard. You don't convert from a billboard in that moment. But over time, repeated exposure to a billboard builds familiarity, recognition, and association. When you later need what that billboard advertised, you think of them first.

Email works the same way. Especially at scale. Especially over time.

Even if only 30% of your newsletter subscribers open an email, 100% of them still had it land in their inbox. The 70% who didn't open? They still saw your name. They still processed your subject line. They're still building an association between your brand and the value you represent.

This is why consistent sending matters. Not batch-and-blasting, but showing up regularly with relevant content builds the mental availability that makes people choose you when the moment of need arrives.

It also means your email programme is almost certainly under-reported. If you're only measuring opens and clicks, you're measuring the explicit, visible engagements — and ignoring an enormous layer of influence that never produces a trackable data point.

 

The important caveat: visibility is not a licence to spam

Before anyone uses this to justify blasting their whole database every day — stop.

Awareness without engagement has a ceiling, and that ceiling is deliverability. Inbox providers track negative signals: emails deleted without opening, spam complaints, low engagement patterns across your sending history. If your emails consistently produce negative signals, your inbox placement degrades. And if your emails aren't reaching the inbox, you're not getting the billboard effect either — you're getting the spam folder effect, which is no effect at all.
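If you want a concrete tripwire for those negative signals, spam complaint rate is the clearest one: Google's published bulk-sender guidelines ask senders to stay below a 0.1% complaint rate and never exceed 0.3%. A minimal sketch of that check — the function and sample numbers are mine, the thresholds are Google's:

```python
def complaint_health(delivered, complaints):
    """Classify spam-complaint rate against Google's bulk-sender guidance."""
    rate = complaints / delivered
    if rate < 0.001:          # below 0.1%: Google's recommended ceiling
        return rate, "healthy"
    if rate <= 0.003:         # 0.1%-0.3%: drifting toward the hard limit
        return rate, "warning"
    return rate, "at risk"    # above 0.3%: expect spam-folder placement

# Illustrative: 120 complaints on 50,000 delivered = 0.24%, warning zone.
rate, status = complaint_health(50_000, 120)
```

Track it per send, not just monthly: a single aggressive blast to a cold segment can push one campaign over the limit while the average still looks fine.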

The awareness argument is not "send more to more people." It's "show up consistently, with enough quality and relevance to protect your deliverability, and let the cumulative effect build over time."

 

Email is an impact channel — and that's actually a good thing

Here's the reframe that changes everything: email is not a conversion channel. It's an impact channel.

This is not a consolation prize. It's actually a more powerful and more defensible position — if you understand what it means and how to measure it.

An impact channel is one that:

    • Builds mental availability over time — keeping you present in someone's mind so they think of you when the need arises

    • Reinforces positioning — shaping how people perceive your expertise, values, and differentiation

    • Creates momentum across the journey — moving people forward through education, reassurance, and trust-building

    • Reduces friction — helping people feel confident enough to take the next step, whenever that step happens

    • Supports other channels — increasing direct traffic, brand search, content engagement, and pipeline velocity

Email's impact is often indirect. The action happens somewhere else — a Google search, a direct visit, a reply to a sales rep, a conversation at an event — but the influence started in the inbox.

When businesses only measure direct attribution from email — "they clicked the email and converted" — they are measuring a tiny fraction of email's real job. The majority of email's impact is invisible in standard reporting. Which means email almost always gets undervalued, under-resourced, and over-blamed.

That invisible impact has a name: Return on Impact (ROI²).

ROI² is the value your email programme delivers across the full customer journey — even when you can't directly track it. It shows up as momentum in the sales pipeline, lifts in brand search, direct traffic after campaigns, reply sentiment and emotional engagement, and long-term LTV increases linked to retention.

Ask any business that has paused email and watched pipeline velocity drop, direct traffic slide, or retention rates soften — that's impact becoming visible only once it's gone.

 

Attribution theatre: why email gets blamed and credited unfairly

Attribution theatre is when businesses pretend they can measure marketing impact cleanly, while relying on metrics and models that are either wildly incomplete, politically convenient, or both. It looks like certainty. It feels like control. It produces charts. But it's often detached from reality.

Email gets dragged into attribution theatre more than most channels for a few structural reasons.

Email is the most obvious touchpoint. It arrives in an inbox, it's timestamped, and it's easy to point at in a meeting. When someone needs a reason for a spike or a dip, "we sent an email" is a simple story.

Email is often the last visible touch before an action. Someone receives an email, then searches your brand, then visits your site directly, then converts. Last-click attribution gives the credit to the direct visit. First-click attribution might give it to a paid ad from six months ago. Email, which nudged the person back into the journey, gets nothing.

Email reporting looks deceptively clean. Open rates, click rates, "revenue attributed" — they look like answers. The problem is they're often proxies that can be distorted by Apple MPP, bot clicks, list health, and inbox placement issues.
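To see how arbitrary the credit assignment is, here's a toy Python sketch of the journey described above under three common attribution models. The logic is deliberately simplified; real attribution tools are more elaborate, but the model still chooses the story:

```python
def attribute(journey, model):
    """Split 1.0 units of conversion credit across a touchpoint journey."""
    n = len(journey)
    if model == "last_click":
        return {t: (1.0 if i == n - 1 else 0.0) for i, t in enumerate(journey)}
    if model == "first_click":
        return {t: (1.0 if i == 0 else 0.0) for i, t in enumerate(journey)}
    if model == "linear":
        return {t: 1.0 / n for t in journey}
    raise ValueError(f"unknown model: {model}")

# The journey from the text: email nudged the person back in, yet gets
# zero credit under both single-touch models.
journey = ["paid_ad", "email", "brand_search", "direct_visit"]
last = attribute(journey, "last_click")    # all credit to direct_visit
first = attribute(journey, "first_click")  # all credit to paid_ad
linear = attribute(journey, "linear")      # 0.25 each; email finally visible
```

Three models, three completely different stories about the same person, and none of them is "the truth". That's the theatre.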

The two unfair stories that result:

Email gets blamed when engagement dips (regardless of root cause), when revenue slows (regardless of whether email is responsible), and when someone needs a reason for a missed target. The email team becomes the scapegoat. And the response is usually more volume, more pressure, worse results, and a deepening blame cycle.

Email gets over-credited when someone sends an email and sales spike the next day — ignoring every other touchpoint, market condition, seasonal factor, and cumulative influence that contributed. This sets unrealistic expectations that collapse the moment conditions change.

Both are forms of attribution theatre. Both are damaging. And both come from not having a mature, honest measurement framework for what email actually does.

 

How to actually measure email — the framework

The goal is not perfect attribution. Perfect attribution does not exist in marketing, and chasing it is a waste of time. The goal is credible measurement — a measurement approach that reflects reality, helps you make better decisions, and holds up in a leadership conversation.

There are three layers of email measurement, each playing a different role: diagnostic metrics (opens, clicks, and other in-platform signals), programme health (deliverability, list quality, unsubscribe and complaint trends), and business impact (pipeline, retention, revenue, and the outcomes email influences indirectly). Most programmes only track the first.

 

What to measure — by business type

The principles above apply universally. The specific metrics that matter most vary significantly by business model. Here's how to think about it for D2C/e-commerce, B2C, and B2B.

 

D2C and e-commerce: email as a revenue and retention engine

D2C is the closest email gets to a performance channel — buying is frequent, decisions are relatively fast, and attribution is more trackable. But even here, most programmes over-index on campaign-level revenue and under-measure programme-level health. 

 

The D2C trap to avoid: over-attributing all revenue to email. The click happened to come through email, but the purchase decision may have been influenced by TikTok, a friend's recommendation, a review, and three previous emails, none of which generated a click. Report email's contribution as one layer of a multi-channel journey.

 

B2C (non-e-commerce): email as an awareness and relationship channel

B2C brands outside of pure e-commerce — gyms, subscription services, hospitality, financial services, media — often have longer consideration cycles and more complex journeys. Email here is less about immediate conversion and more about staying present, building trust, and supporting decisions that happen elsewhere. 

 

 

B2B: email as a pipeline support and trust-building channel

B2B is where the performance channel myth causes the most damage. Sales cycles are long, buying committees are complex, and 95% of your audience is out of market at any given moment. Expecting email to produce immediate, direct pipeline is like expecting a single networking event to close a six-figure deal. It's one input in a long process. 

 

The B2B conversation to have with leadership: Email in B2B is a compound channel. Its primary job is to keep you present, build trust, and support the sales conversation — not to generate immediate revenue. Measuring it with e-commerce metrics will always make it look like it's failing. The right measurement asks: are our email-influenced leads converting at a higher rate? Are nurtured accounts closing faster? Are we the brand they think of when the need arrives?

 

 

How to build a reporting framework that leadership will actually understand

Leadership does not need a lecture on attribution models. They need risk, opportunity, clarity, and a decision framework. Here's how to structure reporting that gives them that. 

 

The honest summary

Opens and clicks are not useless. They're indicators — weak, unreliable, context-dependent indicators. Use them as one data point among many. Don't build your programme around them. Don't report them to leadership as proof of success or failure. And definitely don't make strategic decisions based on them in isolation.

The real measurement question for email is not "how many people opened it?" It's:

    • Is email keeping us present in our audience's mind over time?

    • Is email building the trust that makes people choose us when the moment arrives?

    • Is email supporting pipeline, retention, or revenue in ways that show up in business outcomes — not just campaign dashboards?

    • Is our email programme healthy enough to keep delivering that impact long-term?

When you start measuring those things — with the right metrics for your business model, reported honestly, tracked over time — email stops being a channel you have to defend and starts being one of the most valuable things in your marketing mix.

Because it always was. You just weren't measuring the right things.

 

Further reading from The Vault: