The Email Experience Audit: How to Audit What Subscribers Actually Live Through
Before you dig in, why don't you get access to RE:markable
RE:markable is the weekly email about emails. Dropping the latest email marketing news, updates, insights, free resources, upcoming masterclasses, webinars, and of course, a little inbox mischief.
10–14 minute read (depending on how many rabbit holes you enjoy)
Most email audits fail for one simple reason: Marketers audit what they can see - not what people actually experience.
So we stare at open rates, click rates, subject lines, campaign calendars, heatmaps, and templates. We rewrite CTAs, we change button colours (or move them higher - lol), we “refresh the design”, we debate send times like it’s the stock market.
Meanwhile, your subscribers are experiencing something completely different:
- They experience interruption
- They experience repetition
- They experience confusion
- They experience mistimed messages
- They experience broken promises
- They experience emotional friction
And none of that shows up neatly in your ESP dashboard (because they are rubbbbbish).
This is why you can “optimise” for months and still feel like email is plateauing. You’re adjusting the visible layer while the lived experience stays messy.
So let’s talk about what a real email audit looks like. Not a metrics audit. Not a content audit. An experience audit, across all external comms, because your audience does not care which system or team sent what.
To them, there is just one brand or business they are receiving emails from.
1) Your subscribers don’t experience “email marketing” - they experience your brand
Here’s the bit teams forget: email isn’t just “marketing sends”.
Your subscriber might receive messages from:
- Your newsletter platform (marketing)
- Your order/transactional system
- Your review system
- Your customer service tool
- Your events/webinars tool
- Your sales team (sequencing tools, manual outreach, follow-ups)
- Your product platform (usage updates, onboarding nudges)
- Your community platform
- Your billing/finance system
That is one inbox experience. One stream & one relationship.
So if you audit only your marketing sends, you miss the real story: collision, confusion, and inconsistency across the whole comms ecosystem.
And if you’re sat there wondering why engagement is down, while your customer is getting:
- a “how are you finding it?” email
- a “leave us a review” email
- a “your order is delayed” email
- a “limited time offer!” promo
- a “book a call” email from sales
- a webinar invite
- a password reset
- and three nurture emails that do not acknowledge any of the above…
…then yes. Engagement will drop.
Not because your subject lines are weak, but because your experience design is.
2) Before you audit, you need to accept inbox reality
A consumer inbox and a work inbox are not the same environment
A consumer inbox is often noisy, promotional, and dopamine-driven. People are trained to expect offers, discounts, and a sense of urgency. They skim fast, delete faster, and treat most brand emails as background noise until they need something.
A work inbox is task-heavy, decision-heavy, and interruption-heavy. People are triaging constantly: what needs action, what can wait, what can be ignored safely.
Different environments mean different rules.
But both share one thing: the inbox is a utility environment. It’s not a browsing space; people go there to deal with things, not to explore your brand’s creative direction for the week.
People are not “ignoring you” - they are protecting their attention
Your emails are judged in a brutal context you don’t control:
- What else landed that day
- What their calendar looks like
- How stressed they are
- How many sales emails they’re deleting
- What they’ve learned to expect from brands like you
This is why I say engagement is not an action, it's a condition.
And your audit should focus on the conditions.
3) The invisible forces that break email performance (without you noticing)
Predictive coding: when people feel like they already know what your email is
If every email looks the same, sounds the same, and pushes the same thing, the brain stops evaluating it. It shortcuts.
- “This is a promo.”
- “This is another generic newsletter.”
- “This is a sales push disguised as ‘quick idea’.”
No open required - the decision happens at scan-speed.
This is one reason “best practice” templates can be dangerous. When everyone follows the same patterns, inboxes become visually and tonally repetitive, and attention drops before content has a chance.
Habit and pattern recognition
Subscribers build patterns fast:
- “They email too much.”
- “They only email when they want something.”
- “Their emails are long and hard to read.”
- “I can’t find the point quickly.”
- “I signed up for one thing and now I get everything.”
Once that pattern is formed, your future emails are judged through it.
Trust erosion through micro-friction
This is the bit that kills performance slowly and quietly.
Micro-friction is a small moment of “ugh” that chips away at trust.
It’s things like:
- getting a discount email right after purchasing (10/10 do not recommend)
- receiving a sales push immediately after downloading a guide
- being asked to “book a demo” when they’re clearly still learning
- being sent irrelevant product categories repeatedly
- duplicated messages because two automations fired
- being emailed while they have an open support ticket
- being put into a promo cadence without consent for that cadence
- transactional emails coming from a different “brand voice” entirely
- “last chance” urgency every three days
One micro-friction moment rarely causes a mass unsubscribe. But it causes something more dangerous:
silent disengagement.
And then later, someone asks why open rates are down, and the team starts rewriting subject lines like that’s the root cause.
Why most attribution models break down (especially for email)
Most attribution models assume people behave like spreadsheets - sorry, but they don't.
People behave like humans with memory, context, competing priorities, and delayed action.
Let’s talk about the real world for a second.
- People read an email on mobile, then purchase later on desktop - or vice versa, or even the week after
- People see your email, don’t open it, then search your brand or product names two days later.
- People click nothing, but mention your newsletter in a sales call.
- People engage for weeks, then convert without any obvious “trigger” email.
- People are influenced by the consistency of your presence, not one campaign.
Email impact is often cumulative, not immediate.
It’s also often indirect. Email supports other channels by increasing brand recall, reinforcing messaging, nudging people back into the journey, and keeping you mentally available.
So when you rely on last-click, platform-native dashboards, or “revenue attributed” within the ESP, you are measuring the narrowest, most convenient slice of the truth.
Not because you’re incompetent, but because the tools are not built for nuance.
What email actually does (the stuff attribution misses)
This is the part marketers intuitively know but struggle to defend.
Email doesn’t just “convert”. It:
Builds mental availability:
Being seen consistently matters. A lot of email’s value is memory-based: recognition, familiarity, and trust accumulation. People choose the brand they remember when they finally need the thing.
Reinforces positioning:
Even if someone doesn’t click, they absorb tone, worldview, proof points, and differentiation. Email is one of the few channels where you can repeatedly shape perception.
Creates momentum across the journey:
Welcome flows, onboarding, education sequences, and retention journeys move people forward. The influence is often gradual, not “this email = this sale.”
Reduces friction:
Good lifecycle emails help people complete tasks: find info, understand value, feel confident, and take action. That impact doesn’t always show up as a click.
Supports other channels:
Email can lift direct traffic, brand search, content consumption, and pipeline velocity. The action often happens elsewhere, but the influence starts in the inbox.
So if your measurement only captures “clicked from email and converted in the same session”, you are ignoring a massive chunk of email’s real job. Which means the business undervalues it, over-demands from it, and misdiagnoses it when performance shifts.
The dangerous myth: “If it didn’t click, it didn’t work”
This myth is responsible for so much unnecessary panic. Clicks are not “impact.” Clicks are one type of action that happens when someone is ready and your email is the most convenient route.
Plenty of people:
- Read and remember
- Read and act later
- Act via another channel
- Act without opening at all (hello inbox scanning behaviour)
- Convert because of accumulated exposure, not a single email
Email isn’t just a performance channel, it's an awareness and relationship channel too.
If your leadership expects every email to “perform”, you will end up designing emails for the dashboard, not for humans, and the channel will decay.
READ: How to Start Using Email Marketing as an Awareness Channel
Email marketing is more often associated with bottom-of-funnel than top-of-funnel, and using email as an awareness channel typically isn’t on anyone’s radar for that very reason.
Let’s talk about the untapped email opportunity: awareness.
How to measure email impact without lying to yourself
Now for the practical bit. This is the part you can actually use.
You do not need perfect attribution. You need credible measurement that reflects reality, and that helps you make better decisions.
Here’s how to do that:
1) Separate visibility from engagement
Before you measure impact, you need to know if people are even seeing your emails.
Inbox placement is the silent killer of attribution accuracy. If a chunk of your list is landing in spam, promotions or “Other”, you will misread engagement and assume the content is the problem.
Your baseline must include:
- inbox placement testing (even if it’s lightweight)
- provider-level performance (Gmail vs Outlook vs Yahoo behave differently)
- complaint rates and negative signals
- list health and bounce patterns
Because if deliverability is compromised, performance attribution becomes theatre. You’re analysing behaviour that never had a chance to happen.
2) Stop reporting on the whole list like it’s a single audience
The “send to all” mindset corrupts reporting.
A healthy email programme separates:
- new subscribers vs established subscribers
- intentional opt-ins vs consequential opt-ins
- engaged vs cooling vs dormant
- provider segments (especially if Microsoft is causing pain)
- lifecycle stages (prospect vs customer vs churn risk)
- core persona segments or audience profiles
If you report one open rate or one click rate for “the list,” you are smoothing out reality and making it impossible to see what’s actually happening.
Segment-based reporting is where truth lives.
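To make that concrete, here's a minimal sketch (Python, purely illustrative) of segment-level reporting, assuming you can export send-level data as a CSV with a segment label per recipient - your ESP's export will name the columns differently:

```python
import csv
from collections import defaultdict

# Hypothetical export: one row per recipient per send.
# Assumed columns: segment, delivered, opened, clicked (1/0 flags).
totals = defaultdict(lambda: {"delivered": 0, "opened": 0, "clicked": 0})

with open("send_export.csv", newline="") as f:
    for row in csv.DictReader(f):
        seg = row["segment"]  # e.g. "new", "established", "cooling", "dormant"
        totals[seg]["delivered"] += int(row["delivered"])
        totals[seg]["opened"] += int(row["opened"])
        totals[seg]["clicked"] += int(row["clicked"])

for seg, t in sorted(totals.items()):
    if t["delivered"] == 0:
        continue  # skip segments with nothing delivered
    open_rate = t["opened"] / t["delivered"] * 100
    click_rate = t["clicked"] / t["delivered"] * 100
    print(f"{seg:<15} delivered={t['delivered']:>6}  open={open_rate:.1f}%  click={click_rate:.1f}%")
```

Even a rough split like this usually shows that "the list" is really three or four very different audiences hiding behind one blended open rate.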
3) Use leading indicators to explain lagging ones
Engagement is a lagging indicator. It reflects conditions that were created weeks or months earlier.
If engagement is falling, you need to look at leading indicators such as:
- expectation alignment at sign-up
- welcome journey completion
- frequency and cadence stability
- segmentation relevance
- inbox placement consistency
- churn and list hygiene
- content alignment to intent
This is how you stop blaming subject lines for systemic issues.
4) Track “email-assisted” impact intentionally
If you want to show email’s real role, you need to track outcomes that reflect influence, not just direct clicks.
Depending on your business, that can include:
- branded search lift after sends
- direct traffic trends correlated to consistent email cadence
- returning visitor rate by subscriber cohort
- pipeline velocity for nurtured contacts vs non-nurtured
- repeat purchase rate for lifecycle-exposed customers
- time to first meaningful action (especially in B2B)
- retention and reactivation rates by journey exposure
This is where email’s value shows up: not as a single click, but as sustained behavioural movement.
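As one illustration, here's what comparing repeat purchase rates for lifecycle-exposed vs non-exposed customers might look like as a sketch (Python, with the customer IDs and order counts invented - in reality you'd pull these from your CRM and order data):

```python
# Hypothetical inputs: two groups of customer IDs and their order counts.
nurtured = {"c001", "c002", "c003", "c004"}      # received lifecycle emails
not_nurtured = {"c005", "c006", "c007", "c008"}  # suppressed / never enrolled
orders = {"c001": 3, "c002": 1, "c003": 2, "c004": 1,
          "c005": 1, "c006": 1, "c007": 2, "c008": 1}

def repeat_rate(customers: set[str]) -> float:
    """Share of customers with more than one order."""
    repeaters = sum(1 for c in customers if orders.get(c, 0) > 1)
    return repeaters / len(customers) * 100

print(f"Nurtured repeat rate:     {repeat_rate(nurtured):.0f}%")
print(f"Non-nurtured repeat rate: {repeat_rate(not_nurtured):.0f}%")
```

Treat a gap like this as a prompt for a proper pilot (next section), not as proof on its own - exposure groups are rarely perfectly comparable.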
5) Use controlled pilots (your leverage move)
If you want to prove impact without arguing theory, run pilots.
This is the simplest, most defensible method:
- pick a clean engaged segment (5–20%)
- run a structured approach (segmentation, cadence, journey logic)
- compare against a baseline group or previous period
- measure over time, not one send
Pilots reduce confrontation; they create evidence.
And they make it much easier to walk into leadership conversations with something stronger than opinion.
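The comparison itself doesn't need fancy tooling. A minimal sketch, assuming you track one outcome (here, weekly conversions) for the pilot segment and a baseline group, with placeholder numbers:

```python
# Placeholder weekly results for an 8-week pilot.
# pilot: engaged segment on the new structured approach
# baseline: comparable segment left on business-as-usual sends
pilot_weekly_conversions = [12, 15, 14, 18, 17, 19, 21, 20]
baseline_weekly_conversions = [11, 10, 12, 11, 9, 12, 10, 11]

def avg(values):
    return sum(values) / len(values)

pilot_avg = avg(pilot_weekly_conversions)
baseline_avg = avg(baseline_weekly_conversions)
uplift = (pilot_avg - baseline_avg) / baseline_avg * 100

print(f"Pilot avg/week:    {pilot_avg:.1f}")
print(f"Baseline avg/week: {baseline_avg:.1f}")
print(f"Relative uplift:   {uplift:+.0f}% (measured over the period, not one send)")
```

The point is the shape of the comparison: same metric, two groups, measured across the whole period rather than a single send.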
6) Measure the ROI of exclusions (yes, really)
One of the most persuasive ways to demonstrate maturity is to show that sending less can create more impact.
Track:
- revenue per 1,000 delivered
- complaint rate change
- inbox placement improvements
- re-engagement improvements
- pipeline quality lift
Then make the point clearly:
Volume down doesn’t equal impact down. Often it’s the opposite.
This is how you shift leadership away from “send it to everyone” and toward “send smarter”.
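To make the maths behind that argument concrete: revenue per 1,000 delivered is simply revenue divided by delivered volume, times 1,000. A quick worked example with made-up figures:

```python
def revenue_per_1000(revenue: float, delivered: int) -> float:
    """Revenue per 1,000 delivered emails."""
    return revenue / delivered * 1000

# Made-up before/after figures for illustration only.
before = revenue_per_1000(revenue=8_000, delivered=100_000)  # send-to-all
after = revenue_per_1000(revenue=7_200, delivered=60_000)    # dormant contacts excluded

print(f"Before exclusions: £{before:.0f} per 1,000 delivered")  # £80
print(f"After exclusions:  £{after:.0f} per 1,000 delivered")   # £120
```

In this made-up case total revenue dipped slightly, but every 1,000 emails worked 50% harder - and that's before you count lower complaint rates and better inbox placement.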
READ: Master the Modern Customer Journey for 2025
I talk more about friction points in this blog, as well as how to stop shooting in the dark and instead target your customers with total accuracy - so they’ll be feeling supported, satisfied, and ready to convert in no time.
4) The reframe: you are not a sender — you are an experience designer
This is the job!
Your job isn’t to “send emails”. Your job is to design a coherent communication experience over time that:
- honours what people were promised
- respects their context
- supports the journey
- avoids collision
- builds trust
- and creates conditions for engagement
If you approach auditing from this angle, everything changes. Because now you’re not just asking: “what did we send?”
You’re asking: “what did it feel like to receive all of this?”
5) What is a collision in email marketing (and why you must audit it)
A collision is when messages overlap in a way that creates confusion, redundancy, or emotional whiplash.
A collision happens when:
- Two automations send similar messages in the same week
- Sales outreach overlaps marketing nurture (“book a call” from three directions)
- Transactional emails contradict marketing emails (“your order is delayed” + “treat yourself today!”)
- A customer receives acquisition messaging after purchasing
- A churn-risk customer receives a hard sell instead of help
- A subscriber is re-entered into a flow they already completed
- Suppression and exclusions aren’t built properly (or at all)
Collision isn’t just “too many emails”. It’s too many mismatched signals.
A good audit identifies collision points and redesigns the system so the experience makes sense.
The External Comms Experience Audit
A step-by-step process you can actually follow
This is the part you asked for: practical steps, real questions, and a process you can run (even if you’re a team of one).
Step 1: Set the audit scope (otherwise you’ll drown)
Start by defining what you’re auditing:
Channels:
→ marketing emails (campaigns + automations)
→ transactional emails
→ sales emails/sequences
→ customer service comms
→ review requests
→ product/platform comms
→ event/webinar comms
Time window:
→ last 90 days for sends + performance
→ full lifecycle for journeys (welcome, onboarding, winback)
Success definition:
→ not “higher opens”
→ but “clearer experience, fewer collisions, higher trust, measurable impact”
Step 2: Build a “subscriber reality” inbox view
This is the most important mindset shift.
Create 2–4 test identities and subscribe like a real human:
- brand new subscriber (newsletter sign-up)
- consequential opt-in (download/webinar/checkout tick box)
- customer (after purchase / after trial)
- lapsed/cooling contact (if you can simulate it)
Then document what happens in the first:
→ 5 minutes
→ 24 hours
→ 7 days
→ 30 days
Questions to answer:
- What is the first email they receive and how quickly?
- Does it match the promise made at sign-up?
- Is the next step obvious?
- Are you setting expectations for frequency and content?
- Does the tone feel aligned across emails and systems?
- Is there any immediate friction (pushy CTA, irrelevant offer, confusing message)?
Step 3: Audit entry points and promise vs reality
Most email programmes break at the start.
People sign up under one expectation, then receive a completely different experience.
Audit every entry point:
- website forms
- popups
- checkout opt-in
- gated content
- event sign-ups
- enquiry forms
- lead magnets
- social sign-up links
Questions to ask:
- What exactly are we promising here (in plain language)?
- Is this an intentional opt-in or a consequential opt-in?
- What data are we collecting - and is it useful and strategic, or just “nice to have”?
- Are we setting frequency expectations (or hoping nobody notices)?
- Are we clear about what they’ll receive next?
- Does the first email deliver on the thing they wanted?
If you treat consequential opt-ins like intentional newsletter subscribers, your metrics will lie to you and your journeys will feel irrelevant fast.
Step 4: Map your message ecosystem (the “where is email coming from?” audit)
Make a list of every system that sends emails externally.
Then answer:
- What kinds of emails does it send?
- Who receives them?
- Under what triggers?
- Do we control content/design/tone?
- Are there overlaps with other systems?
This is where you find the “surprise” emails that leadership forgot existed.
(They’re always there.)
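The output of this step is just an inventory - a spreadsheet works fine - but even a rough structure like this sketch (Python, with hypothetical systems and triggers) makes overlaps easier to spot:

```python
from collections import Counter

# Hypothetical sender inventory: replace with whatever actually sends on your behalf.
senders = [
    {"system": "ESP / marketing", "email_types": ["newsletter", "promos", "nurture", "review request"],
     "trigger": "campaign calendar + automations", "owner": "Marketing", "tone_controlled": True},
    {"system": "Order platform", "email_types": ["order confirmation", "delay notice"],
     "trigger": "order events", "owner": "Ops", "tone_controlled": False},
    {"system": "Sales sequencer", "email_types": ["outreach", "follow-up"],
     "trigger": "rep enrolment", "owner": "Sales", "tone_controlled": False},
    {"system": "Review tool", "email_types": ["review request"],
     "trigger": "7 days post-delivery", "owner": "CX", "tone_controlled": False},
]

# Quick scan: which email types appear in more than one system? (potential overlap)
type_counts = Counter(t for s in senders for t in s["email_types"])
for email_type, count in type_counts.items():
    flag = "  <-- check for overlap" if count > 1 else ""
    print(f"{email_type:<20} systems={count}{flag}")
```

In this invented example the review request shows up in two systems, which is exactly the kind of quiet duplication this step exists to surface.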
Step 5: Run a journey and automation audit (the collision hunt)
Now you map journeys like a systems person, not a content person.
List every automation and flow, then document:
- entry criteria
- exit criteria
- suppression rules
- exclusions
- goals
- triggers
- timing
- ownership
Questions to ask:
- Can a person qualify for more than one journey at once?
- What happens if they purchase midway?
- What happens if they book a call?
- What happens if they open a support ticket?
- Do we have “ghost workflows” still running?
- Are old campaigns still triggering?
- Are there loops that re-add people accidentally?
This is where you typically find the real reason people are overwhelmed: not campaigns, but automation sprawl.
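If your platform can export which contacts are currently enrolled in which automation, you can start the collision hunt with something as simple as this sketch (the enrolment data here is invented):

```python
from collections import defaultdict

# Hypothetical export: automation name -> contacts currently enrolled in it.
enrolments = {
    "welcome_flow": {"a@x.com", "b@x.com", "c@x.com"},
    "post_purchase": {"b@x.com", "d@x.com"},
    "winback": {"c@x.com", "e@x.com"},
    "sales_sequence": {"b@x.com"},
}

# Invert the export: contact -> set of journeys they're sitting in right now.
journeys_per_contact = defaultdict(set)
for journey, contacts in enrolments.items():
    for contact in contacts:
        journeys_per_contact[contact].add(journey)

# Anyone in two or more journeys at once is a collision candidate.
for contact, journeys in sorted(journeys_per_contact.items()):
    if len(journeys) > 1:
        print(f"{contact}: {', '.join(sorted(journeys))}  <-- review suppression rules")
```

Every contact this flags is a candidate for new suppression or exclusion logic, not necessarily a problem in itself.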
Step 6: Audit cadence and emotional load (yes, emotional load)
Cadence is not just volume. It’s pressure.
An audit should identify how often you’re asking for something vs giving something.
Questions to ask:
- How many emails in the last 14 days were “asks”?
- How many were useful, orientating, educational, or supportive?
- Are we sending urgency constantly?
- Are we training people that our emails are always demands?
- Does the tone match the state the reader is likely in?
If every email is “do this now”, your subscribers will eventually treat you like noise.
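One way to make emotional load tangible: tag the last 14 days of sends as an "ask" or a "give" and look at the ratio. A rough sketch, with the send log invented for illustration:

```python
from datetime import date

# Hypothetical log of recent sends: (date, subject, type) where type is "ask" or "give".
recent_sends = [
    (date(2025, 1, 2), "Last chance: 20% off ends tonight", "ask"),
    (date(2025, 1, 4), "How to audit your welcome flow", "give"),
    (date(2025, 1, 6), "Book a demo this week", "ask"),
    (date(2025, 1, 9), "Flash sale - 48 hours only", "ask"),
    (date(2025, 1, 11), "3 ways customers use the reporting tab", "give"),
    (date(2025, 1, 13), "Leave us a review?", "ask"),
]

asks = sum(1 for _, _, t in recent_sends if t == "ask")
gives = sum(1 for _, _, t in recent_sends if t == "give")

print(f"Asks:  {asks}")
print(f"Gives: {gives}")
print(f"Ask share: {asks / len(recent_sends):.0%}")  # high = you're training people to expect demands
```

The tagging is subjective, and that's fine; the point is to force the conversation about pressure, not to hit a magic number.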
Step 7: Audit design consistency and comprehension (not aesthetics)
This is where your design handbook thinking comes in.
Don’t ask: “is it pretty?”
Ask: “is it easy to process?”
Questions to ask:
- Can someone understand the point of the email in 3 seconds?
- Does the first screen bridge the promise made pre-open?
- Is there a clear visual hierarchy (hooking vs floating)?
- Do clickable elements look clickable?
- Is it readable on mobile without effort?
- If images don’t load, does the email still work?
This is a comprehension audit, not a branding debate.
Step 8: Deliverability reality check (because dashboards will not tell you the truth)
If you don’t know where your emails land, you cannot interpret performance.
A deliverability check (or audit, you can learn exactly how in an upcoming masterclass here) should include:
- inbox placement testing (especially Microsoft/Outlook if you’re B2B)
- list validation and bounce patterns
- complaint rates
- engagement segmentation (active vs cooling vs dormant)
- provider-based differences
If deliverability is your blind spot, you will waste months optimising the wrong layer.
If you want to learn how to audit deliverability properly, this is exactly what I teach inside my deliverability masterclasses — and if you need someone to run the audit and give you a fix plan, that’s also something I do as a service.
Step 9: Measure what matters (and stop using averages to describe humans)
Your audit should produce reporting that reflects real life.
Instead of:
- overall open rate
- overall click rate
Move towards:
- time to disengagement
- days since email engagement
- performance by lifecycle stage
- performance by opt-in type (intentional vs consequential)
- performance by mailbox provider
- revenue/pipeline per 1,000 delivered (not total volume)
- time to first meaningful action
- retention or repeat behaviour (where relevant)
- engagement decay curves (how quickly people drop off)
Key takeaway: Your subscribers do not experience “overall metrics”. They experience sequences.
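For example, "days since email engagement" and a crude decay curve only need each contact's last open or click date. A minimal sketch, with the dates invented:

```python
from datetime import date

today = date(2025, 6, 1)

# Hypothetical: contact -> date of last open or click (None = never engaged).
last_engaged = {
    "a@x.com": date(2025, 5, 28),
    "b@x.com": date(2025, 4, 2),
    "c@x.com": date(2025, 1, 15),
    "d@x.com": None,
}

# Bucket contacts by how long ago they last engaged (a crude decay curve).
buckets = {"0-30 days": 0, "31-90 days": 0, "91+ days": 0, "never": 0}
for contact, last in last_engaged.items():
    if last is None:
        buckets["never"] += 1
        continue
    days = (today - last).days
    if days <= 30:
        buckets["0-30 days"] += 1
    elif days <= 90:
        buckets["31-90 days"] += 1
    else:
        buckets["91+ days"] += 1

for bucket, count in buckets.items():
    print(f"{bucket:<12} {count}")
```

If the 91+ and "never" buckets grow month on month, that's your decay curve telling you the conditions changed long before the open rate did.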
Step 10: Turn findings into an experience redesign plan (not a to-do list of tweaks)
A proper audit ends with:
- what to stop
- what to simplify
- what to rebuild
- what to protect
Your redesign plan should include:
- updated entry point promises
- revised welcome/orientation logic
- new exclusions and suppression rules
- collision prevention rules (sales/service/product conflicts)
- cadence rules by segment
- a “value ratio” target (how much is ask vs help)
- content/journey mapping by intent
- measurement that shows impact over time
This is where the audit becomes an operating system, not a one-off project.
Quick win: the “external comms snapshot” exercise
If you do nothing else, do this:
- Pick one segment (e.g., new subscribers)
- Trace every email they could receive in the first 30 days
- Across all systems
Then answer:
- Is this coherent?
- Does it honour the promise?
- Is anything duplicated?
- Are we asking too early?
- Does any email feel emotionally tone-deaf?
- Is there a clear path from “new” → “trusted”?
This one exercise often reveals more than a month of dashboard staring.
An email audit is a responsibility audit
Email is not just output, it is experience.
And if you are responsible for email, you are responsible for designing that experience — intentionally, coherently, and with respect for how humans actually use inboxes.
So before you optimise subject lines, before you redesign templates, before you chase clicks:
Audit what people live through in their real lives.
Because when the experience is clear, calm, and aligned… performance is a byproduct.
Email, CRM and HubSpot Support
I help marketers and businesses globally improve, design and fix their email, CRM, and HubSpot ecosystems, from strategy through to execution.
My services include:
- Email marketing strategy, audits, training, workshops, and consultancy
- CRM strategy and enablement
- Full HubSpot implementations, optimisation and onboarding through my agency
If you’re looking for experienced external support (and lots of enjoyment along the way), this is where to start.
Like this blog? You'll love RE:markable
RE:markable is the weekly email about emails. Dropping the latest email marketing news, updates, insights, free resources, upcoming masterclasses, webinars, and of course, a little inbox mischief.