A/B testing

Compare two versions of a page, email, or feature to determine which performs better using statistical methods that isolate the impact of specific changes.


Introduction

A/B testing (also called split testing) is a controlled experiment that compares two versions of a single element to measure which performs better against a defined metric. One version is the control; the other is the variant. By showing version A to one group and version B to another, you isolate the impact of the change and make decisions based on data rather than intuition.

A/B tests work because they control for variables. When you change only one element—a call-to-action button colour, email subject line, landing page headline, or checkout flow—you can confidently attribute performance differences to that specific change. This removes noise from the equation. A 15% lift in conversion rate on a redesigned form is meaningful. A 15% lift from changing two things simultaneously is meaningless; you don't know which one caused it.

The practice has become standard in B2B marketing because it compounds. A 5% improvement to email open rates, 3% to click-through rates, and 7% to landing page conversions stacks across your entire pipeline. Over a year, these incremental improvements add up to significant revenue. But they only materialise if you run proper tests, reach statistical significance, and move deliberately rather than changing everything at once.

Why it matters

Prevents expensive guesses

Without testing, marketing decisions rest on opinion, trends, or what competitors do. A redesigned homepage might look beautiful but underperform. An email with personalisation might get lower engagement than expected. Testing removes the guesswork and validates assumptions before rolling out changes across your entire audience.

Builds evidence for larger decisions

A single A/B test might improve conversion by 2%. But when you run dozens of tests across your funnel, each small improvement multiplies. The tests also generate internal credibility—stakeholders see the data and buy in to further optimisation work. This accelerates decision-making across product, design, and marketing.

Uncovers unexpected insights

A/B tests often reveal counter-intuitive results. The longer form might outperform the short one. The urgent copy might underperform the educational copy. Testing exposes what your actual audience responds to, not what you assumed they would.

How to apply it

Define your hypothesis and metric

Start by identifying one element to test and the outcome you're measuring. Don't test 'everything looks better'—test 'changing the button from blue to green will increase form completions by 5%'. The metric must be trackable: conversion rate, click-through rate, email open rate, or time on page.
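One way to keep hypotheses specific and trackable is to record them as structured data rather than loose prose. A minimal sketch (the field names and values here are illustrative, not taken from any particular testing tool):

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    """A testable A/B hypothesis: one element, one metric, one expected lift."""
    element: str          # the single thing being changed
    change: str           # control -> variant
    metric: str           # the trackable outcome being measured
    expected_lift: float  # relative lift, e.g. 0.05 for +5%

button_test = Hypothesis(
    element="signup form CTA button",
    change="blue -> green",
    metric="form_completion_rate",
    expected_lift=0.05,
)
```

Forcing every test into this shape makes vague ideas ('everything looks better') impossible to record, which is the point.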

Split your audience randomly

Divide your traffic or user base equally between control and variant. Randomisation prevents selection bias. If your high-intent users all see version B, you can't claim B is better—it's just attracting higher-intent visitors.
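A common way to implement an unbiased, stable split is to hash each user's ID into a bucket. The sketch below shows the idea (it is not any specific platform's assignment logic): including the experiment name in the hash gives each experiment an independent split, and the same user always lands in the same bucket.

```python
import hashlib

def assign_variant(user_id: str, experiment: str, split: float = 0.5) -> str:
    """Deterministically assign a user to 'control' or 'variant'.

    Hashing user_id together with the experiment name yields a stable,
    approximately uniform value in [0, 1], so assignment is random with
    respect to user traits but repeatable for the same user.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # roughly uniform in [0, 1]
    return "control" if bucket < split else "variant"
```

Because assignment depends only on the ID, high-intent users are as likely to see A as B, which is exactly the selection-bias protection described above.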

Run the test long enough

Reach statistical significance before concluding. A 10% lift from 50 clicks is noise; a 10% lift from 5,000 clicks is signal. Most testing platforms recommend at least 100-200 conversions per variant before treating results as reliable.
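The noise-versus-signal point can be made concrete with a two-proportion z-test, one common way to check significance (your testing platform may use a different method). Assuming an illustrative 20% baseline conversion rate:

```python
from math import sqrt, erfc

def two_proportion_p_value(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)  # rate under "no difference"
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return erfc(abs(z) / sqrt(2))  # two-sided p-value

# The same 10% relative lift (20% -> 22%) gives very different conclusions:
small = two_proportion_p_value(10, 50, 11, 50)          # ~0.8: noise
large = two_proportion_p_value(1000, 5000, 1100, 5000)  # ~0.01: signal
```

Only the larger sample lets you reject chance at the conventional 5% level; the identical lift on 50 clicks proves nothing.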

Document and iterate

Record every test, winner, and insight. This creates institutional memory and prevents repeating failed experiments. Winning tests often become the baseline for the next test—continuous improvement compounds.

Email subject line testing at a SaaS company

A B2B SaaS firm with a database of 50,000 prospects tested two email subject lines and found that 'Your team is losing £5,000 per week on X' significantly outperformed 'Learn how to improve X efficiency' (32% open rate vs 18%). The test was sent to 5,000 people, well above the sample size needed for significance. The winning line was then used in all future campaigns to that segment.

Landing page headline testing for a consulting firm

A management consulting firm tested two headlines: 'Strategy consulting for growth-stage SaaS' vs 'Close your critical strategy gaps in 90 days'. The second variant converted at 8.2% compared to 6.1%. The specificity of the outcome ('close gaps') and the time constraint ('90 days') resonated more than the generic category positioning.

Call-to-action button colour test in a payment platform

A fintech company tested CTA button colours on their checkout page. The contrasting orange button outperformed the muted grey button by 4.3%, a seemingly small lift. Applied across 2 million annual transactions, this translated to an additional £180,000 in revenue annually with zero product changes.

Keep learning

Growth leadership

How do you make all four engines work together instead of in isolation?


Data & dashboards

Build the dashboards and data pipelines that show your growth engines in one view so you can spot bottlenecks and make decisions in minutes, not meetings.

Growth team tools

The wrong tools create friction. The right ones multiply your output without adding complexity. These are the tools I recommend for growth teams that move fast.

Review and plan next cycle

Analyse last cycle's results across all twelve metrics, identify the highest-leverage improvements, and set priorities that compound into the next period.

Revisit quarterly

Pressure-test your strategy against market shifts, performance data, and team capacity so your direction stays relevant and ambitious.

Related books

Hacking Growth

Sean Ellis

A practical framework for experiments and insights. Build loops, run tests and adopt a cadence that ships learning every week.

The Lean Startup

Eric Ries

A disciplined approach to experiments. Define hypotheses, design MVPs and learn before you scale.

Lean Analytics

Alistair Croll

Pick the One Metric that Matters for your stage. Build lean dashboards and use data to decide the next best move.

Related chapters

1. Building your backlog

Random testing wastes time and teaches you nothing. Learn how to collect experiment ideas systematically and prioritise them based on potential impact so you always know what to run next.

2. Creating strong hypotheses

Most experiments fail before they start because the hypothesis is vague or untestable. Learn how to write hypotheses that are specific enough to prove or disprove and tied to metrics that matter.
