Landing Page A/B Testing Guide: What to Test and How to Measure Results

Jason Poonia

You’ve built a landing page, driven traffic to it, and achieved some conversions. But how do you know if it’s performing as well as it could? The answer lies in A/B testing—systematically comparing variations to discover what truly drives results.

A/B testing removes guesswork from optimisation. Instead of debating whether a green button converts better than a blue one, you test both and let the data decide. This guide will teach you what to test, how to set up tests properly, and how to know when you’ve found a real winner.

What Is A/B Testing?

A/B testing (also called split testing) is a method of comparing two versions of a webpage to determine which performs better. You divide your traffic between version A (the original) and version B (the variation), then measure which version achieves more conversions.

The concept is straightforward:

  1. Create a variation that changes one element
  2. Split traffic between original and variation
  3. Measure conversion rates for each
  4. Declare a winner based on statistical significance
  5. Implement the winning version and test something new

Done systematically over time, A/B testing compounds small improvements into substantial conversion gains.

What to Test: Highest-Impact Elements

Not all elements have equal impact on conversions. Focus your testing efforts on the elements that sway the decisions of the most visitors.

Headlines (Highest Impact)

Your headline is the first thing visitors read and significantly influences whether they stay or leave. Test:

  • Value proposition angles: Does emphasising cost savings outperform emphasising time savings?
  • Specificity: Does “Increase Your Leads by 47%” beat “Dramatically Increase Your Leads”?
  • Emotional versus logical: Does a fear-based headline outperform an aspiration-based one?
  • Question versus statement: Does “Struggling to Find Reliable Trades?” outperform “Find Reliable Trades Today”?

Calls-to-Action (High Impact)

Your CTA button is where conversions happen. Test:

  • Button copy: “Get My Free Quote” versus “Request Pricing” versus “See Your Options”
  • Button colour: While colour alone rarely creates dramatic differences, testing confirms what works for your audience
  • Button size and placement: Does a larger button increase clicks? Does adding a secondary CTA help or hurt?
  • Surrounding text: Does adding “No obligation” or “Takes 30 seconds” near the button increase conversions?

Social Proof (High Impact)

The type and presentation of social proof can significantly impact trust. Test:

  • Testimonial format: Text versus video, long versus short
  • Testimonial content: Focusing on results versus focusing on experience
  • Review display: Overall rating versus number of reviews versus both
  • Client logos: Presence versus absence, number displayed

Form Design (High Impact)

Form friction directly affects completion rates. Test:

  • Number of fields: Does removing the phone number field increase completions enough to offset lower contact rates?
  • Form layout: Single column versus multi-column, labels above versus beside fields
  • Multi-step versus single-step: Does breaking the form into steps increase completions?
  • Required versus optional fields: Does making some fields optional help?

Supporting Content (Medium Impact)

  • Amount of copy: Long-form versus short-form pages
  • Content order: Does moving testimonials higher increase conversions?
  • Specific sections: Does adding a guarantee section help? An FAQ section?
  • Imagery: Hero image variations, presence of human faces

Page Design (Lower Impact)

Design elements typically have less impact than copy and positioning, but can still be worth testing:

  • Colour schemes: Brand colour variations
  • Layout structure: Different arrangements of the same content
  • Visual hierarchy: What draws attention first

How to Set Up Tests Properly

Poorly structured tests lead to misleading results. Follow these principles for reliable insights.

Test One Element at a Time

When you change multiple elements simultaneously, you can’t know which change caused any difference in results. If you change your headline, button colour, and form length all at once and see an improvement, which change drove it?

Isolate variables by testing one element at a time. Once you find a winner, lock it in and test the next element.

Exception: Multivariate testing (testing multiple elements simultaneously) is valid if you have very high traffic and use proper multivariate testing tools. For most landing pages, stick to simple A/B tests.

Define Your Primary Metric

Before launching any test, clearly define what you’re measuring. Your primary metric should be:

  • Specific: “Form submissions” not “engagement”
  • Measurable: Something you can accurately track
  • Aligned with business goals: Usually conversions or leads, not vanity metrics like time on page

Secondary metrics are fine to monitor, but base your winner decisions on the primary metric to avoid cherry-picking results.

Determine Required Sample Size

One of the biggest testing mistakes is calling a winner too early. With small sample sizes, random chance can create the appearance of a significant difference when none exists.

Before launching a test, use a sample size calculator to determine how many conversions you need for a statistically valid result. Variables that affect required sample size:

  • Baseline conversion rate: Your current conversion rate
  • Minimum detectable effect: The smallest improvement you want to be able to detect
  • Statistical significance level: Typically 95% (accepting a 5% chance of declaring a winner when no real difference exists)
  • Statistical power: Typically 80% (meaning 80% chance of detecting a real difference if one exists)

For example, if your baseline conversion rate is 5% and you want to detect a 20% relative improvement (to 6%), you might need approximately 8,000 visitors per variation for a statistically valid result.
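If you'd rather see the arithmetic than trust a black-box calculator, here's a minimal Python sketch of the standard two-proportion sample-size approximation (the same formula most online calculators use; exact tools may differ slightly):

```python
from math import ceil
from scipy.stats import norm

def sample_size_per_variation(baseline, relative_lift, alpha=0.05, power=0.80):
    """Approximate visitors needed per variation for a two-proportion test."""
    p1 = baseline                        # current conversion rate, e.g. 0.05
    p2 = baseline * (1 + relative_lift)  # rate you want to be able to detect
    z_alpha = norm.ppf(1 - alpha / 2)    # 1.96 for 95% significance
    z_beta = norm.ppf(power)             # 0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2)

# The example from the text: 5% baseline, 20% relative improvement
print(sample_size_per_variation(0.05, 0.20))  # -> 8155 visitors per variation
```

Plugging in the example above returns roughly 8,155 visitors per variation, which is where the "approximately 8,000" figure comes from.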

Run Tests for Adequate Duration

Even with sufficient traffic, run tests for at least one full week—ideally two—to account for day-of-week variations. A test that runs Friday to Monday might show different results than one running Monday to Thursday.

Also consider:

  • Seasonal variations
  • Beginning/end of month patterns
  • Any scheduled marketing that might affect traffic quality

Use Proper Testing Tools

While you could theoretically run tests by manually alternating pages, dedicated tools make testing far more reliable. Options include:

  • Google Optimize: Was a free option that integrated with Google Analytics, but Google sunset it in 2023; check for current alternatives
  • VWO: Comprehensive testing platform
  • Optimizely: Enterprise-grade testing solution
  • Unbounce: Landing page builder with built-in A/B testing

These tools handle traffic splitting and statistical calculations, and they prevent common mistakes like counting the same visitor twice.
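For the curious, here's a simplified sketch of how such tools typically avoid counting the same visitor twice: each visitor is bucketed deterministically from a stable identifier (the `visitor_id` here is a hypothetical cookie value), so repeat visits always land on the same variation:

```python
import hashlib

def assign_variation(visitor_id: str, experiment: str, split: float = 0.5) -> str:
    """Deterministically bucket a visitor so repeat visits see the same page."""
    # Hash visitor + experiment name: the same person stays in one bucket for
    # this test, but is re-randomised independently for any other test.
    digest = hashlib.sha256(f"{experiment}:{visitor_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # uniform value in [0, 1]
    return "A" if bucket < split else "B"

print(assign_variation("visitor-123", "headline-test"))  # same answer every call
```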

Understanding Statistical Significance

Statistical significance is the probability that your observed difference is real rather than random chance. Here’s what you need to know:

The 95% Threshold

By convention, most testers require 95% statistical significance before declaring a winner. This means that if there were truly no difference between variations, a result this large would occur by chance only 5% of the time.

Important: This doesn’t mean the winning variation is “95% better.” It means you’re 95% confident there is a real difference.

How Significance Is Calculated

Testing tools calculate significance using statistical tests (typically chi-squared for conversion rate comparisons). The calculation considers:

  • Sample size for each variation
  • Number of conversions for each variation
  • The magnitude of the difference between variations

Larger sample sizes and larger differences between variations lead to faster statistical significance.
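As an illustration, here's how you might run that chi-squared check yourself in Python, using hypothetical conversion counts:

```python
from scipy.stats import chi2_contingency

# Hypothetical results: [conversions, non-conversions] per variation
control   = [400, 7600]   # 5.0% of 8,000 visitors
variation = [480, 7520]   # 6.0% of 8,000 visitors

_, p_value, _, _ = chi2_contingency([control, variation])
print(f"p-value: {p_value:.4f}")
print("Significant at 95%" if p_value < 0.05 else "Not yet significant")
```

With these made-up numbers the p-value comes out around 0.006, comfortably below the 0.05 threshold.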

The Danger of Peeking

Checking results repeatedly and stopping when you see a “winner” is called peeking, and it inflates your false positive rate. Here’s why:

As data accumulates, results naturally fluctuate. If you check daily and stop whenever one variation looks better, you might catch a random fluctuation rather than a real difference.

The solution: Determine your required sample size before starting, then wait until you’ve collected that amount before analysing results. Or use tools with built-in peeking corrections.
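To see the inflation for yourself, here's a small simulation: both pages share an identical true conversion rate, yet stopping at the first daily "significant" reading declares a winner far more often than the nominal 5%:

```python
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(42)
DAYS, VISITORS_PER_DAY, TRUE_RATE = 14, 500, 0.05

runs, false_positives = 1000, 0
for _ in range(runs):
    a_conv = b_conv = n = 0
    for day in range(DAYS):
        n += VISITORS_PER_DAY
        a_conv += rng.binomial(VISITORS_PER_DAY, TRUE_RATE)
        b_conv += rng.binomial(VISITORS_PER_DAY, TRUE_RATE)
        table = [[a_conv, n - a_conv], [b_conv, n - b_conv]]
        _, p, _, _ = chi2_contingency(table)
        if p < 0.05:                 # "winner" between two identical pages
            false_positives += 1
            break

print(f"False-positive rate with daily peeking: {false_positives / runs:.1%}")
```

At these traffic levels, daily peeking typically produces a false-positive rate several times higher than the 5% you thought you were accepting.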

What Happens When Tests Are Inconclusive

Sometimes tests end without a clear winner. That's still valuable information: it suggests the element you tested doesn't move conversions enough to detect, and you should focus your testing efforts elsewhere.

Don’t force a winner from an inconclusive test. Simply implement whichever version you prefer and move on to testing something else.

Building a Testing Programme

Systematic testing compounds gains over time. Here’s how to build an ongoing programme:

Create a Testing Backlog

Maintain a prioritised list of test ideas. For each idea, note:

  • What you’re testing
  • Why you think it might improve conversions
  • Expected impact (high/medium/low)
  • Effort to implement

Prioritise by potential impact and ease of implementation.
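One simple way to do this is an impact-over-effort score. The entries below are hypothetical examples:

```python
# Hypothetical backlog entries: (idea, expected impact 1-3, effort 1-3)
backlog = [
    ("New value-proposition headline", 3, 1),
    ("Multi-step form",                2, 3),
    ("Add guarantee section",          2, 1),
    ("Brand colour variation",         1, 1),
]

# Favour high impact and low effort: rank by the ratio of the two
for idea, impact, effort in sorted(backlog, key=lambda t: t[1] / t[2], reverse=True):
    print(f"{impact / effort:.1f}  {idea}")
```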

Document Everything

For every test, record:

  • Start and end dates
  • Variations tested (with screenshots)
  • Traffic and conversion numbers for each variation
  • Statistical significance achieved
  • Winner and magnitude of improvement
  • Key learnings

This documentation prevents repeating tests, helps identify patterns, and creates institutional knowledge.

Calculate the Value of Testing

To maintain commitment to testing, quantify its impact. If a headline test lifts conversions from 5% to 6% and you get 1,000 visitors per month, that's 10 additional leads per month. At your average customer value, what is that worth annually?
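Here's that arithmetic as a quick sketch (the lead value is a placeholder; substitute your own figure):

```python
visitors_per_month = 1_000
baseline_rate, improved_rate = 0.05, 0.06
lead_value = 500   # hypothetical average value of a lead; use your own number

extra_leads = visitors_per_month * (improved_rate - baseline_rate)  # 10/month
annual_value = extra_leads * 12 * lead_value
print(f"{extra_leads:.0f} extra leads/month = ${annual_value:,.0f}/year")
```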

This calculation justifies the effort and resources invested in testing.

Learn From Every Test

Winners teach you what resonates with your audience. But “losing” variations are equally valuable—they teach you what doesn’t work and prevent you from making those mistakes on future pages.

Look for patterns across tests:

  • Do specific benefit types consistently outperform?
  • Does your audience respond better to formal or casual tone?
  • What types of social proof have the biggest impact?

Common Testing Mistakes to Avoid

Ending tests early: Wait for statistical significance, even if results look promising.

Testing insignificant elements: Don’t waste traffic testing minor copy changes or subtle design tweaks.

Ignoring mobile: Ensure your test variations work well on mobile, where most of your traffic likely comes from.

Testing during abnormal periods: Major promotions, seasonal events, or technical issues can skew results.

Not acting on results: Tests are only valuable if you implement winners and apply learnings to future pages.

Getting Started

If you haven’t run A/B tests before, start simple:

  1. Choose your highest-traffic landing page
  2. Set up a testing tool
  3. Create one headline variation based on a different angle or emphasis
  4. Run the test until you reach statistical significance
  5. Document results and implement the winner
  6. Test your CTA next

With each test, you’ll build confidence in the process and develop a clearer picture of what resonates with your audience.

A/B testing transforms landing page optimisation from guesswork into a disciplined, data-driven practice. Start testing today, and let your visitors tell you exactly what drives them to convert.

Written by

Jason Poonia

Founder & Lead Generation Specialist

Jason Poonia is the founder of Lucid Leads, helping service businesses across New Zealand generate qualified leads through paid advertising and conversion-focused funnels. With a background in Computer Science from the University of Auckland and over 5 years of experience running lead generation campaigns, Jason has helped businesses in construction, trades, real estate, and professional services generate thousands of qualified leads. His data-driven approach combines targeted ad strategies with rapid lead qualification to deliver prospects who are ready to buy.
