tl;dr Split-testing doesn’t verify your numbers; it only verifies which option is better. If one of the options tested is a clear winner, you’ll know with a small amount of data. So I use split-tests to look for big winners. If my test doesn’t show a big winner quickly, I move on quickly.
| Version | Conversion rate | # of visitors | Result |
|---------|-----------------|---------------|--------|
| A       | 10%             | 100 visitors  |        |
| B       | 25%             | 30 visitors   | Winner with 99% confidence! |
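If you want to check where a figure like “99% confidence” can come from, here’s a minimal sketch using Fisher’s exact test from scipy. The counts are my assumptions: since 25% of 30 visitors isn’t a whole number, I’ve rounded B to 8 conversions out of 30 (~27%).

```python
# A minimal sketch of where a "99% confidence" figure can come from,
# using Fisher's exact test (scipy). Counts are assumed for illustration.
from scipy.stats import fisher_exact

# Contingency table: [converted, didn't convert] for each version
observations = [
    [10, 90],  # Version A: 10 of 100 visitors converted (10%)
    [8, 22],   # Version B: 8 of 30 visitors converted (~27%)
]

_, p_value = fisher_exact(observations)
print(f"p-value: {p_value:.3f}")
print(f"Confidence the versions differ: {1 - p_value:.0%}")
```

Real split-testing tools may use other methods (z-tests, chi-squared, Bayesian models), but the output means the same thing: how unlikely the observed difference is to be chance, not what B’s true rate is.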
Ask yourself, what was it so confident about?
That Option B was better. Maybe only slightly better, but better.
The test did not tell me that Option B would continue to perform at 25%, or that it would stay 15 percentage points ahead of Option A. It only told me that Option B is very likely to outperform Option A in the long run.
Split-testing only tells you which option is better, not how much better.
Get it? In a split-test, the only number you can really act on is the statistical confidence about which option is better. The conversion rates, impressions and click-through rates aren’t reliable as predictions; the only reliable output is which option won. That’s why you don’t necessarily need big numbers to get confidence.

If you have a big winner on your hands, split-testing will tell you quickly. So, especially when I’m starting, I look for big wins quickly. If my first test, say about a picture or a headline, doesn’t give me statistical confidence after 100-200 visitors, I usually scrap the test.
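To see why the winner’s conversion rate isn’t a reliable prediction, here’s a rough simulation. The numbers are assumptions, not from the test above: both versions truly convert at 10%, yet whichever one happens to “win” a small test shows an inflated rate.

```python
# A rough simulation of why a small test's "winner" shows an inflated
# conversion rate. Assumption: both versions truly convert at 10%.
import random

def winners_observed_rate(p_true=0.10, n=30, trials=10_000):
    """Average observed rate of B in the small tests where B beat A."""
    winning_rates = []
    for _ in range(trials):
        a = sum(random.random() < p_true for _ in range(n))
        b = sum(random.random() < p_true for _ in range(n))
        if b > a:  # B happens to look better in this test
            winning_rates.append(b / n)
    return sum(winning_rates) / len(winning_rates)

print(f"True rate: 10%. Winning version's average observed rate: "
      f"{winners_observed_rate():.0%}")
```

Conditioned on winning, the observed rate averages well above the true 10%, which is exactly why I only trust the ordering.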
I would rather quickly abandon a version that might have worked better had I run the test longer, because I can better invest that time testing other things that might yield a big win. (There’s a balance to be struck with sampling error here, but since I’m testing frequently and moving forward with so many unknowns, I accept false negatives in the interest of speed and address sampling error once I’ve found a hit.)
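Here’s a rough sketch of that tradeoff, again with assumed true rates rather than real data: a modest improvement usually fails to reach confidence in a 200-visitor test (the false negative I accept), while a big winner usually shows up.

```python
# A rough simulation of the early-stopping tradeoff. The true conversion
# rates here are assumptions for illustration, not numbers from the post.
import random

def detection_rate(p_a, p_b, n=100, trials=10_000):
    """How often B reaches ~95% one-sided confidence over A
    in a short test with n visitors per version."""
    wins = 0
    for _ in range(trials):
        a = sum(random.random() < p_a for _ in range(n))
        b = sum(random.random() < p_b for _ in range(n))
        # Two-proportion z-test, computed inline
        p_pool = (a + b) / (2 * n)
        se = (p_pool * (1 - p_pool) * (2 / n)) ** 0.5
        if se > 0 and ((b - a) / n) / se > 1.645:  # z for one-sided 95%
            wins += 1
    return wins / trials

# A modest winner (10% vs 12%) rarely reaches confidence quickly...
print(f"Modest winner detected: {detection_rate(0.10, 0.12):.0%}")
# ...but a big winner (10% vs 25%) usually does.
print(f"Big winner detected:    {detection_rate(0.10, 0.25):.0%}")
```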
This is how split-testing gives you actionable results fast.
Thanks to Tendayi Viki and Andreas Klinger for reviewing this post.
I'm Salim Virani. These days I'm a Kernel Steward, and also having fun building random stuff.
In the past, I designed peer learning programs for Oxford, UCL, Techstars, Microsoft Ventures and The Royal Academy Of Engineering. I also played a role in creating the Lean Startup methodology, and the European startup ecosystem. You can read about this here.
I’m working on a communication tool for community groups and unconferences. It aims to give focused teams autonomy rather than relying on top-down coordination.
I’m on the Kernel Stewards team, where we help ~2,000 fellows understand what the development of blockchains means to humanity on anthropological scales, and how to use them altruistically and prudently.
Getting retreats right for remote teams (2023)
What do you need right now? (2023)
Making sense of DAOs (2022)
An Agile starter pack for DAOs (2022)
Building ecosystems with grant programs (2021)
Safe spaces make for better learning (2021)
Choose happiness (2021)
Working 'Remote' after 10 years (2020)
Emotional Vocabulary (2020)
Project portfolios (2020)
Expectations (2019)
Amperage: the inconvenient truth about energy for Africa's off-grid (2018)
The history of Lean Startup (2016)
Get your loved ones off Facebook (2015)
Entrepreneurship is craft (2014)