tl;dr Split-testing doesn't verify your numbers; it only verifies which option is better. If one of the options tested is a clear winner, you'll know with small amounts of data. So, I use split-tests to look for big winners. If my test doesn't show a big winner quickly, I move on quickly.
[Figure: Version A, the crappy starting version; Version B, a blatant copy of the Buffer version.]
| Version | Conversion rate | # of visitors | Result |
|---------|-----------------|---------------|--------|
| B | 25% | 30 visitors | Winner with 99% confidence! |
Ask yourself: what was it so confident about?
That Option B was better. Maybe only slightly better, but better.
The test did not tell me that Option B would continue to perform at 25% or would be 15% better than Option A - just that Option B is very likely to outperform Option A in the long run.
Split-testing only tells you which option is better, not how much better. Get it? In a split-test, the only number you can really act on is the statistical confidence that one option beats the other. The conversion rates, impressions and click-through rates are not reliable as predictions; only the identity of the winning option is. That's why you don't necessarily need big numbers to reach confidence.
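To make that concrete, here's a minimal sketch of how a confidence figure like the one in the table can be computed. This is my own illustration, not the tool the post used: a one-sided two-proportion z-test, where the inputs (conversions and visitor counts for A and B) are hypothetical.

```python
import math

def confidence_b_beats_a(conv_a, n_a, conv_b, n_b):
    """One-sided two-proportion z-test: confidence that B's true rate beats A's."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the "no difference" hypothesis
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    if se == 0:
        return 0.5  # zero (or universal) conversions on both sides: no evidence either way
    z = (p_b - p_a) / se
    # Phi(z), the standard normal CDF, via the error function
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Hypothetical example: A converts 2 of 30, B converts 8 of 30
print(confidence_b_beats_a(2, 30, 8, 30))  # roughly 0.98
```

Note that the function says nothing about *how much* better B is likely to be; it only quantifies how sure you can be that B is the better option, which is the point of the paragraph above.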
If you have a big winner on your hands, split-testing will tell you this quickly. So, especially when I’m starting, I look for big wins quickly. If my first test, say about a picture or a headline, doesn’t give me statistical confidence after 100-200 visitors, I usually scrap the test.
I would rather quickly abandon a version that might have worked better if I had run the test longer, because I can better invest that time in testing other things that might yield a big win. (There's a balance to be found with sampling error here, but since I'm testing frequently and moving forward with so many unknowns, I accept false negatives in the interest of speed, and address sampling error when I've found a hit.)
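That trade-off can be illustrated with a rough simulation (my own code and numbers, not from the post): with a big true difference, a couple of hundred visitors usually reaches 95% confidence; with a small true difference, the same sample size rarely does, and scrapping the test there is exactly the false negative being accepted.

```python
import math
import random

def confidence(conv_a, n_a, conv_b, n_b):
    """One-sided two-proportion z-test: confidence that B beats A."""
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    if se == 0:
        return 0.5
    z = (conv_b / n_b - conv_a / n_a) / se
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def reaches_confidence(p_a, p_b, visitors=200, threshold=0.95, trials=500, seed=1):
    """Fraction of simulated split-tests that hit `threshold` confidence
    after `visitors` total visitors, split evenly between A and B."""
    rng = random.Random(seed)
    n = visitors // 2
    hits = 0
    for _ in range(trials):
        conv_a = sum(rng.random() < p_a for _ in range(n))
        conv_b = sum(rng.random() < p_b for _ in range(n))
        if confidence(conv_a, n, conv_b, n) >= threshold:
            hits += 1
    return hits / trials

# Big winner (10% vs 25% true rates): usually detected within 200 visitors.
# Small winner (10% vs 12% true rates): usually not -- an accepted false negative.
print(reaches_confidence(0.10, 0.25))
print(reaches_confidence(0.10, 0.12))
```

The rates, visitor budget, and threshold here are assumptions chosen for illustration; the shape of the result is the point, not the specific numbers.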
This is how split-testing gives you actionable results fast.
I’m in a cozy place, preparing for parenthood, dabbling with some art projects, getting my hands dirty with ZK and ML. One of my more “product-y” projects is a communication tool for community groups and unconferences. It focuses on autonomising teams rather than “coordinating”. Another is a set of primitives for “hyperstructuring” Free Software to help contributors get paid fairly.
I’m also part of Discover Mode - where I’m a solver-for-hire helping Web3 projects with product design and strategy.
In the past, I've designed peer-learning programs for Oxford, UCL, Techstars, Microsoft Ventures, the Royal Academy of Engineering, and Kernel, careering from startups to humanitech and engineering. I also played a role in the Lean Startup methodology and the European startup ecosystem. You can read about this here.
Menus and kitchens (2023)
Retreats for remote teams (2023)
What do you need right now? (2023)
Choose happiness (2021)
Emotional Vocabulary (2020)
Project portfolios (2020)
The History of Lean Startup (2016)
Entrepreneurship is craft (2014)