
In the dynamic digital world, understanding what works and what doesn't can be the key difference between a successful product and a failed one. A/B testing has emerged as an invaluable tool in this regard. By testing two versions of a product or feature against each other, businesses can pinpoint precisely what resonates with their users. This approach not only optimizes product performance but also enhances the customer experience.
Companies can create more user-centric solutions, ensuring that the product’s evolution aligns with the customer's desires.
I've always advocated for the "test everything" mantra: every code modification or newly introduced feature should undergo some form of experimentation. Even the most minute changes can produce unexpected outcomes, affecting both the product's effectiveness and the overall user experience.
On a recent episode of Lenny Rachitsky’s podcast, Ronny Kohavi, an expert with a stellar track record from giants like Amazon, Microsoft, and Airbnb, delved deep into the nuances of online experimentation.
Kohavi's approach to testing is layered. He suggests starting with a modest 2% of the audience and then, if results look good, scaling to 50%. This step-by-step ramp-up makes it possible to detect unforeseen problems early. Much like water's freezing point at 0°C (32°F), there is a pivotal moment in a product rollout when dramatic shifts become noticeable. By monitoring these shifts closely, significant product improvements can be achieved.
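To make the ramp-up concrete, here is a minimal sketch of how a staged rollout is often implemented: a deterministic hash places a small, stable slice of users into the treatment, and the percentage is later widened to 50%. The function names and the "new-checkout-flow" experiment are purely illustrative, not from the podcast.

```python
import hashlib

def assignment_bucket(user_id: str, experiment_name: str) -> float:
    """Map a user deterministically to a value in [0, 1) by hashing."""
    digest = hashlib.sha256(f"{experiment_name}:{user_id}".encode()).hexdigest()
    return int(digest[:8], 16) / 2**32

def in_treatment(user_id: str, experiment_name: str, rollout_pct: float) -> bool:
    """Return True if this user falls inside the current rollout percentage."""
    return assignment_bucket(user_id, experiment_name) < rollout_pct / 100

# Start conservatively, then raise the percentage once no regressions appear.
ROLLOUT_PCT = 2  # later increased to 50 for the full 50/50 comparison
print(in_treatment("user-12345", "new-checkout-flow", ROLLOUT_PCT))
```

Because the assignment depends only on the user ID and the experiment name, a user stays in the same group as the rollout percentage grows, which keeps the eventual 50/50 comparison clean.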
However, the world of experimentation is also fraught with failures. A startling revelation is that most experimental ideas don't translate well into practical results. But these failures aren't fruitless: they provide indispensable lessons that guide subsequent attempts. Kohavi's experiences offer a revealing lens into this reality.
At Microsoft, only 33% of experimental ideas met their intended objectives.
Airbnb's scenario paints a starker picture: fewer than 10% of tested concepts passed, although the successful ones lifted booking conversions by 6%.
The success rates at Bing, Booking.com, Google Ads, and Netflix hover around the 10-15% mark.
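For readers curious about what "passing" typically means in practice, here is a small, hypothetical sketch of the kind of check an experimentation platform runs: a two-proportion z-test on conversion rates. The numbers are illustrative only and do not come from the podcast.

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Return the two-sided p-value for a difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Illustrative numbers: control converts at 10.0%, treatment at 10.6%.
p_value = two_proportion_z_test(conv_a=5000, n_a=50000, conv_b=5300, n_b=50000)
print(f"p-value: {p_value:.4f}")  # an experiment "passes" only if the lift is positive and significant
```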
Kohavi emphasizes the importance of an iterative mindset and recommends assembling a team that thrives on constant evolution. It's essential to foster an environment where testing is woven into the development process. As the frequency of deployments increases, the value of rigorous testing becomes glaringly obvious. A word of caution, though: infrequent deployments can be risky, as they place undue pressure on the success of a single feature.
And you? How do you integrate experimentation into your product development?
For those keen on exploring the rich tapestry of experimentation and gleaning more from Kohavi's vast knowledge, tuning into Lenny Rachitsky’s podcast is highly recommended.
Written by Nicola Arnese