A/B testing is instrumental in measuring anything a computer can count. It’s great for measuring simple actions, such as button taps or conversions, but you shouldn’t base long-term product decisions on these results.
I cringe when I see product teams “measuring” engagement or usability improvements, or even “extracting” behavioral insights, from A/B tests without ever talking to users. That’s a self-fulfilling prophecy.
The output of an A/B test is a set of numbers, yet many people incorrectly interpret them as insights to explain why users prefer one thing over the other.
Let me explain: after running an A/B test, you find that changing the color of a button to pink triggers 25% more conversions. That’s great, but unless you talk to your users and understand what they think, you will have no idea why pink works better than blue. What about orange? Or maybe a different font size?
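To make that point concrete, here is a minimal sketch of everything an A/B test can actually tell you: a relative lift and a measure of statistical significance. The conversion counts below are made up for illustration, and the two-proportion z-test is just one common way to evaluate such a result. Notice that nothing in the output says anything about *why* users behaved differently.

```python
# Everything an A/B test returns is numbers, not reasons.
# Conversion counts below are invented for illustration.
from statistics import NormalDist


def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates.

    Returns (relative_lift, p_value): the lift of B over A, and the
    probability of seeing a difference this large by chance alone.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis (no difference)
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_b / p_a - 1, p_value


# Blue button: 400 conversions out of 10,000 visitors.
# Pink button: 500 conversions out of 10,000 visitors (a 25% lift).
lift, p = two_proportion_z_test(conv_a=400, n_a=10_000, conv_b=500, n_b=10_000)
print(f"lift: {lift:.0%}, p-value: {p:.4f}")
```

The test can tell you the pink button’s lift is real rather than noise, but the “why” behind it lives entirely outside this calculation.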
As you don’t know what your users are thinking, the only decision you can make based solely on an A/B test is whether to leave things as they are or run another A/B test.
I get it. We all want to release only the things that deliver the best results, and it’s more than ok to test the design. But when it comes to product development, using A/B tests as your first validation method means you’ll miss the big picture.
It’s ok to use A/B tests to make design decisions, but when it comes to product development, it’s better to go outside, talk to your users, and rely on your qualitative research findings first.