A/B testing & experimentation help your site or app become its best self. If you've got an influx of visitors and you're paying for that traffic (you're always paying, whether in time, ads, or otherwise), A/B testing helps ensure that more of those visitors sign up, convert, and buy. Experimentation is the only way to tell which features actually move the needle in product development.
Part of the reason an A/B testing framework (also known as split testing or multivariate testing) is so crucial is cognitive bias. All of us naturally assume that others like what we like, and as product teams design sites and apps, they inevitably shape them to their own preferences. The best way to bring every image, button, and feature to an ideal state for users is A/B user testing: essentially, polling thousands of users to find which variants they prefer.
This process, known as conversion rate optimization (CRO), helps the site or app achieve its intended purpose: driving maximum engagement or revenue.
Benefits of A/B testing:
- Convert more existing traffic
- Resolve visitor pain points
- Improve site or app ROI
- Reduce cost-per-acquisition (CPA)
- Redesign your site or app
- Reduce bounce rate
- Test design changes
- Test marketing messages
- Answer product questions with data
How do you perform an A/B test?
At DevCycle, we find that a four-step A/B testing methodology works best: First, we validate that testing can, in fact, answer the question we're asking. Then we elaborate on early tests to scientifically test hypotheses, iterate to test additional theories, and build (or rebuild) features based on the results.
Here's how to do it:
1. Validate your test
First, we validate that testing can answer our question. We run small, low-cost tests we call quick-wins that, ideally, don't involve any custom code or help from a designer. We simply alter existing features and built-in functionality, say, by moving buttons or adding fields to forms.
Always test big things first. Before you go testing the colour or placement of a button, make sure the page itself helps users along their journey.
To validate your test, create an A/B testing plan:
- Set your goal. Shortlist your highest-potential testing opportunities. Where might testing offer the highest return? Often, it's the lowest-converting stage in your funnel.
- Establish a baseline. Record how your site or app is performing today. Document your goal with the test, as well as the metrics you'll use to judge success.
- Record your hypothesis. What do you think will happen? Record it in a specific, affirmative statement such as, "By reducing the number of form fields from eight to seven, we'll increase checkouts by 5 percent."
- Establish a time period. How long will the test run? Typically, 2-3 weeks is sufficient, but it depends on the size of your user base and expected traffic. You'll need enough visits for the test to be statistically significant. (If you have a testing platform, it should tell you this; a rough back-of-the-envelope check follows this list.)
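A quick way to sanity-check that time period is to estimate how many visitors each variant needs before you start. The sketch below uses the standard two-proportion normal approximation; the baseline conversion rate and the lift you hope to detect are hypothetical inputs you'd replace with your own numbers.

```typescript
// Minimal sample-size sketch for a two-variant (A/B) conversion test.
// Assumes a two-sided test at alpha = 0.05 with 80% power; the z-scores
// below are the corresponding standard-normal quantiles.
const Z_ALPHA = 1.96;  // two-sided alpha = 0.05
const Z_BETA = 0.8416; // power = 0.80

// Visitors needed *per variant* to detect an absolute lift of
// `minDetectableEffect` over `baselineRate`.
function requiredSampleSize(baselineRate: number, minDetectableEffect: number): number {
  const p1 = baselineRate;
  const p2 = baselineRate + minDetectableEffect;
  const pBar = (p1 + p2) / 2;

  const numerator =
    Z_ALPHA * Math.sqrt(2 * pBar * (1 - pBar)) +
    Z_BETA * Math.sqrt(p1 * (1 - p1) + p2 * (1 - p2));

  return Math.ceil((numerator * numerator) / (minDetectableEffect * minDetectableEffect));
}

// Example: baseline checkout conversion of 4%, hoping to detect a 1-point lift to 5%.
console.log(requiredSampleSize(0.04, 0.01)); // ≈ 6,700 visitors per variant
```

A testing platform will run this math for you, but doing it once by hand tells you whether your traffic can realistically support the test window you've chosen.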
With your goal, baseline, hypothesis, and time period recorded, set up your test. Give it a quality assurance check before it goes live, then launch and review the results. Did you prove or disprove your hypothesis? What can you improve in the next round of testing?
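When you review the results, the underlying question is whether the gap between control and variant is bigger than random noise. One minimal way to check is a two-proportion z-test; the visitor and conversion counts below are hypothetical stand-ins for the numbers your platform or analytics would report.

```typescript
// Two-proportion z-test: is the variant's conversion rate significantly
// different from the control's? Returns the z statistic; |z| > 1.96
// corresponds to p < 0.05 (two-sided).
function twoProportionZ(
  controlConversions: number, controlVisitors: number,
  variantConversions: number, variantVisitors: number,
): number {
  const pControl = controlConversions / controlVisitors;
  const pVariant = variantConversions / variantVisitors;
  // Pooled conversion rate under the null hypothesis of "no difference".
  const pPooled =
    (controlConversions + variantConversions) / (controlVisitors + variantVisitors);
  const standardError = Math.sqrt(
    pPooled * (1 - pPooled) * (1 / controlVisitors + 1 / variantVisitors),
  );
  return (pVariant - pControl) / standardError;
}

// Hypothetical result: 4.0% vs 4.8% conversion on 6,500 visitors per variant.
const z = twoProportionZ(260, 6500, 312, 6500);
console.log(z.toFixed(2), Math.abs(z) > 1.96 ? 'significant' : 'not significant');
```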
How do you create an A/B test?
To create an A/B test, you need an A/B testing platform: software that lets you select features within your site or app and test variants. Within the platform's visual editor, select an element, feature, or button you want to test and click to create a new test.
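Behind that visual editor, testing platforms typically assign each visitor to a variant deterministically, so the same person sees the same experience on every visit. The sketch below illustrates the general idea with a simple hash-based bucketing function; it is a generic illustration, not DevCycle's actual SDK, and the experiment key and traffic split are made up.

```typescript
// Deterministic variant assignment: hash (experimentKey + userId) into [0, 1)
// and map that number onto the traffic split. The same user always lands in
// the same bucket, so the experience stays consistent across sessions.
function assignVariant(
  experimentKey: string,
  userId: string,
  split = 0.5, // fraction of traffic sent to control
): 'control' | 'treatment' {
  const input = `${experimentKey}:${userId}`;
  // Simple FNV-1a hash; real platforms use stronger hashing, but the idea is the same.
  let hash = 2166136261;
  for (let i = 0; i < input.length; i++) {
    hash ^= input.charCodeAt(i);
    hash = Math.imul(hash, 16777619);
  }
  const bucket = (hash >>> 0) / 2 ** 32; // normalize to [0, 1)
  return bucket < split ? 'control' : 'treatment';
}

// Hypothetical usage: same user, same experiment, always the same variant.
console.log(assignVariant('checkout-form-fields', 'user-1234'));
```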
2. Elaborate on it
Based on what you've learned in the validate stage, decide whether to invest more resources into this line of inquiry. Were you able to get statistically significant results? Did it prove or disprove the hypothesis? Do you still believe it's a high-potential test opportunity?
If you proceed, invest more resources. Talk to more people on the team to inform your next hypotheses, and ask for engineering and design resources if needed.
As much as testing sounds like a purely statistical exercise, its success really hinges on your team's intuition and creativity. The more diverse and innovative your ideas for improving, say, a checkout page's conversion rate, the more useful your test results will be. Once you've run the test, record your learnings in a testing log.
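The testing log itself can be very simple; what matters is recording the same fields for every experiment so later tests can build on earlier ones. One possible shape, with illustrative rather than prescribed field names:

```typescript
// One entry per experiment in the testing log. Fields mirror the plan from
// the validate stage: hypothesis, baseline, metric, and the measured result.
interface TestLogEntry {
  experimentKey: string;          // e.g. 'checkout-form-fields'
  hypothesis: string;             // the affirmative statement you recorded
  primaryMetric: string;          // e.g. 'checkout conversion rate'
  baseline: number;               // metric value before the test
  variantResult: number;          // metric value for the winning variant
  sampleSizePerVariant: number;
  significant: boolean;           // did the result clear your significance bar?
  decision: 'ship' | 'iterate' | 'abandon';
  notes?: string;                 // context your future self will thank you for
}

// Hypothetical entry for the form-fields example used earlier.
const formFieldsTest: TestLogEntry = {
  experimentKey: 'checkout-form-fields',
  hypothesis: 'Reducing form fields from eight to seven will increase checkouts by 5 percent',
  primaryMetric: 'checkout conversion rate',
  baseline: 0.04,
  variantResult: 0.048,
  sampleSizePerVariant: 6500,
  significant: true,
  decision: 'iterate',
  notes: 'Lift concentrated on mobile; test a six-field form next.',
};
```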
3. Iterate further
Based on what you've learned in the elaborate phase, build more variant tests and iterate on what's worked. Did a more conversational tone in your push notification lead to more sign-ups? See if that tone increases conversions in other areas. Did you find that adding a testimonial on the checkout page increased purchases? Test several testimonials to see which converts the best.
4. Build to spec
After the iterate stage, you should be seeing meaningful results and lessons you can draw from your tests. Apply them to your product design. Some common examples of lessons teams learn through A/B testing:
- Users are more likely to respond to fear of loss than to the prospect of gain.
- Sending too many push notifications increases the opt-out rate.
- Social proof increases checkout conversions.
- Personalized article recommendations increase time on site.
- Fewer form fields increase conversions, but not necessarily sales.
A/B testing examples
Here are a few A/B testing examples: case studies on the impact an A/B testing framework can have.
The hiring platform Good&Co increased user engagement 27%
The Good&Co app uses quizzes to match job seekers with employers, and the team wanted to improve its onboarding flow to encourage users to take more quizzes. With A/B testing, the team tested two onboarding flows and identified one that increased user engagement by more than a quarter: 27 percent.
The social network Houseparty doubled its number of new user friend requests
With 20 million active users, the team at Houseparty was looking for ways to more scientifically test the impact of their product updates. "We would make changes, and see some metrics go up and some go down," says Jeff Needles, Head of Business Operations and Analytics at Houseparty. The team implemented A/B testing to isolate variables and it led them to make product changes that increased the number of first-day friend requests by 2x.
Ticket retailer TodayTix increased ticket sales 9% with one experiment
The team at TodayTix found that users were trying its Rush and Lottery ticket features at a much lower rate than expected. "We noticed a lot of new customers would come into the app and not make any purchases in their first couple sessions after downloading," says Pragya Saboo, Product Manager at TodayTix. Pragya and the team used A/B testing to test two new variations of the onboarding flow and discovered one that increased sales 9 percent.
More factors to try in your A/B testing plan
Some additional A/B testing examples:
- E-commerce A/B testing: Add or hide pricing, change product descriptions, test images, recommend products.
- SaaS user A/B testing: Selectively roll out new features, test landing pages, test reminder emails.
- Publishing A/B testing: Add or hide comments, test sign-up CTAs, test paywall copy.
- Food service A/B testing: Test flows, test contact rates, reorder questions in the FAQ.
- Consumer goods A/B testing: Test checkout flow, test online store design.