Experimentation on ASO Giraffe
History
Previously, we relied on an integration with Google Optimize to provide our A/B testing functionality. This had a number of drawbacks:
- Most mobile marketers aren't familiar with web-based optimisation tools, and may not have the web analytics knowledge needed to set things up quickly.
- The Bayesian approach that Google Optimize uses is not well suited to the types of tests we need to run.
In building ASO Giraffe's experiment feature, we wanted to combine rigorous statistical analysis with a practical approach to the difficulties that mobile marketers face when running tests like these.
From a statistical perspective, it was important that:
- The insights that come from ASO Giraffe's tests are reliable - When we return a result, you can be confident in it.
- The algorithms we use have a strong statistical basis - We didn't want to reinvent the wheel; instead, we relied on time-tested A/B testing implementations that our users can trust.
- The inputs are clear in advance - It's poor practice to let tests run on and on until the results you're looking for come up, or to stop tests early. We want to be able to give a traffic number in advance and stick to it. This maximises efficiency and makes the results more predictable and reliable.
From a practical perspective, it was important that:
- We make things easy to understand - The necessary information on what to do next should always be available to ASO Giraffe's users; it should never be unclear what the next step is.
- We don't bamboozle you with jargon and make ourselves feel like very smart people™ - We want to explain things as simply as possible so you know exactly what's happening at every stage.
- The tests get the maximum insight from as little traffic as possible - Because a test introduces an extra step into your users' journey, it makes sense to make it as efficient as possible and minimise the interruption.
Our solution
In light of these two perspectives, we looked around at the available solutions and decided to go with a Sequential A/B testing implementation.
One of the strongest features of Sequential A/B tests is that they are relatively easy to understand. Here are the decision rules for the test (a code sketch follows the list):
- At the beginning of the experiment, choose your sample size, N (in our case, the amount of traffic you'll be sending to your spoof page).
- Assign subjects randomly to a control and a treatment variation, 50% to each.
- Keep track of the number of installs on your treatment page, T.
- Keep track of the number of installs on your control page, C.
- If T - C reaches 2√N, stop the test. Declare the treatment page the winner.
- If T + C reaches N, stop the test. Declare no winner.
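To make these rules concrete, here's a minimal Python sketch of the decision rules above. It is illustrative rather than ASO Giraffe's actual implementation: the function name, the simulated install rates, and the visitor-by-visitor loop are all assumptions made for the example.

```python
import math
import random


def sequential_ab_test(n, p_control, p_treatment, seed=None):
    """Simulate the sequential decision rules described above.

    n           -- the sample size N, fixed before the test starts
    p_control   -- install rate of the control page (simulation input)
    p_treatment -- install rate of the treatment page (simulation input)

    Returns "treatment" if the treatment page wins, or None if the
    test ends with no winner.
    """
    rng = random.Random(seed)
    threshold = 2 * math.sqrt(n)  # e.g. N = 10,000 gives a threshold of 200
    t = c = 0  # installs on the treatment page (T) and the control page (C)

    while t + c < n:  # rule: stop with no winner once T + C reaches N
        # Assign the next visitor randomly, 50% to each variation.
        if rng.random() < 0.5:
            if rng.random() < p_treatment:  # did the visitor install?
                t += 1
        else:
            if rng.random() < p_control:
                c += 1

        if t - c >= threshold:  # rule: T - C reached 2*sqrt(N)
            return "treatment"

    return None
```

For example, `sequential_ab_test(10_000, p_control=0.10, p_treatment=0.12)` would simulate a test where the treatment page converts 12% of visitors against a 10% control.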
Though simple in construction, Sequential A/B testing is a very powerful tool for running A/B tests. You can read more about the logic and maths behind it here.
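One nice consequence of having the rules in code is that you can sanity-check them by simulation. The sketch below (again illustrative, reusing the hypothetical `sequential_ab_test` function from above) runs many A/A tests, where both pages have the same install rate, to estimate empirically how often the rule declares a winner purely by chance:

```python
# Run many A/A tests (identical install rates on both pages) and count
# how often the decision rule declares a spurious winner. A smaller N
# keeps the simulation quick.
runs = 500
spurious = sum(
    sequential_ab_test(1_000, p_control=0.10, p_treatment=0.10, seed=i)
    == "treatment"
    for i in range(runs)
)
print(f"Spurious winners: {spurious / runs:.1%}")
```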
Conclusion
We're really excited to be releasing native experiment support in ASO Giraffe, and can't wait to have you using the tool. If you have any questions, please message us via the live chat on the site, or send an email to fede@asogiraffe.com. We look forward to working with you!