Mobile A/B Testing: Best Practices | Mixpanel

Mobile A/B Testing: Walkthrough & Best Practices

Using Mobile A/B testing, you can test changes to your app without writing code. You can measure the impact on any conversion, like signups or purchases. In this Community Tip, we’ll provide a walkthrough and best practices for running A/B tests in your iOS or Android app.


To get started with Mobile A/B testing, all you need is the Mixpanel SDK installed and initialized in your app. If you haven’t done this step yet, you can jump over to our iOS Integration Docs or Android Integration Docs for instructions on installing Mixpanel.

To build more complex A/B tests that go beyond modifying UI elements, such as changing view flows or adjusting defined constants, you will need to implement developer tweaks (iOS / Android) in your app. Tweaks let you adjust actual variables in your code from the comfort of your Mixpanel dashboard. Once you add a Tweak to your code, it will appear in the Mixpanel UI when your app is connected to the A/B testing designer.
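As a rough mental model of how a tweak behaves, here is a minimal, self-contained sketch in plain Java. This is a stand-in for illustration only, not the real MPTweak/Tweak APIs: a tweak is a named value with a compiled-in default that the dashboard can override.

```java
import java.util.HashMap;
import java.util.Map;

public class TweakRegistry {
    private final Map<String, Object> overrides = new HashMap<>();

    // Called when the A/B testing designer pushes a new value.
    void override(String name, Object value) {
        overrides.put(name, value);
    }

    // Code reads the tweak: dashboard value if present, else the default.
    @SuppressWarnings("unchecked")
    <T> T get(String name, T defaultValue) {
        return (T) overrides.getOrDefault(name, defaultValue);
    }

    public static void main(String[] args) {
        TweakRegistry tweaks = new TweakRegistry();
        System.out.println(tweaks.get("game speed", 1.0)); // default until overridden
        tweaks.override("game speed", 2.5);
        System.out.println(tweaks.get("game speed", 1.0)); // dashboard value wins
    }
}
```

The key design point the real SDKs share is the fallback: code always passes a default, so the app behaves sensibly even before any experiment data arrives.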

It is important to note that different UI elements have varying levels of adjustable properties. Below is a quick reference to the different ways the basic built-in elements from the iOS UIKit can be modified using A/B testing.

Table showing levels of adjustability for user interface.

Create an A/B test

We are going to borrow Danqing’s open-source game 2048 and pretend it is our incredibly addictive game. One thing we have learned about this application so far is that games go quickly. So, we have a hypothesis: if we can make the tiles move more quickly, users will play more games, which will cause them to come back to our app even more. Let’s create an experiment using Mixpanel’s Mobile A/B testing to test this hypothesis.

We have set the variable for animation speed in our game as a Tweak that we can adjust within our A/B test. We create three variations for our users:

  • regular speed
  • faster than regular
  • really fast
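To make the three variants concrete, here is a self-contained sketch of how a speed Tweak could drive the tile animation duration. The base duration, the multiplier values, and the `animationDuration` helper are all illustrative assumptions, not values from the real game or the Mixpanel API:

```java
import java.util.Map;

public class SpeedVariants {
    // Base animation duration in milliseconds (hypothetical value).
    static final double BASE_DURATION_MS = 100.0;

    // One speed multiplier per variant; these numbers are illustrative.
    static final Map<String, Double> SPEED_MULTIPLIERS = Map.of(
            "regular", 1.0,
            "faster", 1.5,
            "really-fast", 2.5);

    // A higher speed multiplier means a shorter animation.
    static double animationDuration(String variant) {
        double speed = SPEED_MULTIPLIERS.getOrDefault(variant, 1.0);
        return BASE_DURATION_MS / speed;
    }

    public static void main(String[] args) {
        for (String variant : SPEED_MULTIPLIERS.keySet()) {
            System.out.println(variant + " -> " + animationDuration(variant) + " ms");
        }
    }
}
```

In the real experiment, only the multiplier would be a Tweak; the rest of the animation code stays fixed, which is what makes the comparison between variants fair.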

Test your experiment first

Once you have created your variants, it is a good idea to test the experiment in your app to make sure the A/B test behaves the way you planned. Since this is only a test, you will want to be sure you are the only user who receives the experiment. You can do this by targeting yourself in the A/B test, filtering on a unique identifier such as your email address.

Once you click “Save and Run,” the experiment is queued to send to your device, and you can jump into the app to test it directly. To see the Mixpanel SDK fetching new A/B tests and applying them to your app, enable debugging and logging (iOS / Android) within your app.

With debugging and logging enabled, you can see all of the behind-the-scenes actions the Mixpanel library performs while your application runs. For example, we can see that our application became active and that the library then checked for any experiments. The logs will even show the raw experiment data that was retrieved from our server and ultimately applied for the current user.

Logs show raw data exported from the server in real time.
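On Android, one common way to turn on this debug logging is a meta-data flag inside the `<application>` element of AndroidManifest.xml. The flag name below matches the Mixpanel Android SDK's MPConfig options, but it may differ between SDK versions, so check the linked docs for your version:

```xml
<!-- Inside the <application> element of AndroidManifest.xml -->
<meta-data
    android:name="com.mixpanel.android.MPConfig.EnableDebugLogging"
    android:value="true" />
```

Remember to remove or disable this flag before shipping a release build, since verbose logging is only useful during development.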

After your experiment has been thoroughly tested you are ready to release it to the masses!

Analyzing Results

There are a couple of ways in which experiment information is recorded within your Mixpanel data:

“Experiment Started” Event:

This event is tracked at the point at which a user is delivered an experiment on their device. You can therefore use this event as a marker for when an experiment has actually been deployed to a device and is active for a user.

“Experiments” Super Property:

This super property stores a list of all experiments and variants a user has ever been exposed to. Since super properties are appended to all subsequent events, you can filter your reports by the experiments a user was previously shown.
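The filtering this enables can be sketched with a toy model. The `Event` record and the numeric experiment ids below are made up; this stands in for what Mixpanel does server-side when you segment a report by the “Experiments” property:

```java
import java.util.List;
import java.util.stream.Collectors;

public class ExperimentFilter {
    // A toy event record: the event name plus the "Experiments" super
    // property, listing every experiment id the user has been exposed to.
    record Event(String name, List<Integer> experiments) {}

    // Keep only events sent by users who were exposed to the experiment.
    static List<Event> byExperiment(List<Event> events, int experimentId) {
        return events.stream()
                .filter(e -> e.experiments().contains(experimentId))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Event> events = List.of(
                new Event("Game Started", List.of(101, 204)),
                new Event("Game Started", List.of(204)),
                new Event("Purchase", List.of(101)));
        // Events from users who saw experiment 101 at some point.
        System.out.println(byExperiment(events, 101).size());
    }
}
```

Because the super property persists, this catches events sent long after the experiment itself has ended, which is exactly what makes it useful for long-run analysis.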

Ultimately, these two tools give you distinct methods of analyzing your experiment data. If you would like to see stats based on an active experiment, use the “Experiment Started” event with the “Experiment Id” property set to the experiment you wish to analyze. If you wish to see the behavior of users exposed to a previous experiment that is not necessarily active, use the “Experiments” property and segment by the specific experiment.

Remember our 2048 game speed experiment from above? By taking advantage of both the “Experiment Started” event and the “Experiments” property we can create a retention report to see if more people came back to our app from using a different game speed!

This retention report compares users in an active mobile a/b testing experiment.

For this example, it looks like our second variant brought users back slightly more in the long run. With this knowledge in hand, you can push a change in your code so the winning value from your experiment is now set as the default for all users in the next release of the app.
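The comparison behind that retention report can be sketched in a few lines. The exposure and return counts below are illustrative numbers, not real experiment data; the point is simply that each variant's retention rate is returned users divided by exposed users:

```java
import java.util.Map;

public class VariantRetention {
    // Fraction of exposed users who returned within the retention window.
    static double retentionRate(int exposed, int returned) {
        return (double) returned / exposed;
    }

    public static void main(String[] args) {
        // Illustrative counts per variant: {users exposed, users returned}.
        Map<String, int[]> variants = Map.of(
                "regular", new int[]{1000, 180},
                "faster", new int[]{1000, 215},
                "really-fast", new int[]{1000, 190});

        String best = null;
        double bestRate = -1.0;
        for (Map.Entry<String, int[]> entry : variants.entrySet()) {
            double rate = retentionRate(entry.getValue()[0], entry.getValue()[1]);
            if (rate > bestRate) {
                bestRate = rate;
                best = entry.getKey();
            }
        }
        System.out.println("Winning variant: " + best);
    }
}
```

In practice you would also want enough users per variant for the difference to be statistically meaningful before declaring a winner.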

Best Practices

One A/B test, many variants

Running multiple experiments at once creates a test environment in which finding the root cause of any conversion improvement is difficult. We highly recommend creating individual tests, reaching conclusive results, making the appropriate code changes, and then moving on to brand-new tests.

Let’s apply this concept to our game from earlier. For our experiment we may have multiple things we want to A/B test: an integer for speed, a boolean for night mode, and a text value for the message shown when you lose. In this scenario, you should create one A/B test in Mixpanel with the different Tweak values as its variants, rather than one test for each individual Tweak value.

The result of creating a test in this manner is that you will be able to compare all of these scenarios to one another, allowing you to conclusively decide which factor results in the best conversions.

Don’t rely on a variant as your first view

When your application opens and the Mixpanel SDK is initialized, the library by default requests experiment information from our servers. We work hard to optimize our server response time; however, it is important to note that on the initial app open you will not instantly receive an experiment. This is intentional, to prevent performance issues that can occur due to latency when using this feature in your app.

If you wish to A/B test on the initial app view, the process differs between Android and iOS. For iOS, you should enable the option checkForVariantsOnActive (enabled by default) to grab data when the app is opened and then utilize the joinExperimentsWithCallback method to apply experiment information once it is retrieved from the server. For Android, you should employ the addOnMixpanelUpdatesReceivedListener to know when test data is available and then the joinExperimentIfAvailable() method to apply the experiment to the view.
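The Android flow above, wait for an updates callback and only then join the experiment, can be sketched with stand-in types. Everything below (`UpdatesListener`, `ExperimentSource`) is a simplified stand-in written for this post, not the SDK's `OnMixpanelUpdatesReceivedListener` or `MixpanelAPI.People` classes:

```java
import java.util.ArrayList;
import java.util.List;

public class FirstViewExperiment {
    // Stand-in for the SDK's updates-received listener interface.
    interface UpdatesListener { void onUpdatesReceived(); }

    // Stand-in for the SDK object that fetches experiments asynchronously.
    static class ExperimentSource {
        private final List<UpdatesListener> listeners = new ArrayList<>();
        private boolean experimentAvailable = false;

        void addListener(UpdatesListener listener) {
            listeners.add(listener);
        }

        // Simulates the experiment payload arriving from the server.
        void deliverExperiment() {
            experimentAvailable = true;
            listeners.forEach(UpdatesListener::onUpdatesReceived);
        }

        boolean joinExperimentIfAvailable() {
            return experimentAvailable;
        }
    }

    public static void main(String[] args) {
        ExperimentSource source = new ExperimentSource();
        final boolean[] variantApplied = {false};

        // Render the default view first; apply the variant only once
        // experiment data has actually arrived.
        source.addListener(
                () -> variantApplied[0] = source.joinExperimentIfAvailable());

        // Later, the server response comes back and the callback fires.
        source.deliverExperiment();
        System.out.println("variant applied: " + variantApplied[0]);
    }
}
```

The design choice to mirror here is that the view is never blocked waiting on the network: the default renders immediately, and the variant is applied in the callback when (and only when) data arrives.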

Do not forget to take delivery latency into account when using this type of A/B test, as unaccounted-for latency can result in a poor experience for end users.

Bind your Tweaks (iOS only)

An A/B test’s Tweak values are only assigned once you have entered the experiment and that Tweak’s code block is executed. Until this happens, the Tweak will show its default value. The best practice is to use MPTweakBind to have Tweak values applied immediately after a test is received:

// Create the label whose text we want to A/B test.
UILabel *label = [[UILabel alloc] init];
// Bind the label's "text" property to the Tweak named @"label text",
// falling back to the default @"Hello World" until an experiment value arrives.
MPTweakBind(label, text, @"label text", @"Hello World");

Now when this Tweak is changed you will immediately see the new value of the Tweak applied to the given object and property. This allows you to safely use Tweaks throughout your app without having to worry if the Tweak value from the experiment has or has not been applied yet.

Target users on static properties

Our iOS library queues events and profile updates, which are batched into requests and flushed every 60 seconds or when the app is backgrounded. This makes it tricky to coordinate experiment targeting based on people properties that are not yet set at the moment the app opens.

Take an experiment that was set up to show a certain feature after a new app release, where the targeting criterion was an “App Version” property equaling 2.0.0. It would take a people set call to update the “App Version” property, plus a flush of the queued update, before the user actually qualifies for the experiment. The A/B test would then not show until the user hits a new decide call on the next restart of the app.
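The timing trap described above can be sketched with a toy queue. The `ProfileUpdateQueue` class and its `qualifies` check are stand-ins written for this post (the real flush is time- and lifecycle-driven, and the targeting check happens on Mixpanel's servers):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ProfileUpdateQueue {
    // Updates queued on the device, not yet sent.
    private final List<Map.Entry<String, String>> pending = new ArrayList<>();
    // What the server knows about the profile after the last flush.
    private final Map<String, String> serverProfile = new HashMap<>();

    // Queue a people-property update locally (like people.set before a flush).
    void set(String property, String value) {
        pending.add(Map.entry(property, value));
    }

    // Flush the batch to the "server"; only now can targeting see the values.
    void flush() {
        pending.forEach(e -> serverProfile.put(e.getKey(), e.getValue()));
        pending.clear();
    }

    // Stand-in for the server-side targeting check.
    boolean qualifies(String property, String expected) {
        return expected.equals(serverProfile.get(property));
    }

    public static void main(String[] args) {
        ProfileUpdateQueue queue = new ProfileUpdateQueue();
        queue.set("App Version", "2.0.0");
        System.out.println(queue.qualifies("App Version", "2.0.0")); // false: not flushed yet
        queue.flush();
        System.out.println(queue.qualifies("App Version", "2.0.0")); // true: server has the value
    }
}
```

The gap between `set` and `flush` is exactly the window in which a freshly-set property cannot yet be used for targeting, which is why static properties make more reliable targeting criteria.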

In scenarios like these, you either have to accept that the test will apply later in the app’s lifecycle, or use the A/B test’s targeting to target all of your users so the test is delivered more quickly.

A/B test, analyze, modify, repeat

While highlighted in the setup section above, it is important to reiterate that to effectively use A/B testing you should not keep experiments running indefinitely. Instead, you should A/B test a portion of your app, analyze the results from the experiment, and then make the appropriate modifications to your code.

If you repeat the above process in the same manner for each A/B test you run, over time your app UX will improve based on the A/B test results and you will be able to continue running additional A/B tests on the other portions of your app.

Have questions about setting up and utilizing Mixpanel mobile A/B testing in your app? Reach out to speak to someone smart, quickly.
