
I wanted to perform two A/B tests on an app using Firebase A/B Testing with Remote Config.

The problem is that the two tests' audiences should be mutually exclusive: being part of both experiments might pollute the results.

I've thought of setting a Firebase Analytics user property when a user enters Experiment 1 and excluding that property value from Experiment 2's audience, but I'm afraid a user might still enter both experiments simultaneously when fetching the Remote Config values.
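The approach I had in mind would look roughly like this; a minimal plain-Java sketch, where the property value `exp1` and the helper names are my own assumptions, and the actual Firebase call is shown only as a comment:

```java
public class AudienceFilter {
    // Hypothetical user-property value marking membership in Experiment 1
    static final String IN_EXPERIMENT_1 = "exp1";

    // Called once the user is actually exposed to Experiment 1's variant.
    // In the app this would be backed by the real SDK call:
    //   FirebaseAnalytics.getInstance(context)
    //       .setUserProperty("experiment", IN_EXPERIMENT_1);
    static String markInExperiment1() {
        return IN_EXPERIMENT_1;
    }

    // Experiment 2's audience would exclude users whose property holds this value.
    static boolean eligibleForExperiment2(String experimentProperty) {
        return !IN_EXPERIMENT_1.equals(experimentProperty);
    }

    public static void main(String[] args) {
        String prop = markInExperiment1();
        System.out.println(eligibleForExperiment2(prop));  // false
        System.out.println(eligibleForExperiment2(null));  // true
    }
}
```

My worry is the window between the first Remote Config fetch and the moment the user property is set, during which the user could qualify for both audiences.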

Is there a better way to prevent a user from entering both experiments?

willy

3 Answers


(For the purpose of this answer, I'm assuming you're talking about the new A/B testing framework we just launched last week)

So right now, you can't really ensure mutually exclusive experiment groups with the new A/B testing framework. If you specify that 10% of your users are in experiment A and 10% are in experiment B, then a small portion of your users in experiment B (specifically, about 10% of them) will also be in experiment A.

The good news is that those users from experiment A should be evenly distributed among your variants in experiment B. But still, if you find yourself in a case where you feel like these experimental users will favor one variant over another (and thereby skew your results), you have two options:

  1. Run your A/B tests serially instead of in parallel. Just wait until you've stopped your first experiment before running your second.

  2. If it makes sense, try combining them into a single multi-variant experiment. For example, let's say experiment A is adding a faster sign-in flow, and experiment B is pushing your sign-in flow until later in the process. You could try creating a multi-variant experiment like this:

+---------------------+---------------+----------------+
|        Group        | Sign-in speed | Sign-in timing |
+---------------------+---------------+----------------+ 
| Control             | (default)     | (default)      |
| Speedy              | Speedy        | (default)      |
| Deferred            | (default)     | Deferred       |
| Speedy and Deferred | Speedy        | Deferred       |
+---------------------+---------------+----------------+

The benefit here is that you'll get some extra insight into whether being in both experiments really does affect your users in the ways you're suspecting.

Todd Kerpelman
  • Thanks Todd. I will do this multi-variant experiment as a workaround, but, do you know if this feature will be considered for the future? – willy Nov 10 '17 at 10:08
  • @todd-kerpelman what if I want to run 2 experiments targeting 100% of users each? In my case I am confident that the variables I set in each of them are not related, so I do not expect any cross-correlation. Will Firebase still be able to determine the influence of each variable? – Andriy Gordiychuk Aug 13 '19 at 10:30

I would set a user property with a random number between 1 and 10, assigned only once at installation.

Then you should be able to run "exclusive A/B testing" by filtering users on that property.
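As a sketch of this idea in plain Java (the property name `experiment_bucket` is my own choice, and the Firebase call is shown only as a comment):

```java
import java.util.Random;

public class ExperimentBucket {
    static final Random RANDOM = new Random();

    // Returns the previously stored bucket if one exists; otherwise assigns
    // a random bucket 1..10 exactly once. In the app, the result would be
    // persisted (e.g. in SharedPreferences) and reported via:
    //   FirebaseAnalytics.getInstance(context)
    //       .setUserProperty("experiment_bucket", String.valueOf(bucket));
    static int assignBucket(Integer storedBucket) {
        if (storedBucket != null) {
            return storedBucket;
        }
        return RANDOM.nextInt(10) + 1;
    }

    public static void main(String[] args) {
        int bucket = assignBucket(null);
        System.out.println(bucket >= 1 && bucket <= 10);   // true
        System.out.println(assignBucket(bucket) == bucket); // true
    }
}
```

Experiment A could then target, say, buckets 1–5 and Experiment B buckets 6–10, so no user qualifies for both audiences.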

Bright Lee

It should be possible to run mutually exclusive A/B testing experiments in parallel by leveraging an "activation event"; however, this requires extra work to maintain some state on the client side.
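The client-side state could be sketched like this in plain Java; the experiment names `exp_a`/`exp_b` are hypothetical, and the real Firebase logging call appears only as a comment:

```java
public class ActivationGate {
    // Returns the persisted assignment for this install if one exists;
    // otherwise flips a coin once. In the app, the result would be persisted
    // so it never changes across launches.
    static String assignExperiment(String stored, double roll) {
        if (stored != null) {
            return stored;
        }
        return roll < 0.5 ? "exp_a" : "exp_b";
    }

    // Fire an experiment's activation event only for the assigned experiment,
    // e.g. via the real SDK call:
    //   FirebaseAnalytics.getInstance(context)
    //       .logEvent(assigned + "_activation", null);
    static boolean shouldLogActivation(String assigned, String experiment) {
        return experiment.equals(assigned);
    }

    public static void main(String[] args) {
        String assigned = assignExperiment(null, 0.3);
        System.out.println(assigned);                               // exp_a
        System.out.println(shouldLogActivation(assigned, "exp_b")); // false
    }
}
```

Each experiment would then be configured in the Firebase console with its own activation event; since a given install only ever logs one of the two events, a user activates at most one experiment.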