
If I set a productOpen activation event (a custom chat-opening event inside the app), Firebase starts counting from this event when evaluating the results, as stated in the Firebase A/B Testing documentation.

The question is: at what point does the traffic split for all tests inside Firebase occur? With the startSession event (by default, opening the app), with the selected activation event, or with something else?

I'll be very grateful for an answer!

  • Could you elaborate on what you mean by "traffic split"? Users can very well be part of multiple A/B tests at the same time, if that's what you mean. It is your responsibility to prevent that from happening (if you don't want it), e.g. by utilizing appropriate user targeting. However, this can quickly get complicated, so generally try not to run tests optimizing for the same KPIs at the same time. – ubuntudroid May 27 '21 at 10:23
  • @ubuntudroid No, it is not about multiple tests. I will try to explain: there are two variants, baseline and modified, and a user opens an application that has an active test. Events in the Firebase A/B test are counted from a certain point, the opening of the chat inside the app: this is the so-called "activation event". The question is: at what point exactly does the user get the experiment variant, during the app-opening event ("startSession") or before that? This is necessary in order to understand whether the user is registered at that moment (whether they have received a user_id) or not yet – S_Kseniia May 27 '21 at 11:27
  • Actually, I want to understand whether Firebase counts not-yet-authorized users, because before certain actions in the application the user is not assigned a user_id – S_Kseniia May 27 '21 at 11:32

1 Answer


You need to differentiate between taking part in a test and counting toward the results of a test.

Users will get values from one of the test variants if they are part of the target group. They will get those values right when the device fetches and activates data from Remote Config.

However, at that point they will not necessarily be part of the results of the test. That's what the activation event is for, just as you correctly mentioned in your comment to your question.
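To make the two moments concrete, here is a minimal sketch (assuming Kotlin on Android; the Remote Config parameter name is hypothetical, while the "productOpen" event name comes from the question):

```kotlin
import com.google.firebase.analytics.FirebaseAnalytics
import com.google.firebase.remoteconfig.FirebaseRemoteConfig

// Moment 1: the user is assigned a variant's values as soon as the device
// fetches and activates Remote Config data.
fun applyExperimentValues(onValue: (String) -> Unit) {
    val remoteConfig = FirebaseRemoteConfig.getInstance()
    remoteConfig.fetchAndActivate().addOnCompleteListener { task ->
        if (task.isSuccessful) {
            // "chat_feature_flag" is a hypothetical parameter name.
            onValue(remoteConfig.getString("chat_feature_flag"))
        }
    }
}

// Moment 2: the user only starts counting toward the test's results once
// the activation event (the custom "productOpen" from the question) fires.
fun onChatOpened(analytics: FirebaseAnalytics) {
    analytics.logEvent("productOpen", null)
}
```

Note that moment 1 happens regardless of whether moment 2 ever does: a user can receive a variant's values without ever showing up in the results.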

Btw, I've just recently raised a related question about whether users would also leave a test at some point if the target requirements are no longer fulfilled. However, there has been no reply yet. The same goes for my other related question on whether the user counts toward the test if the activation event happened before the test was rolled out. Those questions might also be relevant for you given the scenario you describe.

Generally, the documentation for A/B Testing leaves a lot to be desired, but it is still in beta, so here's hoping that all those questions will be answered at some point.

ubuntudroid
  • Thanks so much for your answer! I agree with you about the documentation :) and I will follow your questions too, it is a tricky situation – S_Kseniia May 28 '21 at 15:12
  • I wonder: if we only expose 10% of the user base to an experiment, does it mean that out of 10 TARGET users, only 1 will join the test? Or that 10 will join the test but only 1 counts toward the results? – AmyN Dec 09 '22 at 07:31
  • @AmyN The former should be the case. – ubuntudroid Dec 10 '22 at 11:04
  • The tool actually says that by default the traffic will be split 50/50 between control and treatment if we don't change the variant weights. But their documentation mentions little about the advanced models used, including the contextual model (or multi-armed bandit), and such a model can dynamically assign more traffic to a better-performing variant to see if it continues performing better, thus creating disproportionate sample sizes. I wonder whether it is really the case that the variant weight is 50/50 (user setting) while the actual traffic is different? Do you have any idea about this? Thanks – AmyN Dec 12 '22 at 04:09
  • Hm, no, unfortunately not. Good question though (and maybe worth its own question here on SO). You could try to forward the A/B flags to a third-party tracking tool like Amplitude and measure there whether you get the partitions you'd expect from what you set in Firebase (see the sketch below for the basic idea). I would really be interested in your findings. @AmyN – ubuntudroid Dec 12 '22 at 19:10
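A rough sketch of the verification idea from the last comment, again in Kotlin, using a Firebase Analytics user property as a stand-in for the third-party tool (the property and parameter names are hypothetical; the same value could just as well be forwarded to Amplitude):

```kotlin
import com.google.firebase.analytics.FirebaseAnalytics
import com.google.firebase.remoteconfig.FirebaseRemoteConfig

// Hypothetical parameter name; use whichever flag your experiment varies.
const val VARIANT_PARAM = "experiment_variant"

fun tagVariantForVerification(analytics: FirebaseAnalytics) {
    val remoteConfig = FirebaseRemoteConfig.getInstance()
    remoteConfig.fetchAndActivate().addOnCompleteListener { task ->
        if (task.isSuccessful) {
            // Record which variant this install received so an external
            // tool can count users per variant and compare the observed
            // split against the weights configured in Firebase.
            val variant = remoteConfig.getString(VARIANT_PARAM)
            analytics.setUserProperty("ab_variant", variant)
        }
    }
}
```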