Mutually Exclusive Experiments

It would be great to have an option to make experiments mutually exclusive without writing complex JS. A simple option within targeting would be helpful.

For example, if a user is entered into experiment 1, then don't allow the user into experiment 2. Or, if a user is included in any live experiments, then don't allow the user in the current experiment.

I agree that this option should be available and easy to implement for the experiments.

As far as I know, at the moment the only way to detect and target such a visitor is to use Custom JavaScript targeting, where you read the Optimizely cookie to check whether someone is part of an active experiment or not.
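To make that concrete, here is a minimal sketch of that kind of cookie check. It assumes the classic "optimizelyBuckets" cookie, which (in Optimizely Classic) held a URL-encoded JSON map of experiment ID to variation ID; the cookie name and format are assumptions to verify against your own snippet version. It is written as a pure function over the raw cookie string so it can be tested outside a browser (pass document.cookie in real targeting code):

```javascript
// Hypothetical helper: given the raw cookie string, decide whether the
// visitor is already bucketed into any experiment.
// Assumes the classic "optimizelyBuckets" cookie, a URL-encoded JSON map of
// experiment ID -> variation ID (an assumption; inspect your own cookies
// before relying on it).
function isInAnyExperiment(cookieString) {
  var match = cookieString.match(/(?:^|;\s*)optimizelyBuckets=([^;]*)/);
  if (!match) return false;
  try {
    var buckets = JSON.parse(decodeURIComponent(match[1]));
    // Any key at all means the visitor was bucketed into some experiment.
    return Object.keys(buckets).length > 0;
  } catch (e) {
    // Malformed cookie: treat as "not in any experiment".
    return false;
  }
}
```

In a Custom JavaScript targeting condition you would then return `!isInAnyExperiment(document.cookie)` to admit only visitors who aren't in any live experiment.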

This is a great product idea, but there are some edge cases that need to be considered.

Consider Experiment A running on the homepage and Experiment B running on the entire site's navigation bar.

1) What if a user lands on the homepage and isn't in Experiment A or Experiment B? Should the tool choose one of the 2 mutually exclusive experiments at random?

2) What if a user lands on a product page and isn't in Experiment A or Experiment B? Should the tool automatically include them in Experiment B because they are not eligible for Experiment A? If the likelihood that visitors enter the site outside the homepage is high, then the traffic to Experiment A will be much lower than the traffic to Experiment B.

In summary, I think it's pretty straightforward to configure for experiments whose URL targeting conditions are the same, but it gets more complicated with experiments that target different parts of the site and have different likelihoods of being visited.

Just throw in the ID of the experiment you want to exclude. The mutually exclusive JS that is documented in the support section is definitely useful (especially for excluding groups of experiments), but if you're looking for a simple exclusion of one experiment from another give the above code a try. Hope that is helpful!!
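The code block from that post doesn't appear to have survived in this thread, but the idea can be sketched roughly as follows. This is a stand-in, not the original: it assumes Optimizely Classic's window.optimizely.activeExperiments, an array of the visitor's active experiment IDs (confirm the exact property name for your snippet version). The check is factored into a pure function so it can be tested directly:

```javascript
// Sketch of a single-experiment exclusion check for Custom JavaScript
// targeting. "activeExperiments" mirrors what Optimizely Classic exposed as
// window.optimizely.activeExperiments -- an assumption to verify.
function eligibleForExperiment(activeExperiments, excludedExperimentId) {
  // Admit the visitor only if they are NOT already in the excluded experiment.
  return (activeExperiments || []).indexOf(excludedExperimentId) === -1;
}
```

In the targeting condition itself you would call something like `eligibleForExperiment(window.optimizely.activeExperiments, '1234567890')`, where the ID is the experiment you want to exclude.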

This is a request I've had since our evaluation period with Optimizely nearly 2 years ago. I sympathize with their desire to prioritize feature requests based on demand. That said, I'm delighted to see this thread and would encourage all who read it to add a +1 comment if you want to see this as a built-in targeting option.

Meanwhile, as a counterpoint to the post from @Alexis, I'd like to offer a solution for those who wish to exclude ALL experiments from one another (as is the case with our organization).

Building off of the recommended JS for custom targeting criteria, I posed the question of whether or not this could be made more generic. In other words, could we not simply pull the list of active experiments from the Optimizely object rather than having to manually create an array in each experiment (then maintain those arrays when other experiments came online)?

With the aid of Optimizely support, I've refactored this code to sit in all experiments and automatically exclude all other active experiments. The 4 biggest considerations were:

1) How do we ensure that we're only including running experiments in our random-distribution pool? Otherwise, the code would treat all non-archived experiment IDs as potential assignments and significantly reduce the traffic allotment.

2) What if we want to run the occasional "hotfix" experiment independent of all the normally exclusive ones?

3) How do we accurately predict traffic percentages when we have multiple exclusive experiments?

4) There's no way (yet) to make this code fully generic. We will always have to manually set the curExperiment ID for each test.

The answer to question 1 is to check typeof 'enabled' when building the experiment array; only running experiments have it defined.

The answer to question 2 is simply not to include the custom targeting JS in the experiment that we want to run in tandem.
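Putting the answers to questions 1 and 2 together, the generic check might look roughly like this. It is a sketch, not the refactored code from the post: allExperiments stands in for the experiment map on the Optimizely data object, each entry is assumed to carry an 'enabled' flag only while the experiment is running, and curExperiment must still be set by hand per consideration 4. The random draw is passed in as a parameter so the logic is testable:

```javascript
// Hypothetical sketch of generic mutual exclusion across all running
// experiments. "allExperiments" stands in for the experiment map on the
// Optimizely data object (an assumption about its shape).
function pickExclusiveExperiment(allExperiments, curExperiment, randomValue) {
  // Question 1: only experiments with a defined 'enabled' flag are actually
  // running, so only those enter the random-distribution pool. (Question 2 is
  // handled by simply not installing this code on "hotfix" experiments.)
  var running = [];
  for (var id in allExperiments) {
    if (typeof allExperiments[id].enabled !== 'undefined') {
      running.push(id);
    }
  }
  running.sort(); // deterministic ordering before the random draw
  if (running.length === 0) return null;
  // Assign the visitor to exactly one running experiment.
  var chosen = running[Math.floor(randomValue * running.length)];
  // The targeting condition passes only if this experiment won the draw.
  return chosen === curExperiment ? curExperiment : null;
}
```

A real targeting condition would call this with a stable per-visitor random value (e.g. one derived from the visitor's Optimizely ID) so the same visitor always lands in the same experiment.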

Although I haven't had time to prove or disprove my hypothesis, I believe that this solution will only work if the 'hotfix' experiment is the most recently created experiment. If true, this would mean that whenever new exclusive experiments are introduced, we would have to duplicate the 'hotfix' experiment and restart it after all the other exclusive experiments are created in order to keep it alive.

Question 3 becomes complicated because traffic allocation is all drawn from the same percentage. In other words, if we have 3 experiments each running at 20% traffic, that doesn't mean we'll be using 60% of our site traffic divided 3 ways. All 3 experiments will be sharing the same 20% of the site traffic.

[This last comment is based on my calculations from a year ago and may no longer be the case. Optimizely's architecture may have changed how it calculates traffic percentages since then.]

The 4th consideration is that there is, unfortunately, no way to identify which experiment is running the targeting evaluation: the evaluation code runs only once for all active experiments and is therefore agnostic toward any individual experiment. This means that we are still required to manually update the curExperiment ID value for each new exclusive experiment.

As a follow-up, I would also request the ability to exclude experiments individually. In other words, to mark an experiment such that it is exclusive from all others, regardless of exclusion groups.

The problem with this recently released solution is that if we have just one experiment that is guaranteed not to play well with others, the only way to isolate it is to put all running/active experiments into one exclusion group. This forces the traffic slicing onto all running experiments and is ultimately impractical for larger scale testing projects.

A possible implementation of this would be either to add a setting on exclusion groups to make them 'universally exclusive' or, perhaps, for each project to have one system-defined universal exclusion group. In this case, "Universal Exclusion" would mean that experiments in this group could not overlap with any other experiment, whether in or out of any other exclusion group.

I realize this complicates the matter of traffic allocation. In theory, it really just adds a layer of allocation by which an exclusion group would be given its own traffic percentage setting.

Example: Universal Exclusion Group [UEG] traffic setting is: 50%.

This means that all experiments or exclusion groups not in the UEG would have the remaining 50% of traffic from which to draw.

In this scenario,

An experiment set for 100% of traffic would really be getting 50% of Total traffic.

An experiment set for 50% of traffic would be getting 50% of 50% = 25% of Total traffic.

A non-universal exclusion group with 5 experiments, each set to 20% would actually be splitting 50% of traffic resulting in 10% of Total Traffic per experiment.

In practice, this is really not as confusing as it seems. The calculations to determine "Actual Traffic Percentage" are fairly straightforward.
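The worked examples above reduce to a single multiplication: an experiment's own traffic setting applies only to the slice of total traffic left to its side of the (hypothetical) Universal Exclusion Group split. As a sanity check:

```javascript
// Toy calculation of "Actual Traffic Percentage" under the proposed
// (hypothetical, not shipped) Universal Exclusion Group model.
// experimentPct: the experiment's own traffic setting (0-100).
// groupSharePct: the share of total traffic available to that experiment's
//                side of the UEG split (0-100).
function actualTrafficPct(experimentPct, groupSharePct) {
  return experimentPct * groupSharePct / 100;
}
```

With a 50% UEG setting this reproduces the examples above: 100% of traffic becomes 50% of total, 50% becomes 25%, and five exclusive experiments at 20% each get 10% of total apiece.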

Thanks for your suggestion! I see how this would be a valuable enhancement. I've shared this with our product team who oversees mutually exclusive experiments. I'll update you if they decide to move forward with it.