Error bars and conversion rates in optimizer experiments

I don’t know how Google Website Optimizer picks the winning combination in a multivariate test. It uses some complicated statistics that I don’t know and probably don’t want to know.

The trouble is that the margin of error on a complicated experiment (i.e., an experiment with a lot of options) sometimes overwhelms the results.

For example, if the “winning” combination has a conversion rate of 45% ± 10%, and the second place combination has a conversion rate of 42% ± 10%, how sure can you be that the winning combination really won?
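Google doesn’t publish its exact math, but a plain two-proportion z-test (normal approximation) gives a feel for the question. This is a sketch, not Optimizer’s actual method; the function names and the sample size of 95 visitors per combination are my assumptions, chosen because 95 visitors is roughly what produces a ±10% margin at a 45% rate.

```python
import math

def margin_of_error(p, n, z=1.96):
    """Half-width of a 95% confidence interval for a conversion
    rate p observed over n visitors (normal approximation)."""
    return z * math.sqrt(p * (1 - p) / n)

def z_score(p1, n1, p2, n2):
    """Two-proportion z-test statistic with a pooled rate."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Assumed scenario: ~95 visitors per combination gives the
# post's ±10% error bars at a 45% conversion rate.
print(margin_of_error(0.45, 95))        # roughly 0.10
print(z_score(0.45, 95, 0.42, 95))      # well below 1.96
```

The z-score for 45% vs. 42% at this traffic level is nowhere near the 1.96 needed for 95% confidence, so with these error bars you really can’t be sure the winner won.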

You could let the experiment run for a long time until the margin of error decreases. The problem is that you’re continuing to run the crappy options along with the good options, so you’re hurting your overall conversion rate.
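The margin of error shrinks only with the square root of traffic, which is why “just run it longer” gets expensive fast. A rough sizing sketch (my own back-of-the-envelope formula, inverting the standard normal-approximation interval, not anything from Optimizer):

```python
import math

def visitors_needed(p, margin, z=1.96):
    """Visitors per combination so a conversion rate near p has a
    95% confidence interval of about +/- margin."""
    return math.ceil(z**2 * p * (1 - p) / margin**2)

# Tightening the error bars from +/-10% to +/-2% at a ~45% rate
# takes 25x the traffic per combination, since margin ~ 1/sqrt(n).
print(visitors_needed(0.45, 0.10))  # 96
print(visitors_needed(0.45, 0.02))  # 2377
```

So a 5x tighter margin costs 25x the visitors, and every one of those extra visitors keeps being shown the losing options.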

A better option is to trim out the clear losers and simplify the experiment, or run a follow-up experiment.

The “best practice” is to make the complexity of your experiment match the amount of traffic on the page — i.e., simple experiments on pages with a little traffic and complicated experiments on pages with a lot of traffic.
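The reason complexity and traffic have to match is that combinations multiply: a full-factorial multivariate test needs traffic for every combination of every section’s variants. A quick sketch, where the section counts (3 headlines, 2 images, 4 buttons) and the ~96 visitors per combination are hypothetical numbers for illustration:

```python
from math import prod

def total_visitors(section_levels, per_combination=96):
    """Return (combinations, rough total traffic) for a
    full-factorial multivariate test."""
    combinations = prod(section_levels)
    return combinations, combinations * per_combination

# Hypothetical page: 3 headlines x 2 images x 4 buttons.
combos, visitors = total_visitors([3, 2, 4])
print(combos, visitors)  # 24 combinations, ~2300 visitors
```

Three modest sections already mean 24 combinations, so a low-traffic page should be testing two or three variants, not a full grid.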
