Do display ads really work? If so, how can you know that they work, and how can you know how much of an impact they have?
There are studies that show lots of interesting things about display ads. For one thing, most people don’t click on them, but the ad still affects behavior. For example, a person might see an ad and later type your site’s URL into the browser, or google one of your brand terms.
The folks who sell ads know this, and they know that ads simply don’t pay for themselves based on clicks. So the ad salesman wants you to measure the effectiveness of his ads based on “view-through conversions,” which is a misnomer. Just because the ad was displayed on the user’s browser does not mean the user saw the ad. “View-through conversions” should really be called “display-through conversions,” and there doesn’t seem to be any reason to take them at face value. It’s way too easy to imagine a scenario where an ad was displayed and the person purchased for some other reason.
This leaves us with two rotten metrics for ads. Clicks undervalue the effect of a display ad campaign, and display-through conversions over-estimate the value of the campaign. What can you do?
The simplest thing to do is believe the studies, bite the bullet and invest in display ads anyway. If you’re the owner of the company and want to do that, go ahead. It’s your money. But if, like most of us, you’re spending somebody else’s money, you need to show some return. And even if management believes the general idea that display ads increase direct traffic and brand-related search, that doesn’t help too much. How much do you need to spend on display ads to get the effect you want? What is the proper proportion of spend on display ads vs. spend on search? The studies aren’t going to tell you that — at least not for your industry and your product line.
Another (not) solution to this problem is to compare the behavior of people in a “display network” with people outside that network.
Here’s how that would (not) work. As you know if you’ve ever run a spyware scan on your computer, display networks cookie people when they visit a site that shows their ads.
Here’s a scenario. I go to D.com, an ad gets displayed on the page, and the ad system writes a cookie to my browser recording that fact. Later I go to your website and buy your product, and your “thank you” page has a tracking pixel that reads the cookie and says, “Hey, look! We showed this guy an ad and then he purchased. Yippee!”
Sounds good, … but … something isn’t right here. The ad might have had nothing to do with the sale. Maybe I got an email that led me to your site. Or maybe I was going to buy anyway. Or maybe I saw the ad and my wife (using the same computer) bought your product.
If you push this, the display ad salesman will say how smart you are and offer something like this.
“Oh, but we can compare the behavior of the people in the network with the people out of the network.”
What he means is this. If the tracking pixel on your thank-you page looks for the cookie and can’t find it, it records that conversion as an “everybody else.” Then, the (phony) argument goes, you can compare the behavior of the in-network and out-of-network people.
The trouble is that a fraction is made up of a numerator and a denominator, and you have to have a real value in both places. You can’t compare X conversions over Y people in the network with A conversions over “everybody else.” It doesn’t make sense. Unless, of course, you can assign a real number to “everybody else.”
You need to be able to do a split. You need to be able to take a definable universe of people, show the ad to some of them and not to others, and compare the behavior of those two groups. This doesn’t resolve every conceivable objection, but it gets pretty close.
Here’s how you do it.
First, you need a definable group of people. The most natural group is “people who visit your website,” because (1) they’ve shown some level of interest in you, and (2) you can set a cookie on their browsers.
Second, you need to split this group in two. You do that with a Google Website Optimizer experiment. Version A drops a retargeting cookie, Version B does not.
Third, you set your “thank you” page as the conversion page of the Optimizer experiment.
Presto. Now you have a defined group of people that you can split in two, show your ads to one group and not the other, and compare the behavior of the two groups.