Those sneaky behavioral marketers

This link is only open access for a little while, so go look at it now.

Behavioral Economics in Marketing: 7 Insights to Lift Results

People who study the brain and decision-making often come to some very disturbing conclusions — for example, that we often make decisions un- or sub-consciously and later rationalize them. We wrap a story around our decision to justify it, but our story isn’t really why we made the decision.

The article linked above points out some ways marketers can use this phenomenon to sell products.

This raises the question of whether it’s wise to use such tactics to sell subscription products. If all you’re after is a single sale, it might make sense. But with a subscription, you’re relying on renewals, which means the person has to actually like the thing.

So a subscription marketer that uses tricky tactics might want to check whether renewal rates for people who bought through the tricky offer differ from renewal rates on the standard offer.
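One way to sanity-check that comparison is a simple two-proportion z-test on the two cohorts’ renewal rates. The sketch below uses made-up counts and cohort names; none of the numbers are real data.

```python
from math import sqrt

def renewal_rate(renewed, total):
    """Renewal rate for a cohort."""
    return renewed / total

def two_proportion_z(r1, n1, r2, n2):
    """Two-proportion z-statistic: is the difference in renewal rates
    between two cohorts bigger than chance alone would explain?"""
    p1, p2 = r1 / n1, r2 / n2
    p = (r1 + r2) / (n1 + n2)                     # pooled rate
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))    # standard error of the difference
    return (p1 - p2) / se

# Hypothetical cohorts: buyers from the standard offer vs. the "tricky" offer.
standard = (450, 1000)   # 45% renewed
tricky   = (380, 1000)   # 38% renewed

z = two_proportion_z(*standard, *tricky)
print(f"standard: {renewal_rate(*standard):.0%}  "
      f"tricky: {renewal_rate(*tricky):.0%}  z = {z:.2f}")
```

A z-statistic around 2 or above suggests the renewal gap is probably real, in which case the tricky offer is borrowing against future renewals.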

Clicks, views and the real effect of display ads

Do display ads really work? If so, how can you know that they work, and how much of an impact do they have?

There are studies that show lots of interesting things about display ads. For one thing, most people don’t click on them, but the ad still affects behavior. For example, a person might see an ad and then type in your site’s URL, or he might google one of your brand terms.

The folks who sell ads know this, and they know that ads simply don’t pay for themselves based on clicks. So the ad salesman wants you to measure the effectiveness of his ads based on “view-through conversions,” which is a misnomer. Just because the ad was displayed on the user’s browser does not mean the user saw the ad. “View-through conversions” should really be called “display-through conversions,” and there doesn’t seem to be any reason to take them at face value. It’s way too easy to imagine a scenario where an ad has been displayed and the person purchased for some other reason.

This leaves us with two rotten metrics for ads. Clicks undervalue the effect of a display ad campaign, and display-through conversions overestimate it. What can you do?

The simplest thing to do is believe the studies, bite the bullet and invest in display ads anyway. If you’re the owner of the company and want to do that, go ahead. It’s your money. But if, like most of us, you’re spending somebody else’s money, you need to show some return. And even if management believes the general idea that display ads increase direct traffic and brand-related search, that doesn’t help too much. How much do you need to spend in display ads to get the effect you want? What is the proper proportion of spend on display ads vs. spend on search? The studies aren’t going to tell you that — at least not for your industry and your product line.

Another (not) solution to this problem is to compare the behavior of people in a “display network” with people outside that network.

Here’s how that would (not) work. As you know if you’ve ever run into a spyware problem on your computer, display networks cookie people when they go to a site that shows their ads.

Here’s a scenario. I go to D.com, an ad gets displayed on the page, and the ad system writes a cookie to my browser recording that fact. Later I go to your website and buy your product, and your “thank you” page has a tracking pixel that reads the cookie and says, “Hey, look! We showed this guy an ad and then he purchased. Yippee!”

Sounds good, … but … something isn’t right here. The ad might have had nothing to do with the sale. Maybe I got an email that led me to your site. Or maybe I was going to buy anyway. Or maybe I saw the ad and my wife (using the same computer) bought your product.

If you push this, the display ad salesman will say how smart you are and offer something like this.

“Oh, but we can compare the behavior of the people in the network with the people out of the network.”

What he means is this. If the tracking pixel on your thank-you page looks for the cookie and can’t find it, it records that conversion as an “everybody else.” Then, the (phony) argument goes, you can compare the behavior of the in-network and out-of-network people.

The trouble is that a fraction is made up of a numerator and a denominator, and you have to have a real value in both places. You can’t compare X conversions over Y people in the network with A conversions over “everybody else.” It doesn’t make sense. Unless, of course, you can assign a real number to “everybody else.”

You need to be able to do a split. You need to be able to take a definable universe of people, show the ad to some of them and not to others, and compare the behavior of those two groups. This doesn’t resolve every conceivable objection, but it gets pretty close.

Here’s how you do it.

First, you need a definable group of people. The most natural group is “people who visit your website,” because (1) they’ve shown some level of interest in you, and (2) you can set a cookie on their browsers.

Second, you need to split this group in two. You do that with a Google Website Optimizer experiment. Version A drops a retargeting cookie; Version B does not.

Third, you set your “thank you” page as the target page of the Google experiment.

Presto. Now you have a defined group of people that you can split in two, show your ads to one group and not the other, and compare the behavior of the two groups.
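With that split in place, the comparison is straightforward. Here’s a minimal sketch with invented counts (the 5,000 visitors per group and the conversion numbers are assumptions for illustration only):

```python
# Two randomly split groups of site visitors: one was shown the
# retargeting ads (Version A cookie), one was not (Version B).
shown_ads = {"visitors": 5000, "conversions": 110}
no_ads    = {"visitors": 5000, "conversions": 85}

def conv_rate(group):
    """Conversion rate for a group."""
    return group["conversions"] / group["visitors"]

# Because the split was random, the difference is attributable to the ads.
lift = conv_rate(shown_ads) / conv_rate(no_ads) - 1
print(f"ads: {conv_rate(shown_ads):.2%}  control: {conv_rate(no_ads):.2%}  "
      f"lift: {lift:+.1%}")
```

Because both denominators are known, the fraction problem from the “everybody else” scheme disappears: you are comparing two real rates.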

How small and cheap can a USB drive get?

I was thinking about declining ad revenues today, and it got me wondering how a print publication could make a better connection between the printed page and the online world. Getting somebody to type in a URL isn’t that hard, but it’s also not that sexy or exciting.

How long will it be, do you think, before printers can mass-produce little USB drives with a couple K of data on them?

Imagine a blow-in 3×5 card with a little USB port on the edge. That would open up lots of possibilities.

Is the Kindle a flash in the pan?

I don’t have a Kindle, but I have some friends who have them and love them. Despite my general love of technology, so far I’ve preferred books. For a while (years ago) I tried reading the daily news on a Palm, but I didn’t like it.

Recently there’s been a lot of talk about formatting content for the Kindle. I think that’s a very good idea, and publishers should look into it. Along those lines, the good folk at Mequoda are doing a webinar on that topic.

But having said that, and despite its short-term success, I don’t see the Kindle as a long-term product. I have two reasons for this.

First, here’s my sane, sensible reason — Right now there’s a whole range of small devices, from the hand-helds, like Palms, Blackberries, iPhones and Droids, to the netbooks. I don’t see any reason why these other devices won’t be able to do everything a Kindle can do, so unless the Kindle starts to take phone calls, I think its utility as a substitute for the paperback has a narrow niche in the technology timeline.

Second, here’s my insane, wild-eyed prediction — I think there’s going to be a ground-breaking leap forward in display technology soon. I’m not sure if it will be digital paper (essentially a computer screen you can fold and put in your pocket) or some sort of projection display (either through eye-glasses or something else). But I don’t think screen-based computing is going to last all that long.

Having said all that, if I were in charge of developing the Kindle, I would look at getting college textbooks on the device. With the cost of college books these days, the idea of buying one device and a lot of electronic files sounds awfully good.

How Google can be the good guy for publishers

I just read Google Versus Publishers, the Sequel, and agree with Mr. Filloux that Google should adopt the more granular ACAP protocol (for controlling what crawlers are allowed on a site). It’s almost impossible to believe that this would be hard for Google to do, and it would be a small gesture to publishers.

But I have a larger issue about copyright, and I think it’s right up Google’s alley.

I recently noticed that somebody Tweeted a link to a free download of a magazine I work for. They forgot to ask our permission. 😉

Some people might say, “but isn’t this to your benefit? Somebody gets a free issue. They may subscribe.”

Maybe it is, and maybe it isn’t, but that’s for us to decide, not some pond-scum internet spammer. Nobody has a right to other people’s property, and that includes intellectual property.

Google can help solve the problem of copyright infringement. Here’s how.

First, there needs to be a centralized content registry for publishers. This registry would be used to

  1. protect branded terms,
  2. protect copyrighted content, and
  3. designate a person at each publisher to manage abuse issues.

This is how the registry would be used.

  • In brand-related searches, search engines would give preference to the company that owns the brand.
  • Publishers would register their content in the repository. Search engines already index content on web pages and compare similar text on different sites, so it should be easy to find unauthorized copies – i.e., to compare the copyrighted text in the repository with text that is being published on some other website. If an unauthorized site publishes copyrighted material, that site would be flagged for abuse and the publisher would be notified. If the issue isn’t resolved, the site would be blacklisted.
  • Blacklisting would mean that the site would not appear in search engine results, and links to that site – on Facebook, Twitter, etc. – would be deleted. (Obviously this would require a cooperative effort between these services.) Accounts associated with blacklisted sites would be suspended.
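The text-matching step described above is well-trodden ground. Here’s a minimal sketch of one standard near-duplicate technique, w-shingling with Jaccard similarity; the sample strings are invented, and a real registry would use hashed shingles at much larger scale.

```python
def shingles(text, w=4):
    """Set of overlapping w-word sequences ("shingles") from a text."""
    words = text.lower().split()
    return {" ".join(words[i:i + w]) for i in range(len(words) - w + 1)}

def jaccard(a, b):
    """Overlap between two shingle sets: 0.0 (distinct) to 1.0 (identical)."""
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / len(sa | sb)

# Invented examples: a registered text, a near-copy, and unrelated text.
original  = "behavioral economics offers marketers seven insights to lift results"
copied    = "behavioral economics offers marketers seven insights to lift results today"
unrelated = "the quick brown fox jumps over the lazy dog every single morning"

print(jaccard(original, copied))     # high similarity: flag for review
print(jaccard(original, unrelated))  # no overlap: ignore
```

A score near 1.0 between registered content and a page elsewhere on the web is exactly the kind of signal that could trigger the abuse-flagging step.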

Obviously this idea could use some refinement, but I think something along these lines would work. Furthermore, it’s consistent with Google’s mission to organize the world’s information. That mission has to include a kind of recognition of copyright. So something like this goes directly to Google’s core competency.

Other big web companies would want to participate in this effort because they’d want to look like good guys.

Finally — as another benefit for Google and the publisher — this publisher repository could be a self-funding effort. If every legit publisher voluntarily contributed their copyrighted material to a centralized content index, there are innumerable ways Google and the publisher could make money off of that.

Tracking subdomains in Google Analytics

If you have multiple subdomains on your site, like store.name.com and www.name.com, you might want to use the code that allows you to see them all in one Analytics profile.

But there’s a downside. Your stats for store.name.com/index.html and www.name.com/index.html will get combined.

There is also a solution. You can create an advanced filter to keep them separate. See How do I track all of the subdomains for my site in one profile?

Update: Unfortunately, this messes up the site overlay feature. When it tries to create a site overlay it puts your subdomain in the wrong place in the URL.
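For what it’s worth, the fix amounts to rewriting the reported page path. This toy illustration shows what the advanced filter accomplishes; it is not actual filter syntax, and the hostnames are placeholders.

```python
def filtered_page(hostname, request_uri):
    """Prepend the hostname to the request path, so identical paths on
    different subdomains stay distinct in reports."""
    return hostname + request_uri

print(filtered_page("store.name.com", "/index.html"))
print(filtered_page("www.name.com", "/index.html"))
```

The two index pages now show up as separate rows instead of being combined, which is why the filter helps — and also why the site overlay, which expects a plain path, gets confused.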

The importance of funnel analysis in landing page tests

Hunter Boyle from Marketing Experiments gave an interesting talk at the SIPA Miami conference on ways to fix leaky sales pages.

He reminded marketers to put themselves in the mind of the visitor, who is asking three basic questions when he comes to your site or page.

  • Where am I?
  • What can I do here? and
  • Why should I do it?

Hunter’s presentation was on “high-impact” changes to sales pages, so he pointed to some of the elements that are most likely to produce a dramatic change in response.

  • Headlines
  • Testimonials
  • Forms
  • Navigation steps
  • Price
  • Product images

Some web pages have “related articles” links on the side, right? Well my mind is like one big “related articles” link while I’m reading or listening to a lecture. While Hunter was talking about these changes, I started thinking about some of the issues related to the examples he was sharing.

Hunter mentioned a marketing effort from a client (I believe it was an email) that mentioned “eight things” you could get. Unfortunately the landing page didn’t have the eight things.

So imagine an email campaign that promises eight things and sends people to an A-B landing page test. The test is set up to measure which page is more effective at driving traffic to the registration page.

For these purposes, let’s say the user’s action sequence would go like this.

  1. Open email
  2. Click on link to go to A-B landing page test
  3. Click on “buy now” button to go to registration page
  4. Register and finalize sale

The test measures how many people get to the registration page, but not how many people order. (Google’s Website Optimizer only allows one goal page, which is something they ought to fix. There should be primary and secondary goals. And maybe even tertiary.)

In this case, the email promises “eight things” of some sort. It’s an effective group of eight things, so it does what it’s supposed to do — drive people to the landing page.

But there are two landing pages. The A version has (or mentions) the “eight things,” and the B version doesn’t. Or maybe the B version mentions them, but they’re not clear enough to the visitor.

The users who go to the A page evaluate the offer, including the eight things, and some of them click through.

The users who go to the B page wonder where the eight things are and suppose they might be on the next page, so they click through — not because they’ve evaluated the offer, but because they’re confused. They get to the registration page, which also doesn’t have the eight things, so they bail.

What has happened? The B version — without the eight things — has won the A-B test, but it gets fewer orders. A marketer might look at his A-B test results and say, “Wow, this page increased my CTR by 75 percent. What a winner!”

Of course it’s not a winner at all. It only increased throughput in one part of the process, for all the wrong reasons.
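To make the trap concrete, here’s the scenario in numbers. All counts are invented, chosen so that B’s click-through lift matches the 75 percent in the example above.

```python
# Invented funnel counts for the A-B scenario: B "wins" on click-through
# to the registration page but loses on completed orders.
funnel = {
    "A": {"landing": 1000, "to_registration": 200, "orders": 80},
    "B": {"landing": 1000, "to_registration": 350, "orders": 50},
}

for name, f in funnel.items():
    ctr = f["to_registration"] / f["landing"]    # what the A-B test measures
    order_rate = f["orders"] / f["landing"]      # what actually matters
    print(f"{name}: CTR {ctr:.0%}, order rate {order_rate:.1%}")
```

B’s 35 percent CTR is a 75 percent lift over A’s 20 percent, yet B produces fewer orders. Judged on the test’s goal page alone, the worse page wins.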

There are two solutions to this kind of problem.

The first is to set up the experiment so that the “goal” page is the “thank you” page after a successful registration. That way you’re measuring which landing page got more orders, and not merely which landing page got more clicks to the next step.

This is not a good solution. It solves the problem of “which landing page gets me more orders,” but it makes it harder to optimize each piece of the funnel — mostly because it takes a lot longer to run the test.

Marketers should optimize each part of the process.

  • Optimize the subject line of the email to get more opens.
  • Optimize the email creative to get more clicks to the landing page.
  • Optimize the landing page to get more people to the registration page.
  • Optimize the registration page to get more completions.

Which brings us to the second way to fix the problem. Do the A-B test on the landing page only — with the landing page as the test page and the registration page as the goal page — but also set up a funnel in your analytics program so you can monitor every step of the process.

Then if the B page wins the A-B test but kills registrations, you’ll be able to see that in your funnel analysis, and you’ll still be able to run a quick, targeted test on your landing page.
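A funnel report of this kind is just step-over-step conversion. Here’s a minimal sketch with made-up counts for the four-step sequence described above:

```python
# Hypothetical counts at each step of the email-to-order funnel.
steps = [("opened email", 10000),
         ("clicked to landing page", 2000),
         ("reached registration page", 500),
         ("completed order", 150)]

# Each step's rate is its count divided by the previous step's count.
rates = {name: n / prev for (name, n), (_, prev) in zip(steps[1:], steps)}
for name, rate in rates.items():
    print(f"{name}: {rate:.0%} of previous step")
```

If the B landing page drives the registration-page rate up while the completed-order rate collapses, the leak shows up immediately in the last row, and you know exactly which step to retest.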