Using Google’s ad network to pre-qualify a website

I get calls from time to time from ad salesmen who want me to spend $10,000 (or more) in display ads on their sites.

They promise to give me a great deal and say their site is a great fit for my products, so I ask them whether they'll take the risk and charge me on a cost per acquisition basis.

Of course they aren’t interested in doing that, so I’m stuck with plopping down $10,000 (for which I might get $200 in purchases — if I’m lucky), or just giving up on display ads.

(They’ll also give me the story about how you can’t calculate the value of display ads on a click-to-conversion basis, and that the ads pay for themselves in branding and increased search, etc. I’ll post something on that another day.)

If I don’t want to buy specific real estate on specific sites, I can also run display ads on a network, like Google Adwords. I can even target specific sites on Adwords, which gave me a good idea the other day when a salesman called.

Salesman Joe from mygreatsite.com asks me to spend $10,000, and I go through the old cost per acquisition pitch (just for fun — it never works), and the conversation lags. Joe says “sign up for $10,000, and if it isn’t working after a couple weeks, we’ll cancel it and save the balance.”

That’s a little nicer, I guess. That way I only waste $3,000 and get no revenue at all.

But then I pull out my trump card.

I take a quick look at mygreatsite.com and notice that it runs Google ads, so I set up a site-targeted campaign in Google — text and display — that might cost me a couple hundred bucks. I tell Joe to check back in three weeks and we’ll look at the data. If the Google ads worked, that tells me there might be some hope for selling my products on mygreatsite.com, and I’ll consider the $10,000 gig. But if the Google ads don’t work, it’s not worth the risk.

(Of course I’ll always take the cost per acquisition deal if they’re so sure it’s a great opportunity.)

Ten reasons your company shouldn’t tweet

A friend sent this along: Top 10 Reasons Your Company Probably Shouldn't Tweet.

I continue to suspect that Twitter is over-rated — almost certainly for most businesses, and possibly just plain over-rated. I don’t see how it adds much to life.

If you want to keep in touch with friends, Facebook is better. If you want content, a blog is better. If you want people to know that you just ate a sandwich, you need to get over yourself.

I can see how Twitter would be good for giving people instant updates on an event, like a conference. But … why? Why can’t I read it tomorrow on your blog?

Also, is there anybody I can follow on Twitter who is constantly going to every conference I care about (and who won’t tell me about his dog and his lunch)?

You can write short posts on a blog if you like, or long ones (which you can't do on Twitter).

Via RSS feeds (which any blogging software will provide) you can get blog posts a number of ways. Maybe not on your cell phone, but … if not, then soon.

It’s the Big New Thing, but I don’t think it will last. Other, more robust applications (like Facebook) will figure out how to deliver whatever small benefit Twitter offers, and Twitter will go away.

Testing insanity

Good marketers know that their opinion on a particular color or design or choice of wording doesn’t matter. What matters is how the market responds. So marketers like to test things.

Which headline works best? Does a starburst announcing a risk-free trial help or hurt response? Is it better to offer a special report as a premium, or just keep the page and the offer simple?

That's all well and good, and those are good things to test. But as you get into testing, you have to watch out for testing insanity. You start to see other things you can test:

  • Does my e-mail traffic respond differently than my Adwords traffic?
  • Do people respond differently on the weekend than during the week? (I’ve suspected this on some of my tests.)
  • Do people respond differently during office hours?

The more you test the more variables you see. Sometimes it seems like you could go crazy coming up with new things to test.

I've asked Google to change its Website Optimizer tool so that you can see results during a given timeframe — so, for example, I could see if my weekend traffic performs differently than my weekday traffic.
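In the meantime, if you can export daily visitor and conversion counts from your analytics, the weekday/weekend split is easy enough to do yourself. A minimal sketch — all the counts below are invented, purely for illustration:

```python
from datetime import date

# Hypothetical daily log of (date, visitors, conversions) -- numbers invented
daily = [
    (date(2009, 3, 2), 500, 25),   # Monday
    (date(2009, 3, 3), 480, 22),   # Tuesday
    (date(2009, 3, 7), 300, 21),   # Saturday
    (date(2009, 3, 8), 280, 20),   # Sunday
]

def rate(rows):
    """Overall conversion rate for a set of daily rows."""
    visitors = sum(v for _, v, _ in rows)
    conversions = sum(c for _, _, c in rows)
    return conversions / visitors

weekday = [row for row in daily if row[0].weekday() < 5]   # Mon-Fri
weekend = [row for row in daily if row[0].weekday() >= 5]  # Sat-Sun

print(f"weekday rate: {rate(weekday):.1%}")
print(f"weekend rate: {rate(weekend):.1%}")
```

The same grouping trick works for office hours versus evenings if your data has timestamps instead of dates.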

But sometimes I’m glad it’s not possible to test these things. If you can keep your tests simple, you avoid testing insanity.

The “evidence-based” future of marketing

Gerry McGovern is an interesting fellow with good ideas about web design and testing. I recall a MarketingProfs seminar he did a few years ago about “customer care words” that was very good, although the title was slightly confusing. It’s not about words for your customer care department, but words your customers care about.

His “customer care words” approach proposes a simple method for finding out what your customers truly want and designing your website around those words — rather than, for example, around what the CEO thinks is important, or what the company thinks about its products and services.

This post is somewhat along those lines.

Future of web management is evidence-based

Keep your emails simple

Technology makes it easy to do all kinds of cool things with emails and landing pages, but cool things aren’t always helpful. Simplicity is usually your best bet.

When designing new email creative, consider the following.

  • Don’t use too many images.
  • Use “html lite” — mostly text with some color and simple formatting.
  • Avoid columns.
  • Make the text simple to scan with short paragraphs.

On simplicity in other email applications, see Don’t Overcomplicate It.

Error bars and conversion rates in optimizer experiments

I don’t know how Google’s website optimizer picks the winning combination on a multivariate test. It’s some complicated statistics that I don’t know and probably don’t want to know.

The trouble is that the margin of error on a complicated experiment (i.e., an experiment with a lot of options) sometimes overwhelms the results.

For example, if the “winning” combination has a conversion rate of 45% ± 10%, and the second place combination has a conversion rate of 42% ± 10%, how sure can you be that the winning combination really won?
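One quick sanity check is whether the two confidence intervals overlap. A rough sketch using the normal approximation and hypothetical counts chosen to match the numbers above (this is not how Optimizer itself computes significance):

```python
import math

def conversion_ci(conversions, visitors, z=1.96):
    """Conversion rate and 95% margin of error (normal approximation)."""
    p = conversions / visitors
    margin = z * math.sqrt(p * (1 - p) / visitors)
    return p, margin

# Hypothetical counts matching the 45% vs. 42% example
p1, m1 = conversion_ci(45, 100)   # "winning" combination
p2, m2 = conversion_ci(42, 100)   # runner-up

# If the runner-up's interval reaches up into the winner's, the "win" may be noise
overlap = (p2 + m2) > (p1 - m1)
print(f"winner:    {p1:.0%} +/- {m1:.1%}")
print(f"runner-up: {p2:.0%} +/- {m2:.1%}")
print("intervals overlap:", overlap)
```

Since the margin shrinks with the square root of the visitor count, the same rates at 1,000 visitors each would have margins near 3% instead of 10% — which is why the intervals eventually separate if you wait.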

You could let the experiment run for a long time until the margin of error decreases. The problem is that you're continuing to run the crappy options along with the good options, so you're hurting your overall conversion rate.

A better option is to trim out the clear losers and simplify the experiment, or run a follow-up experiment.

The “best practice” is to make the complexity of your experiment match the amount of traffic on the page — i.e., simple experiments on pages with a little traffic and complicated experiments on pages with a lot of traffic.
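A back-of-the-envelope way to see why: a standard rule-of-thumb sample size formula for comparing two conversion rates (two-sided 5% significance, 80% power) shows how much traffic each combination needs, and every extra combination multiplies the total. The baseline and lift below are illustrative assumptions, not anything from a real experiment:

```python
import math

def visitors_per_combination(baseline, lift, z_alpha=1.96, z_power=0.84):
    """Rough visitors needed per combination to detect an absolute lift
    (two-sided 5% significance, 80% power, normal approximation)."""
    p = baseline
    return math.ceil(2 * (z_alpha + z_power) ** 2 * p * (1 - p) / lift ** 2)

# Hypothetical page: 5% baseline conversion, hoping to spot a 1-point lift
per_combo = visitors_per_combination(0.05, 0.01)

# An experiment with 8 combinations needs roughly 8x that much traffic
print(f"{per_combo} visitors per combination, ~{8 * per_combo} for 8 combinations")
```

Thousands of visitors per combination adds up fast, which is exactly why a low-traffic page can only support a simple test.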

Trimming complicated Optimizer experiments to make them more efficient

When you run a landing page test in Google’s Website Optimizer, one thing to consider is how long it’s going to take to get meaningful (“statistically significant”) results.

If you have too many variables, or if only a small percentage of your visitors make it from the test page to the goal page, it's going to take a long time to get results.

After you set up your experiment it’s somewhat frustrating to wait for a statistically significant result when it’s obvious that one of your options is a dog. For example, consider this.

[Screenshot: unfinished experiment results]

The third option in the first section is a loser. Why allow it to continue to drag down response while waiting for a final answer?

You don’t have to.

If you go into the “combinations” tab for your experiment you can disable the versions that include the losing option. That directs more of the traffic to the options that are still viable and makes the experiment go a lot quicker.