{"id":765,"date":"2015-09-25T15:51:07","date_gmt":"2015-09-25T19:51:07","guid":{"rendered":"http:\/\/gregkrehbiel.com\/marketing\/?p=765"},"modified":"2015-09-28T09:04:06","modified_gmt":"2015-09-28T13:04:06","slug":"nothing-is-easy-a-b-testing-version","status":"publish","type":"post","link":"https:\/\/gregkrehbiel.com\/marketing\/2015\/09\/25\/nothing-is-easy-a-b-testing-version\/","title":{"rendered":"Nothing is Easy, A-B testing version"},"content":{"rendered":"<p>There&#8217;s a song that often comes to mind when I try to evaluate the results of a split test. It&#8217;s Jethro Tull&#8217;s <a href=\"https:\/\/www.youtube.com\/watch?v=HoSOuYNNXjU\">Nothing is Easy<\/a> &#8212; because testing is sometimes difficult to set up, and it&#8217;s often difficult to interpret the data once the test is over.<\/p>\n<p>Marketers do A-B tests because we often don&#8217;t know which version of an effort will get the best results. That applies to direct mail, a web page, an email, a telemarketing script &#8230; just about anything where you are trying to provoke a response.<\/p>\n<p>It&#8217;s good to read best practices and expert guidelines to get ideas for your test, but you still have to test. You don&#8217;t know if any given expert&#8217;s approach will work for your market. (It can vary.)<\/p>\n<p>You might also wonder whether that particular expert&#8217;s advice still applies. (Things do change.)<\/p>\n<p>It&#8217;s lovely to hope that you&#8217;ll send an email with two different subject lines, and one version will win, and then you&#8217;ll be done. Yeah.<\/p>\n<p>But what does &#8220;win&#8221; mean? It&#8217;s often not quite as clear as you want it to be.<\/p>\n<p>For example, let&#8217;s say panel A gets a better open rate, but fewer of its emails are delivered. (Yes, your subject line can affect deliverability.) Panel B has a lower open rate, but more of its emails get through to the recipients. Which version won?
<\/p>\n<p>I&#8217;d say panel B won, as a general rule, but there may be reasons to prefer panel A. For example, if you&#8217;re experimenting with a subject line for an ongoing series like a daily email, the deliverability might work itself out over time, in which case A is the better choice.<\/p>\n<p>Whenever you do a test, you have to measure the right results for your business. In some cases that&#8217;s relatively clear. If you&#8217;re trying to sell soap, whichever email sells more soap is the better choice, right?<\/p>\n<p>Maybe. What if panel A sells $5000 worth of soap, but only gets 500 customers, while panel B only sells $4500 worth of soap but gets 550 customers? Which is better? Maybe adding a new customer is worth more than a sale.<\/p>\n<p>Testing requires you to think about what different metrics mean for your business.<\/p>\n<p>Take the example of an email that&#8217;s meant to drive traffic to your website. Panels A and B get about the same number of clicks, but panel A gets far more unsubscribes and panel B gets more spam complaints. How do you choose?<\/p>\n<p>I would choose panel A. <b>An unsubscribe is not a bad thing.<\/b> If somebody doesn&#8217;t want to get your newsletter, you don&#8217;t want to send it to them. But you don&#8217;t want spam complaints. They hurt your deliverability for all your campaigns.<\/p>\n<p>Also, for some reason, your panel B is saying &#8220;spam&#8221; to your recipients, which hurts your brand reputation, and your reputation is worth more than any individual email.<\/p>\n<p>Testing is a necessary part of marketing, but it often raises more questions than it answers. Which leads to more testing. Which can become an obsession!<\/p>\n<p>The secret is to stay focused on the numbers that drive your business. Don&#8217;t test for testing&#8217;s sake. 
Learn to put the right value on each metric, and adjust your valuation based on the goals of the particular campaign.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>There&#8217;s a song that often comes to mind when I try to evaluate the results of a split test. It&#8217;s Jethro Tull&#8217;s Nothing is Easy &#8212; because testing is sometimes difficult to set up, and it&#8217;s often difficult to interpret the data once the test is over. Marketers do A-B tests because we often don&#8217;t &#8230;<\/p>\n<p><a href=\"https:\/\/gregkrehbiel.com\/marketing\/2015\/09\/25\/nothing-is-easy-a-b-testing-version\/\" class=\"more-link\">Continue reading &lsquo;Nothing is Easy, A-B testing version&rsquo; &raquo;<\/a><\/p>\n","protected":false},"author":2,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[11],"tags":[],"class_list":["post-765","post","type-post","status-publish","format-standard","hentry","category-publishing"],"_links":{"self":[{"href":"https:\/\/gregkrehbiel.com\/marketing\/wp-json\/wp\/v2\/posts\/765","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/gregkrehbiel.com\/marketing\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/gregkrehbiel.com\/marketing\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/gregkrehbiel.com\/marketing\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/gregkrehbiel.com\/marketing\/wp-json\/wp\/v2\/comments?post=765"}],"version-history":[{"count":6,"href":"https:\/\/gregkrehbiel.com\/marketing\/wp-json\/wp\/v2\/posts\/765\/revisions"}],"predecessor-version":[{"id":772,"href":"https:\/\/gregkrehbiel.com\/marketing\/wp-json\/wp\/v2\/posts\/765\/revisions\/772"}],"wp:attachment":[{"href":"https:\/\/gregkrehbiel.com\/marketing\/wp-json\/wp\/v2\/media?parent=765"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/gregkrehbiel.com\/m
arketing\/wp-json\/wp\/v2\/categories?post=765"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/gregkrehbiel.com\/marketing\/wp-json\/wp\/v2\/tags?post=765"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}