When you read through this posting, you'll see a "Test 1234" request where you can help out individually, because the only way I can figure out why our emails aren't working is to find out whether there is a real problem here. Testing can be invasive, irritating, subtle, or (perhaps I will find out) useful in involving your community in your marketing and business challenges. Thanks for helping out. You can click on the image to send the test email or email buckshon@cnrgp.com, but remember to phone or comment to validate the results.
Is your bright marketing idea really so bright? Often, I have to admit, my "best ideas" bomb in the real world, have unintended negative consequences, or simply don't work quite the way I thought they would.
Then of course, there are the ideas that didn't seem to have much going for them in the first place, but for some reason seemed worthy of pursuing. This blog, for example.
Is testing a way around this challenge? Can we keep ourselves out of trouble by trying the idea on a small enough sample, or over a short enough time, that we can tell if our marketing ideas are good, bad, or ugly -- and catch disasters before they happen?
The answer at first may seem to be an obvious "yes", and I'm pretty confident I've saved my skin from implementing some bad ideas by trying things out with a simple test first.
In one case, for example, we were exploring starting a new leads service, offering genuinely valuable data in co-operation with a major partner, using relatively low-cost print-on-demand technology, and delivering real value and service to readers.
My sixth sense told me to be careful, so I asked a non-employee sales representative (whom I knew to be good at the type of selling required) to work with me for a couple of days to see if the idea would fly. It bombed.
But here is the rub, and challenge, of testing. Maybe the test really wasn't valid. Possibly some variable within the testing process had thrown the numbers off, and with a slight rejigging of the concept -- in framework, market, or focus -- it would have worked wonderfully. We'll never know, of course.
Conversely, we know of awkward tests that cause real market problems; specifically the infamous "mike tests" where politicians and celebrities are caught with their pants down (or, to use another cliche, their feet in their mouths) when they say things that really aren't meant for broadcast in an AV test.
Finally, in the construction marketing community, we have to contend with the reality that many of our most interesting and important projects arise from long-term and interconnected relationships, which run counter to the principles of simple A/B testing and other measuring methodologies. If the process has a long lead time, how can we figure out when and what to test, and where is the boundary between good metrics and simply wasting our time counting things that aren't really important?
Here are some thoughts and guidelines that may help with our testing methodology.
If it can be measured, it can be tested.
You will find test results most meaningful and easy to act on if you have some simple metrics to assess your ideas and results. Online resources provide a wealth of measurement tools, including hits, conversion rates, and the like, and you can develop simple processes to tell whether something is working. Once you have a baseline on responses to your emails, for example, you can try different things with A/B samples, sending one message to half the list and another message (with the change you are trying to evaluate) to the other half. You'll know which works right away.
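For readers who like to see the mechanics, here is a minimal sketch of that A/B split in Python. The mailing list, the response data, and whatever mailer you plug in are hypothetical placeholders; the point is the random split and the side-by-side response rates.

```python
import random

def ab_split(addresses, seed=42):
    """Randomly split a mailing list into two equal halves."""
    shuffled = addresses[:]
    random.Random(seed).shuffle(shuffled)
    midpoint = len(shuffled) // 2
    return shuffled[:midpoint], shuffled[midpoint:]

def response_rate(responses, sent):
    """Share of recipients who responded (clicked, replied, etc.)."""
    return len(responses) / len(sent) if sent else 0.0

# Hypothetical mailing list -- substitute your own.
mailing_list = ["a@example.com", "b@example.com",
                "c@example.com", "d@example.com"]
group_a, group_b = ab_split(mailing_list)

# Group A gets the original message, group B the variant you are
# evaluating; the response lists below stand in for the tracking
# data your email tool would report.
responses_a = ["a@example.com"]   # placeholder tracking data
responses_b = []                  # placeholder tracking data

print(f"Version A: {response_rate(responses_a, group_a):.0%} response")
print(f"Version B: {response_rate(responses_b, group_b):.0%} response")
```

One caveat worth a sentence: with a small list, a few percentage points of difference can be pure noise, so the bigger the two halves, the more the comparison means.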
Testing works best if it is real-time, unobtrusive, and doesn't seem to be a test.
The "Hawthorne Effect", where the fact that by letting people know you are testing changes behaviour, is a real risk if you are obvious in your evaluations. So you need to be discreet. Market surveys are often problematic -- many people hate surveys and won't respond (your most important sources of information), or you need to be intrusive with phone calls and personal contacts (thus alienating the person you are seeking to survey, or again, you achieve invalid results.)
You need to catch subtle clues as well as objective measurements in your testing (carefully).
Sometimes the real results are not the ones you thought you would find, so you need to probe into the observations you obtain and listen carefully for subtle clues. Of course you may have it wrong, but you may also score major insights.
So, yes, test your assumptions, your ideas and your marketing strategies, especially if you are planning to commit major resources to the process. But be wary of the testing pitfalls.
And now for a real-time test (and a request for your assistance), because we are having some strange email problems. Some emails my team and I are sending are not going through; in other cases, people are sending emails to us which we aren't receiving. The problem may be spam blocks; I don't know.
If you complete this test, I'll send you a copy of my new publication, The Art and Science of Publicity -- but most importantly, a really big "Thank You".
- Please send an email marked "Test 1234" to me at buckshon@cnrgp.com.
- At the same time, please post a comment to this blog (which operates on a different server) or phone me at 888-432-3555 ext 224. (If you wish to leave a comment and remain anonymous, just make reference to something in your email which won't disclose your identity.)
The more responses the better -- the key measure will be the gap between the number of confirmations we receive through blog comments and phone messages and the number of test emails that actually arrive. But I'll learn other things as well, which I'll share with you tomorrow.
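For the numerically inclined, the arithmetic behind that measure is simple. Here is a minimal sketch with made-up tallies; the real numbers will come from tomorrow's count.

```python
# Hypothetical tallies -- placeholders, not real results.
offline_confirmations = 40   # blog comments + phone messages saying "I emailed you"
test_emails_received = 31    # "Test 1234" emails that actually arrived

missing = offline_confirmations - test_emails_received
failure_rate = missing / offline_confirmations if offline_confirmations else 0.0

print(f"{missing} of {offline_confirmations} test emails never arrived "
      f"({failure_rate:.0%} apparent failure rate)")
```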
Again, simply email a note to me at buckshon@cnrgp.com with a subject line: "Test 1234" and communicate offline, either by a blog comment or phone call to 888-432-3555 ext 224 to tell me you've sent the email. Thanks.