Why Ad research is like growing a pot plant
Don’t get too excited. Not that kind of pot plant.
Here’s a simple analogy – taught to me many years ago. It will help you spend your research budget wisely. As I watch what goes on in Marketing/Ad/Research land, it seems many have forgotten it, or simply never come across it.
Think about growing a plant. You spend time and take care choosing good seed and ensuring you have rich soil – good chance of a healthy plant. Plant weak seed into poor soil and you have very little chance at all. Even if, late in the process, you bring in the fertiliser and weed killer, you’re unlikely to get lush growth from a stunted plant that stood little chance in the first place.
Well, that parallels your choices in communications research.
There are distinct stages in the development of communication when you can conduct research, leading to four types of communication research.
One is extremely useful almost all the time.
One is pretty useful most of the time.
And one is occasionally useful.
The fourth – generally the most expensive - is most useful in lining the pockets of shareholders of research companies. It’s least helpful to marketers or their communication Agencies. Often, it’s downright harmful.
In sequence – from planting to harvesting – they are: Strategy Research, Development Research, Advertising Evaluation, and Tracking Research.
The two bookends – early and late – are the most useful:
Early research to inform strategy – i.e. well before the brief – is always useful, when conducted well. It’s the gardening equivalent of choosing robust seed and ensuring the earth is fertile.
Tracking research – once the communication is made and in market – is very often very useful.
Development research during the process is sometimes helpful. But there’s a gigantic caveat: it must be research to learn how and why people respond to the idea. And how well it communicates the core strategic thought. It is definitely not about testing. Not about winners. Certainly not about execution.
But I despise – with a passion – the big, expensive pre-tests (a.k.a. copy tests). Pseudo-scientific, mumbo-jumbo mainly used for arse-covering. These big studies cost a bomb. They provide a sense of security because they’re big and bulging with numbers. Generally, they’re not much more reliable than the toss of a coin. And if our CMOs are no better than a coin toss – in my experience, they are – goodness help us all.
(It’s at this point you’ll hear the squealing and gnashing of teeth of a few research companies. And it’s about now that a ‘validation study’ or two will appear.)
If you do have a communications research budget, spend as much as you can afford getting the strategy right, then switch what’s left to tracking. Put a little bit aside for Development research – for the limited occasions it’s needed.
If someone tries to sell you a big, expensive, ‘consistently reliable’ quantitative Evaluation study that ‘usefully informs communication development’, does a great job with emotional response, etc., ask them about their pot plant…
The other kind.
(Please follow me on Twitter: @MarkSareff - it’s a tomato plant in case you were wondering)
I come across this analogy a fair bit, actually. It’s almost always posted by someone in advertising, and you can understand why. It’s essentially saying there is no possible way to usefully consult consumers about execution at any point in an ad’s development. The people who believe this are persuasive types who marshal interesting arguments (or, as here, elegant analogies). They are also, to a human, the people whose work is being assessed in this way. There’s a very strong sense of “you would say that, wouldn’t you?”
It’s like an employee saying, “Look, make your job interview really rigorous before you hire me. And when I leave, conduct a pretty thorough exit interview. But when I’m on the job, don’t do performance reviews or assessments or anything, just trust me, yeah?”
That sounds GREAT, of course. I wish employers thought like that. But I know why they don’t.
(I am equally partisan, of course. But tracking research is just as shitty as pre-testing – perhaps even shittier – so I dunno why it gets a free pass here!)