Why surveys suck

    This blog post from LoveStats says a lot of things I fundamentally agree with. It’s not the first blog post to say them: in fact, without wanting to take anything away from LoveStats, I’d say there’s a growing consensus in the MR biz that surveys are tedious and overlong, and that this is doing the business no good. Certainly I’ve been hearing it a lot at conferences, and what I never hear is anyone standing up and going “Oh come on now, people love hour-long surveys, this is all a lot of fuss about nothing.” Even MR Heretic feels the tide of opinion may be turning!

    So giant surveys suck, and most of us know it. Are things changing? The jury’s out on that one. And we need to push a bit harder on two questions: why have surveys got so tedious, and what can we actually do to improve them? The second is a lot harder, so I’ll throw it open to the research blogosphere and concentrate on the first.

    One uncomfortable truth is that surveys have got more boring because the people buying research want them to be more boring - which is to say, longer and more detailed. Obviously no buyer would say “yes, I want a boring survey please”.

    The logic behind longer and more detailed surveys is obvious. The globe of consumer opinion has long since been circumnavigated; the continents of preference and the islands of attitude have been mapped. You know almost everything. And your competitors are doing research too, so they will know it as well. But to get a competitive advantage you need insights about your customers and market that your competitors don’t have, and one way to get them is to ask more. It’s like the old advertising saw - half my advertising is worthless; I just don’t know which half. Half - at least - of the questions you ask will tell you nothing useful; you just don’t know which half.

    So are research agencies blameless? Sadly, no. It’s in the interest of the industry as a whole to slow or reverse falling response rates, but a tragedy-of-the-commons effect applies, since the “interest of the industry as a whole” doesn’t pay the bills when a monster habits-and-attitudes job comes down the pipe. But even beyond that, there are several things that have happened in the last decade or two which have exacerbated the situation.

    i. An emphasis on change not continuity: A lot of things, in a lot of markets, don’t change very fast. But “change” has become one of the mantras of modern business, especially the information business: consumer behaviour, we’re told, is changing all the time, and you need to change with it, which means you need information - which increases the pressure to do more and more research. For some categories this has certainly been true, but not for all. It has been in the interests of the research and consultancy professions, though, to push the idea of change, since it demands more spending on research.

    ii. An emphasis on consultancy not data: This at first seems counter-intuitive - surely the drive among research firms to become ‘trusted advisors’ would lead to a greater understanding of real business issues, and hence a focus on effective rather than expansive research. In theory, yes, and in some cases it’s worked beautifully. But what’s usually happened is that most agencies (big or small), for most clients, haven’t got to the ‘consultant’ stage - they aren’t consultants and they aren’t seen as such. They’re in a limbo where they’re promising insight but can’t necessarily deliver it - and in trying to get there they’ve become “client-focused”, which means they’re filled with people who find it not just professionally but ideologically difficult to say “No”.

    iii. The drive to online research: The shift to online research has brought greater efficiency and cost savings, and was pretty much inevitable anyway. But it’s come at an unintended price: when your research process relies on human-to-human contact, you have a large number of fieldwork employees who know what a boring or fiddly or badly designed survey looks like. What’s more, it’s in their professional interest to tell you, because their jobs become not just more difficult but more unpleasant when they have to confront other human beings with a titanic survey. In a world where online surveys are the preferred methodology, there’s very little human cost to making them longer: if response rates drop, you just buy more sample and send the thing out again. The fieldwork and operational people are no longer confronted with the consequences of long surveys, which removes another barrier to doing them.

    This is all a fairly bleak analysis, but I think it’s worth considering this stuff in order to actually work out how to “make surveys better”. That’s a story for future posts, though.
