This is the paper I wrote for the MRS Research 2008 conference, about my experiences moderating an online community (ILX), now uploaded to scribd.com*. Given that it was the first time I’d ever written anything for a conference, it did rather well: nominated for Best Paper and Best Newcomer, and winner of Best Presentation.
I read it again last night before uploading it to see how well it stands up. Given that I wrote it before I had any experience of actual research communities, I was expecting to cringe at a few bits. But quite frankly I think it stands up gratifyingly well. Obviously, it’s extrapolating from personal experience of ‘wild communities’, and the industry practice surrounding MROCs (horrid word) means that some of my observations aren’t relevant for those. But the differences themselves lead to unintended consequences. Expect an updated look at the topic on this blog in the future!
*you have to sign up to scribd to see it, but it’s a painless process - I signed up ages ago, had forgotten I did so, and have never had a moment’s bother from them.
Some phrases that jumped out at me from the #esoc (ESOMAR Online Research Conference) tweetstream:
"a tsunami of unstructured data"
"no mentions of representativeness, why not? Fit for purpose, not for rep."
"ticks on the back of the information hippo"
OK, I admit it, the hippo one was mine. But this was a theme of the online conference - data, data, everywhere, data that must be listened to. It must also conform to certain standards of quality, at least when we create it ourselves via surveys, but the definition of quality is subordinate to questions of usefulness - did the information help its buyer make a better decision?
Nothing wrong with that! But at conferences it’s also worth listening for the dogs which don’t bark - in this case privacy, ethics, consent, that kind of stuff. These have been growing topics at UK conferences, so it surprised me they weren’t more under discussion here. At one point a guy from Facebook explained that no, they weren’t planning on opening the platform up to research too quickly because the user and her privacy came first. Given how keen Facebook are on advertisers this may not be as high-minded as it sounded, but even so the statement was a rebuke: you lot, he was implying, are not an industry that respects its participants terribly much.
Anyhow, it seems to me we have a nice little research storm brewing. First of all we have an increased pragmatism: as the industry has become more and more client-centric, we’ve moved to a point where the value of the information we sell lies in its ability to help a client make good decisions. That’s always been true, of course, but the information could be rated another way - how valid was it? It’s all very well saying “research was never representative”, but you don’t get off that easy - representativity might not have been achievable, but it was the ideal to strive for as best you could as you looked for validity. “Fit for purpose” as a test of validity is a big shift towards, essentially, deregulation.
Now, I love social media, I think it’s stuffed with valuable information, and in a sense deregulation suits me just fine. But the industry isn’t actually embracing ‘deregulation’ as such - quite the opposite: it’s bristling with new definitions, taxonomies, codes. And this is the second part of the storm: codification - because the landscape of research is in flux, industry bodies are trying to update their rules and principles, often for the very good reason that they want to stay out of any potential legislative trouble. After all, marketers are increasingly a target for government, why wouldn’t researchers be too?
And the third part of the storm is the sheer abundance of information that now exists about people. People are sharing more information but they’re also leaking more information all the time - not to research companies, but it’s out there and a lot of the time it’s public. This element WAS talked about at the conference but generally in the sense of it being a happy opportunity.
So what we have is a collision of pragmatism and codification in a context of abundance. And what I’m saying that’s going to lead to - it already is - is black hat research. It won’t be called that of course; it probably won’t even refer to itself as research, and it won’t be at many of our conferences.
What is it? Well, you have the information gathering which happens within industry codes and conforms to industry quality standards. And you have the law which lays out what information you can actually legally gather but which is full of loopholes and grey areas and stuff. And black hat research is what happens between the codes and the law. Scraping and mashing up personal data, rogue PR surveying, push polling, communities that mix research with marketing… you can add your own, I’m sure. Black hat research is going to get bigger and bigger and more important, a shadowland of information and intelligence provision which passes the “fit for purpose” test but maybe not other ones we might set it.
When Alison Macleod of The Human Element blog posted her plea for market researchers to be more opinionated, I took it as something of a challenge. So here, quite unsupported by anything other than grumpiness and prejudice, are some of my research- (and social media) related opinions.
1. “Insights” aren’t zen koans. If you can express something that briefly, it’s probably banal.
2. Between “data” and “recommendation” comes a little thing called “argument” which we neglect rather too readily.
3. Making two different pieces of information talk to one another is the most important skill a researcher (or almost anyone) can have.
4. We may not be as good as we ought to be at interpreting information, but by god we’re better at it than 90% of the people who end up blogging that information, so we have to get the message right from the start.
5. We are really bad at making celebrities out of our great practitioners: where’s the research Rory Sutherland? Why isn’t she or he at TED?
6. An online community is a factory for unintended consequences, and most of the people using them don’t understand how they work, let alone how to analyse them well. (I am not saying I understand how they work either.)
7. I know some very bright people who do semiotic work but on stage or in case study format it almost always ends up looking like a qual black box, one set up to produce undergraduate cultural studies essays.
8. I am always scrupulously polite about neuromarketing but if you were to WIRE UP MY BRAIN at a convention you’d be able to tell my true feelings. Or I might just be thinking of how nasty the canapés were.
9. The engine powering social media isn’t “influence”, it’s favouritism. Most talk of ‘trust’ is a way of justifying cronyism. This is not always bad, still less is it avoidable, but it’s not a brave new social arrangement either.
10. Hypocritical this in the light of much of the above, but the very worst thing about market research is its unending tendency to flagellate itself and envy people who are a great deal less informed than it is.
I’m not saying these are controversial, or consistent, and certainly I bet they’re not all correct, but there you are. Opine away on your own blogs!
This weekend I came across Goodhart’s Law. Like a lot of “laws” it’s more Goodhart’s Observed Tendency: it basically says that when you start basing policy around an economic indicator, the information value of that indicator falls to zero. It’s a kind of decision-making equivalent of the observer effect, as seen in physics. Or, if you like, a fancy way of saying “a watched pot never boils”.
Now in its original form - talking about government policy - it’s a sly way of asserting that government intervention is useless and so economic agents should be left to their own devices. But I suspect Goodhart’s Law - or something like it - also applies at every other level of measurement activity, including the firms the majority of said agents operate in.
How might this actually work? We design metrics to simplify complex systems. But when a value designed to describe a system becomes a way of assessing success within that system, two things can happen:
- the system adapts to reflect (and therefore game) the metric
- the stuff not reflected in the metric goes unnoticed, becoming a big breeding ground for potential unintended consequences.
And the more important the metric is, the more it gets gamed.
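As a toy illustration of that feedback loop - all the numbers, agents and payoffs below are invented for the sketch; it models the dynamic, not any real measurement system - you can watch the reported score improve as the value it was meant to track declines:

```python
import random

def simulate(metric_weight, n_agents=100, seed=0):
    """Each (hypothetical) agent splits effort between real work and
    gaming the metric. Agents chase whatever is rewarded: the more the
    metric matters, the more effort shifts into gaming it."""
    rng = random.Random(seed)
    true_value = 0.0
    measured = 0.0
    for _ in range(n_agents):
        # Fraction of effort spent gaming grows with the metric's importance.
        gaming = min(1.0, metric_weight * rng.uniform(0.5, 1.0))
        real = 1.0 - gaming
        true_value += real               # only real work creates value...
        measured += real + 2.0 * gaming  # ...but gaming inflates the score
    return true_value / n_agents, measured / n_agents

low_true, low_score = simulate(metric_weight=0.1)
high_true, high_score = simulate(metric_weight=0.9)
# When the metric becomes more important, the reported score goes up
# while the underlying value it was designed to describe goes down.
```

Which is Goodhart’s point in miniature: the indicator keeps rising even as its information value drains away.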
Anyone who’s spent any time working with social media - looking at “buzz”, reputation systems, measuring “influence” etc. - won’t find these ideas particularly foreign: I wasn’t surprised to find good blog posts from a couple of years ago talking about it. You can see it happen on Twitter, say, with “follower counts”.
Over in the world of market research, though, these ideas aren’t quite as commonly expressed and I suspect many wouldn’t agree with them. In this business an aphorism I’ve heard quite a lot is “if you can’t measure it, you can’t manage it”. What Goodhart’s Law is implying is the reverse: “once you measure it, it manages you”.
Let’s assume Goodhart’s Law is at least poking at a greater truth about the unreliability of metrics to assess complex systems. What can decision-makers do about it?
1. Abandon the metrics.
2. Double down on Goodhart’s Law and increase the importance of a metric until removal of the metric would crash the system.
3. Increase the opacity of the metric so gaming it becomes significantly more difficult.
Nitsuh Abebe is one of the smartest guys on the Internet (my patch thereof). In this post he offers a defence - or at least the best explanation I’ve seen - of how snark became a default mode of online discourse.
This bit is really crucial I think (emphasis mine):
Flippancy is more fun. The work of reaching out and explaining things is potentially dull and time-wasting; it’s just plain funnier and more exciting and more gratifying to be on the inside of shared assumptions. (We like talking to friends, not strangers.) The histories of a lot of message boards and comments boxes can be traced out along these lines: they begin with a few people earnestly explaining themselves to one another, finding common assumptions and common ground and welcoming newcomers; then they grow, and their shared assumptions solidify, and they get flip and concise and referential and giggle at newcomers who stumble in and Have to Ask.
This is exactly right, and is what I was getting at in my conference paper a couple of years ago when I talked about the shift from “content motivation” to “social motivation” within communities (of course, Nitsuh and I are thinking primarily of the same community, so beware!).
At an individual level community is all about having interesting conversations and meeting new people and suchlike. But at a group level all those conversations and interactions create a complex system out of which emerges social convention and shared knowledge and the perhaps unpleasant glue of snark. And this happens at a level pretty much beyond the power of one or two individuals to change (well, OK, I’m not sure I believe that, for reasons I’ll come on to).
Anyway, occasionally on social media blogs you get someone saying “LOL Twitter is all about what people had for lunch” and someone else then says “No LOL @YOU because we really need this phatic stuff and it’s good for us.” And snark is kind of like next-level phatic communication - we’ve invented something which combines the two crucial primate group activities of picking shit out of each other’s fur AND chasing other apes off the territory. Good for us! (Seriously!)
You can tell I’ve been reading Herd recently, I’m sure.
Which book also reminded me of Philip Ball’s terrific Critical Mass, and the stuff he writes there about metastable states and their applicability to social behaviour. An example of a metastable state - forgive my rubbish layman’s explanation! - is when water stays liquid well below freezing point, until it gets disturbed, and collapses into its stable state (ice) all at once. What I took from Critical Mass is an appreciation not necessarily for the science of metastability but for the concept as a wonderful metaphor for fragile equilibria.
So what I wonder in relation to communities is - and here’s the research-relevant bit - what if generosity is a metastable state of online discourse? With snark being the stable state - the stronger equilibrium that generosity (by which I specifically mean: welcoming to newcomers) is likely to tip into as a community grows and creates its social glue.
If so this would have some interesting implications. One largely unexamined baseline assumption about communities among marketers is that they stabilise around generosity: with a certain amount of light moderation they are basically self-sustaining. But if generosity is metastable we’d expect its maintenance to require more and more effort as the community grows - with consequences for the cost and energy of running one.
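If you wanted to play with the metaphor, the tipping behaviour can be sketched with a Granovetter-style threshold model - everything here (the thresholds, the community sizes) is invented purely for illustration:

```python
def cascade(thresholds, initial_snarky):
    """Granovetter-style threshold model (my own sketch, not from the post).
    Each member turns snarky once the number of snarky members they can
    see meets their personal tolerance threshold; we iterate until the
    community settles into an equilibrium."""
    snarky = initial_snarky
    while True:
        converted = sum(1 for t in thresholds if t <= snarky)
        converted = max(converted, snarky)  # nobody switches back
        if converted == snarky:
            return snarky
        snarky = converted

# A community with smoothly varying tolerance: thresholds 0, 1, ..., 99.
# One unprompted snarker tips the entire community, one member at a time.
fragile = list(range(100))
tipped = cascade(fragile, initial_snarky=0)    # -> 100: everyone snarky

# A community where everyone tolerates up to four snarkers holds steady...
robust = [5] * 100
held = cascade(robust, initial_snarky=4)       # -> 4: disturbance absorbed
# ...but one more snarker and the generous state collapses all at once.
collapsed = cascade(robust, initial_snarky=5)  # -> 100
```

The “metastable” reading is the second community: generosity survives small disturbances but collapses completely once the perturbation crosses a threshold - just like the supercooled water.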
My friend Mark Sinker has often told me that influence doesn’t exist. Now, in referencing him and his theory you might think I was disproving it. But of course Mark wasn’t saying that the speech or actions of one person don’t affect those of another. Part of his point was that a whole spectrum of real, interesting, analysable effects gets wrapped up in one more-or-less useless banner word: influence.
And this banner word gets a free pass: everyone assumes they know what it means and so they don’t dig into the reality.
For instance - The Beatles influenced Oasis. This sentence seems obvious enough but what does it actually mean? Did Oasis copy the Beatles, envy the Beatles, learn from the Beatles, admire the Beatles in ways that didn’t directly affect what they did? Did they just sound a bit like them sometimes? If Oasis had hated the Beatles, and tried hard NOT to do what they did, wouldn’t the sentence still be true?
Not to mention that the sentence is the wrong way round: it presents the Beatles as the active agent, but actually the Beatles are the OBJECT - it’s what Oasis is doing that’s important. You don’t get to choose to influence someone or not.
So if, as a music critic, I stop myself before I talk about influence, and think about what I actually MEAN, I end up writing better criticism. This is one of the best tips I’ve ever received (thanks Mark!).
INFLUENCE IN SOCIAL MEDIA
Mark’s point has obvious application to social media, where the i-word is bandied around like nobody’s business. Here - particularly in the marketing world - it seems to have a more concrete meaning, namely the ability to get someone to act on a message. But what that action is varies - a mention, a consideration, a purchase. And attempts to quantify “influence” founder not just because of this ambiguity but because of immense variance in the level of impact. The same source mentioning two things in the same category doesn’t necessarily have anything like the same effect.
And that’s before we get into the kinds of negative or ambient or subconscious effects the word “influence” tends to wave away - as surely in social media as in rock criticism.
We have to come back here to the reverse-direction problem Mark identified - influence isn’t always best understood as something you do, it’s something derived - always after the fact - from looking at other people’s actions. Actions which will certainly have changed or shuffled or misunderstood the content.
So to sum up the layers of complexity we’ve got here:
- Influence describes many different types of content transmission.
- Influence results in many different types of action.
- The frequency and intensity of those actions is unpredictable.
- Each of those actions has the potential to alter the content being transmitted.
- Each of those actions has the potential to create a new source of influence and begin the process again.
Put like that it’s no surprise that there’s a certain amount of handwaving around the subject. As Mark Earls suggests in his fine book Herd, what we call “influence” is, like most social effects, an emergent property of a complex system, and it does us little good to try and analyse it on an individual level (for instance by giving individual agents an influence “score”).
THE PHYSICS OF INFLUENCE
What can you actually do, as a researcher? Throw up your hands and despair? Well, as Frank Kogan points out here (in a useful exchange on the subject with Mark S) there’s nothing inherently wrong with fudgey ambiguous words. But as researchers our job is to understand what’s happening, and why, and what the people paying us can do about it, so we should probably avoid ambiguity.
And that means not using the i-word, but using more specific words and concepts which describe what we’re actually observing. Concepts which are already in circulation in research and social media circles - copying, fandom, remixing, groupthink, heuristics, triangulation, and so on. All of which might add up to “influence” but are better understood - and more easily used or taken into account - separately.
The way I like to think of it is as a sort of back-to-front physics. Scientists discovered the existence of fundamental forces, and discovered how most of those forces interact, and hypothesised the existence of a Grand Unified Theory which would explain all the interaction. Whereas what we’ve done is hit on the Grand Unified Theory first, slapped the name “influence” on it, and not really wanted to look too closely under the hood to see how the bits might function. That’s the work that really needs doing.
I read a lot of blog posts which are followed by a series of “great post!” “awesome stuff!” “you’ve done it again!” type comments.
It’s really nice to get those. I’ve had it happen to me, it gives you a warm feeling.
I think, however, that you should delete them. Here’s why:
1. They’re not always sincere: Most comments double as blog links and so commenters are using them as a way to draw attention to their own site, and get your attention too. Nothing at all wrong with that, but if they’re doing it by simply nodding their heads at whatever you post, isn’t that a bit lazy?
2. They reflect badly on your content: What would you rather be creating - content that inspires debate, conversation, expansion - or content that inspires people to say “ditto”? If people can’t think of anything to say beyond “yeah!” then sorry, but you’re probably stating the obvious.
3. They clog up your comment threads: For people who do want to engage with your stuff and join a conversation around it, it’s frustrating when 50% of the responses they read through aren’t adding anything.
4. If people really liked your stuff, they’d be sharing it: Sometimes you do see something so great you think “I can’t add anything to that”. But in that case, why not show your appreciation by sharing it, Tweeting it, linking to it on another blog. Positive comments don’t help your content spread.
5. There are better ways of getting that fuzzy feeling: Of course there’s nothing wrong with wanting to give a good blogger a pat on the back. But there are better mechanisms for doing it than the comment box - a simple “like” mechanism on your blog posts, or a star rating, gives people the opportunity to silently endorse your stuff. Less shy admirers could always drop you an email, or a tweet saying thanks for the good content.
Non-productive positive comments are a lot better than flames, of course, but they don’t honestly add much to what you’re creating, and might subtly detract from it. If deletion’s a step too far, I’d say at least try and build an environment that discourages them.