For as long as there has been a research industry, there has been a large and appreciative audience for clients telling us what’s wrong with it. Ten years ago, one of the many things wrong with it was “black boxes”. Big agencies were offering too many proprietary techniques which relied on these boxes - you could feed data in and get metrics out, but the process by which the one became the other was mysterious and closely guarded.
My first job in research was working on these black boxes. Some of them had sold rather well in the 90s - their promise to reduce complex data to simple metrics was an obviously appealing one, and that meant their mysterious elements seemed clever rather than shady. But the tide gradually turned. In a research industry full of proprietary metrics it was hard for particular scores or tools to gain the necessary authority, and it could also be tough to persuade people higher up in a client organisation that black box techniques were measuring anything concrete, rather than offering a convenient metric which related to little but itself. Most fatally, proprietary techniques were criticised as excuses to increase the margins on research work, rather than adding any value to what was being delivered.
But at the exact same time the black boxes fell out of fashion within the research industry, they entered a remarkable golden age elsewhere. Google’s rapid rise to domination in the search engine sector focused attention on its PageRank algorithm, a proprietary black box at the heart of its success. An entire industry grew up around understanding, second-guessing, and manipulating this algorithm. And after PageRank came many more algorithmic black boxes - Facebook’s EdgeRank and Klout’s Klout Score being two obvious contemporary examples.
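The idea at the core of PageRank is public, even if Google’s production ranking is not: a page’s score is the probability that a “random surfer”, who mostly follows links but occasionally jumps to a random page, ends up there. A minimal power-iteration sketch of that published idea (purely illustrative - the real system is far richer) looks something like this:

```python
def pagerank(links, d=0.85, iterations=50):
    """Simplified PageRank by power iteration.

    links maps each page to the list of pages it links to.
    d is the damping factor: the chance the surfer follows a link
    rather than jumping to a random page.
    """
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start uniform
    for _ in range(iterations):
        new = {p: (1.0 - d) / n for p in pages}  # random-jump share
        for page, outlinks in links.items():
            if outlinks:
                share = d * rank[page] / len(outlinks)
                for target in outlinks:
                    new[target] += share
            else:
                # dangling page: spread its rank evenly over all pages
                for p in pages:
                    new[p] += d * rank[page] / n
        rank = new
    return rank

# Toy graph: a links to b and c, b links to c, c links back to a.
graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
scores = pagerank(graph)
# "c" ends up with the highest score - both other pages point at it.
```

The point for the black-box argument is that even this toy version produces a single authoritative-looking number per page, while the inputs and damping choices that shaped it stay invisible to anyone reading the score.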
These black box techniques are successful in ways the old research ones never were. They are more sophisticated, to be sure. But they are still offering metrics which are intentionally opaque, and which as a consequence relate mostly to themselves. So can we say the new black boxes are more accurate?
Well, there’s the issue. The algorithm begins as a tool to abstract metrics from data - over time the metrics it creates become primary data themselves. The algorithm doesn’t become more accurate; ‘accuracy’ changes to reflect the algorithm. People believed Google’s PageRank was a reflection of a page’s importance, and it became one. If people continue to believe a Klout score is a reflection of a person’s influence, it will become one. What the successful new black boxes demonstrate - to paraphrase Karl Marx (Klout Score 81) - is that an algorithm’s success depends not on its power to interpret reality, but on its power to change it.
I was doing a bit of thinking today about engagement in online communities. Obviously “engagement” is an overloaded word and there’s plenty of debate around what it even is. So for my purposes I’m going to say “community engagement” means “being interested enough in a community to contribute to it”.
This is rightly seen as an important goal in market research. If you’re recruiting 1000 people to take part in a community and only 100 actually do, something seriously inefficient is going on. If you can get that proportion up to, say, half, things seem a lot more respectable.
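The arithmetic behind that worry is simple enough to sketch (the function name here is just for illustration):

```python
def participation_rate(contributors, recruited):
    """Share of recruited community members who actually contribute."""
    return contributors / recruited

# The example above: 1000 recruited, only 100 contributing.
print(participation_rate(100, 1000))  # 0.1 - one in ten
print(participation_rate(500, 1000))  # 0.5 - the more "respectable" level
```

At a 10% rate you are effectively paying to recruit ten people for every one who gives you data, which is why pushing the proportion towards half matters.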
So engagement in this sense matters. But the next question is: what are people engaging with?