More on them pesky polls



No, not that kind of pole


Nor this kind.

No, it’s the pesky opinion pollsters who are now being pointed at and laughed at by so many, following their collective failure to predict the big outcome – derision in the media, in politics and in the workplace (let’s be honest, it’s been a great opportunity for anyone with a grudge against market research to have a kick at the broader industry).

Which, as David Penn points out, is more than a little unfair to people like him, John Kearon and the irresistibly innovative Brainjuicer crew, Ray Poynter and Neal @northresearch and many others, who’ve all been party to reworking the practice of other forms of market research based on the wide range of insights from the cognitive and behavioural sciences (while the pollsters simply learned from themselves and tried – within very narrow parameters – to improve their predictive powers marginally).

This tells us something interesting about the nature of innovation and – yes – copying.

Copying tightly, or copying only your immediate competitors, may just keep you up with them – the pack – but real advances come from looking far away from your home domain: to cognitive neuroscience, to crowd-based models (within their known limitations…) and to other behavioural sciences (e.g. the convergence of social learning, diffusion science and networks that I’ve been digging into in recent years).

It won’t come from looking left and right, up or down – in the immediate ‘hood. It’ll come from looking further away and copying from there. Recontextualising responses to similar problems to create novelty and improved performance.


Put simply, what’s gone wrong is this: the big polling beasts – the self-declared “gold standard” of the market research industry – have demonstrated that they are anything but. They have failed to embrace new learning from outside their field. They have merely copied each other in what they’ve thought of as a finite game. And carried on looking for advances in their tried and trusted fields of sampling, weighting and question formulation…

What’s more, they’ve let themselves down with some basic bad research practices: there’s even evidence emerging that – like their US cousins – they suppressed some findings and HERDED (god forbid) for fear of being branded an “outlier” by the scourge of pollsters, Nate Silver of FiveThirtyEight.

Final thoughts

I’m also a little bemused that the neuromarketing community haven’t leapt in to solve the pollsters’ problems – maybe it’s a case of discretion being the better part of valour. Certainly, those like Conquest and Brainjuicer who’ve adapted the insights from this world more broadly have made more ground than those still excited by method – galvanometers and MRI scanners.

However, when all’s said and done, it’s perhaps worth remembering this little beauty of an observation from New Scientist:

“But maybe we shouldn’t be too hard on the pollsters. According to the BBC, the Conservatives are on course for a 37 per cent share of the vote versus Labour’s 31 per cent. With many pre-election polls pegging both parties at 34 per cent, that is within the typical 3 per cent margin of error. Perhaps this time polling firms just got extremely unlucky.”
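For context on that “typical 3 per cent margin of error”: under a simple random sample, the 95 per cent margin of error on a vote share p is roughly 1.96·√(p(1−p)/n). A minimal sketch of where the 3 per cent comes from, assuming a notional sample of 1,000 respondents (an illustrative figure – actual sample sizes and designs vary by pollster):

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for a proportion p from a simple random sample of n."""
    return z * math.sqrt(p * (1 - p) / n)

# Both main parties polling around 34% in a sample of ~1,000:
moe = margin_of_error(0.34, 1000)
print(f"{moe:.1%}")  # prints "2.9%" – close to the quoted 'typical 3 per cent'
```

Real polls use quota samples and weighting rather than simple random sampling, so effective margins of error are usually somewhat larger than this textbook figure – which only strengthens the New Scientist point.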

More on unreasonable expectations of precision shortly (as soon as I can dig out that old piece I did on “Were you still up when Bob called it for Kerry?”) and (happy days!) revisit some fabulous old Ehrenberg papers on data reduction.

Both important when learning how to copy well.



BTW, for completists: here’s a nice piece by Neal Cole of North Research reviewing the various methodological issues raised by this; another by Brainjuicer’s Tom Ewing on the Blackbeard blog about the same stuff; and one from Conquest’s own brainbox, David Penn, who quite rightly points out that if you look at the underlying System 1 (affective) responses to contenders, the differences have provided – and can provide – a much better indication of what’s likely to happen. Not – it should be said – an absolutely precise measure, but a better indicator.