

(Don’t) tell me something I don’t know: on routinisation & predictable insights in market research.


There is a line I love in Adrian Shaughnessy’s design bible Graphic Design: A User’s Manual. It reads:

“The single most important thing to remember when presenting work to clients is that they are terrified of what they are going to be shown.”

It captures perfectly the fear of the “unpredictable” that prevails in most organisations. Strangely enough, it prevails in market research too.

Even for the professionally curious, it seems the unanticipated is unwelcome.

So unusual are unexpected results that, when they do occur, our scepticism leads us to presume a methodological issue or a data-entry error is to blame, rather than allow that something surprising has actually been uncovered. A 10% change in customer satisfaction scores demands considerable explanation; no change demands none at all.

Predictable “insights” are easier to manage, less disruptive.

This is not the fault of the market research industry, nor is it the fault of research buyers. The preference for predictability is a structural issue: an unanticipated outcome of business cultures and processes that prioritise routinisation and standardisation.

Organisations no longer have time for surprises and a lot of work goes on to ensure that there are none: from the detailed specification of supplier briefs right through to identical office fit-outs on different sides of the world.

As far as possible, the potential for risk is minimised.

This doesn’t matter much when the goal is to curb the finance department’s capacity for creativity. Instead, problems arise when these principles are applied generally to cover the “fuzzier” areas of business like advertising, consumer research or design. These are all areas which rely on interpretation (and creativity), and where there is - or should be! - value attached to results which are unpredictable, unexpected, and maybe even surprising.

Of course, routinisation and predictability can make sense in market research too; there are many good methodological reasons for standardisation in continuous research, for example. But it seems a little perverse to ask the same questions, month after month, year after year, and expect to learn anything new.

Which is where qualitative research comes in, right?! In theory, yes.

Qualitative research and analysis are defined by flexibility: the ability to respond to emerging themes as data collection proceeds, and to pursue the unexpected in conversations and interviews.

Now, think of focus group discussion guides which barely allow the moderator to breathe between the batteries of topics to be covered. Or standardised multi-country qualitative topic guides whose tightly specified prompts and timings constrain the discussion, reducing the room for flexibility in moderation, undermining the role of the moderator, ignoring local cultural contexts and, again, producing predictable responses.

Applied to qualitative research, this preference for standardisation undermines the method’s very purpose: its capacity to deliver “insight” (in the sense of a new or fresh perspective) is hamstrung.

The fundamental problem for qualitative research posed by corporate cultures which emphasise control is that flexibility and improvisation lose out and “insights” become impoverished and predictable.

A diminished appetite (and capacity) for surprise can lead to lower expectations of what constitutes “insight”[1] or, worse, to a preference for conservative insights that conform to our existing expectations or prejudices[2] and thus require less disruption, less change.

Ultimately, “insights” should be unpredictable, unexpected and, at a minimum, insightful. They should tell us something that we don’t know or, possibly, don’t want to know.

You cannot have standardisation AND a fresh perspective or insight.

And if we believe that people’s behaviours and attitudes can be nuanced, ambiguous and situated, then the methods used to explore them should reflect that belief.

Questions, comments to:

Emmet Ó Briain (emmet@quiddity.ie)

twitter: @emmetatquiddity

FOOTNOTES

1 My starting point is that if it’s not insightful, it’s not an insight. Somewhere along the journey of rebranding “market research” as “insights”, the meaning of the word “insight” got lost as the word “statistic” disappeared. See also Phelim O’Leary’s 2011 article on Qualitative Paths in Marketing magazine: http://www.marketing.ie/index.jsp?p=395&n=399&a=673

2 The following factoid was tweeted during the recent Cannes Lions Festival: “Women speak around 7000 words a day - men speak about 2000”, cited as a statistic from Microsoft research presented at the Festival. It was retweeted 18 times, including a retweet from Microsoft Advertising.

It was also tweeted by the @Fact twitter account in April 2013, retweeted 152 times.

It was also tweeted in February 2010 by @OMGFacts to its 5.5 million followers, and retweeted 2,506 times.

The source of this factoid is the book “The Female Brain” by Louann Brizendine, a clinical professor of psychiatry at the University of California. And it’s not at all true: 2007 research found no difference.