Just saw a headline telling me there’s this great study showing change in TV viewing habits! Wow. They did a great job alerting the media! (Here’s the article about the study.)
We need to be more skeptical about articles reporting research – especially about media change. Let’s start with whether the company sponsoring the research has skin in the game.
In this case – not just skin. The study is funded by blip.tv, which offers “The Best in Original Web Programming”. These guys are ALL about increasing consumer viewing of programming that’s developed solely for the web. And if they had reported anything other than what they reported, we’d all be shocked. Not a good start.
And, let’s remember the fundamental humanity here. Their CEO’s job (and all the employees’ jobs as well) depends on finding the consumer behavior the study reports. I can’t imagine them getting any more VC money unless the study reached these conclusions. And the research vendor might not get more work if the study results are unsatisfactory.
The study may be completely accurate. But we who read the study need to remember there are big stakes for them in finding exactly what they found.
Maybe that explains the more minor points that are self-serving. Check this one out:
The research showed that 43% of audiences had a positive reaction to advertising in front of original web series content. However, when asked the same question about advertisements in front of television content streamed online, users were less receptive, with only 30% reacting positively.
Is this true? I doubt it. It’s a very strange question asked entirely out of context. We know from a half century of TV research that something like this is extraordinarily hard to judge and these answers are probably wrong.
What about the blip.tv headline conclusion? No way to know if they’re right or wrong. My guess is that they are wisely shifting the discussion from “cutting the cable” to “trimming back the cable”. And they’re probably accurate about the times people watch blip.tv, because those are just web stats.
Did they find this new behavior? I doubt if the study can be used to reliably judge whether they did or didn’t.
And as a TV guy, I’ll offer a different theory for the watching times. People probably watch online programming earlier because the stuff on traditional TV is better (for the vast majority) but doesn’t air until primetime.
We’ll never know whether or not we’re being played so their VC’s can make a load of money in a future company sale or IPO.
Here are some specific skepticisms I encourage when reading ANY research article… These skepticisms apply to all research, but they are particularly important for new media because most of this research is so fresh it must be funded by the people it benefits most. And given the vast profit potential involved, that has led to very serious abuse of research as a communication method.
Don’t accept the research until you know the methodology used, the specific questions, and the population surveyed. Were these online interviews, phone interviews, behavioral observations, online tracking, or even part of a bigger omnibus survey? How many were interviewed? Who interviewed them, and what population was used as the base for finding these people? It’s CRITICAL to know this to judge whether the data has validity, and it’s a bad sign when it’s not revealed. (For blip.tv, the press coverage tells us nothing, but some of this background is in the press release which you can eventually find.)
Watch out for relative percentages (e.g. “50% more likely”). Medical studies are the absolute worst here. They’ll study a behavior like eating cheese with 10,000 people (5,000 who eat 1 pound per week and 5,000 who eat none). Then, when 2 of the cheese eaters die and 3 of the non-eaters die, they write up a conclusion that this “huge” study (10K participants) shows “you are 50% more likely to die if you don’t eat a pound of cheese each week”. Riiiiight. Of course, we’ll usually find it’s funded by some dairy- or cheese-related council.
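To see why that headline number is so misleading, here’s a minimal sketch of the arithmetic using the hypothetical cheese numbers from the paragraph above (they’re illustrative, not from any real study). The relative figure sounds alarming; the absolute difference is tiny.

```python
# Hypothetical numbers from the cheese example above (not real data):
# 5,000 cheese eaters with 2 deaths; 5,000 non-eaters with 3 deaths.
eaters, eater_deaths = 5000, 2
non_eaters, non_eater_deaths = 5000, 3

risk_eaters = eater_deaths / eaters            # 0.0004 -> 0.04%
risk_non_eaters = non_eater_deaths / non_eaters  # 0.0006 -> 0.06%

# Relative risk increase: the headline-friendly number.
relative_increase = (risk_non_eaters - risk_eaters) / risk_eaters

# Absolute risk difference: the number that actually matters to a reader.
absolute_increase = risk_non_eaters - risk_eaters

print(f"Relative: {relative_increase:.0%} more likely to die")      # 50%
print(f"Absolute: {absolute_increase:.2%} higher chance of death")  # 0.02%
```

Same data, two honest ways to describe it – and only one of them makes a press release. Whenever you see a relative percentage, ask what the underlying absolute rates were.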
Did survey participants understand the questions the way the researcher intended? Boston Consulting Group published a recent study about tablet operating systems that asked people which OS they wanted on tablets (Windows won). This is absurd research. Consumers only know one operating system (Windows) and make no distinction among PC, mobile phone, and tablet operating systems. They don’t even know Android is an OS, and they won’t recall the blandly named iOS from Apple. So this study got a reflexive response: consumers will answer any question whether they understand it or not. This study should be thrown in the round file. (BCG should know better.)
Were the questions designed to elicit the answers? There are some research companies who are very sophisticated at creating questions that will get the answers they want. But most company sponsored studies don’t report the wording they used. There’s interesting analysis of poor wording in the political world at Nate Silver’s blog. Here’s a sample of his work talking about wording in a Wisconsin opinion poll. (link here).
Conflicts of interest. EVERY study should be considered with full knowledge of the economic benefit that the answers deliver to the sponsor of the study. This IN NO WAY suggests that the answers are wrong merely because the research is sponsored. Rather, it should increase our interest in looking critically at the study before accepting the answers.
So two points of advice…
If you’re publicizing research you’ve conducted, do it right and report the strong background that supports it. This increases your credibility and makes it much more likely that people will pay attention to your efforts. (blip.tv and BCG should have done this, and their failure to do so is concerning.)
For everybody else, when you’re reading yet one more over-the-top headline (or that headline that supports your belief very strongly)…take a deep breath and ask these important questions to see if the headline claims are even related to truth.
Copyright 2011 – Doug Garnett – All Rights Reserved