Bad Science in the News

The WSJ publishes an article, written by a journalist, that reads like a mashup of Persuasion Blog stories. The focus: observational studies. The writer details the frailties of the approach.

But observational studies, researchers say, are especially prone to methodological and statistical biases that can render the results unreliable. Their findings are much less replicable than those drawn from controlled research. Worse, few of the flawed findings are spotted — or corrected — in the published literature. “You can troll the data, slicing and dicing it any way you want,” says S. Stanley Young of the U.S. National Institute of Statistical Sciences. Consequently, “a great deal of irresponsible reporting of results is going on.”

The writer also quotes John Ioannidis.

That partly explains why observational studies in general can be replicated only 20% of the time, versus 80% for large, well-designed randomized controlled trials, says Dr. Ioannidis.

What’s worse is the rising number of published observational studies across a variety of fields. The WSJ commissioned a simple count of studies using this method in peer-reviewed journals over the past 20 years.

Nearly 80,000 observational studies were published in the period 1990-2000 across all scientific fields, according to an analysis performed for The Wall Street Journal by Thomson Reuters. In the following period, 2001-2011, the number of studies more than tripled to 263,557, based on a search of Thomson Reuters Web of Science, an index of 11,600 peer-reviewed journals world-wide.

The rise of the method is easy to understand. Observational research is the cheapest, simplest, and easiest form of data collection that can still pass peer review. And I’d argue it passes peer review because, without it, many researchers could not publish enough to earn tenure. Experimental work demands a well-developed theory and careful hypothesizing. Observational methods allow a general question or a hunch that can then be confirmed through highly manipulated data. Observational researchers like to argue that they are doing exploratory work that leads the way to new discoveries and theories.
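Young’s “slicing and dicing” complaint can be made concrete with a small simulation. This is my own illustration, not anything from the WSJ piece: generate pure-noise data in which the “exposure” has no real effect, run many arbitrary subgroup comparisons, and watch a handful cross the nominal p < 0.05 threshold by chance alone.

```python
import random
import statistics

random.seed(42)

def z_stat(a, b):
    """Crude z-like statistic for a difference in means (illustration only)."""
    se = (statistics.pvariance(a) / len(a) + statistics.pvariance(b) / len(b)) ** 0.5
    return (statistics.mean(a) - statistics.mean(b)) / se

# Pure-noise data: the "exposure" has no real effect on the outcome.
# Run 100 arbitrary subgroup comparisons; some will look "significant" anyway.
n_comparisons = 100
significant = 0
for _ in range(n_comparisons):
    exposed = [random.gauss(0, 1) for _ in range(50)]
    unexposed = [random.gauss(0, 1) for _ in range(50)]
    if abs(z_stat(exposed, unexposed)) > 1.96:  # nominal p < 0.05 cutoff
        significant += 1

print(f"{significant} of {n_comparisons} null comparisons cross p < 0.05")
```

On average about 5 of the 100 comparisons will look “significant” even though nothing real is there, which is exactly why a hunch confirmed by cutting the data enough ways proves very little.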

Jan Vandenbroucke, a professor of clinical epidemiology at Leiden University in the Netherlands, dismisses some of the drawbacks of observational studies, saying they tend to be overblown. He notes that even controlled trials can yield spurious or conflicting results. “Science is about exploring the data . . . it has a duty to find new explanations,” he says. “Randomized controlled trials aren’t intended to find any explanations.”

Let me repeat that last sentence from Vandenbroucke.

“Randomized controlled trials aren’t intended to find any explanations.”

That’s a great illustration of a Dysquotation: a statement you make that someone else reproduces and you wish they hadn’t. We may be a bit lost in translation here, but claiming RCTs don’t find explanations is . . . ahhh . . . misleading. Experiments like these provide the strongest possible evidence about a hypothesis. If the professor is quibbling over the meaning of “explanation,” then he’s just playing word games with a journalist, games he’d never try in a room full of researchers.

Please read this article for its uncommon depth of thinking and its careful weighing of reasoning and evidence. It’s one of the best examples of popular journalism I’ve encountered, and it fairly presents the limitations of and criticisms aimed at observational research.