Rule: You Cannot Persuade a Falling Apple.
A Falling Apple is a metaphor for Science, Truth, Ultimate Ground of Being, Universal, Eternal, Certain, Unchangeable, and on and on. Stated another way, Falling Apples are Laws and Persuasion is Rules.
From this we infer:
1. You Shouldn’t Try to Persuade a Falling Apple. Or don’t bring poetry to a science fight.
2. If You Have Falling Apples, Don’t Persuade. Again, don’t use poetry when you’ve got science.
Working from this Rule and with cheerful assistance from others – most notably any Rule that touches Sincerity – let’s consider how to persuade and not persuade about science. Assume you have scientific information and you want to use it to advance your TACT (maybe a very specific behavior, maybe a more general cause). How should you proceed? Consider this (pdf).
Jakob Jensen ran a simple experiment that exposed readers to science information presented by a journalist in an online news story, just like one of those news stories you read on a daily basis about sun spots, global warming, cancer cures, speed of light, and on and on. Jensen manipulated three variables to create unique conditions, then randomly assigned participants to just one condition.
All individuals (N= 601) in a 2 (hedged vs. not hedged) × 2 (primary scientists vs. unaffiliated scientists) × 5 (message) between-participants experiment were randomly assigned to 1 of 20 conditions.
Hedging means qualifying, nuancing, ifs, ands, and buts. It’s the opposite of certainty, consensus, absoluting. Here’s how Jensen did it.
. . . the amount of hedging in the article was varied to create two conditions: hedged and not hedged. Hyland (1996) argued that hedging can be lexical (e.g., single words or phrases like may, could, might) or discourse based (i.e., entire sentences describing limitations of a study). Scientists seem to be more concerned about the latter (e.g., Schwartz & Woloshin, 2004); thus, the present study added or subtracted discourse-based hedging from the manipulations. The not-hedged condition was constructed by adding a single sentence conveying scientific uncertainty: a stock phrase stating that “it was too early to make definitive claims and that more research needed to be done.”
Catch that last faux-hedge. All stories included the familiar “more research is required” pledge. The actual hedge went beyond that old soft shoe and added the “discourse based” qualifications. Now, the second IV.
Second, the source of the hedging was manipulated. The hedging was either attributed to the scientist(s) responsible for the research (the “primary scientists” condition) or to a contrived scientist unaffiliated with the project (the “unaffiliated scientists” condition).
Okay. The stories contain quotes from two scientists. First, the primary scientist who actually did the research. Second, an unaffiliated scientist who is knowledgeable about the research, but did not actually do it. All of this hedging then occurred in . . .
a news article (embedded in a Chicago Sun-Times web page; all news articles appeared to come from the online version of that newspaper).
And, just to dot the “i’s” and cross the “t’s,” Jensen used five different stories to control for message effects.
. . . five different cancer stories taken from real news publications.
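Putting the three manipulated factors together, the full design can be sketched in a few lines of Python. This is a minimal illustration, not Jensen’s actual procedure; the condition labels and story names are hypothetical stand-ins for his real stimuli:

```python
import itertools
import random

# A sketch of Jensen's 2 x 2 x 5 between-participants design.
# Labels are illustrative placeholders, not the actual stimuli.
hedging = ["hedged", "not_hedged"]
source = ["primary_scientist", "unaffiliated_scientist"]
stories = [f"cancer_story_{i}" for i in range(1, 6)]

# Crossing the three factors yields every unique condition.
conditions = list(itertools.product(hedging, source, stories))
print(len(conditions))  # 20 unique conditions

# Each of the 601 participants is randomly assigned to exactly one condition.
random.seed(0)
participants = {pid: random.choice(conditions) for pid in range(601)}
```

The crossing is what produces the 20 cells the next paragraph counts up; random assignment of each reader to a single cell is what licenses the causal reading of the results.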
This creates 20 unique conditions: 2 (hedged vs. not hedged) × 2 (primary vs. unaffiliated scientists) × 5 (different cancer stories). Each reader got only one combination, read the story, then rated it on a variety of outcomes, but most importantly on the trustworthiness of the scientist making a claim. Now, since we’re doing all this work, you’ve got to bet there’s an interaction, at least between Hedge (yes or no) and Source (primary scientist or unaffiliated scientist). And . . .
. . . there was a statistically significant Hedging × Source interaction, F(1, 4) = 15.47, p = .01, partial η² = .79. To better understand the interaction, a simple main effects analysis was carried out (see Table 1 for means and standard deviations). The analysis revealed that hedging influenced trustworthiness ratings for participants in the primary scientists condition, F(1, 597) = 7.31, p < .01, partial η² = .49, but not for those in the unaffiliated scientists condition, F(1, 597) = 0.21, p = .64. Participants in the primary scientists condition rated scientists as more trustworthy when they were attributed higher levels of scientific uncertainty.
How about that. Primary Scientists who hedge, qualify, or nuance their results are more trusted by readers. And this only occurs with the Primary Scientists. Those Unaffiliated Scientists brought in for perspective and objective comment don’t need hedging. Here’s a table of means to demonstrate.
Make sure you get this. When the primary scientist, the source of the scientific information, is presented with hedged statements, readers rate that scientist as more trustworthy. Hedging did not affect trustworthiness perceptions of the unaffiliated scientist. Readers laser in on the lead source and trust experts who qualify over experts who are certain. And realize the size of this Windowpane. An eta squared of 50% is a Large Windowpane, a 25/75 effect, as obvious as a hammer on your thumb. Readers show wildly greater trust for scientists who don’t come across as arrogant, imperious, and certain experts.
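If you want to check those effect sizes yourself, partial eta squared can be recovered from a reported F statistic and its degrees of freedom using the standard identity η²ₚ = (F · df_effect) / (F · df_effect + df_error). A minimal sketch, applied to the interaction statistic quoted above:

```python
def partial_eta_squared(f_value, df_effect, df_error):
    """Recover partial eta squared from an F statistic and its
    degrees of freedom:
        eta_p^2 = (F * df_effect) / (F * df_effect + df_error)
    """
    return (f_value * df_effect) / (f_value * df_effect + df_error)

# The Hedging x Source interaction reported in the study: F(1, 4) = 15.47.
print(round(partial_eta_squared(15.47, 1, 4), 2))  # 0.79
```

Run against the reported interaction, the identity reproduces the quoted partial η² of .79, which is the calculation behind the Windowpane claim in the paragraph above.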
Now. We’ve seen this effect before in other research discussed on the Persuasion Blog. Remember this?
Turns out that the combination of Credibility and Certainty generates WATTage. If you are the Master of the Domain, add a little bit of unCertainty to your pronouncements. Hedge, qualify, maybe, kinda, sorta, could be.
Hedged experts generated more thoughtful attention to their claims. And this one.
Just say exactly what you know. If you know something to the decimal, nanosecond, or angstrom, then say it that precisely. If you know something with a confidence interval a mile wide and a day long, then say it with that range. Say what you know and stick to that. If someone asks for more precision, repeat your original warning and add that that is the best science can do right now. Above all, do not exaggerate the claim in the name of communicating risk to stupid people.
Hedging Experts generate enormous persuasion advantages. The Other Guys trust them more, are more likely to think about their claims, and are more likely to accept recommendations. By contrast, the stereotyped imperious know-it-all expert generates resistance, skepticism, and doubt.
Pivot now with this information and think about much of the Bad Science I critique on the Persuasion Blog. Focus on, say, climate change. Among the persuasion failures of these advocates – sincerity, baby, sincerity – you can now add the failure of certainty. They know without exception, exemption, or excuse that Mother Earth is headed to hell in a handbasket because of human greed and waste, and if we don’t do exactly what they tell us to do, hell in a handbasket will be a walk in the park compared to what happens next.
And no one believes them.
Do you understand why this occurs? Jensen’s research (along with the other examples) demonstrates that expertise can backfire on itself. The knowledge an expert possesses can produce a boomerang where Other Guys resist, ignore, or denigrate that knowledge because the expert is certain, absolute, impervious to doubt. Most Other Guys in most instances recoil from those who brook no question, nuance, or alteration. Part of this response is Reactance, part of it is experience, part of it is humility; absolute pronouncements from experts make us nervous, dubious, and suspicious.
Now, Jensen found more than I’m reporting here. Chase down the research and read more about it. He did a first class job in this experiment and it’s worth reading more and ruminating over.
But, take the main point. True expertise is graceful, open, and tolerant. It gives room to alternatives and encourages discussion no matter how naïve, unsophisticated, or crude. And doesn’t that ring true? If you know you have Falling Apples why do you care that Other Guys resist? The truth will bounce off their heads one day.
Jensen, J. D. (2008). Scientific uncertainty in news coverage of cancer research: Effects of hedging on scientists’ and journalists’ credibility. Human Communication Research, 34(3), 347–369.
P.S. See the next PB post with an example torn from the pages of today’s journals and newspapers.