Today’s demonstration of persuasion science focuses on persuasion campaigns aimed at changing large groups of people in natural, nonlaboratory settings – in other words, the real world. I’ll spotlight research I’ve worked on because I know it well, it survived peer review for publication in scientific sources, and it has been cited by other researchers who don’t know me from Adam, Eve, or the serpent. It seems to work.
A persuasion campaign is a designed set of messages, targeted at a group of receivers living in their natural world, aimed at producing the changes in thoughts, feelings, and actions that a small group of sources wishes to obtain. Persuasion campaigns operate in the natural marketplace of information, using the same resources available to other public communicators (politics and government, advertising and sales, public relations, etc.) and competing for the attention of the same citizens. Persuasion campaigns are part of the noisy free speech discourse that forms the framework for American democracy.
Now, do persuasion campaigns actually work? The common sense answer is, well, since people do them they must work, right? That kind of logic drove Wall Street off its most recent cliff, so you realize that simply because people do something doesn’t mean it works. Lots of people do marriage, and a lot of those fail, too. We need to get more systematic to understand this.
Leslie Snyder conducted one of the best summaries of persuasion campaigns focused on health and safety behaviors. In particular, she did a meta-analysis, which means every included campaign reported real data that actually measured change. At the time she published this (2002), she and her team had located 47 such comparisons in peer-reviewed journals.
From those 47 tests Snyder et al. reported an average change, expressed as a correlation, of .09. In Windowpane terms this corresponds to an approximate 45/55 effect, a “small” effect size. And, in this instance, the correlation also corresponds to the absolute percentage difference between treatment and control, thus we have about 9% improvement on average.
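If you want to see the arithmetic behind that 45/55 translation, here’s a minimal Python sketch. I’m assuming the standard binomial effect size display here (50 plus or minus half the correlation in percentage points), which is how these Windowpane-style splits are conventionally computed.

```python
# Convert a correlation into a Windowpane-style success split,
# assuming the standard binomial effect size display: 50 +/- 100*r/2.
def windowpane(r):
    control = 50 - 100 * r / 2    # control group success rate, in percent
    treatment = 50 + 100 * r / 2  # treatment group success rate, in percent
    return control, treatment

control, treatment = windowpane(0.09)
print(f"control {control:.1f}% vs. treatment {treatment:.1f}%")
# control 45.5% vs. treatment 54.5% -- roughly that 45/55 split
```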
Let’s just think about this for a minute.
First, the effect is positive. Persuasion interventions can produce desired change in natural populations living normal lives. Second, the effect is not staggering, obvious, huge, Holy Mackerel, but rather is small, meaning it is detectable above the normal variation of living things, but you need a statistician to find it (or invent it!). Persuasion campaigns are not Magic Bullets. Third, since the average is about 45/55, that means some interventions produced larger effects and others smaller. What about that variation?
For me, this is the crucial insight you have to make if you want to understand how to do persuasion campaigns. The “average” campaign produces a small effect which means if you run a campaign you can make a difference, but there is variation around this average which means if you run your campaign badly, you can have worse outcomes. Let’s look at a table that displays the variation in those 47 tests.
Effect Size      N
.00 – .05       17
.06 – .10       14
.11 – .15        8
.16 – .20        5
.21 or higher    3

Get oriented. The “Effect Size” column offers categories of effects (expressed as the Pearson correlation) that are .05 wide. The “N” column shows how many tests fit in that category. Thus, there were 17 tests that showed effect sizes between .00 (zero, right?) and .05. There were 5 tests between .16 and .20.
In looking at this, the first thing I observe is that it is not anything remotely approaching a normal distribution with that lovely bell-shaped curve showing the fat middle and skinny tails. Look, 31 (17+14) of the 47 tests are less than the average! By contrast, only 16 (8+5+3) are above average. Thus, most campaigns have effects that are functionally zero and certainly less than small. Yet, about one third of the campaigns are better than average. Thus, while the “average” effect is small, two thirds of campaigns don’t hit it. And, if you want to be real grumpy about this, only the top 8 tests seem to demonstrate powerful interventions; that’s a success rate of 17%. As an executive summary, the headline here is not real encouraging: Sure, the “average” is small, which means you actually can change things, but most campaigns don’t hit that, and if you want serious, obvious change you’ve got less than a 1 in 5 chance of succeeding.
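If you want to check my grumpy arithmetic, here’s a quick Python tally. The counts come straight from the table above; the bin labels are just my shorthand.

```python
# Snyder's 47 tests, bucketed by effect size (counts from the table above)
bins = {".00-.05": 17, ".06-.10": 14, ".11-.15": 8, ".16-.20": 5, ".21+": 3}

total = sum(bins.values())                  # 47 tests
below = bins[".00-.05"] + bins[".06-.10"]   # 31 at or below the .09 average
above = total - below                       # 16 above it
big = bins[".16-.20"] + bins[".21+"]        # the 8 clearly powerful tests

print(f"{below / total:.0%} below average")  # 66%
print(f"{above / total:.0%} above average")  # 34%
print(f"{big / total:.0%} hit it big")       # 17%
```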
How do you hit that higher success rate? My own experience with doing these things and my reading of the research literature leads me to what I call the Standard Model. Let’s look at one set of studies I did on using persuasion campaigns to change nutrition behavior. It’s important to consider the set rather than any one paper, study, or finding because the set reveals the pattern of the Standard Model. (Read more about it on the Primer pages SM1, SM2, and SM3.)
You can read the gory details in the publications, but for this post, please take my word for it. We wanted people to switch from high fat milk to low fat milk (1% or Less!). Across highly similar West Virginia towns, we ran four tests of a treatment versus a control. The treatment and control towns were far away from each other and outside of each other’s (small) media markets. We also drew random samples of people in each community for pre and post testing, so we had a quasi-experimental, pre-post design, repeated four times.
In each treatment, the “persuasive message” was always the same (based on extensive message testing, also called formative research). What we varied was the channel that carried the message. We created three types: Ads, PR, and Education. Ads were purchased and clearly identified as advertising placed in local TV, radio, and papers. PR comprised events we staged to draw free local news coverage in TV, radio, and papers. Education was the standard face-to-face community organizing at meetings, churches, associations, etc. Here’s how the treatment towns got different combinations of channels.
Treatment 1 (paid ads, PR, and community education)
Treatment 2 (paid ads and PR)
Treatment 3 (PR and education)
Treatment 4 (paid ads)
The key behavior outcome was switching. At pretest we asked participants in treatment or control to tell us what kind of milk they used. At posttest we asked the same participants the same question and computed “switching” as the difference between pre and post. (This is one helluva difficult way to measure switching. Most folks would dispense with the pretest and just ask at posttest whether you’d switched in the past couple of weeks. If you don’t see the difference between defining switching as our pre-post test versus just asking, you are not a good researcher and will someday face embarrassing public proof of it.)
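To make that definition concrete, here’s a small Python sketch of the pre-post computation. The toy data and column names are hypothetical, just to show the logic; the actual coding in the published papers may differ.

```python
import pandas as pd

# Hypothetical panel: one row per participant, milk type reported at
# pretest and at posttest ("high" = whole or 2%, "low" = 1% or skim).
panel = pd.DataFrame({
    "town": ["treat", "treat", "treat", "control", "control", "control"],
    "pre":  ["high",  "high",  "low",   "high",    "high",    "high"],
    "post": ["low",   "low",   "low",   "high",    "high",    "low"],
})

# A switcher is someone who reported high fat at pretest and low fat
# at posttest -- the same person, measured twice.
panel["switched"] = (panel["pre"] == "high") & (panel["post"] == "low")

# Compare switching rates among those who could switch (high fat at pre).
at_risk = panel[panel["pre"] == "high"]
rates = at_risk.groupby("town")["switched"].mean()
print(rates)
print("improvement:", rates["treat"] - rates["control"])
```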
Let’s get to the results.
Remember the Snyder meta found an average 9% improvement. Across our 4 tests, our average improvement was 19.4%, more than twice the meta average. And, more interestingly, there is clear variation in switching depending upon the combination of channels. The combination of ads and PR produced a nearly 30% improvement, compared to the other two combinations, which produced an average 9% improvement.
And, if you look at the last column, Reception Rate, you see why. We got more Reception with the paid/PR combination than with the others. Thus, the Standard Model produced average results that were twice as good (19% versus 9%) as the meta, and we could enhance that advantage to a tripling (30% versus 9%) through smart Reception planning.
Of course there’s a lot more going on here.
With Treatment 1 we also collected supermarket milk sales data at three times: pre, post, and six months later. Control milk sales did not vary practically or statistically over the 3 periods. Treatment sales showed a big increase pre to post in low fat milk sales (yeah, baby) that was maintained six months after we ended the campaign (hell yeah, baby). This sales data effect is exactly what you’d expect if people took the Central Route to attitude change based on our campaign messages. Central Route change produces effects that persist over time, and that’s what the sales data at six months demonstrate.
Also with Treatment 1 we collected full Standard Model data (Reception to Processing to Response to Behavior) and analyzed it with structural equation models (path analysis). Our message testing indicated that people’s decision to switch was driven by their Attitude (cost, taste, enjoyment) and not their Norms (what other people think they should do). As a result, our campaign messages offered strong Arguments that focused on Attitude (e.g., low fat doesn’t cost any more than whole), but no Norm Arguments (your friends drink it and you should, too). So, we expected a path model to show that the Treatment changed Attitude (not Norm), which changed Intention, which changed Behavior.
Here’s the model with path regression weights.
Treatment -- .31 --> Attitude -- .48 --> Intention -- .56 --> Behavior
Treatment -- .00 --> SubNorm  -- .00 --> Intention
Exactly what we predicted. The treatment messages changed Attitude (regression weight of .31) but not Norm (.0). Then Attitude changed Intention (.48) with no effect from Norm. Finally, Intention changed Behavior (.56). The overall model (multiple R = .71) explained over 50% of the variance (.71 squared is about .50) and had a Goodness of Fit index of .999.
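For readers who want to see what path analysis means mechanically, here’s a minimal Python sketch. It estimates each link as a standardized regression, the classic way to get path weights; the variable names and simulated data are mine, not the study’s, and the recovered weights will only roughly match the published ones.

```python
import numpy as np
import statsmodels.api as sm

def path_weights(y, X):
    """Standardized regression weights for one link in the path model."""
    Xz = (X - X.mean(axis=0)) / X.std(axis=0)
    yz = (y - y.mean()) / y.std()
    return sm.OLS(yz, sm.add_constant(Xz)).fit().params[1:]

# Simulate data that follows the predicted causal chain, just to show
# the machinery: Treatment -> Attitude -> Intention -> Behavior,
# with SubNorm deliberately unaffected by Treatment.
rng = np.random.default_rng(1)
n = 500
treatment = rng.integers(0, 2, n).astype(float)
attitude  = 0.31 * treatment + rng.normal(size=n)
norm      = 0.00 * treatment + rng.normal(size=n)
intention = 0.48 * attitude + 0.00 * norm + rng.normal(size=n)
behavior  = 0.56 * intention + rng.normal(size=n)

print(path_weights(attitude, treatment[:, None]))    # Treatment -> Attitude
print(path_weights(norm, treatment[:, None]))        # Treatment -> SubNorm (~0)
print(path_weights(intention,
                   np.column_stack([attitude, norm])))  # Attitude, Norm -> Intention
print(path_weights(behavior, intention[:, None]))    # Intention -> Behavior
```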
Here’s another way to display the Response change.
There was a “large” difference in Intention to switch (3.2 versus 2.2), a “medium” difference in Attitude (41.8 versus 35.5), and a “small” difference in Norm. In most published research, that small Norm difference would be the biggest effect the campaign produced, but within this Standard Model it is a piddling effect that arises largely from the strength of the other variables. In other words, if you dropped Intention and Attitude from the analysis and just looked at Norm, it would not even be statistically significant.
Okay, there’s a lot going on in this post and a lot of it is that Stat Geek Speak. Like I mentioned earlier, you can read the details, but for now you have to take my word that I’m accurately reporting the findings. I can understand your suspicion and frankly, I strongly encourage it. Read the original reports.
Now, what’s the practical payoff with all this mumbo jumbo?
First, assuming it is an accurate reflection of the literature, the Standard Model has the strongest proof of life available. It works and we know why it works.
Second, structured, planned, and careful persuasion produces powerful practical change. If you know what you are doing, the Standard Model makes you dangerous. You can change freely choosing people in their natural environment. You don’t need tricks or games or power. Just smart, well structured persuasion.
Third, the Model not only helps with planning and execution, but it really helps with evaluation and modification. When you get “bad” results from an intervention, you need to figure out why you are failing. The Standard Model not only tells you what to do, but explains what is going wrong when something does go wrong. Are you really generating Reception? Are those really strong Arguments, or are you kidding yourself? The Model provides contours and will show you the cliff you just fell off. Of course, you’re hitting the ground right now, but you know you’ve got a cliff, why it’s there, and what to do to handle it in the future.
Fourth, never forget that the Standard Model applies persuasion to behavior change problems. This is not a marketing, advertising, or macroeconomics model. It aims at changing individual behavior, one person at a time.
Finally, if you think you’ve got something better than the Standard Model, that’s fine with me. But, if you can’t join it, then you should beat it. And, I’d like to see those numbers, please.