Stop the sticks. A recent meta-analysis on Internet-based interventions for behavior change finds an effect size that is almost identical to the meta results for mass media interventions we discussed from Leslie Snyder's group and, more recently, from Blair Johnson's team. Here's the key paragraph from the abstract.
Results: We found 85 studies that satisfied the inclusion criteria, providing a total sample size of 43,236 participants. On average, interventions had a statistically small but significant effect on health-related behavior (d+ = 0.16, 95% CI 0.09 to 0.23). More extensive use of theory was associated with increases in effect size (P = .049), and, in particular, interventions based on the theory of planned behavior tended to have substantial effects on behavior (d+ = 0.36, 95% CI 0.15 to 0.56). Interventions that incorporated more behavior change techniques also tended to have larger effects compared to interventions that incorporated fewer techniques (P < .001). Finally, the effectiveness of Internet-based interventions was enhanced by the use of additional methods of communicating with participants, especially the use of short message service (SMS), or text, messages.
That d of .16 is very close to Leslie's r of .09 and Blair's d of .21. Thus, it appears that the average persuasion campaign aimed at behavior change produces a Small Windowpane effect in that 45/55 range, and that this effect size holds across different media types. And, of course, in a meta-analysis of 85 studies there's a lot more going on than just this headline, so read the paper.
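To see why a d of .16 and an r of .09 count as "very close," you can run the standard conversions between the two metrics and express the result as a Windowpane (the Binomial Effect Size Display, where r = .10 becomes that 45% versus 55% split). The functions below are a minimal sketch of those textbook formulas, not anything from the Webb et al. paper itself.

```python
import math

def r_to_d(r):
    # Standard conversion: d = 2r / sqrt(1 - r^2)
    return 2 * r / math.sqrt(1 - r ** 2)

def d_to_r(d):
    # Inverse conversion, assuming roughly equal group sizes:
    # r = d / sqrt(d^2 + 4)
    return d / math.sqrt(d ** 2 + 4)

def besd(r):
    # Binomial Effect Size Display: the "Windowpane" success
    # rates implied by a correlation r.
    return 0.5 - r / 2, 0.5 + r / 2

print(round(r_to_d(0.09), 2))  # Snyder's r = .09 as a d: 0.18
print(round(d_to_r(0.16), 2))  # Webb et al.'s d = .16 as an r: 0.08
print(besd(0.10))              # r = .10 gives the 45/55 Windowpane
```

All three numbers land in the same neighborhood, which is the point: across decades and delivery media, the average behavior-change campaign moves people from roughly 45% to roughly 55%.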
Some observations . . .
1. Media’s not the matter.
Stop looking for outcome differences in persuasion interventions by type of media used for delivery. Unless you have a highly targeted Other Guy who only lives on Facebook or only watches Oprah or only reads Maxim, media is not a crucial variable. You buy media with a checkbook to reach the Other Guy for a behavior change intervention. Media in persuasion is not a theoretical construct of great value. It is simply the practical means for buying Reception. (Now, Reception is a huge deal because if you don’t get that nothing else matters in the Cascade, right?)
Of course, you can make persuasion plays in TV that are different from an iGizmo, and you should capitalize on the unique qualities of each medium. However, those kinds of message specifics are not the motors of change. Please reread the recent post describing the organ donor intervention from Tyler Harrison and Susan Morgan's team. Hit the TACT hard and often. Look and feel. Unity of effort. Simplicity. You can waste an enormous amount of effort and resources on a flashy doohickey that will have no impact on downstream behavior change.
Use media to buy Reception and provide opportunities for Processing. (Note, for example, the enhancement effect in the abstract for additional channels like SMS or email – that's repetition, baby; you're just saying Do The TACT many times through different media. It's not a media effect.)
2. What’s the distribution of effect sizes?
As I've argued from Leslie Snyder's data, these metas often reveal a decidedly non-normal distribution of effect sizes. Instead of that lovely bell curve that would indicate pure random variation among the different interventions, the curve typically shows a lumpy left-hand cluster with a lot of near-zero effects, then thins into a longer right tail with just a few higher-impact interventions.
Thomas Webb, Judith Joseph, Lucy Yardley, and Susan Michie report a fairly large Q statistic (896.67, p < .001) across all 85 studies, indicating a serious amount of heterogeneity in the distribution, with a 95% confidence interval from .09 to .23. They do not provide a summary table, so I'm reading the tea leaves here with all the potential stupidity that requires, but the Q statistic and the confidence band suggest lumpy left and longer right, just like Leslie's meta. Thus, I'm betting that many of the interventions in this meta were essentially busts, with maybe a third showing Small to Moderate Windowpane effects.
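For readers who haven't met Cochran's Q: it is just the weighted sum of squared deviations of each study's effect from the fixed-effect mean, compared against a chi-square with k - 1 degrees of freedom. A big Q means the studies are not all estimating one common effect. Here's a minimal sketch with made-up numbers (NOT the Webb et al. data) that mimics the lumpy distribution I'm describing:

```python
def cochran_q(effects, variances):
    # Cochran's Q: weighted squared deviations of study effects
    # from the fixed-effect (inverse-variance weighted) mean.
    weights = [1 / v for v in variances]
    mean = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    return sum(w * (e - mean) ** 2 for w, e in zip(weights, effects))

# Hypothetical effects: many near-zero studies plus a few big hits.
ds = [0.02, 0.05, 0.00, 0.03, 0.08, 0.45, 0.60]
vs = [0.01] * len(ds)  # assume equal sampling variances for simplicity

q = cochran_q(ds, vs)
print(round(q, 1))
# With k - 1 = 6 df, the chi-square critical value at p = .05 is
# about 12.6, so a Q this size flags real heterogeneity -- the same
# signal, writ small, as the 896.67 that Webb et al. report.
```

The lesson: when Q is that large, the pooled d of .16 is an average over very different animals, which is exactly why the distribution matters more than the headline number.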
It sure would be nice to know which studies are producing above average effect sizes, wouldn’t it? Some of the moderator analyses by Webb et al. place a spotlight on heavier use of theory, more repetitions, and more “tactics,” but exactly what that means is not clear. It might be more profitable to run the moderator analysis backward here, by first selecting the larger effects, then working back to the differences.
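That backward moderator analysis can be sketched in a few lines: rank the studies by effect size, take the top slice, and ask which coded features they all share. The study records below are entirely hypothetical placeholders for the kind of coding sheet a meta-analyst would actually build:

```python
# Hypothetical coding sheet: effect size plus two coded moderators.
studies = [
    {"d": 0.02, "theory": "none", "sms": False},
    {"d": 0.05, "theory": "TTM",  "sms": False},
    {"d": 0.36, "theory": "TpB",  "sms": True},
    {"d": 0.48, "theory": "TpB",  "sms": True},
    {"d": 0.08, "theory": "HBM",  "sms": False},
]

# Step 1: select the largest effects first.
top = sorted(studies, key=lambda s: s["d"], reverse=True)[:2]

# Step 2: work backward to the features those winners share.
shared = {k for k in ("theory", "sms")
          if len({s[k] for s in top}) == 1}
print(shared)
```

With these toy numbers the high-impact studies share both a theory and an SMS channel; with real data you would then code those shared features out and test them forward in a standard moderator analysis, as suggested above.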
3. The stronger effect for TpB compared to the Transtheoretical Model or Self-Efficacy is not compelling to me. Yes, it is statistically significant, and each theory has about a dozen interventions, but TpB is not crushingly better – say, a d of .50 or better against the other theories coming in at .05. Besides, there's no compelling reason to prefer one theory over another. If you apply them properly and execute a good intervention, all should work to produce a practical change. You need to design TACTs that are tailored to your theory to make the theory work properly.
What's missing from this meta is testing anything remotely like the Standard Model or Whatever You Call It for that flow of message through reception to processing to response to behavior change. This meta does not, for example, partition effects by the amount of Reception they generated. We know about that from Robert Hornik's work in the 1990s. This meta doesn't look at any Processing differences (attention, number of repetitions of the message, attitude toward the message, etc.). That is vital, and to my knowledge (Blair, Leslie, other academic persuasion mavens???) no one has ever done a meta that partitions behavior change effects by various indicators of message processing. Shootfire, as my great-grandfather Will Hains would say, this meta doesn't even partition effects by the various theory components (TpB with easy, fun, and popular, or Efficacy, Attitude, and Norm).
We know from Leslie's work nearly 15 years ago that communication-based mediated interventions do produce behavior change. Repeated individual studies, reviews, metas, and meta-squareds have demonstrated that effect at roughly the same magnitude. We can now move deeper. If you want to make a contribution to our knowledge, do a meta on the Standard Model (or Your Label Preference Goes Here). And, if you want an extra gold star, try working backward from the interventions with the biggest effects to the components that drive them. That, I am afraid, would require a very careful reading of the studies and a thorough and thoughtful analysis. You could then code out what you find and test it forward in a standard moderator analysis.
If you’re running a persuasion class, get one of these metas, pull out the top effect sizes, then read the papers in class. Tear them apart. Act them out. Contact the authors and get the materials they used. Then get the bottom effect sizes and repeat the exercise. Tell me what you see.
4. Hey, there weren't a lot of Health Belief Model interventions! Are those guys not keeping up with the newfangled modern inventions like the Internets?