The TpB You’ve Got versus The One You’d Like

Jim Dillard presents a useful and interesting self-report survey that employs TpB to understand the behavior of women obtaining an HPV vaccine. The report is useful because somebody at Penn State University could take his results and design a persuasion intervention to encourage vaccination (and the results may generalize to other large public universities and perhaps to women of this age range more broadly). The report is interesting because it conducts a thoughtful empirical test of TpB theory. And, if you enjoy reading well-done science, the paper is worth your time. Dillard has been consistently one of the best persuasion researchers and writers of the past 30 years, and this paper is yet another example of his patience, skill, and thoughtfulness. Having delivered that affirmation manipulation, let's begin with a problem.

Dillard draws a random list of 1,800 Penn State undergrad women and invites them through email to complete an online survey about women's vaccinations. About 10% of these women actually and properly complete the survey, for a final total of 174. This is ultimately a convenience sample of respondents, but at least Dillard starts with a fair selection and, even better, he actually takes a paragraph to discuss it under Limitations. You see the Problem. This is both a random and a convenience sample. Theory and practice demonstrate that convenience samples can produce results that are little more than imaginary. While I tend to trust the reliability and validity of the results Dillard presents, that doesn't mean you should. Think about it the way you should always think about convenience sampling in observational studies. Always. As in always wear a condom or always look both ways or always wear a seat belt. Always think about convenience-sampled data.

Now, with the single largest problem in the method noted, let’s continue to the useful and interesting stuff.

Dillard surveys the women on standard TpB measures (intention, attitude, subjective norm, and perceived control) with semantic differential scales. He also includes specific belief items for attitude, norm, and control based on a focus group elicitation with Doers (women who had received the HPV vaccine) and NonDoers (women who hadn't). The 174 women complete these items and then Dillard analyzes.

He finds that attitude, norm, and control combine to explain 75% of the variance in intention, a Large Windowpane effect and entirely consistent with TpB studies (of course many folks prefer Health Beliefs or Message Framing; here we can call that Dissonance Reduction). Dillard then includes the two-way interaction terms among attitude, norm, and control, and as a block they explain an additional 7% of the intention variance, a Small Plus Windowpane. Finally, Dillard correlates those specific beliefs from the Doer-NonDoer focus group and finds several Medium and Large Windowpane effects in the larger sample of respondents.
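
For the curious, here is a minimal sketch of the kind of hierarchical regression behind those numbers: fit the main effects, then add the interaction block, and look at the change in R-squared. Everything below is simulated stand-in data of my own making (the coefficients are just planted to roughly echo the reported pattern), not Dillard's dataset or analysis code.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 174  # same n as the survey, but these respondents are simulated

# Synthetic stand-ins for the TpB components (standardized scores)
attitude = rng.normal(size=n)
norm = rng.normal(size=n)
control = rng.normal(size=n)
intention = 0.50 * attitude + 0.17 * norm + 0.24 * control + rng.normal(scale=0.5, size=n)

def r_squared(predictors, outcome):
    """R-squared from an ordinary least squares fit with an intercept."""
    X = np.column_stack([np.ones(len(outcome)), predictors])
    beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
    resid = outcome - X @ beta
    return 1 - resid.var() / outcome.var()

# Block 1: main effects only
main = np.column_stack([attitude, norm, control])
r2_main = r_squared(main, intention)

# Block 2: add the three two-way interaction terms
interactions = np.column_stack([attitude * norm, attitude * control, norm * control])
r2_full = r_squared(np.column_stack([main, interactions]), intention)

print(f"Main-effects R^2: {r2_main:.2f}")
print(f"Delta R^2 from the interaction block: {r2_full - r2_main:.2f}")
```

Run on the real survey data, the first number is where the 75% comes from and the second is where the 7% comes from.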

Thus, we know that this TpB analysis explains a large amount of intention to act, with the lion's share of the impact coming from the main effects (attitude: Large beta at .50; norm: Small beta at .17; control: SmallPlus beta at .24) and a smaller, but noticeable, amount coming from their interactions. Better still, we have several strong belief statements that bear out the focus group. Beliefs around security, parental approval, and close friend and boyfriend support directly drive the more general attitude, norm, and control components.

The consistency of these results with other TpB studies tends to argue that the convenience sampling here may not be a source of bad bias. These results just make sense. If I were running an intervention, I might draw another convenience sample (say, from large lecture classes) and compare the results. Persuasion never operates in a perfect world, and you've got to go to war with the army you've got rather than the one you'd like, to quote that underappreciated persuasion theorist, Donald Rumsfeld. It makes sense that attitude is the largest driver of getting a vaccine for something related to sexual behavior. Hey, it's your body, so other people's opinions (norms) are nice, but ultimately less important. And, double-hey, how much control do you need to get a shot?

Let's get practical from this. Women's attitudes are the single strongest driver of HPV vaccination, with norms and control coming in at much smaller effects. Yet the specific beliefs that most strongly predict intention are norm beliefs about parent, girlfriend, and boyfriend approval. I would consider a double-barreled persuasion campaign that delivered two different messages to two different groups. For the women who need the vaccine, I'd run an attitude-based message with an emphasis upon protection against HPV and feelings of security and confidence about health as the dominant content. For parents, girlfriends, and boyfriends, I'd run a different message aimed at encouraging them to express support for the vaccine. The attitude campaign aimed at women would motivate their behavior, and the norms campaign aimed at their primary relational sources would provide support for that behavior.

Now, let's go theory and research. Dillard finds that two of the three two-way interactions among attitude, norm, and control are statistically significant. Combined as a block, the three interactions (A x SN, A x PC, SN x PC) add about 7% more variance. So? While it is stupid to do this, if you ran the interaction terms first in this regression, they probably would not be ssd (statistically significant). Furthermore, the effects are Smallish, and given the convenience sampling, I'm reluctant to trust the outcome. If we had a true random sample, I'm not sure the effect would obtain. Mere convenience is sufficient to explain these Smallish outcomes for me. Just a few weird cases could produce these interactions, with weirdness being another way of saying convenience sample.
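
To make the order-of-entry point concrete, here's a toy demonstration (again, invented data rather than anything from the paper): because uncentered product terms overlap heavily with their components, the variance you credit to the interaction block depends on when it enters the regression.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 174  # simulated respondents, not the real sample

# Uncentered 1-7 style scores, so the product terms overlap with the main effects
attitude = rng.integers(1, 8, size=n).astype(float)
norm = rng.integers(1, 8, size=n).astype(float)
control = rng.integers(1, 8, size=n).astype(float)
intention = 0.50 * attitude + 0.20 * norm + 0.25 * control + rng.normal(scale=1.0, size=n)

def r2(predictors, outcome):
    X = np.column_stack([np.ones(len(outcome)), predictors])
    beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
    return 1 - ((outcome - X @ beta).var() / outcome.var())

main = np.column_stack([attitude, norm, control])
inter = np.column_stack([attitude * norm, attitude * control, norm * control])

delta_when_last = r2(np.column_stack([main, inter]), intention) - r2(main, intention)
delta_when_first = r2(inter, intention)  # interaction block entered before anything else

print(f"Interaction block entered after the main effects adds: {delta_when_last:.2f}")
print(f"Interaction block entered first 'explains':            {delta_when_first:.2f}")
```

The general point is simply that the R-squared credited to a block is order-dependent whenever the blocks share variance, which is one more reason to hold Smallish increments loosely.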

Dillard does make some interesting observations about the role of perceived control in TpB and suggests that control is better understood as a moderator rather than a primary direct-impact variable like attitude and norm from the traditional Theory of Reasoned Action. My first impulse is to agree. His data here, plus clear thinking, support this line of reasoning. However, I've got experience with excellent data from the Wheeling Walks campaign that showed perceived control alone, not attitude or norm, motivated walking behavior. It's worthwhile to pursue Jim's thinking here with more research, but for now, I'll stay conservative and stick with the standard TpB in both theory and practice.

The last piece of this report is perhaps the most fertile. Dillard invents an interesting way of measuring beliefs with a formula he calls the Room For Improvement Index (RFII). The formula is a ratio that essentially compares how many people do agree with an item against how many could agree with it. If relatively few people endorse an item that is shown to drive intention, then there is a lot of Room For Improvement, and this suggests a good avenue for message design. For example, in the Milk studies, we found that a lot of people thought lower-fat milk was a lot more expensive and that expense was strongly related to their purchase behavior. Thus, the RFII on cost was very large. We then designed messages pointing out that lower-fat milk did not cost more (who looks at the price of a product they aren't buying?), and we changed Attitude, which changed Intention, which changed Behavior.
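
Dillard's exact formula lives in the paper; my loose reading of it is "how much agreement is still left on the table for an item." Here is one plausible way to compute an RFII-style number, offered as my own reconstruction under that reading rather than as Dillard's published equation.

```python
import numpy as np

def rfii(responses, scale_min=1, scale_max=5):
    """
    Room-For-Improvement-style index: the percentage of possible agreement with
    a belief item that respondents have not yet reached. My reconstruction of
    the idea, not necessarily Dillard's exact formula.
    """
    responses = np.asarray(responses, dtype=float)
    return 100.0 * (scale_max - responses.mean()) / (scale_max - scale_min)

# Hypothetical item from the milk example: "Lower-fat milk costs about the same
# as whole milk" on a 1-5 agreement scale. Most people disagree, so the index is high.
cost_belief = [1, 2, 1, 2, 3, 1, 2, 2, 1, 3]
print(f"RFII for the cost belief: {rfii(cost_belief):.0f}%")  # prints 80%
```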

The RFII provides a quick, convenient, and intuitive number that ranges from 0% to 100%, with a higher percentage indicating more room to the top. Combine the RFII with the simple correlation between the item and intention (or behavior, if you've got that) and select items that have high RFII percentages and high correlations. With this strategy you are essentially targeting the big differences in beliefs between the Doers and the NonDoers.
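
As a sketch of that selection logic (with entirely made-up item names and arbitrary cutoffs of my own choosing, not Dillard's), you can flag the items that combine lots of headroom with a solid correlation to intention:

```python
import numpy as np

def pick_targets(items, intention, scale_min=1, scale_max=7, rfii_cut=40.0, corr_cut=0.30):
    """
    Flag belief items worth targeting: lots of room for improvement AND a solid
    correlation with intention. The cutoffs are illustrative, not recommendations.
    items: dict of item_name -> array of item scores
    intention: array of intention scores for the same respondents
    """
    targets = []
    for name, scores in items.items():
        scores = np.asarray(scores, dtype=float)
        room = 100.0 * (scale_max - scores.mean()) / (scale_max - scale_min)
        r = np.corrcoef(scores, intention)[0, 1]
        if room >= rfii_cut and abs(r) >= corr_cut:
            targets.append((name, round(room), round(r, 2)))
    return targets

# Tiny made-up example with three hypothetical belief items on 1-7 scales
rng = np.random.default_rng(2)
intention = rng.integers(1, 8, size=50).astype(float)
items = {
    "parents_approve": np.clip(np.round(intention * 0.6 + rng.normal(1, 1, 50)), 1, 7),
    "feel_secure": np.clip(np.round(intention * 0.5 + rng.normal(2, 1, 50)), 1, 7),
    "shot_is_cheap": rng.integers(1, 8, size=50).astype(float),  # unrelated noise
}
print(pick_targets(items, intention))
```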

Now, to dangers, risks, and perils. I've got enough old-guy stat experience in me to know that invented ratio indices are very dangerous things. When you start messing around with your measurement, you can invent yourself into the Land of Oz. The concern here is the artificial top and bottom of the index and the way different items may form different distributions under different conditions. If a gearhead like my former office mate at WVU, Tim Levine, now at Michigan State, ran a bunch of Monte Carlo demonstrations on theoretical distributions of the RFII under different response formats (1-5 versus 1-15) and different item means, he would probably find some interesting effect and invent a new Greek symbol to describe it. Then maybe Tim and Jim could engage in a Jane, You Ignorant Slut exchange in HCR!
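
If you want to spend an afternoon playing Tim Levine, the Monte Carlo is only a few lines. This sketch simulates the RFII-style index (the same rough reconstruction I used above) under the two response formats and a few true item means, just to watch how its center and spread move; every setting here is an arbitrary choice of mine, not anything from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

def rfii(responses, scale_max):
    # Same room-for-improvement-style index as above (my reconstruction)
    return 100.0 * (scale_max - np.mean(responses)) / (scale_max - 1)

def simulate(scale_max, true_mean, n=174, reps=5000):
    """Distribution of the index for one response format and one true item mean."""
    vals = []
    for _ in range(reps):
        # Draw integer responses around the true mean, clipped to the scale
        raw = rng.normal(loc=true_mean, scale=1.5, size=n)
        responses = np.clip(np.round(raw), 1, scale_max)
        vals.append(rfii(responses, scale_max))
    return np.array(vals)

for scale_max in (5, 15):            # 1-5 versus 1-15 response formats
    for frac in (0.3, 0.5, 0.8):     # low, middling, and high true item means
        true_mean = 1 + frac * (scale_max - 1)
        sims = simulate(scale_max, true_mean)
        print(f"1-{scale_max} scale, true mean {true_mean:4.1f}: "
              f"RFII mean {sims.mean():5.1f}%, SD {sims.std():.2f}")
```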

It would be interesting for somebody to take the RFII and figure out its distributional qualities and perhaps how to handle it in difference or association statistics (t-tests or correlations, right?). As I've noted many times in the Persuasion Blog, a lot of interventionists obtain poor outcomes in part because of bad messaging. The RFII could provide a simple, reliable (?), and valid (?) quantitative index that screams off the output, yelling, Pick Me!

Past this propeller-head quibbling, the RFII is an artful rule of thumb that can be used skillfully as long as you keep your thumb in the right location. Use it to find big, obvious, black-and-white differences and never to find shade, tone, or nuance. Unless you are using Health Beliefs or Framing, in which case you're doing voodoo anyway, so go ahead and stick your thumb anywhere you want.

Let’s get out of here . . .

From a simple observational convenience sample, we get a lot of ideas. Sure, Dillard warns about the convenience sampling, and I double the warning: always think about observational studies. With that in mind, we see, Yet Again, another strong illustration of TpB in action, with strong effects entirely consistent with both the theory and the literature. We have interesting arguments about interactions in TpB and the role of perceived control. We have that nice set of data on specific beliefs and the RFII to identify promising lines of message development. And, for you researcher types out there, we have a great example of good scientific thinking and writing in Dillard's paper. Finally, I actually have something nice to say about observational data!

Dillard, J. P. (2011). An application of the integrative model to women's intention to be vaccinated against HPV: Implications for message design. Health Communication, 26(5), 479-486.

DOI: 10.1080/10410236.2011.554170

P.S. Dr. Fishbein would have probably appreciated this one.