LaPiere Rides Again

The NYTimes reports on a doubly confusing story: 1) the results of a study that appear to contradict the Rules of Persuasion, and 2) the fact that the scientific study is apparently being published by the NYT itself. Here’s how the Times puts it.

The results of a new study to be released on Tuesday by professors at the Annenberg School for Communication at the University of Pennsylvania show that 86 percent of respondents did not want political campaigns to tailor ads to their interests.

The results (pdf) are based on a recent telephone survey aimed at soliciting the Other Guys’ opinions about political persuasion, with an emphasis on tailoring tactics that deliver a specific persuasion box and play for a specific person. The survey itself is uncommonly strong and demonstrates how you can generate reliable and valid information in observational designs.

The survey was conducted from April 23 – May 6, 2012 by Princeton Survey Research Associates International. PSRAI conducted telephone interviews with a nationally representative, English and Spanish speaking sample of 1,503 adult internet users living in the continental United States. The interviews averaged 20 minutes. A combination of landline (N=901) and cellular (N=602, including 279 without a landline phone) random digit dial (RDD) samples was used to represent all adults in the continental United States who have access to either a landline or cellular telephone.
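
If you want to see why a sample like that supports reasonably tight estimates, here’s a minimal back-of-the-envelope sketch in Python. It assumes simple random sampling, so it only approximates the survey’s precision; the actual dual-frame RDD design and weighting would widen the interval somewhat.

```python
# Rough 95% margin of error for the headline 86% figure.
# Assumes simple random sampling; the real PSRAI design uses
# dual-frame RDD and weighting, so treat this as a lower bound.
import math

n = 1503   # reported sample size
p = 0.86   # proportion who reject tailored political ads
z = 1.96   # z-score for a 95% confidence interval

moe = z * math.sqrt(p * (1 - p) / n)
print(f"95% margin of error: +/- {moe * 100:.1f} points")  # about +/- 1.8 points
```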

Gee whiz, when’s the last time you read Nationally Representative and Random Digit Dialing in a Tooth Fairy Tale? At least now we’re in scientific territory where we have some kind of confidence that the data are pretty good estimates, that the Word properly points to the Thing in General Semantics terms. And with this strong method, the researchers conclude that Other Guys really hate targeted persuasion and in very large numbers. That 86% from the quote illustrates the common theme in the survey.

You can read the report from a link in the NYT story for all the reliable and valid data and discover all the variations on the theme that Other Guys Hate Persuasion, but even that headline gives away the problem with this study. While persuasion is always about the Other Guy and you should be concerned with how the Other Guy thinks, in this case what the Other Guy thinks is completely irrelevant to doing persuasion.

Given the phenomenally large mistrust, dislike, and general disfavor a nationally representative sample of Other Guys holds toward persuasion, you’d assume that persuasion would be the dumbest thing a politician could do to win an election. Remember, 86% of Them hate persuasion, and tailored persuasion in particular, and They claim They are less likely to vote for a candidate who uses persuasion and tailored persuasion.

You can have a fun argument in the NYT and various public debating forums about the pernicious and perverse effects of persuasion on politics, government, and the daily news. But, all that uproar is just uproar over something that does exist – Other Guys hate persuasion – but makes no difference. As a cute example of this disconnect, consider this fact about Facebook targeting.

85% agree (including 47% who agree strongly) that “If I found out that Facebook was sending me ads for political candidates based on my profile information that I had set to private, I would be angry.”

Here’s the persuasion hoot: Facebook does this already.

So, where’s all the public anger from Facebook Nation, turned into a torchlight parade on Facebook’s offices and on the nearest candidate found to be using Facebook this way? The results of the survey demonstrate the irrelevance of the results! Other Guys have already been exposed to the very thing They hate, yet there’s no clear, compelling, and obvious effect from that hate.

And now we see that all the angst the survey correctly counts in that admirable Nationally Representative sample from Random Digit Dialing represents at best an extremely temporary attitudinal response from the Peripheral Route. “I’m mad as hell and I’m not gonna take it anymore!” Then They hang up, return to Facebook, and smile at a new photo from a Friend. The survey measures a fleeting response to a hypothetical scenario, one that elicits a strong Peripheral reaction that is clearly disconnected from the Other Guys’ real-world behavior, because They’ve actually been in the very scenario They claim to hate and They didn’t do anything about it.

This strong-method study replicates one of the more infamous attitude and persuasion studies ever published. Hit the rewind button and travel back to 1934. LaPiere traveled across the US in the early 1930s, staying at various hotels and restaurants on the trip. Accompanying him was a Chinese couple. LaPiere noted how the personnel at the various establishments treated this, at the time, unpopular minority couple. Later LaPiere contacted each establishment and asked about its policies toward accommodating Chinese people. LaPiere reported some of the most damning data in the attitude and persuasion literature. He found a large disconnect between how people actually treated the Chinese couple and their stated policy: while the policy said reject the couple, in practice virtually every establishment accepted the couple!

Still today you can find literature reviews that include the LaPiere study as an indictment of the weak and contradictory nature of attitudes and persuasion. Hey, the attitude embodied in a stated policy is disconnected from actual behavior. How important are attitudes for behavior after all? Of course, that question artlessly overlooks all the problems in the LaPiere study, starting with the difference between the personnel who greeted the Chinese couple in person and the personnel who answered the mailed questionnaire. Shootfire, even Wikipedia gets this wrong today.

LaPiere spent two years traveling the United States by car with a couple of Chinese ethnicity. During that time they visited 251 hotels and restaurants and were turned away only once. At the conclusion of their travels LaPiere mailed a survey to all of the businesses they visited with the question, “Will you accept members of the Chinese race in your establishment?” The available responses were “Yes”, “No”, and “Depends upon the circumstances”. Of the 128 that responded 92% answered No. The study was seminal in establishing the gap between attitudes and behaviors.
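
To put the LaPiere gap in raw numbers, here’s a small sketch using the figures quoted above (the 92% “No” is the commonly cited rounding of the original 1934 tallies):

```python
# Behavior in person vs. stated policy by mail, from the figures above.
visited     = 251                        # establishments visited in person
turned_away = 1                          # refusals in person
responded   = 128                        # establishments answering the mailed survey
said_no     = round(0.92 * responded)    # roughly 118 answered "No"

behavioral_acceptance = (visited - turned_away) / visited     # about 99.6%
stated_acceptance     = (responded - said_no) / responded     # about 8%

print(f"Accepted the couple in person:   {behavioral_acceptance:.1%}")
print(f"Said they would accept, by mail: {stated_acceptance:.1%}")
```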

Actually the study was seminal in establishing the effect of bad research methodology and poor theorizing in persuasion research. As many scholars have documented, attitudes and behaviors show an incredibly strong correspondence and when they don’t, persuasion theory or research methods explain why. Anyone who asserts a well-known disconnect between attitudes and behavior is making the same mistakes as LaPiere. You can read the gory details (pdf) in this scholarly review of LaPiere’s work.

Now, let’s hit the fast forward button and get back to the present day. See how the NYT scientific survey study reports the same kind of disconnect between attitudes and actions. People report hating persuasion and specifically tailored persuasion, yet they’ve already encountered it many times in daily life and accept it without major complaint. Taken at face value, those attitudes strongly suggest that persuasion is a dangerous and stupid move, yet political campaigns are going full throttle with persuasion and tailored persuasion.

Once again, the problem is not a disconnect between attitudes and actions but a flaw in theory and methods. LaPiere asked one person for an attitude and another person for a behavior. LaPiere asked policy questions in a mailed questionnaire, but presented politeness and service questions in person. In this survey, the researchers put hypothetical situations to Other Guys, even including “hypotheticals” that had already happened to Them.

Now this disconnect is entirely useful to public discussion, but it is only jibber jabber for science. The nationally representative sample drawn with random digit dialing only proves that you can get people to feel and talk one way while behaving another way if you ask the wrong question at the wrong time. You don’t even need science to understand that something’s fishy. Nearly 90% of a sample claim to hate something they already put up with on a regular basis and could easily avoid if they wanted to? Common sense tells you there’s something wrong in that equation.

But, if you don’t want to think like a scientist, you can ramble on with this really great data and hold symposia, forums, and red-faced shouting arguments over the bar, dinner table, or newspaper. It’s all good for jibber jabber, especially on the New York Times.

Which brings us to the second confusion of this doubly confusing story. Why is this research published on a New York Times server? If you notice the destination of the link in the Times story, you can see the address has a “nyt” element in it. The tone of the story makes it sound like this research has been published in a scientific, peer review outlet, but it’s on some NYT computer. So what, you ask?

By the rules of peer review, most journals will not accept submission of a work that has already been published in another outlet. While the NYT is not a scientific peer review source, it is definitely a publishing outlet. This paper is therefore automatically disqualified from publication in a legitimate peer review source, like Public Opinion Quarterly. Yet, the data are clearly collected with scientific methodology, and if the authors had written it up in a slightly different style (not that White Paper format), it would certainly have been well received at many different peer review sources.

Instead we have academics with proven credentials as researchers, armed with a methodologically strong data set, who release it as mere journalism in a commercial, for-profit news outlet. Sure, it’s the Times with all the Cool Table patina, but the NYT ain’t peer review.

As I’ve observed before, the Times is jumping hard into science as content, grinding scientific meat into journalism sausages. And, scientists are cooperating gladly. In so doing everyone is blurring the difference between science and journalism and by my lights not in a good way. When scientists (in the broadest sense of that term) present work that qualifies as scientific in method, but release it in journalism, they are deliberately trashing the distinction between publication and peer review. I think that many public health researchers have deliberately positioned their work in news sources like the NYT as a persuasion tactic to enhance their reputation and to improve their chances of getting the next grant.

Take all that Sitting Is Smoking smoke and mirrors. I’ve detailed the wretched science behind it, but that doesn’t matter to journalism outlets that value the controversy and the public credibility of science. You can shape a political agenda with this information, and the political scientists who find their way into the Times have learned a dangerous trick. They can play a doctor on TV to create a brand and burnish a career.

Sure, make science more public, more political. Bring it to all those controversies about guns and gays and sitting and full strength soda pop and whatever issue makes you burn. But, when you take science to the public square, you submit that science not to proven peer review, but to the vote of the citizens who showed up at the polls that day. Today, many of these political scientists operate in an environment where their political party and philosophy of preference runs the Executive Branch and one half of the Congress. What will happen to that political science when the other team wins the White House?

Let’s get out of here on the Rules and rhymes.

You Cannot Persuade a Falling Apple.

You Should Not Try To Persuade a Falling Apple.

Falling Apples Require No Persuasion.

Power Corrupts Persuasion . . . and Science.

LaPiere, R. T. (1934). Attitudes vs. Actions. Social Forces, 13(2), 230-237.