Monthly Archives: March 2011

Sheen Has Klout which Says All You Need to Know about Clout 2.0

While researching another idea I came upon a great blog posting on twitter and its implications in the Case of Charlie Sheen.  Consider these facts.

1.  Charlie Sheen famously fought the Law and the Law won.

2.  Charlie Sheen opened a twitter account.

3.  Charlie Sheen attracted more twitter followers in a shorter period of time than anyone else in the history of . . . twitter.

4.  Charlie Sheen’s twitter account within the first hour received a Klout score of 57.

5.  Charlie Sheen did not tweet once while achieving his Web 2.0 superduperstardom.

Michelle Sullivan noted then:

What does this mean for the credibility of tools like Klout that measure online influence?  It means that they measure influence based exclusively on quantity, and not quality.  It means that they don’t take much else into account (if anything).

Sullivan then continues with a both simple and sophisticated analysis of the implications of this, generalizing past one service like Klout, and pointing to a more thoughtful consideration of measuring Web 2.0.  (Sullivan clearly thinks about persuasion with her head and not her hopes or somebody else’s hype.)

Sullivan puts a glaring spotlight on a recurring measurement and interpretation problem with the technological device in mediated persuasion and communication, especially with Web 2.0.  It’s relatively easy (though extremely expensive and time-consuming) to count noses, ears, eyes, or fingertaps with these devices, but then, what does it mean?  Even Google relies essentially on a counting system that is wildly confounded with sheer size, as if temporary popularity is a Cure for Cancer, the Second Coming, or just a really good five cent cigar.  In many instances mere counting falls into the worst effects from the Wisdom of the Crowds fable.  You never measure impact, influence, persuasion, change, baby, change; you always measure popularity in the worst sense of the term.

Back in the day of your Father’s Oldsmobile, the metric counters with TV, radio, and print had actually gotten pretty good at not only knowing How Many, but also What Effect.  In the wild west of the evolving web, counting is pretty much a mug’s game.  Yeah, Facebook has 500 million users.  And twitter generates billions of tweets weekly.

And?

Remember the Rule.

You Can Count It, But That Doesn’t Mean You Changed It.


Science of Relational Marketing: Part 3, Compare and Contrast

In prior posts we looked at a Relational Marketing paper from Liu and Gal, first discussing the conceptualization and set up experiments, then key mediation and boundary experiments.  Here’s a quick recap of the paper on Relational Marketing.

Liu and Gal theorize that advice generates closeness and empathy, which builds a relationship between client and organization that then produces favorable outcomes for the organization.  They establish the basic connection between advice and outcome with both for-profit and charitable organizations and with both purchase likelihood and donations.  They compare advice with two other possible client inputs, expectations and opinions, and find that advice is always superior and that expectations or opinions are often worse than a no-input control comparison.  They also test rival mediators to closeness and empathy, notably a variety of cognitive factors, and find support only for closeness and empathy.  Finally, they establish two interesting boundary conditions.  First, the advice effect works only with the organization that solicits the advice and does not generalize to other, similar organizations.  Second, incentives for providing advice kill the advice effect.

The Liu and Gal paper illustrates four very large points which I’ll develop under these headings:  Relational Marketing,  Scientific Excellence, Bad Science, and Applied Research.  I want to underscore that these are not issues Liu and Gal raise themselves, but rather observations I’m making from reading their excellent paper.

1.  Relational Marketing

A New New Thing in marketing is the combination of social relationships with Web 2.0 to increase the effectiveness of marketing.  Cultivating and building social, rather than pure business, relationships with customers is presumed to lead to better outcomes – profit, you fools – and can be developed through web technologies like Facebook and twitter.  I cheerfully disdain this line of thinking because:  1) profitizing social relationships or socializing profitable relationships is a Titanic in search of the inevitable iceberg (gee, how about a marriage service that matches lonelyhearts with prostitutes!) and 2) novel uses of Facebook and twitter provide more benefit to Facebook and twitter than you (as the Arab street organizers discovered in 2011).  Of course, I’m an idiot who doesn’t have a Facebook or twitter account, fell in love at first sight with Melanie, and doesn’t have an iPad or iPhone.  You can’t believe a word I hoot.

Consider, then, the Gal and Liu research.  Realize the large practical difference in outcome you get depending upon whether you approach a client for advice, expectation, or opinion.  Who would have guessed that something so prosaic as asking for advice compared to asking for an opinion could produce such large differences?  Who would have guessed that advice can lead to closeness and empathy, thus building a relationship, compared to expectations?  Nowhere in the rah-rah for most Relational Marketing do you see anything this complicated, sophisticated, and, dare I say it, nuanced as this research.  Hey, just get a Facebook account, baby, and you’re doing Relational Marketing, right?

Liu and Gal prove that Relational Marketing can be effective, but you’d better do it right because if you do it wrong (expectations or opinions; paying for advice), the thing will backfire on you like a Mainway toy (YouTube).  Creating a relationship takes more than 140 characters in a tweet and requires the active participation of the client.  Gal and Liu demonstrate that Relational Marketing is less what you do than what you get the Other Guy to do.

Now, consider what happens if you follow their advice and properly employ advice seeking with your clients.  This research does not float downstream to the next encounter: What happens if you don’t follow the client’s advice?  What happens when you get conflicting advice?  What happens when clients see you running this play with everyone else?  You’ve now got a business problem, not because you are doing bad business, but because you are doing bad relationship.  You cannot repair the problem with a return, a coupon, a new line of goods or services, or any other standard business behavior.  You’ve got to handle the relational damage to get back to doing business.

I noted the problem of relationships with persuasion in a post about love.  I quoted from two experts, William Shakespeare and Saint Paul, about the nature of love then contrasted love with persuasion.  I’ll repeat the key point:

Persuasion disrupts, distracts, and dissolves love.  Persuasion surveils your lover to understand how best to make a change.  Persuasion puts your preferences in your lover.  And, even if your lover benefits from this change, it is still a change you created in his head, heart, or body that he had ignored, dismissed, or resisted.  Persuasion is not love.

Now, simply substitute Relational Marketing for love and you see my concern.

Also realize the limitations of this great research.  It made no money for anyone.  All dependent variables, the outputs, were self-reported through the computer, and no participant was a genuine customer or client of an organization making a transaction in real time.  There is still that translation from the lab to the field, or in this case, the floor.  These experiments clearly demonstrate how businesses, whether for-profit or charitable, can manipulate a sense of relationship, of connection between client and organization, through a specific tactic: advice seeking and receiving.  They do not provide any evidence about a sales agent at Macy’s on a Tuesday morning.  You’ve got to follow my Rule, All Persuasion Is Local, to make that translation.

2.  Scientific Excellence

I point to this as one of the best research papers I’ve read.   If you’re in the theory and research business, please consider this paper as a model of clarity, organization, and intelligence.  The thing is stone cold professional.  We’ll start at the abstract then move to the hypotheses.

This research examines a novel process by which soliciting consumer input can impact subsequent purchase and engagement, namely, by changing consumers’ subjective perception of their relationship with the organization.  We contrast different types of consumer input and propose that, relative to no input, soliciting advice tends to have an intimacy effect whereby the individual feels closer to the organization, resulting in increases in subsequent propensity to transact and engage with the organization.  On the other hand, soliciting expectations tends to have the opposite effect, distancing the individual from the organization.

And their hypotheses:

H1: Soliciting advice from a customer tends to result in greater propensity to transact with the organization, compared to when advice is not solicited.

H2: Soliciting expectations from a customer tends to result in less propensity to transact with the organization, compared to when expectations are not solicited.

H3: The change in propensity to transact due to giving advice (stating expectations) is driven at least in part by an increased (decreased) relationship closeness the customer perceives with the organization as a result of providing advice (stating expectations).

H4: The change in perceived relationship distance is due to the inherent thought process of advice-giving (stating expectations), which involves taking an empathic (self-focused) perspective towards the advice-recipient.

See how the opening of the abstract and the structure of these hypotheses reveal and explain the whole damn paper; everything works out from these statements such that the middle of the writeup is actually the beginning of the idea.  This is a great example of both excellent thinking and writing.  Gal and Liu figure it out, then express the theory in simple, direct, and clear lines.

Consider how they develop and test these hypotheses.  They employ the same basic data capture technique, a computer survey of online participant panels.  They always randomize people to controlled conditions.  They employ several different conditions to test the hypotheses.  They check that advice functions in charitable and then in for-profit organizations.  They check advice against a no-input control (essentially the Status Quo or Standard Operating Procedure or How We Roll Around Here) and against other reasonable inputs like expectations and opinions.  They test the idea of relationship and relationship formation with a variety of mediators – their theorized mechanism of closeness and empathy against cognitive factors, for example.  Finally, they seek boundary conditions: the negative impact of incentives and the matching limitation.  Realize that any one study is useful, interesting, and confirming (and sometimes correctly disconfirming), but no single one is decisive.  When you add them all together, though, you see the power of this research.  It is a marvel of theory development and testing in one paper.

Of course, even this one excellent paper hardly proves decisively the Gal and Liu theorizing.  We need replications, both exact and conceptual.  We need extensions.  If this research is true, what else should be true, too?  This needs to move out farther into actual real world and real time interactions between clients and organizations.  Will advice in the field produce increased sales or donations?  What new variables will arise to surprise an entrepreneur translating this idea into practice?  But realize that all of these extensions are worth pursuing because of the quality of evidence and reasoning Liu and Gal provide.

3.  Bad Science.

Contrast the abstract and hypotheses and their structure with the kind of research I often jollystomp in this blog.  Read the rationale and conceptualization and you see the impoverished, simplistic, and assumed ideas about toxic environments, wicked advertising campaigns, greedy capitalist corporations, and helpless consumers, parents, and children who are tricked into getting fat by seeing a 60-second commercial or getting a toy in their Happy Meal.  The ideas are not clearly conceptualized or well operationalized, and certainly not presented in that Mediated Relationship model evident in the Liu and Gal work.  All we get is that clichéd Airing Of Grievances and Tale Of Woe as authors write up the butcher’s bill of mortality and morbidity; where’s the theory, the science, the skepticism?

And, of course, the testing is sophomoric, biased, and executed in that Ta-Da!, Look Ma No Hands style that only confirms a juvenile trick.  Just read the sources from these PB posts on failed laws regulating texting and driving, and more recently, on regulating calorie counts on menus to see the difference between good science and bad science and why All Bad Science Is Persuasive.

Liu and Gal convince me with science and need no rhetorical research or sophistical statistics in the attempt.  They know what they are doing, know how to test it, and how to describe it.  They provide a strong example for comparison with all other research or “research” you might encounter.  Look in those new papers for their deviations from the excellent model here.  There is no good reason for any researcher in any field to operate much differently from the Liu and Gal paper, yet you will clearly see that they do.  To the extent that the next paper you read misses this mark, you will probably find instead that All Bad Science Is Persuasive.

4.  Applied Research.

While you see textbook theory development and testing in the Liu and Gal paper realize that it is all in the service of applied research.  They want to do business better.  They do not invent or discover new psychological constructs in this research, but rather use the theory-research approach characteristic of scientific persuasion all to make more money.  If ever you could call research, Applied, without fear of contradiction, this paper is at the top of the list.

Thus, you can do science with practical, commonplace, prosaic, everyday, ordinary, and on and on with the thesaurus entries for applied.  So, why can’t the health, safety, and medicine guys do this, too?

It is common to read bad science in journals with content modifiers in their titles as with Health Psychology or Health Communication.  Somehow the idea that we are applying psychology or communication to a specific area of daily life seems to let researchers and reviewers off the hook for thinking, acting, and writing scientifically.  There’s that terrible Tale of Woe and all those dead or wasted bodies, shattered psyches, and on and on as if sympathy was a key element in Theory Development, Reliable and Valid Measurement, and Proper Statistical Analysis.

Of course, I see the same thing in what are supposed to be “basic” research journals.  Whenever the research is done in an applied area, science standards often disappear.  Why?  I’ve started any number of recent posts on awful studies from Psych Science that “research” political conservatism or global warming, but have to stop because it’s clear that no one is thinking like a scientist when they are working from their feelings.  What’s the point of discussing naked emperors?

Let’s get out of here . . .

This has been a long series from what appears to be just one little publication from Liu and Gal.  I hope I’ve demonstrated that there’s a lot of there, there, with the paper whether you are interested in theory or money.  This paper works that fun seam between science and practice to the benefit of each side.  It’s also a great example of great science.  Just read a few sentences in the abstract and those beautifully designed and expressed hypotheses.  Compare what you read and what you write to that standard and make your judgment.

You learn about practice, science, writing, and excellence from this paper.

Science of Relational Marketing: Part 2, Testing and Summary

We continue today our look at Liu and Gal’s work on Relational Marketing and their theorizing about advice as a customer input, closeness as a mediator, and purchase/donation behaviors as outputs.  In the prior post we learned that advice giving motivates more positive business outcomes compared to no advice or to giving expectations and that this advice giving generates a highly specific attitude toward the organization receiving the advice.  Now, while the prior two studies provided excellent supporting evidence for the beneficial impact of advice, we saw no evidence about the presumed impact of closeness.  That’s crucial for Gal and Liu’s thinking and now we’ll consider their Experiment 3 for that information.

Experiment 3 manipulated whether participants provided their advice, opinions, or expectations for an organization.  Two-hundred-fifty-six (256) participants were recruited from an online pool of individuals from throughout the United States and were randomly assigned to conditions.  After reading a description of a new restaurant, Splash!, participants provided the following responses: purchase likelihood; subjective closeness and empathy; and perceptions of the valence of their cognitions, the amount of help they provided, the difficulty of helping, and the humility of Splash!.  Purchase likelihood, closeness, and empathy are the key variables from the study hypotheses, while the perception variables measure Rival Explanations.  Thus, the authors test the alternative proposition that Advice is mediated not by Closeness but by, for example, Cognitive Effort.  And, once again we compare the effect of Advice versus Expectation and the new input, Opinion.

Thus, each participant read the same description of just one organization, a for-profit restaurant, then provided either Advice, Expectations, or Opinions about Splash!.  All participants then completed a battery of self-report questions that measured the key outcome – purchase likelihood – and several potential mediators of that outcome, including the preferred ones of closeness and empathy.  Closeness is measured through the IOS (Inclusion of Other in the Self) scale, which uses visual Venn diagrams in an overlapping sequence of circles indicating No to Complete Overlap between participant and organization.  Thus, each participant indicates how close they feel to the organization with that visual portrayal.

Additionally, Gal and Liu analyze the written Advice, Expectation, and Opinion comments and code them for empathy.  Here’s how they do this.

Empathetic Perspective.  We had argued that giving advice gives rise to a subjective feeling of closeness due to the inherent demand of the task, namely, taking an empathetic perspective of the organization.  To provide evidence for this process, we had two coders, blind to the hypotheses and experimental conditions, code the content of input for the degree to which the person took an empathic perspective on a 5-point scale ranging from -2 ("restaurant's perspective") to 2 ("individual's perspective").

So, we’ve got the usual suspects of input (Advice, Expectation, and Opinion) and output (Purchase Likelihood) and two theory variables of closeness and empathy that measure the mediator.  And, just because Gal and Liu think like scientists, they also include competing mediators that could explain the input-output effect.  Please note how smart it is to include competing mediators here.  While this mediation test is only correlational, it does at least provide important comparisons to consider.  Maybe closeness is not important, but rather something related to cognition?  Liu and Gal allow for disconfirming evidence here.

Let’s break this down sequentially, starting with the purchase likelihood.

Purchase Likelihood. As hypothesized, an omnibus one-way ANOVA revealed a main effect of input solicitation mode on purchase likelihood (F(2,253) = 16.25, p < .001). Planned contrasts using one-tailed tests for directional hypotheses showed that purchase likelihood was higher in the Advice (M = 5.74) than in the Opinions condition (M = 5.05; t(157) = 2.73, p < .01, d = .43), and lower in the Expectations (M = 4.29) than in the Opinions condition (M = 5.05; t(171) = 2.86, p < .01, d = .44).
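If you want to see the machinery behind such a planned contrast, here is a minimal stdlib-Python sketch. The ratings below are fabricated for illustration (they are not the study’s data); only the procedure, a pooled-variance t statistic plus Cohen’s d, mirrors the kind of contrast quoted above.

```python
# A minimal sketch of a two-cell planned contrast: pooled-variance t statistic
# and Cohen's d. The ratings are fabricated for illustration only.
import math
import statistics

def pooled_t_and_d(group_a, group_b):
    """Independent-samples t (pooled variance) and Cohen's d."""
    na, nb = len(group_a), len(group_b)
    ma, mb = statistics.fmean(group_a), statistics.fmean(group_b)
    va, vb = statistics.variance(group_a), statistics.variance(group_b)
    # Pooled standard deviation (classic independent-samples formula)
    sp = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    t = (ma - mb) / (sp * math.sqrt(1 / na + 1 / nb))
    d = (ma - mb) / sp
    return t, d

advice = [6, 5, 7, 6, 5, 6, 7, 5]   # hypothetical purchase-likelihood ratings
opinion = [5, 4, 5, 6, 4, 5, 5, 4]
t, d = pooled_t_and_d(advice, opinion)
print(round(t, 2), round(d, 2))
```

With real data you would also convert t to a one-tailed p value against the t distribution with na + nb - 2 degrees of freedom, which is exactly what the directional tests quoted above report.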

No surprise here, but still this is good news.  It confirms the results from the first two experiments.  Advice produces a much more positive outcome on purchase likelihood than either Expectations or Opinions.  Now, why does this effect occur?  The bet is on closeness and empathy.  Advice makes the participant feel more connected and related to the organization.

Subjective Closeness. Correspondingly, an omnibus one-way ANOVA revealed a main effect of input solicitation mode on our measure of relationship closeness, namely the IOS scale (F(2,253) = 9.81, p < .001). Planned contrasts using one-tailed tests showed that relationship closeness was greater in the Advice (M = 3.22) than in the Opinions condition (M = 2.59; t(152) = 2.31, p < .05), and that relationship closeness was lower in the Expectations than in the Opinions condition (M = 2.15; t(159) = 2.00, p < .05).

Again, the results here mirror the results with purchase likelihood.  Advice giving leads to greater feelings of closeness.  Now, this mean difference in closeness between Advice, Expectation, and Opinion is not exactly the same thing as claiming closeness mediates the relationship between Advice and Purchase.  We need to run structural equation modeling on this (or path analysis or mediation analysis or regression modeling).  We will start with the simple correlation between participant input (Advice, Expectation, or Opinion) and output (Purchase).  We will then push in between the inputs and the outputs the mediators, closeness and empathy.  Let’s look at a diagram of the model, then the statistics for it.
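Mediation talk like this can feel abstract, so here is a hedged sketch of the simplest version: estimate the a path (input to closeness), the b path (closeness to purchase, controlling input), and bootstrap a confidence interval for the a×b indirect effect, in the spirit of the Preacher-Hayes approach. All data below are fabricated; nothing here reproduces Liu and Gal’s actual numbers or code.

```python
# Simple mediation with a percentile-bootstrap CI for the indirect (a*b)
# effect. Fabricated data built so the mediated path really exists.
import random
import statistics

def cov(a, b):
    ma, mb = statistics.fmean(a), statistics.fmean(b)
    return sum((u - ma) * (v - mb) for u, v in zip(a, b)) / (len(a) - 1)

def indirect_effect(x, m, y):
    """a*b: a = slope of M on X; b = partial slope of M in Y ~ X + M."""
    a = cov(x, m) / statistics.variance(x)
    # Two-predictor OLS formula for the coefficient of M, controlling X
    b = (cov(m, y) * statistics.variance(x) - cov(x, m) * cov(x, y)) / (
        statistics.variance(x) * statistics.variance(m) - cov(x, m) ** 2)
    return a * b

random.seed(1)
n = 200
x = [float(random.randint(0, 1)) for _ in range(n)]  # 0 = expectations, 1 = advice
m = [xi + random.gauss(0, 1) for xi in x]            # felt closeness, driven by input
y = [mi + random.gauss(0, 1) for mi in m]            # purchase, driven by closeness

boots = []
for _ in range(1000):
    idx = [random.randrange(n) for _ in range(n)]
    boots.append(indirect_effect([x[i] for i in idx],
                                 [m[i] for i in idx],
                                 [y[i] for i in idx]))
boots.sort()
lo, hi = boots[25], boots[974]  # rough 95% percentile interval
print(lo, hi)  # an interval excluding zero is evidence for mediation
```

The logic is the same one the quoted confidence intervals below rely on: if the bootstrap interval for the indirect effect excludes zero, the mediated path is credited.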

The top model simply displays the unmediated relationship between input and output.  The more complicated model adds in the mediating path Liu and Gal theorize.  Now, consider the statistical analysis for each model.

There was a significant total effect of input on purchase (β = -.72, t = -5.71, p < .001). Further, the total direct effect (i.e., effect not mediated by the mediators in the model) was significant (β = -.34, t = -2.96, p < .01), as was the total indirect effect, with a point estimate of -.39 and a 95% confidence interval between -.56 and -.20.

Great.  The data for that first, simple model is confirming with a fairly large beta (.72) and then when Gal and Liu pull out the effect due to the mediators, that simple model is still producing a Medium size effect.  All by itself, Advice giving generates a practical, real world effect on purchase likelihood.  Now, it gets a little more complicated with the overall model.

Importantly, an examination of the specific indirect effect through both mediators indicated that the path from input mode to purchase likelihood through both mediators was significant with a point estimate for the effect of -.18 and a 95% confidence interval between -.28 and -.10. The specific indirect effect through perspective taking alone (95% confidence interval from -.17 to .04) and through relationship closeness alone (95% confidence interval from -.29 to .01) were not significant, indicating that neither was an independent mediator of the effect of input mode on purchase likelihood. In summary, when taking account of all variables in the model, the input mode → empathetic perspective → relationship closeness → purchase likelihood path through both mediators is significant, whereas the effect of input mode on purchase likelihood through either perspective-taking alone or closeness alone is not significant. This suggests indeed a multiple-step mediation has taken place.

The headline here is that the full model of input-mediator-output does a much better job at explaining the results than just the input-output model.  And, best of all, the presumed operation of closeness and empathy provide good confirming evidence as the mediators.  Advice works because it builds a relationship between the customer and the organization, stimulating feelings of closeness and a specific understanding and empathy of the customer for the specific organization.
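That multiple-step path (input, then empathy, then closeness, then purchase) is what methodologists call serial mediation: the indirect effect is the product a × b1 × b2 of the three links in the chain. A stdlib-only sketch on fabricated data, assuming the chain actually holds, looks like this:

```python
# Serial (multiple-step) mediation: estimate each link with OLS and multiply.
# Data are fabricated so that input -> empathy -> closeness -> purchase holds.
import random

def ols(predictors, y):
    """OLS coefficients [intercept, slopes...] via normal equations."""
    n = len(y)
    X = [[1.0] + [col[i] for col in predictors] for i in range(n)]
    k = len(X[0])
    A = [[sum(X[r][i] * X[r][j] for r in range(n)) for j in range(k)] for i in range(k)]
    b = [sum(X[r][i] * y[r] for r in range(n)) for i in range(k)]
    for i in range(k):  # Gaussian elimination with partial pivoting
        p = max(range(i, k), key=lambda r: abs(A[r][i]))
        A[i], A[p], b[i], b[p] = A[p], A[i], b[p], b[i]
        for r in range(i + 1, k):
            f = A[r][i] / A[i][i]
            A[r] = [A[r][c] - f * A[i][c] for c in range(k)]
            b[r] -= f * b[i]
    beta = [0.0] * k
    for i in reversed(range(k)):
        beta[i] = (b[i] - sum(A[i][j] * beta[j] for j in range(i + 1, k))) / A[i][i]
    return beta

random.seed(7)
n = 300
x = [float(random.randint(0, 1)) for _ in range(n)]   # 0 = expectations, 1 = advice
m1 = [xi + random.gauss(0, 1) for xi in x]            # empathic perspective
m2 = [v + random.gauss(0, 1) for v in m1]             # relationship closeness
y = [v + random.gauss(0, 1) for v in m2]              # purchase likelihood

a = ols([x], m1)[1]           # input -> empathy
b1 = ols([x, m1], m2)[2]      # empathy -> closeness, controlling input
b2 = ols([x, m1, m2], y)[3]   # closeness -> purchase, controlling input and empathy
serial_indirect = a * b1 * b2
print(round(serial_indirect, 2))
```

The quoted result has the same shape: the full chain carries a significant indirect effect while each mediator alone does not.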

But, of course, there could be Alternative Explanations, right?  What about a more cognitive mediation?  Maybe it’s not the relationship, stupid, but the thinking.  Liu and Gal address that reasonable concern.

Alternative Explanations. Finally, we examined measures to tap into alternative accounts of valence, helpfulness (foot in the door), and brand image.  We did not find differences along any of these dimensions.  Specifically, omnibus one-way ANOVAs did not show that participants varied by input form in the degree to which they focused on negative versus positive thoughts (M Advice = 4.95, M Opinions = 5.02, M Expectations = 5.01; F < 1), the degree to which they viewed their input as an act of help (M Advice = 4.51, M Opinions = 4.56, M Expectations = 4.80; F < 1), the degree to which they found providing input difficult (M Advice = 2.85, M Opinions = 2.56, M Expectations = 2.83; F < 1), or in their perceptions of Splash! as humble (M Advice = 4.69, M Opinions = 4.95, M Expectations = 4.72; F < 1) or arrogant (M Advice = 2.38, M Opinions = 2.70, M Expectations = 2.94; F(2,253) = 2.45, p = .09).

Interesting and useful null results!  All of these measures were considered as potential rival explanations as mediators of Advice-Purchase and all fail to show any systematic variation.  Given the absence of mean differences here, there’s no reason to consider the structural equation models.  Nothing is going on, especially compared to the closeness and empathy measures that indicate the relational mediation.

Now, we are closing the circle on Liu and Gal’s theory.  Once again Advice leads to very different and more positive outcomes than Expectation or Control.  Again, Expectations produce the worst outcomes even compared to doing nothing at Control.  What’s new here are the closeness and empathy variables.  They too are sensitive to that Advice function, and they fit well into a structural equation model.  That model shows that Advice producing Closeness and Empathy makes an important difference, and that Advice or Closeness or Empathy alone has less impact than all three together.

At this point, we’ve got a great demonstration of theory construction and testing.  Best of all, the evidence confirms a particular kind of input, Advice, generates a strong sense of relationship which in turn produces a favorable business outcome.  Further, we’ve got evidence that other inputs, Expectations and Opinions, produce worse effects such that not all kinds of Relational Marketing are equal.

But, now Gal and Liu take another experimental step to test not the theory, but the practice.  Many organizations use incentives with their customers and clients.  What happens to the Liu and Gal relational theory when you pay your participants?

Experiment 4 had a 2 (Input Form: Advice vs. Opinion) × 2 (Compensation: None vs. Compensated) between-subjects design and was performed online.  Two-hundred-three (203) participants were recruited from an online subject pool of individuals from throughout the United States and were randomly assigned to conditions.  They read about Thai Kra, a small Thailand-based manufacturer of seaweed snacks.  The company was interested in American consumers’ input before possibly launching their seaweed snacks in the United States.

Participants in the Compensation conditions were informed that in return for their input Thai Kra had bought them an extra raffle entry, doubling their chances of winning an Amazon.com gift certificate.  Participants in the No Compensation conditions did not receive any information about compensation. All participants then read the same description of the company.  The key outcome was purchase likelihood.

So, we’ve got the same input, Advice, along with a comparison of Opinion.  Now, we’ve added incentive in the form of Compensation, that additional Amazon gift certificate bumped up by Thai Kra.  If you know anything about incentives, you know that they can have perverse effects.  Instead of obtaining that common sense hydraulic of Bigger Reward makes Bigger Effects, you can actually kill desired outcomes.  What happens here with relationships?

A planned contrast using a one-tailed test showed that among participants that were not compensated for their input, there was a greater likelihood of trying Thai Kra’s seaweed snacks when they provided advice (M = 4.81) than when they provided opinions (M = 3.50; F(1, 100) = 10.83, p < .01, d = .66), consistent with all previous experiments.  In contrast, among participants that were compensated for their input, there was a similar likelihood of trial regardless of whether they provided advice (M = 3.89) or opinions (M = 4.00; F < 1).
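As a quick arithmetic check on those reported cell means, the simple effects and the interaction contrast (the difference of differences) fall out directly:

```python
# Experiment 4 cell means as reported above (purchase likelihood).
means = {
    ("advice", "none"): 4.81,
    ("opinion", "none"): 3.50,
    ("advice", "paid"): 3.89,
    ("opinion", "paid"): 4.00,
}
simple_none = means[("advice", "none")] - means[("opinion", "none")]  # advice effect, no pay
simple_paid = means[("advice", "paid")] - means[("opinion", "paid")]  # advice effect, paid
interaction = simple_none - simple_paid                               # difference of differences
print(round(simple_none, 2), round(simple_paid, 2), round(interaction, 2))
# prints 1.31 -0.11 1.42
```

A 1.31-point advice advantage without pay, and essentially nothing with pay: that 1.42-point interaction contrast is the incentive killing the advice effect.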

Interesting, isn’t it?  We’ve shown that Advice produces feelings of closeness and empathy which builds the relationship which then makes positive outcomes like purchase or donations more likely.  Relational Marketing can work!  But, now when organizations provide incentives as part of the relationship, bang, you kill the Advice effect.  Think about that.

Let’s put this paper back together again.

Liu and Gal theorize that advice generates closeness and empathy, which builds a relationship between client and organization that produces favorable outcomes for the organization.  They established the basic connection between advice and outcome with both for-profit and charitable organizations and with both purchase likelihood and donations.  They compared advice with two other possible client inputs, expectations and opinions, and found that advice was always superior and that expectations or opinions were often worse than a no-input control comparison.  They also tested rival mediators to closeness and empathy, notably a variety of cognitive factors, and found support only for closeness and empathy.  Finally, they established two interesting boundary conditions.  First, the advice effect works only with the organization that solicits the advice and does not generalize to other, similar organizations.  Second, incentives for providing advice kill the advice effect.

Please mull over this post and the prior one.  There are many implications to this paper and I’ll detail my observations in tomorrow’s third and final post.

Science of Relational Marketing: Part 1, Concept and Testing

I’ve found a great paper that illustrates several large ideas within its excellence. Liu and Gal demonstrate the power of good science for understanding new ideas and in so doing also highlight the many failures bad science stimulates. From one somewhat simple report on Relational Marketing, I’ll develop a series of ideas and will need three posts to handle it. Today we’ll look at the first two experiments from Liu and Gal; tomorrow we’ll explore the last two experiments from their report; and the day after that we’ll pull back for a wider view of this research and its many lusters.

Liu and Gal investigate what I’ll call Relational Marketing in their JCR paper, Bringing Us Together or Driving Us Apart: The Effect of Soliciting Consumer Input on Consumers’ Propensity to Transact with an Organization. I don’t know Liu and Gal from Adam, Eve, or the serpent, but this is excellent work and I recommend a close reading of it to anyone interested specifically in Relational Marketing, persuasion in general, excellent scientific writing, or how to think well.

This research examines a novel process by which soliciting consumer input can impact subsequent purchase and engagement, namely, by changing consumers’ subjective perception of their relationship with the organization. We contrast different types of consumer input and propose that, relative to no input, soliciting advice tends to have an intimacy effect whereby the individual feels closer to the organization, resulting in increases in subsequent propensity to transact and engage with the organization. On the other hand, soliciting expectations tends to have the opposite effect, distancing the individual from the organization.

Observe immediately that this research aims at testing the impact of socializing profitable relationships through concepts of advice and expectations, of intimacy and closeness. Underline that it deconstructs the presumed simple main effect of Web 2.0 (social media for socializing profit) into different parts and suggests some kinds of social relating may function differently from others, which is not exactly that cheerleading Facebook4Profit!!! exhortation.

Gal and Liu argue that the relationship between a customer and a business must generate feelings of closeness and intimacy. Thus, no matter how a business engages customer input, if this does not produce closeness, then the relationship will not produce a positive business outcome.

Now. How do you generate this closeness? Liu and Gal start with advice. They advise that when a business asks a customer for advice, the request should lead the customer to look at the business in a more relational way, especially compared to offering expectations (what I expect you to do) or attitudes (how I evaluate what you do) about the business.

They then test various customer inputs (provide advice, expectations, or opinions) and the presumed mediators (intimacy and closeness) under various conditions (for-profit or nonprofit organizations; paid or volunteer; donation types), with different measures (simple self-report; virtual cash contributions), in 4 lab experiments.

All experiments employ a cover story that the study authors are working with organizations (profit or charitable) to help them with their missions. Participants are from samples of online volunteers who are then randomly assigned to one condition that manipulates hypothesized variables. They read information about that organization at a computer station and respond to all elements through the keyboard. Finally, participants are rewarded with a raffle entry for gift certificates. Let’s detail each study.

Experiment 1 had a 2 (Input Form: Advice vs. Control) × 2 (Input Recipient: Building Hope vs. Preemie Promise) × 2 (Donation Recipient: Building Hope vs. Preemie Promise) between-subjects design and was conducted online. Three hundred fifty-two (352) participants were recruited from an online pool of individuals from throughout the United States and were randomly assigned to conditions. People read descriptions of the organization (Building Hope for domestic violence victims or Preemie Promise for premature infant care).

In Control, participants just read the description, while in Advice, after reading, participants were asked: “We are interested in what advice you might have for our organization.” After entering any advice they had, participants were thanked.

Now, after this, all participants were given a chance to read the description of the other organization. They were then offered an opportunity to donate any raffle winnings to either the first organization they’d read about or the second one. This tested whether people were sensitive to that Advice manipulation or whether they were just generally more charitable.

Here’s an example of an organization description.

Building Hope

Building Hope was founded in 2007 by Elana Lee, a former battered woman, to help aid other battered women and children.

Family violence is the number one crime and cause of injury to women in the U.S. and is believed to be the most common, yet least reported crime in the country. Building Hope aspires to become a model for the nation by introducing innovative programs at shelters designed to help women and children become more self-sufficient.

Building Hope has established a team of dedicated volunteers that give generously of their time and resources to make a difference in the lives of women and children who have been victims of domestic violence.

Got it? You read about an organization, provide your advice, read about another organization, then can make a donation to just one of them. We then analyze the donations in a 3-way ANOVA of Input (Advice or Control), Advice Organization (Hope or Promise), and Donation Organization (Hope or Promise). Here’s a graphic of the outcomes.

Making no decision based on graphic data, we look at the tests. And, indeed, the triple interaction is significant: (F(1,344) = 6.72, p = .01). Bang. Sampling variability is not a plausible rival explanation here since these results are well outside of random variation that could arise from merely randomizing to conditions. This triple is important because it fits the theory Gal and Liu believe and quite specifically. Now, we can look at specific directional hypotheses, right?

Focusing on those giving input to Preemie Promise, consistent with H1, participants giving advice to Preemie Promise donated more to Preemie Promise (M = $3.52) than participants who merely read about Preemie Promise (M = $2.55; F(1,89) = 4.38, p < .05, d = .44). In contrast, consistent with H1a, participants who gave advice to Preemie Promise did not differ in the amount they donated to Building Hope (M = $2.41) from participants who merely read about Preemie Promise (M = $2.79; F < 1).

So, with the specific organization, Preemie Promise, we get that predicted specific donation effect. Participants contribute to a charity that solicited their advice, but not a different charity. It looks like advice is necessary. Now, what about the donations for Building Hope?

A similar pattern was observed among participants providing input to Building Hope. In particular, consistent with H1, participants giving advice to Building Hope donated more to Building Hope (M = $3.69) than those who merely read about Building Hope (M = $2.53; F(1,88) = 6.34, p < .01, d = .53). However, consistent with H1a, participants who gave advice to Building Hope did not differ in the amount they donated to Preemie Promise (M = $2.55) from those who merely read about Building Hope (M = $2.52; F < 1).

Thus, asking for Advice motivated more donations, but only for the organization that solicited the Advice. Control people who never provided advice gave equally to both organizations. Advice people, however, gave more only to the advice-seeking organization. So, advice motivates a better outcome – the donation. And see that the effect sizes are in that Medium Windowpane, 35/65 range.
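Since I keep invoking Windowpanes, here’s a quick back-of-envelope sketch of that same-org versus other-org pattern, using only the means and d values reported above. The d-to-split conversion is the standard d-to-r mapping, which is my shorthand and may not match the exact banding Liu and Gal would use:

```python
import math

# Reported Experiment 1 cell means (donations in dollars)
means = {
    ("advice", "same"): [3.52, 3.69],   # advised PP -> donated PP; advised BH -> donated BH
    ("control", "same"): [2.55, 2.53],  # read-only counterparts
    ("advice", "other"): [2.41, 2.55],  # advised one org, donated to the other
    ("control", "other"): [2.79, 2.52],
}

def avg(xs):
    return sum(xs) / len(xs)

same_boost = avg(means[("advice", "same")]) - avg(means[("control", "same")])
other_boost = avg(means[("advice", "other")]) - avg(means[("control", "other")])
print(f"advice boost, same org:  {same_boost:+.2f}")   # about a dollar
print(f"advice boost, other org: {other_boost:+.2f}")  # essentially nothing

# One common way to express d as a percentage split: d -> r, then 50 +/- 50r
def windowpane(d):
    r = d / math.sqrt(d**2 + 4)
    return round(50 - 50 * r), round(50 + 50 * r)

print(windowpane(0.44))  # reported d for Preemie Promise
print(windowpane(0.53))  # reported d for Building Hope
```

The advice boost is about a dollar for the soliciting organization and essentially nothing for the other one, which is exactly the specificity the theory demands.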

Experiment 1 provides confirming evidence that Advice, compared to a no request Control, generates a better outcome in the form of greater donations. Further, we see that this effect is specific to the organization that makes the request for Advice and does not stimulate a general motivation to help a similar organization. This looks like a relationship effect whereby the Advice giving makes the participant feel more connected specifically to the source who requested and received the Advice.

From a General ELM perspective, Advice appears to function as a WATTage switch that causes participants to think about the organization more carefully and effortfully and especially along relational lines. In some respects Advice is like Forewarning, Role Playing, or even Cognitive Tuning. It activates a thoughtful response from the participant and lets each person discover their own Arguments, then permits them to express those Arguments when they type in the Advice. They really think about this and take the Central Route.

Further supportive, but not conclusive, evidence of this high WATT processing is found in that differential donation outcome. People donate to the Advice requesting organization, but not a similar organization. That’s a fairly particular, unique, and specific response and characteristic of a Central Route attitude. You could also train a Cued response to be this specific, but I’d expect that to take several trials rather than this one shot performance. Furthermore, you’d need some kind of trigger related to the Cue when you made the donation request.

So, we’ve got a good start here on understanding Relational Marketing. One tactic, Advice, appears to generate better outcomes, shows a nice interaction with donation source (Same versus Different), and fits a good theory in the General ELM. Now, would this effect generalize from a charitable organization to a commercial, for-profit organization? And, more interestingly, what would happen if we solicited a different kind of input from the participant, say Customer Expectations?

Experiment 2 has 3 conditions (advice, expectations, and no-input). One hundred thirty-one (131) participants were recruited from an online subject pool of individuals from throughout the United States and were randomly assigned to conditions. They all read a description of a for-profit business, EcoGym, and were then asked to give Advice, Expectations, or in Control just read the description. Everyone then self-reported likelihood of purchasing a membership for this business, described as:

EcoGym is a new “green” concept in fitness clubs. Our goal is to develop an ecologically friendly gym from the ground up. We intend to reduce our energy consumption by building our gym to allow in natural lighting and by using high quality insulation materials to reduce energy consumption from heating and cooling. The materials we intend to use to decorate the gym will include natural woods and fibers. Moreover, the gym will incorporate solar panels for energy generation and fitness equipment, such as treadmills and exercise bikes, will convert members’ exercise power into electric power to operate the gym. The gym will also include a cafe featuring all-natural and organic products, such as healthy smoothies and energy bars.

Hey, The Lean Green Machine!

Now, the data.

An omnibus one-way ANOVA revealed a main effect of input form on purchase likelihood (F(2, 128) = 7.62, p < .001). Planned contrasts using one-tailed tests for directional hypotheses showed that participants in the advice condition expressed a greater likelihood of purchase (M = 4.29) than participants in the control condition (M = 3.60; t(84) = 1.78, p < .05, d = .38), who in turn expressed a greater likelihood of purchase than participants in the expectations condition (M = 2.77; t(87) = 2.20, p < .05, d = .47).

Once again, Advice produces the better outcome, this time for a commercial rather than charitable organization. And the effect is that Medium Windowpane, 35/65. Notice, too, that Expectations produce the lowest likelihood of purchase even compared to the mere reading Control condition. Clearly there is a large practical difference between Advice and Expectations for the consumer. While Liu and Gal do not report it, the difference between Advice (M = 4.29) and Expectation (M = 2.77) must be a near Large Windowpane effect, 25/75.
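Liu and Gal don’t report the Advice versus Expectations contrast directly, but you can back it out, roughly, from what they do report. This is strictly my back-of-envelope (I’m recovering an implied pooled SD from the two reported contrasts, an assumption of mine, not their analysis):

```python
import math

# Back-of-envelope: what d separates Advice (M = 4.29) from Expectations (M = 2.77)?
# Assumption (mine, not the authors'): recover an implied pooled SD from the
# two reported contrasts, then reuse it for the unreported comparison.
sd_1 = (4.29 - 3.60) / 0.38   # advice vs control: mean diff / reported d
sd_2 = (3.60 - 2.77) / 0.47   # control vs expectations
sd = (sd_1 + sd_2) / 2        # roughly 1.79 on the likelihood scale

d_adv_vs_exp = (4.29 - 2.77) / sd
r = d_adv_vs_exp / math.sqrt(d_adv_vs_exp**2 + 4)
print(round(d_adv_vs_exp, 2), round(50 - 50 * r), round(50 + 50 * r))
```

That lands around d = .85 and roughly a 30/70 split, consistent with the near Large Windowpane guess.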

Up to now Liu and Gal have demonstrated that a customer input, Advice, produces better outcomes for both a charitable and a commercial organization on both self report (purchase likelihood) and donation (raffle winnings). They’ve also demonstrated that Advice does not appear to stimulate a general positive attitude, but rather a specific positive attitude that is unique to the requesting organization. This interaction is crucial to Gal and Liu’s theory about Relational Marketing. Advice giving appears to generate a connection between organization and client that is different from Expectations, for example.

Now, many researchers would quit here having delivered such positive and confirming data, but Liu and Gal take an important next step. They want to test and document the impact of closeness and intimacy. They argue that perceptions of closeness mediate the effect of advice – in essence, that advice giving and receiving generates feelings of closeness in the advice-giver and that this closeness drives outcomes like purchase likelihood or donations. In the next post we’ll look at how Gal and Liu test this along with a fourth study that provides an interesting boundary condition.

Computing versus Computering

Our world means computers, physical devices that perform binary operations at the speed of electricity and someday maybe at the speed of biological cells.  In the beginning, computers meant Computing.  After the Fall, it means Computering.

The mere addition of an er, the er as a vocal hesitation combined especially with “like,” the er of Ur, the primitive, the basic, the earth, changes WATT from Thinking for Computing to Tapping for Computering.

er . . . low WATT.

er . . . iGizmo.

er . . . WATtap.

er . . . like.

Spring Break 11 – New York City

With our beloved Mountaineers out of the NCAA basketball tournament, we decided to console ourselves with a quick vacation to Manhattan.  We took JetBlue out of Pittsburgh and had a great trip except for that annoying video screen in front of us that never went off.  So we jury-rigged a solution.

A new use for barf bags!

I am sick to hell and gone with the massive overuse of screens in public.  Airline terminals have become the worst settings creating information overkill in an environment where timely information is crucial.  They violate my Persuasion Rule:  Never Always Be Closing.  I’m looking for larger barf bags!

We stayed at the Warwick at 6th Avenue and 54th Street, just a few blocks south of Central Park.  As I found out while on hold on my hotel room telephone, the Warwick was built by William Randolph Hearst in 1927 for his Hollywood friends, and both Cary Grant and the Beatles stayed there during their visits.  It’s a nice older hotel that’s a bit tight on space and amenities, but that location cannot be beat.  The restaurants and the shopping in midtown are unsurpassed.

We ate at Ma Peche, Il Tinello, and Benoits for our big dinners and for lunch either caught dogs from street vendors or ate at Fusia Asian on Lexington and 56th.  Two big thumbs up for Fusia.  Great lunch value:  fabulous food, excellent service, and great prices.  They were very busy for both lunches we ate there, but amazingly enough, they remembered us on our second visit and treated us like friends.  One of our servers commented on the fact that Melanie ordered the same dish twice and that I switched mine.  Pretty sharp service people.  And great noodle dishes.

Ma Peche was the belle of the ball for our dining experience.  It is done in that nouveau GenX slacker style.  While we had the address we nonetheless had a difficult time finding the door because the eatery is located in the Chambers Hotel and the signage for both places is modest to say the least.  We wandered up and down 56th Street looking for it until by dumb luck we saw the Ma Peche sign.  We were a bit early and the service crew seemed unsure what to do with us, so we wandered into the bar and struck up a conversation with a bartender who gave us the lay of the land.  Some of these hip eateries operate under an extremely casual communication system where no one wants to bother you, but if you ask them anything you can get an expert rundown on everything.  It’s a bit different than places like Il Tinello.

Ma Peche has a cool room.  Here’s a shot of Melanie at our table.

I was up in the bar shooting down to the dining room floor.  The room felt both small and large.  And, man, they are cooking fools at Ma Peche.  I regularly fall in love with a good cook, so I wanted to marry them all in the kitchen.  Get the foie gras with huckleberries.  I’m not much of an organ meat fan, but this was like buttered bacon with tart fruit.  I’m still dreaming about it.  And, the Brussels sprouts, for crying out loud, are to die for.  The chef runs conflicting tastes against each other in ways that complement and combine flavors.  I had a fish dish that essentially used the fish just to carry a bunch of flavors.  Amazingly good cooking.

After dinner we walked off a few calories with a bundled stroll around Central Park and up Fifth Avenue.  Along the way we encountered the Apple Store, just opposite the Plaza Hotel.  That fabulous plexiglass exterior you see is just a faux entrance.

The actual store is below the street and fills thousands of square feet.  Tonight it was stuffed with Apple Fanboys and Fangirls.

Il Tinello is a fabulous northern Italian place with old style service and dining.  Right out of a movie with a great table presentation of antipasto.  Lots of older men in tuxes moving quickly to assist.  It’s just that old familiar upscale style in contrast to the Ma Peche laid back slacker with grungy beards, flannel, and long hair.  We love Il Tinello as just a great overall experience, especially for the room, the ambience, and the service.  The food is excellent, but not at the same degree of difficulty as with Ma Peche.  They are both great places, but for different reasons.

I had an orange sorbet for dessert that was served in a style I’d never seen.  They sliced a top off an orange, hollowed it out, filled it with the sorbet, then put it in a freezer.  The taste was incredibly bright and orange tarty.  And cold, almost as cold as that iron pipe you licked once on a frozen day as a child.

Melanie had a blast on 40th Street in the Garment District.  She walked into an old basement-feeling fabric shop and immediately told me to take a picture.

She ended up touching twice every bolt and remnant in the place.  The sales staff fell in love with her and consulted with her on everything like she was a fashion designer.  I’ve learned quite a bit from Melanie about fabric, mainly the hand, and I can distinguish quality and value pretty well, but not like her expertise.  I was afraid when I left her that they would hire her and she’d never come back to WVU.

I moseyed up 7th Avenue from the Garment District until I found Rudy’s Guitar Shop on 48th.  Rudy’s is a small store with a range of vintage and high end guitars.  I went upstairs and met Rob in the acoustic section.  I could only admire the merchandise since almost all were righties and I never mastered the Jimi Hendrix style of lefty playing righty upside down.  I am always extremely aware of my left handedness in a guitar shop.

As I’m aging my hand reach and flexibility are diminishing, so I cannot comfortably play my Rodriguez classical.  I’ve been looking for a narrow-gauged one for over a year and found my solution with Rob at Rudy’s.  He had a Cordoba Cadet model hanging on the wall.  It’s a 3/4 cut and aimed at young beginners; it looks like an old Martin parlor guitar.  I strummed the strings and was knocked out by the tone from such a small guitar.  Fabulous bass and treble sounds with tremendous volume and clarity of sound.  Rob sent it to the shop to reverse the strings, bridge, and nut and also to file off rough fret edges.

We also spent a great deal of our time just schlepping the city.  We found this fun installation of roses on Park Avenue.

And, of course, Central Park.  Melanie poses with a rock.

And, here’s a nice background and skyline shot in the Park looking back toward Central Park South.

Even with the cool and wet weather, we had a fabulous time in New York.  The Warwick is location, location, location plus history.  And, those midtown eateries can’t be beat.  Oh, did I mention the shopping?  And, as always, you can trust Melanie’s opinion.

Thumbs up for Manhattan!

Think-Aloud Measures Cognition?

Imagine that . . .

. . . Google is considering a change in its page rankings for search that will be implemented in the next few days.  Blogs on persuasion will be blocked on all Google searches because Google has determined that such persuasion blogs Do Evil and since Google is committed to Do No Evil, it cannot in good conscience permit innocent people to stumble into persuasion blogs that Do Evil.  Charlie Sheen agrees with this policy change and thinks that it is high time for Google to block websites on persuasion that Do Evil, making the Internet safer for him.

Now, before we continue, would you please say aloud (or write down) all the thoughts that occur to you about this?

. . .

In this imaginary (please) illustration, we see how the typical Think-Aloud measurement of cognition is done.  You give people a task and then have them literally Think-Aloud (or sometimes instead they write).  Sometimes you have them Think-Aloud during a task. Sometimes you have them Think-Aloud after the performance.  Sometimes and especially in persuasion, you have them do a Thought Listing which means they write rather than speak their thoughts.  The point of the protocol is to gather a relatively objective, though indirect, measure of cognition made available through self report.  These statements of cognition are then analyzed depending upon what you are studying.

Of course, if you are a skeptic, you should be concerned that Think-Aloud is not a passive measurement process, but actually is a kind of cognition itself that interferes with or alters thinking.  Thus, people who Think-Aloud will perform differently than people who just do the task, but don’t Think-Aloud.  It is precisely this concern that Fox, Ericsson, and Best explore in their recent meta-analysis of Think-Aloud studies.  If you are interested in this method, I highly recommend a close reading of this report.  If you trust the authors, here’s their summary:

The general goal of this article was to examine when, where, and how concurrent verbalization of thinking can be elicited with minimal reactivity (cf. Boring’s, 1950, criticism of Watson’s, 1920, failure to give a clear distinction of admissible verbal reports). In particular, we assessed Ericsson and Simon’s (1980, 1993) theory that eliciting verbalizations with a think-aloud procedure only minimally interferes with the structure of mediating processes for studies in which stable objective performance can be elicited. Our meta-analysis, based on 94 independent data sets with almost 3,500 participants, showed that think-aloud verbalization produced a mean effect size that did not differ significantly from zero.

Thus, Think-Aloud appears to be a relatively objective measure rather than some kind of introspection that significantly alters cognition.  Now, there’s a considerable amount of nuance, exception, and elegant interaction in this meta – there’s more going on than a blog post and I recommend reading the entire paper.  For most of us, though, the headline is important enough.  I’d like to take it in one big scientific way and in one small persuasion way.

The big comparison with this paper is to the recent Daryl Bem psi report.  It’s fun to note that the effect size with Think-Aloud, which is considered to be zero and therefore indicative of no effect, is about the same size as Bem’s psi effect on knowing the future.  Sure, there are huge differences between the theory and research behind Think-Aloud and psi, but realize that both cases come down to the same effect size as summarized from a large number of experiments.  When do you say that the numbers say zero which means that nothing is going on?  I leave it to you to decide how the same number can mean both accept and reject the existence of an effect.  As a canny old Fed used to put it, where you stand depends upon where you sit.

The small comparison with this paper is for its direct implications to persuasion research.  Many ELM and HSM studies employ the Thought Listing analog to Think-Aloud.  Participants simply write down “all the thoughts that occurred to you.”  These writings are then unitized (broken into single idea units), then coded into categories like Argument Relevant, Cue Relevant, or Irrelevant and Positive, Negative, or Neutral, then analyzed for their relationship to WATTage, Argument, or Cue, and to attitudes, beliefs, intentions or other cognitions.  (Here’s the standard paper (PDF) for thought listing with ELM.)

These thought listings often reveal an interesting and predictable pattern of results that depends on WATTage.  High WATT participants typically show a very strong correlation between Argument Relevant thoughts and attitude change while with Low WATT participants there’s typically no correlation between those thoughts and attitude change.  This is exactly what should happen according to theory.  That Long Conversation in the Head (as measured with Thought Listing) should directly mediate attitude change.  Under High WATT conditions, the Long Conversation is with Argument relevant thoughts, so the more of them you generate, the greater the attitude change.  And, since Low WATT folks don’t engage that Long Conversation on issue relevant thinking, their Thought Listings are not related to attitude change.
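To make that unitize-code-correlate pattern concrete, here’s a minimal sketch with invented numbers (none of this is from any real study; the data are mine, purely for illustration):

```python
import math

# Hypothetical illustration: under high WATT, counts of Argument Relevant
# thoughts should track attitude change; under low WATT they should not.
def pearson_r(xs, ys):
    """Plain Pearson correlation, no libraries needed."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# (thought counts, attitude-change scores), invented to show the pattern
high_watt = ([1, 2, 3, 4, 5, 6], [0.5, 1.1, 1.4, 2.2, 2.4, 3.1])  # strong link
low_watt  = ([1, 2, 3, 4, 5, 6], [1.8, 0.4, 2.0, 0.9, 1.7, 1.1])  # no link

print(round(pearson_r(*high_watt), 2))  # near 1: the Long Conversation mediates
print(round(pearson_r(*low_watt), 2))   # near 0: thoughts unrelated to change
```

The high WATT counts track attitude change; the low WATT counts don’t. That’s the signature pattern the theory predicts.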

While this Fox et al meta did not directly test Thought Listings, it is difficult for me to see how spoken versus written Think-Aloud/Thought Listing should produce contradictory outcomes.  Thus, this meta not only confirms the original conceptualization behind Think-Aloud, it also supports Thought Listing.

Of course, all of this is balderdash until those clever fMRI magneticians prove it!

Fox, M. C., Ericsson, K. A., & Best, R. (2011). Do procedures for verbal reporting of thinking have to be reactive? A meta-analysis and recommendations for best reporting methods. Psychological Bulletin, 137(2), 316-344. doi:10.1037/a0021663

Framing (Again) with Mammography

You may recall the disappointing results from a meta-analysis by Dan O’Keefe and Jakob Jensen on the effects of message framing.  They found an average effect of r = .039, and given that a Small effect is r = .10 or a Windowpane of 45/55, let’s be kind and say the framing effect is not zero.  But, considering that framing is based on Nobel-prize winning Prospect Theory and is the basis of the Obama Nudge, you’d expect something a bit more than that.  Today we’ll look at a new study on framing that finds better effects for the tactic than r = .039.

First, a quick recap on framing.  Message framing is a persuasion tactic that provides information against different backgrounds.  We can say, for example, that health tests like mammograms are helpful either because:  1) the test can detect cancer early when it is easier to treat (gain frame) or 2) if you don’t test, you may not detect cancer early when it is easier to treat (loss frame).  In both message frames, the same claim is made:  Get a mammogram.  According to theory, loss frames should be more effective than gain frames, especially against a powerful risk like cancer.  Yet, the O’Keefe and Jensen meta found that functional zero effect, despite the theory and a Nobel prize for the theorists.

Alex Rothman has been working on framing and breast cancer for a long time and with a team of colleagues continues to explore how framing may be employed to better effect.  In this study, they had women waiting in a clinic complete a battery of self-report items, then view a persuasive video arguing for mammogram testing.  Women were randomly assigned to either a gain-frame or a loss-frame version of the video.  All the factual claims in the videos were the same; they varied only on the gain or loss frame.  After viewing the videos, the women were tracked for three months, then asked whether they had taken a mammogram screening test in the prior three months.  That response is the key self-report outcome.  Here’s the crucial analysis.

Overall, loss-framed messages led to a significantly higher rate of screening compared to gain-framed messages (odds ratio 2.26). Among women who viewed a loss-framed message, 37% reported screening at follow-up. However, among women who viewed a gain-framed message, only 24% reported a screening at follow-up.

Hey, this is much better news than the average effect from the meta.  We see 37% getting screened within 3 months in the loss-frame versus 24% in the gain-frame.  That 13-point difference translates into a Small Windowpane effect, a bit more than 45/55, and is noticeably better than the meta average of a 49.9/50.1 effect.
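For anyone who wants to check the arithmetic, here’s a quick sketch. Note that the raw odds ratio computed from 37% versus 24% comes out below the published 2.26, which I presume reflects covariate adjustment or unrounded counts in the original analysis (my inference, not stated in the report):

```python
# Checking the reported screening rates against the odds ratio.
loss_p, gain_p = 0.37, 0.24  # reported screening rates at 3-month follow-up

def odds(p):
    return p / (1 - p)

raw_or = odds(loss_p) / odds(gain_p)
print(round(raw_or, 2))           # ~1.86 unadjusted (published OR is 2.26)
print(round(loss_p - gain_p, 2))  # the 13-point risk difference
```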

Furthermore, when the two frames were analyzed with perceived susceptibility, the adjusted effect size was even larger.  Here’s the graph on the interaction effect of frame and susceptibility.

What to make of this?

First, this effect size is obviously larger than the meta expectation.  Rothman and various colleagues have been working at framing and specifically with breast cancer for nearly 15 years and it appears they know how to do it better than other people.  I’ve been reading Rothman’s work for a long time and had a chance to work with him.  He’s not lucky.  He’s good.  He knows how to do this and if you want to do framing, talk with him.

Second, these encouraging positive results do not change the overall conclusion from the meta even a little bit.  Add this result to the total from O’Keefe and Jensen and the average goes, maybe, from .039 to .040.  One strong result does not move the average very much.  We still have all those weak or reversed findings in the research literature.  Sure, maybe some of those other studies were poorly done, probably in executing clean and theory-strong messages for each frame type.  But, it’s hard to run that argument as an explanation for all the bad news in the meta.  Even if we could agree on what were poorly implemented framing messages, the effect sizes would still be less than Small.

Third, the Rothman team puts a strong focus on the interaction between frame and that perceived susceptibility.  Consider this extended quote from their Discussion.

This study provides the first direct test of the relationship between risk beliefs and health message framing in the promotion of cancer screening behavior, assessed in our study by self-report at a 3-month follow-up.  We found perceived susceptibility to breast cancer determined the extent to which loss-framed messages encouraged women to obtain a mammogram. Women who perceived a higher susceptibility to breast cancer were significantly more persuaded to obtain a mammogram by the loss-framed message than the gain-framed message.  This finding is of particular importance given research that shows higher perceptions of susceptibility to be associated with reduced screening rates (Han et al., 2007; Lerman & Schwartz, 1993).  Thus, our findings support the usefulness of loss-framed messages in the promotion of mammography, particularly among women who perceive a high susceptibility to breast cancer.

This is quite a change in theorizing and also quite a leap based upon just one favorable outcome.  Framing under Prospect Theory is built as a main effect.  In principle it is theorized that framing has a main effect all by itself.  It don’t need no stinkin’ interaction with any other stinkin’ variables.  With this study, the Rothman team is proposing a major shift in theory.  Now framing isn’t a main effect.  Further, I have trouble making this leap based on just one positive outcome.  And, finally, while the interaction improves outcomes, the main effect of framing was still present (that 37% versus 24%).  So, what is it?  Main effect or interaction?

Finally, and stop me if I’ve mentioned this before, but no one wants to think Dual Process with Framing whether with General ELM or even Special HSM.  Any attempt to make Framing a main effect persuasion variable is doomed to failure by my thinking because it misses the complexity of persuasion as expressed in those beautiful and well established Dual Process Models like the ELM or HSM.  Prior work (PDF) from Smith and Petty provided very good evidence of how framing works within an ELM blueprint.  Their studies demonstrated that framing is most commonly operating as a WATTage switch that controls Argument scrutiny and that Long Conversation in the Head.  Loss frames pull on that Bad Is Stronger Than Good effect we’ve discussed before and tend to produce high WATT processing.  Thus, framing is only as good as the Arguments within the frame.  Perhaps, we could generate higher screening rates, for example, with better Arguments?

Such thinking, of course, frames framing in another frame!

Let’s get out of here.

We’ve got a beautiful theory that is struggling with ugly facts.  With nearly 20 years of research behind it from a variety of teams, the empirical outcomes are at serious variance with theory predictions.  Some results are clearly more positive and theory consistent, but in the main, not so much.  There’s no doubt that Prospect Theory has strong support, but the framing variation is just not working well in this application.  For me, at any rate, it can be explained under a larger persuasion theory: within the ELM, the loss frame is an example of the negativity effect, operating as an elaboration moderator in most applications.

I also need to underline that I am underwhelmed with the performance of the Kahneman and Tversky model in all its variety.  Yes, there are those really interesting cognitive heuristics, but how the theory explains them and the extensions it suggests simply produce puny effects in the real world, especially when you try to use it for practical persuasion.  Clearly, I’m an Army of One on this, which explains in part why the Nobel Committee didn’t ask my opinion a few years ago.  The research evidence for a variety of Dual Process Models (as noted here) strikes me as clearly more theory consistent than the System 1 and 2 approach from which framing flows.

I’m running a comparative advantages argument here and am not asserting that framing, prospect, or that S1-S2 approach is wrong, bad, illegal, unethical, and on and on.  I just find the evidence for other approaches to be stronger and their application to the kind of problems we consider in the Primer and this Blog to be more useful.  It’s just easier to make money or behavior change with General ELM.

P.S.  In the interests of full disclosure I’m happy to report that I’ve had several extended professional/personal interactions with Alex Rothman.  He’s one of the best guys I’ve encountered in my life.  Friendly, cooperative, smart as hell, hard working, and a great sense of humor – he’s just the kind of person you want as a colleague, neighbor, or friend.  I always appreciate reading his work and thinking carefully about it even if we might see things differently.


Pop Science

From David Brooks at the NYT.

What sorts of people are good at reading emotion?  . . . Taste may play a role, too. For the journal Psychological Science, Kendall Eskine, Natalie Kacinik and Jesse Prinz gave people sweet-tasting, bitter-tasting and neutral-tasting drinks and then asked them to rate a variety of moral transgressions. As expected, people who had tasted the bitter drink were more likely to register moral disgust, suggesting that having Cherry Coke in the jury room may be a smart move for good defense lawyers.

Brooks is a public social scientist writing opinion and perspective columns for the NYT.  He frequents peer-reviewed journals, ponders their wisdom, then distills that science into public comment.  Like the excerpt above.

Kinda neat, that observation about defense lawyers and Cherry Coke.  Except that the research he cites provides absolutely no support for his insight and in fact contradicts it.

If you read the research report from Eskine, Kacinik, and Prinz you’ll discover that their taste manipulation only produced differences among people sipping a bitter drink.  There was no difference in ratings of moral disgust for those drinking a neutral- or sweet-tasting drink.  One might argue that prosecuting attorneys should require quinine water and vinegar in the jury room – that inference would connect to the key finding in the Eskine et al. study.  But given that there was no effect for sweet tastes, any recommendation would be a guess.

So what?  A public opinion columnist doesn’t read the methods and results sections and only superficially scans the intro and discussion.  Hey, lots of folks make it through grad school doing that.  And not a few build literature reviews the same way for their peer-reviewed papers, and, yet again, not a few reviewers do the same thing when considering a paper for publication.

Read that Eskine et al. paper.  It’s tied into the Embodiment Effect, wherein the exterior changes the interior, or the body changes the mind.  People completed an established measurement task that required them to read six scenarios describing different kinds of moral transgressions in daily life.  They then rated their “moral disgust,” or how bad they believed each scenario to be.  Participants were randomly assigned to one of three drinks while doing this moral task – a bitter, sweet, or neutral (water) drink.  They were given a cover story to divert their attention from the drink.

They were told that we were exploring the effects of motor interference (specifically arm-hand movements) on cognitive processing, and we therefore directed them to drink a beverage during a moral-judgment task to instantiate this movement in a natural way.

Okay.  We direct their attention to the motor movement and away from the taste of the drink.  Now, of course, everyone will still taste their drink, but will at least have the distraction of that cover story.  (It would have been nice to do a debriefing on whether participants thought that taste was important in the study, but if Eskine et al. did this, they didn’t report it.)

Here are the moral disgust ratings by the three taste groups.  A higher score means the reader found the vignette to be more objectionable.

Results revealed a significant effect of beverage type, F(2, 51) = 7.368, p = .002, ηp² = .224. Planned contrasts showed that participants’ moral judgments in the bitter condition (M = 78.34, SD = 10.83) were significantly harsher than judgments in the control condition (M = 61.58, SD = 16.88), t(51) = 3.117, p = .003, d = 1.09, and in the sweet condition (M = 59.58, SD = 16.70), t(51) = 3.609, p = .001, d = 1.22. Judgments in the control and sweet conditions did not differ significantly, t(51) = 0.405, n.s.

If you are going to comment on the implications of this research, you need to understand this paragraph.  Only participants sipping the bitter taste showed different ratings; the other two groups did not differ.  Thus, we see what we can ironically call the Brooks Effect – moving one’s eyes over text without comprehension, then offering explanations and inferences.  It’s a necessary skill for FauxItAlls, as when Malcolm Gladwell expertly discussed the arcane mathematics of the Igon, which true propeller heads more correctly understand as Eigen, a horse of an entirely different meaning.
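If you want a feel for what those reported effect sizes mean, you can recompute them from the means and SDs in the quoted paragraph.  Here is a minimal sketch in Python, assuming an equal-n pooled standard deviation (per-cell sample sizes aren’t quoted, so these land near, not exactly on, the published d = 1.09 and d = 1.22):

```python
import math

def cohens_d(m1: float, sd1: float, m2: float, sd2: float) -> float:
    """Cohen's d using a simple equal-n pooled standard deviation."""
    pooled_sd = math.sqrt((sd1 ** 2 + sd2 ** 2) / 2)
    return (m1 - m2) / pooled_sd

# Means and SDs quoted from the Eskine et al. results paragraph
d_bitter_vs_control = cohens_d(78.34, 10.83, 61.58, 16.88)  # ~1.18
d_bitter_vs_sweet = cohens_d(78.34, 10.83, 59.58, 16.70)    # ~1.33

print(round(d_bitter_vs_control, 2), round(d_bitter_vs_sweet, 2))
```

Both values are large effects by conventional benchmarks, which is worth keeping in mind: the bitter-drink finding itself looks real; it’s the inference Brooks drew from it that isn’t.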

Let’s consider this paper like persuasion scientists rather than guys writing for money on a deadline.  Is it possible that researchers in the past have looked at this?  And, what are the psychological processes at work here?

Realize that this is a conditioning experiment.  People are exposed to six stimuli that vary in their moral qualities.  Associated with those stimuli are positive, negative, or neutral stimuli (the drinks) that elicit strong reflexive responses.  Ding-Dong, right?

Now, let’s get in the Wayback Machine and look at a study from the 1930s.  I’m quoting myself from the Primer chapter on Classical Conditioning.

“Professor Greg Razran conducted an interesting study of classical conditioning with political slogans.  He gave a small group of 24 adults a list of political slogans contemporary for the times (the late 1930s).  Consider:

America for Americans!

Workers of the World Unite!

No Other Ism but Americanism!

Down With War and Fascism!

He had the participants rate the slogans on a 7 point attitude scale.  Then over the next several days, he exposed each participant to these slogans under 3 different conditions:  1) while eating a free lunch, 2) while smelling foul odors, and 3) a neutral condition.  He made sure that a particular slogan only appeared in one condition and he repeated this pairing of condition with slogan several times.  After these exposures to the “persuasive communication” (free lunch, foul smell, or neutral), he then had the participants rerate their attitude toward the slogans.”

The human senses here are smell and taste rather than taste alone, and given that Eskine et al. approach their problem from within the Embodiment Effect, make no special claims about taste compared to smell, and also cite smell studies in their rationale, the Razran study is relevant.  It is in the same ballpark, testing the classical conditioning of the reflexive response to smell and taste with symbolic and semantic information (the slogans).  What did Razran find?

Not too surprisingly, Razran found that people changed their attitudes toward the slogans depending upon the “persuasive communication” condition.  If the slogan was associated with the free lunch, attitudes toward it improved from pretest to posttest.  If the slogan was associated with the foul smells, attitudes became more unfavorable, and for slogans in the neutral condition, there was no change.  Razran also asked each participant to try to recall the condition each slogan had been paired with during the conditioning.  They couldn’t do better than chance guessing.

Notice that Razran made his classical conditioning Embodiment Effect work with both positive and negative conditions, while Eskine et al. could only find the effect with the negative taste.  Why?  Eskine et al. glide by their failure without comment when prior research has already demonstrated that you can condition the Embodiment Effect in the direction of the foundational S-R relationship.  Maybe their sweet drink wasn’t sweet enough?  Maybe the bitter drink was too bitter.  The Goldilocks problem!  But, doesn’t this lack of effect weaken the results of their study and make one wonder?  It’s not consistent with well established findings.

Stay with this as we move to a secondary hypothesis from the Eskine et al. study.

In addition, we wanted to test the relation between political views and sensitivity to disgust.  The former variable was of interest because politically conservative individuals seem to rely more on sensory information (Haidt & Hersh, 2001) and show greater sensitivity to disgust (Inbar, Pizarro, & Bloom, 2009) than do individuals with liberal views; we wanted to test this claim using our taste manipulation.  We hypothesized that if conservatives are indeed more sensitive to disgust, then the taste manipulation should affect their moral processing more strongly than the moral processing of liberals.

Consider this as an interesting extension of Embodiment Effects.  If I can move your exterior to change your interior, then maybe I can make that interior change vibrate over to related interiors like political philosophy.  Interesting.

Following the moral-judgment task, participants were given an unrelated language distracter task, in which they described their language background and rated sentences for their imageability.  Participants were also asked to provide some basic demographic information and indicate their political orientation as either conservative or liberal.

Let’s recap here.  We’ve got a true experimental design with participants assigned randomly to one of three conditions.  They do some tasks.  We analyze those outcomes within each randomly assigned condition.  But, with this new independent variable of political preference, we do not have a random assignment.  People indicate their political preference after the randomized drinking task.

A 2 (political orientation: conservative, liberal) × 3 (taste: bitter, sweet, control) between-subjects ANOVA was conducted on moral judgments to determine whether political orientation influenced judgments within each taste condition. There was a significant main effect of taste, F(2, 38) = 9.741, p < .001, ηp² = .339, which reflected the same difference between the bitter condition and the control and sweet conditions that we found in our one-way ANOVA.  Simple-effects analyses of political orientation in each taste condition showed that conservatives’ moral judgments were marginally different from liberals’ moral judgments in the control condition (M = 51.81, SD = 15.83, and M = 66.74, SD = 17.49, respectively), F(1, 38) = 3.979, p = .053, ηp² = .095.  No other comparisons approached significance (see Fig. 2).

Read that paragraph again.  They run an ANOVA with two independent variables: taste (manipulated) and political preference (measured).  They find a statistically significant main effect for the randomized variable of taste, which mirrors the one-way ANOVA result.  Note that they do not report a main effect for politics or a Taste × Politics interaction.  Politics has no effect either alone or in interaction.  These results disconfirm their original hypothesis about moral disgust and political preference.
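The analysis rule being invoked here can be written down as a sketch.  In a factorial design, simple-effects or subgroup contrasts on a factor are warranted only when that factor earns them with a significant main effect or interaction; otherwise you retain the null and stop.  A hypothetical gatekeeping check (the p-values below are illustrative placeholders, since Eskine et al. don’t report them for politics):

```python
ALPHA = 0.05

def simple_effects_warranted(p_main: float, p_interaction: float,
                             alpha: float = ALPHA) -> bool:
    """Gatekeeping rule for a factorial ANOVA: probe a factor further
    only if its main effect or its interaction reaches significance."""
    return p_main < alpha or p_interaction < alpha

# Hypothetical values: neither the politics main effect nor the
# Taste x Politics interaction was reported as significant.
print(simple_effects_warranted(p_main=0.20, p_interaction=0.30))  # False
```

By this rule, further contrasts on political preference shouldn’t have been run at all once the factorial ANOVA came up empty.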

Yet.

To further test our hypothesis about whether disgust affects conservatives’ and liberals’ judgments differently, we divided subjects into two groups: the disgust group (bitter condition) and the nondisgust group (sweet and control conditions combined).  We then conducted two contrast analyses, one for conservatives and one for liberals, to directly compare judgments between the disgust and nondisgust groups. Conservatives’ judgments were significantly harsher in the disgust group (M = 84.94, SD = 4.69) than in the nondisgust group (sweet condition: M = 56.60, SD = 17.00; control condition: M = 51.81, SD = 15.83), t(16) = 4.473, p < .001, d = 2.21. Conversely, liberals’ judgments did not differ significantly between the disgust group (M = 76.67, SD = 9.47) and the nondisgust group (sweet condition: M = 64.72, SD = 14.07; control condition: M = 66.74, SD = 17.49), t(22) = 1.703, n.s.  This suggests that liberals are less likely to recruit extraneous sensoriperceptual information during moral processing than conservatives are.  Taken together, these results suggest that physical disgust helps instantiate moral disgust, and that these effects are more salient in individuals with politically conservative views than in individuals with politically liberal views.

(Are you still reading this or have you zoned out with the Brooks Effect?)

What gives here?  The test Eskine et al. propose for their hypothesis about disgust and conservatism fails.  No main effect.  No interaction effect.  So, when the results don’t fit the hypothesis, change the results!  By the generally accepted standards of research analysis, the failure of the 2 × 3 ANOVA (taste, political preference, and their interaction) to find a positive result for either the main or interaction effect with political preference means the results reject the experimental hypothesis and retain the null.  Further testing is not warranted because of the dead ANOVA.  Yet the reviewers allow Eskine et al. to move this over there and that over here and, dipsy-doodle, abracadabra, conservatives are disgusting!

Altogether, we have a shaky study that cannot replicate well-established findings from Classical Conditioning, misapplies the standards of analysis, and concocts patterns of results to fit an expectation, which means the paper is pretty much a fairy tale from a persuasion science perspective.

But, it is useful for conservative New York Times columnists!

The New New Thing and Persuasion

Persuasion is an old thing, preceding the world’s oldest profession if you believe the Old Testament and Genesis.  What changes most about persuasion is the receiver; following P.T. Barnum’s Law, new ones come along every minute.  Yet people, especially smart people who make their living doing some kind of persuasion, are always looking for a New New Thing.  In our lifetime, the zeitgeist is computational, which means technology, no math, God forbid, just that really cool WATTap dance.

An inescapable feature of technology is the constant stream of the New New Thing.  Contrast that with other areas; if you’ve spent any time on a working farm, for example, it is damn difficult to make annual changes that work better than last year’s effort, so the New New Thing in the fields is as apt to produce starvation as bounty.  With technology, you can add a new button, link, or latest Pantone color and not do much worse than last year, perhaps even a bit better.  Given that almost all human behavior is overdetermined with multiple causes, it is hard to know that the New New Thing made any difference at all.  Especially when you don’t count, which, oddly enough, is something those Computational Zeitgeist guys don’t do.

I read persuasion New New Things, and find myself suppressing shouts of laughter even though I’m sitting home alone without a webcam or microphone attached to my computer; who can possibly observe me?  All the groovy lingo, the poetry, baby, of the New New Thing.  So hip-hop.  So urban.  So Max Headroom, except, Max is no longer a New New Thing, just an unusually perceptive Old Thing.  People are so gullible for the appearance of novelty.  Let me illustrate.

The smartest persuasion guy I know is Rich Petty.  Rich invented the ELM when he was an undergrad at UVa as part of a term paper assignment he wrote in a psych seminar.  He then met John Cacioppo at The Ohio State University, and they combined to develop the idea from an interesting class paper into a mature scientific concept.  A long time ago I once asked Rich what he thought was the most important variable to manipulate for persuasion.  He didn’t hesitate:  Novelty.  Even before Michael Lewis coined the New New Thing, Rich had it figured.  Hotter than the sexiest boy or girl, more compelling than a fixed fight, football game, or soccer match, people roll over for the New New Thing.  If Rich ran with it 20 years ago, the rest of us are so far behind we can’t even find the dust.

Look especially for the combination of technology, marketing, and social media.  Hollywood cannot match those folks for the practical PhD:  Piled Higher and Deeper.  Don’t get me wrong.  I admire the inSincere persuasion play and won’t take food off another person’s plate.

But, you need to think.  When you find yourself deep in the New New Thing, you need to realize that you are writing checks to anOther Guy, and if you don’t have Other Guys who are going to write bigger checks to you, why are you in the New New Thing?

The VC boys and girls behind the Usual Gang of Suspects in Social Media are currently cashing your checks.  If you are sitting at the feet of these Masters, hoping to apply their models to your persuasion, I’ll bet green money you won’t get any ROI.  And, baby, I don’t have a button, link, or color for you to follow.  I’ve just got that Old Old Thing called persuasion.