Prove It!

Epistemology Is Actually Fun

“It is the mark of an educated person to look for precision in each class of things as far as the nature of the subject admits.” Aristotle, Nicomachean Ethics, Book I, section iii, 1094b.

If you spend any time reading popular presentations on persuasion (Peter Piper picked a peck of pickled peppers), one of the most obvious attributes you observe is the assertion of impact, most often accompanied by percentages and exclamation points.

Plump profits 63%!

Slash band-aid usage 82%!

Swell satisfaction 33%!

Sure, you learn to accept gracefully the exaggerations of the marketing department, but you still buzz with good vibrations.  “It won’t be 63%, but, Golly, it might just work!”

But Are You Sure?

When you achieve good outcomes, you don’t look for rival explanations or carefully controlled conditions.  Hey, it worked, so who cares?  And, when it fails, IT fails, not you.  So, you buy the latest Persuasion Plays for Professionals! and execute ProPlay44! as described and diagrammed on page 44.  And, shoot fire, something good happens.  Must be the book, right?

An old guy like me recalls that great sneaker campaign with Michael Jordan and Spike Lee when they both were All That.  Go to YouTube and search up some of those Nike ads.  Jordan as Jordan soared in a compelling demonstration of grace, power, and skill while Lee as Mars Blackmon shouted, “It’s the shoes!  It’s the shoes!”  Today they’d shoot it with LeBron James and next it will be some kid named Wang Tao, but you can see it in your head.

It’s the Shoes

Now, really . . . the shoes?

In your head, you know that there’s something special about Michael, LeBron, or Wang Tao that you don’t have and that’s the difference, but in your heart you might feel that just maybe if you got those shoes . . . the same thing happens with persuasion advice.

Hey, focus.  Both skill and science drive persuasion knowledge.  You should see clear and compelling proof for all the principles expressed and exemplified in this Primer.  It’s just that persuadin’ ain’t that easy and it takes more than percentages and exclamation points to prove it.

What we need to do is to test the shoes.

Persuasion Science . . . Really

I’ve had the opportunity to advise and consult with a wide variety of organizations that in some way use persuasion to be successful. The first thing that struck me was how incredibly certain many folks were of some communication tactic, campaign, or intervention. They would describe to me the New Thing they were using to get more customers or make larger sales or obtain more compliance and my first thought was, “You’ve got to be kidding. If you’re really doing that, you’ll get killed.” Rather than blurt out that blunt disconfirmation I’d restrain myself and ask, “How do you know it works? What are your metrics?”

“Metrics” is a good word to use with people who are not academics or scientists. It can mean exactly the same thing as Research and Quantitative Methods 101, but without all the jargon and math. Fundamentally it means, show me the money.

Someone would then briefly describe how “sales over 12 months” improved or “customer traffic volume momentum” increased or complaint calls to 800 numbers dropped. In other words, metrics.

I would then note that simply because some metrics went up or down at roughly the same time they were using the New Thing, that doesn’t mean it was the shoes, baby. Average daily temperatures might have also gone up at the same time. Would anyone argue the New Thing caused that?

At that point somebody would make a joke about Global Warming then shift the topic to something else.

Part of the disconnect can be explained in the simple dichotomy between a scientific approach and a Darwinian approach. As a scientist, I think it is possible to generate new ideas, test them scientifically, and then implement them successfully. The opening quote from Aristotle captures this point of view (and a lot more). Recall:

“It is the mark of an educated person to look for precision in each class of things as far as the nature of the subject admits.”

This suggests at least two things. First, if you think well, you should be able to figure it out before you act. Second, you’ve got to accept a margin of error dependent upon the thing you’re trying to figure out. If the problem is mathematical, then your margin of error is in the decimal places, but if the problem is human behavior, then your margin of error is quite a bit wider. So simply because a problem will not allow the precision found in the thousandths column doesn’t mean that a person can’t find a scientific solution.

A Darwinian approach would instead come up with the next Big Idea, then fling it into the world and let survival-of-the-fittest rules apply. Thus, if you survive, it must be the New Thing you’re using, right? My problem with a Darwinian approach to understanding effectiveness is that most people and their businesses haven’t been around for the millions of years a Darwinian argument requires. Thus, if you think your New Thing works because you’re still alive and in business, you are using a metaphor to understand your success and not reality. (And there are some scientists who would argue that evolution is pretty lame science because it does not allow for serious experimentation and replication, much like weather and climate science and the current hoopla over Global Warming. Kinda difficult to randomly assign different species to different environments or different climates to different planets. I’m not saying there is no science in evolution or climate studies, just that the science is not so great because it is impossible to experiment.  Notice this compelling advantage: You can do experiments in persuasion on a meaningful basis where you can’t with species or climates, so in my book persuasion is a stronger science. Take that, Real Science! Back to the opera.)

Darwinian persuaders, therefore, look at their survival and argue back to past behavior concluding, “It’s the shoes, baby, it’s the shoes.” While it is possible for the shoes to do some magic, isn’t it also possible that there’s Something Else going on? Maybe there’s some skulking third variable we’re overlooking like the superb physical talent and grinding work ethic of an outstanding athlete when we claim, “It’s the shoes!” A Darwinian approach cannot easily tease out these possibilities. A scientific approach, however, can.

Two Questions To Ask While Testing the Shoes

Whenever you want to know if something “works” you are asking two questions.

Did the shoes really cause the performance or might there be rival explanations?

If it appears the shoes did cause the performance in this case, will it generalize to other conditions?

Sidebar: In scientific parlance the first question is about internal validity and the second is about external validity. I was trained in this terminology, have taught and used it for many years, and I still don’t understand why the terms “internal” and “external” are used. We’ll see this a lot. Scientists are simply awful at naming things, but hey, they invented the heat pump, remote control, and the Internet, so let’s cut them a break.

Both questions are important and one without the other leaves you hanging. If the shoes do work, but only for LeBron or Michael – in other words, the effect doesn’t generalize past these two guys – then who wants the shoes besides two of the greatest athletes of all time? By contrast, if the shoes don’t work, who cares if they don’t work anywhere anytime anyplace for anyone? Scientists care a lot because such failures are good for theory development, future research, and applications for new government grants, but for the rest of us, failure is just failure.

How do we find the effect, rule out rival causes, and determine generalizability?

The Four Forces Of Science

Science typically employs four forces to answer these questions: chance, comparison, control, and counting.  We use these forces to understand the difference between the Old Thing and the New Thing.  The Old Thing is all that stuff your Dad taught you and now that you’ve arrived on the scene, you’re ready to demonstrate how much smarter you are than Dad and you’ve got the New Thing.  Now, Prove It!

Chance or Roll the Dice

Chance, or “randomization” in research parlance, is easy in the lab and difficult in the real world. Randomization is the selection of objects such that each object has an equal chance of being selected and the selection of one object has no effect on the selection of another. If you have a classic two-group experiment with a treatment group (the New Thing) and a control group (the Old Thing), when you randomize, everyone has the same chance of getting into the treatment group or the control group.  Say, for example, you’re working in a group developing new sneakers and the Boss’s niece is in charge of the new “shoes.” You might be tempted to look at your study volunteers and say, “Hey, all you tall athletic people come over here, and you old, fat, and sick people go over there.” Then you give the athletes the “shoes” and the infirm get the competitor shoes. Guess which group does better? Sure, the Boss’s niece is proud and happy and has a glowing report about your job fitness, but is it the “shoes” or something else?

Randomization guards against both intentional and unintentional bias. You might assign each volunteer a number from a Table of Random Numbers, then have the “evens” get your shoes and the “odds” get the competitor shoes.
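If it helps to see the drill spelled out, here is a minimal sketch in Python of the same idea (shuffling instead of a printed random number table); the volunteer labels and the 200-person pool are invented purely for illustration.

    import random

    # Hypothetical volunteer pool; the names and the 200-person size are made up for this sketch.
    volunteers = ["volunteer_" + str(i) for i in range(1, 201)]

    random.shuffle(volunteers)          # every ordering is equally likely
    treatment = volunteers[:100]        # these folks get the New Thing (your "shoes")
    control = volunteers[100:]          # these folks get the Old Thing (competitor shoes)

    # No one, not even the Boss's niece, decides who lands where, so tall athletes
    # and the old, fat, and sick are equally likely to end up in either group.
    print(len(treatment), len(control))   # 100 100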

The rule is easy: Randomize everything.

To the extent that you can and do randomize everything, you are at least neutralizing all the rival explanations, leaving only the “shoes” versus something else. While one proper test never proves it conclusively, it sure goes a long way to ruling out other causes.

Randomization sounds like a fairly simple-minded approach, and one wonders how it could have any practical impact. Unfortunately, you can best see the value of randomization, or more properly the pain and confusion, when randomization is not used. Consider, for example, some of the raging questions in society today about climate change, crime, and health.

Some people ardently believe that human activity has changed global climate, perhaps irrevocably, while others acknowledge that the weather has changed, but think human activity has nothing to do with it. And, worse still, the science seems to support both positions, plus many stops in between. One huge stumbling block in our scientific understanding drops in our path because we cannot randomize anything in our studies of climate. That is, we cannot randomly select samples of planets just like Earth, then randomly assign different patterns of human activity, and then sit back, measure what happens and draw some pretty good inferences. Everything in climate studies is based on a sample size of 1 (our lovely planet, Earth) and simply observes what naturally occurs rather than using the powers of randomization. No one would argue that climate study is not scientific, but because we can’t use randomization effectively, the scientific method of study is weak and leads to contradictory information.

Consider now, crime. If you look at crime statistics, particularly the murder rate, over the past 50 years, you see a clear rise from the 1950s into the 1960s that continues through the 1980s, then begins to level off, then falls quite rapidly through the 1990s with the decrease still occurring in the new millennium. There’s some spotty evidence now (2006) with some crimes in some cities that this decrease may have bottomed out and we’ll now see an upturn.  (Hey, it’s now 2011 and we’re still waiting for the big reversal.)  What caused this big increase from the “Leave It to Beaver” 1950s into the “Age of Aquarius” 1960s? Then, what caused the huge decreases that began with the first President Bush and continued through the second President Bush and now President Obama?

If you read the expert literature on this, you get many answers. Some argue that the crime rate follows the demographic bulge of the Baby Boomers. When they were young, they were good little kids, then they went through that adolescent rage period followed by the inevitable domestication process (graduation, steady job, marriage, mortgage, kids) and the even more inevitable aging process (don’t even ask me about the degradation of my body). Others will point to the rise and fall of the American drug culture and the wars over that profitable underground economy. Some will look at police policy, particularly the “broken windows” theory that suggests if you crack down on petty crime (like breaking windows), you’ll head off bigger crimes before they can start. Who’s right? Hard to say, again, in part, because we cannot use randomization effectively. The good experiment would be to randomly assign people to communities and communities to different treatments like drug use and police policies, let this cook for 50 years, then see what we’ve got.

If you think about it, in most cases it is easier to use randomization with almost anything related to persuasion. Whether your work is in a lab in a highly controlled environment, in the field with lots of noise and uncontrollable outside forces, or in a practical situation where you’re trying to compete in the real world, you can use randomization effectively, especially compared to a lot of natural and physical science.

Comparison

So, you say that the PowerPersuasionPlay increased sales by 43%!

Compared to what? Sales from a year ago? Sales since Friday? Sales from some other number you pulled out of your hip pocket? The concern here is the outcome comparison.

And your PowerPersuasionPlay compared to what alternative? “Want fries with that?” Dumb silence? A wink and a smile? The concern here is alternative explanation comparison.

There’s always a temptation to test your “shoes” against some silly alternative or some silly outcome. Let’s have our treatment group get the newest version of the shoes while the control group will . . . run barefoot . . . wear sandals . . . wear original 1955 Chuck Taylor high tops . . . or worse still, no control group, no alternative comparison. Or there’s a tendency to cherry-pick the outcomes we look at to measure our impact. Without naming any names, I’ve consulted with several different and very large concerns that invested a lot of time, money, and personnel on some very bad projects that were made to look better because they cherry-picked the outcomes. Hey, didja know that since we added our new “New Thing,” sales of napkins have increased 34%? Hubba-hubba.

Good science always looks for the toughest comparisons to test your New Thing. Get hard-headed. Compare your “shoes” to the best competition you can find. Measure the outcomes that are truly critical to your success, whether it is measured with sales or souls. Typically the best way to find a tough comparison is to ask someone who competes with you to devise the “other” option. Competitors love our weaknesses and will diligently seek the alternatives that make us look bad.

The whole point of science is to find what works and why, to the best standard our puny minds can devise. The point is not to reassure yourself or the boss or anyone else that things are just fine and there’s no need to think about what we’re doing, just keep driving toward that light at the end of the tunnel. It has been my experience that doing science typically makes you feel uneasy, uncertain, and uncanny even when all the news is good. Science almost always gives you bad news, surprising news, unexpected news. If you are sitting around a table looking at any kind of evaluation study of something your team is doing and everyone is happy and smiling, you’re probably missing something important. And the easiest way to delude yourself is to make bad comparisons.

Control

If you order two hamburgers at Mickey Ds and one tastes great and the other doesn’t, you’ve found a control problem. Anytime there’s variation in a process you’ve got a potential control problem. Control is a really big deal in science.  It means ensuring each research participant gets the same Thing, whether it’s the Old Thing or the New Thing, every time without variation.  If the test varies from person to person, you’ve lost control.

A great illustration of the control problem arises in the “lifestyle” factors in mortality and morbidity.  Right now, we’re trying to understand the role that lifestyle behaviors like diet and exercise play in our health.  There’s some pretty good evidence that people who eat a “better” diet or get “more” exercise will live longer and feel healthier.  But, when you look more carefully at the evidence you see a lot of studies with virtually no control or very poor control over these factors.  The biggest hassle here is getting an accurate and reliable measurement of something like “diet” or “exercise.”  Typically, we use self-reports from people and ask them to describe or estimate what they eat or how they exercise.  Even if people know the truth and can report the truth accurately, we still have no control over what “treatment” group they are in.  This is called selection bias and it simply means that when you don’t control the New Thing, other forces are operating.  We might see that people who report “more” of any “exercise” live longer, but since we didn’t assign the activity or the amount, we’re stuck in a chicken-or-egg dilemma.  Do healthy people exercise more and live longer, or do people who exercise more live longer and healthier?  When we can’t control the application of the New Thing, we’ve always got that problem.
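To make that chicken-or-egg problem concrete, here is a toy simulation in Python, not real health data, in which a hidden “health” variable drives both who chooses to exercise and who lives long, while exercise itself does nothing at all.

    import random

    random.seed(1)

    # Toy world: a hidden health score drives BOTH self-selection into exercise
    # and long life; exercise has no causal effect of its own here.
    people = []
    for _ in range(10000):
        health = random.random()                 # the skulking third variable
        exercises = random.random() < health     # healthier people self-select into exercise
        lives_long = random.random() < health    # healthier people also live longer
        people.append((exercises, lives_long))

    def long_life_rate(group):
        return sum(1 for _, lives in group if lives) / len(group)

    exercisers = [p for p in people if p[0]]
    non_exercisers = [p for p in people if not p[0]]
    print(round(long_life_rate(exercisers), 2), round(long_life_rate(non_exercisers), 2))
    # Prints roughly 0.67 and 0.33: exercisers look much healthier even though
    # exercise did nothing, because nobody controlled who got the "treatment."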

Another good illustration of the control problem shows up in the current raging arguments over global warming.  We’ve already looked at the randomization problem with understanding global warming and human causes in it.  You can’t randomly assign planets to climates or even randomly assign different human activities to different climates and planets.  We’ve only got this one case, Earth, so randomization is logically difficult.  Well, not only do we have the randomization problem, we’ve got a control problem.  The hypothesized human activities that cause global warming have occurred without any scientific manipulation.  Lots of people operating in loose groups have done a lot of different things over the past one hundred years.  None of that activity was “controlled” in anything remotely approaching a “scientific” sense of the term.

Okay, so does this mean that there is no science with diet and exercise or global warming? Of course not.  That’s not the point.  It’s just that the science isn’t great, but rather has a lot of holes in it because we lack control over the New Thing.  This lack of control doesn’t mean that eating more fruits and vegetables has no value or that getting more exercise has no value or that human activity has no impact on global climate.  It just means we need to be a lot more tentative in our conclusions.

Quick review here: Control addresses how the Things get made, assigned, and used.  When the researcher controls who gets the New Thing, how much, and how often, typically using randomization, then we’ve got good control in our experiment and we can feel pretty confident about drawing conclusions from the data.  However, as we lose control over the application of the New Thing, we need to become more thoughtful, more wary, and more provisional.  It doesn’t matter whether the New Thing is a new persuasion tactic, a new diet plan, or just a new shoe.  When you have control of the test, the data are better.

Counting

If you think you change something, then you can count it.  If you believe you can do something that makes the world better or even worse, you should be able to quantify that outcome, that result, that change on a simple counting scale.  If you can’t count it, it doesn’t count.

Consider the opposite of this claim.  You want to defend instead this proposition: I’ve got a New Thing that I know beyond reasonable doubt produces a desired change in other people at my command; but, I can’t quantify any of this.  I can’t even divide the “change” into two groups of “Did Change” or “Did Not Change,” much less have shades in between.

That’s crazy.  If you can do something that changes other people, you should be able to count it, even if only with that “Did or Did Not” category system.

If you can count something that means you can explain it to someone else and they can count it and get the same number you get or at least close to it.  If you can’t count it that means you’re probably operating in a universe of private meaning where, hey man, it’s something that’s just gots to be true, but I can’t explain it to you.  That’s fine on the street or in a bar, but if you’ve got time, money, and people riding on the proposition, you need to grow up and learn to count.

Now, usually when numbers appear on the battlefield, some people throw their hands up in the air in surrender, as if the enemy has brought up the heavy artillery, and rather than face annihilation by quantification they just wave the white flag right now.  If you don’t like numbers, you can still use quantification to understand persuasion or global warming or anything that makes claims about change in reality.  I’m not kidding.  Even if you can’t count past ten without taking off your shoes, you can still use quantification to assess the science of claims.  Here’s how.

First, we’ve got to get in the WayBack Machine and time travel back to a smarter and simpler time.  We’re going to use an approach first described by Professor Robert Rosenthal in the 1970s.  He called his method the Binomial Effect Size Display (BESD) demonstrating once again the facile skill scientists possess when it comes to naming things.  (Can you imagine the words we’d be using today if Adam had been a scientist rather than just a guy?)  I call it the Windowpane Display which is at least transparent.  Think about a window.  Imagine that it is divided into four equal panes.  Easy to visualize, right?

Now, let’s put some labels on our window.

We’re doing an experiment and we’ve got two groups.  The treatment group will get the New Thing while the control group will get something else, the Old Thing.  We’ll randomly assign our participants to only one condition.  To make the math tidy, we’ll do this experiment with 200 people, so we put 100 in each group.  Now, after we give each person their Sauce, we then observe them to see if they changed the way we thought they should.  We’ll make the answer to this question easy with only two possibilities: Yes, they changed, or No, they didn’t change.  Here’s a crude, but not vulgar, graphic of the Windowpane.

WINDOWPANE WITH TREATMENT and OUTCOME

Pretty simple so far.  We’re testing the New Thing against the Old Thing.  We have 100 people randomly assigned to each group.  We then see how the people change, either into Yes or No.  Now, let’s fill in each of the four little windowpanes to demonstrate different scenarios.  We’ll start, as we often do in science, with failure.  Assume that the experiment blows up and that our New Thing produces nothing better or worse than the Old Thing.  We’ll call this the No Effect condition because the treatment had, well, no impact, no influence, no effect.  It looks like this.

NO EFFECT

We see here that we’ve got 50 people in each little windowpane.  Let’s read each row.  We started with 100 people in the treatment condition who got the New Thing and when we observed them we found that 50 of the 100 changed and 50 of the 100 didn’t change.  We also started with 100 people in the control condition who got the Old Thing and when we observed them we found 50 of the 100 changed and 50 didn’t.  No effect.  Nada.  Zip.  The New Thing is not different from the Old Thing.
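If you want to poke at the Windowpane yourself, here is a minimal sketch in Python of the No Effect table; the counts are exactly the 50/50 numbers from the example above, nothing more.

    # No Effect Windowpane: 100 people per group, outcomes counted as Yes/No.
    windowpane = {
        "New Thing (treatment)": {"Changed": 50, "Did Not Change": 50},
        "Old Thing (control)":   {"Changed": 50, "Did Not Change": 50},
    }

    for group, counts in windowpane.items():
        pct_changed = 100 * counts["Changed"] / (counts["Changed"] + counts["Did Not Change"])
        print(group, "-", str(round(pct_changed)) + "% changed")

    # Both rows come out at 50% changed, so the treatment-minus-control
    # difference is 0 percentage points: no effect, nada, zip.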

Quick detail: I’ve deliberately set up the failure, also known as the null condition, to be 50/50.  If you’re thinking ahead you realize that failure would also occur if both groups were 10/90 or 30/70 or even 90/10, just so long as both groups have the same percentage.  I’m calibrating the No Effect example to be 50/50 because it will make other scenarios a lot easier to grasp quickly and will require fewer mental gymnastics to get.  If you’re a propeller-head stats maven you know that this is an incredibly simpleminded demonstration and that things are just a little bit more complex, tut, tut.  Good for you.  Now go off to a corner by yourself and invert a matrix using pencil and paper and leave the rest of us sitting on the floor taking off our shoes.  Again, the 50/50 No Effect helps with the learning.  Back to counting our toes.

Now, let’s create an example where we start to get differences.  Let’s assume that Something Happens when people get the New Thing and it looks like this.

SMALL EFFECT

We now see on the rows and the columns a 45/55 split, a 10 point difference.  In social science parlance, this 10 point difference is called a “small” effect, as popularized by Jacob Cohen in his work on power analysis and effect sizes.  Make sure that you “see” the impact of the treatment.  Notice in this example that more people who got the New Thing showed the desired change (read their row) compared to people who got the Old Thing (read their row).

A small difference of 10% doesn’t sound like much, but consider the practical effect.  If you compare the batting averages between “poor” Major League Baseball players and “great” MLB players, the statistical difference works out to a “small” effect size.  Here’s a forced example that scales the comparison to 1,000 at bats.  (Yes, I know that nobody gets 1,000 ABs in a season, but you don’t want to do the math for seasonal data, and it doesn’t matter.  Why would I lie about this or be wrong in print?)

If you read down each column, you should spot that .100 (10%) difference between hitting skill level.  If you compute the proper crosstab statistic, a phi, the value is .113, which is another way of saying “small effect.”  Thus, while a .320 average is an All-Star difference compared to a .220 average, statistically this is small.
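For the curious, here is a small Python sketch of that crosstab; I’m assuming the counts the table would contain given the text, great hitters at .320 and poor hitters at .220 over the forced 1,000 at bats, and using the standard phi formula for a 2x2 table.

    from math import sqrt

    # Assumed counts for the forced 1,000 at-bat example:
    # great hitters bat .320 (320 hits, 680 outs); poor hitters bat .220 (220 hits, 780 outs).
    a, b = 320, 680    # great hitters: hits, outs
    c, d = 220, 780    # poor hitters:  hits, outs

    # phi for a 2x2 table: (ad - bc) divided by the square root of the product
    # of the four marginal totals.
    phi = (a * d - b * c) / sqrt((a + b) * (c + d) * (a + c) * (b + d))
    print(round(phi, 3))   # 0.113 -- a "small" effect despite the All-Star difference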

MEDIUM EFFECT

Now, our row values are 35 and 65.  A medium effect is a 30 point difference.  That sounds somewhat impressive, a 30 percentage point difference.  Think about this medium effect another way.  Notice that 65 is almost twice as large as 35 (okay, it is 186% of 35 and not exactly 200% – you’ll never invert a matrix by hand if you keep interrupting me).  Expressed another way, a medium effect means that you’re getting almost twice as much change in the treatment group compared to the control group.  A medium effect is getting to be pretty obvious.  Think how obvious a “large” effect must be.  It looks like this.

LARGE EFFECT

The row values here are 25 and 75, a 50 point difference.  Now the rate of difference is three to one, with the Treatment producing three times as much change as the Control.  That’s big.  That’s obvious.  Take a quick scan now and review the four Windowpanes: No Effect, Small Effect, Medium Effect, and Large Effect.  See the numbers change.

[Sidebar:  Check out the Windowpane chapter for a Visual Example of Effect Sizes.  Scroll down until you see an orange and a blue jar!]

The point of this demonstration is to show that you can think with numbers in a practical and efficient way without having a statistician in the room.  Anyone can handle the Windowpane approach with numbers.  Just have a clear definition of Changed? (Yes or No) and a clear definition of the Group (Treatment or Control).  Then just count and look for percentage differences.  A 10% difference is small, 30% is moderate, and 50% is large.  And, realize that while “small” may be hard to detect, it can definitely make a big practical difference (you often don’t have to outrun the bear, just one other guy).
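And if you want the whole drill in one place, here is a short Python sketch that counts, compares, and labels the difference using that rough 10/30/50 rule of thumb; the function name and the cutoffs are just my restatement of the paragraph above, not an official statistic.

    def windowpane_verdict(treat_changed, treat_total, control_changed, control_total):
        """Count the change rates, take the difference, and label it with the 10/30/50 rule."""
        diff = 100 * (treat_changed / treat_total - control_changed / control_total)
        if abs(diff) >= 50:
            label = "large"
        elif abs(diff) >= 30:
            label = "moderate"
        elif abs(diff) >= 10:
            label = "small"
        else:
            label = "no effect worth talking about"
        return diff, label

    # The four scenarios from the Windowpanes above.
    for name, counts in [("No Effect", (50, 100, 50, 100)),
                         ("Small",     (55, 100, 45, 100)),
                         ("Medium",    (65, 100, 35, 100)),
                         ("Large",     (75, 100, 25, 100))]:
        diff, label = windowpane_verdict(*counts)
        print(name + ": " + format(diff, "+.0f") + " points -> " + label)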

Does Science Always Do Science?

Afraid not. Scientists are sometimes prime offenders of the basic forces. Right now the Western World is on a scientific health kick and just about anyone with a lab coat, a pill, and a bar chart can save the world and make a lot of money. You might have picked up the paper or pointed your browser at a news aggregator and read the very disappointing headlines about increased breast cancer in women who took Hormone Replacement Therapy (HRT). HRT was supposed to be the New Thing for women entering menopause. If you’re an older girl or knew one well during that time, you understood the benefits of that pill. What a delight. A pill that reduced those annoying menopausal symptoms and had no side effects!

Except that the health and medical community hadn’t done a very good job testing HRT. In fact, if you read the old research on it, you might come away wondering just whose niece or nephew was in charge, because there was precious little randomization, very poor comparison, and lousy control in most of the testing. Nonetheless, what followed was millions of women taking the pill like good little girls until somebody finally did a pretty good randomized controlled trial, and boy did they get a shock. All the experts were convinced that not only would HRT help with the annoying symptoms, it would also have a protective effect on other health outcomes. Well, we found out that HRT did have an effect on other health outcomes, but it wasn’t as expected. We found that women taking HRT were more likely to get breast cancer. Surprise! Nowadays, doctors don’t hand out HRT pills like Wrigley’s Spearmint gum anymore and instead everyone involved does a careful and thoughtful individual analysis.

Someone else can write the book on scientific failures and then when that book is completed they can write the book on all the failures in business or government policy or defense armament development or in educational systems or any other area of human effort where the folks in charge acted like the idiot niece. A little bit of randomization, control, and comparison would have gone a long way to finding rival explanations before the Big Wipeout hit.

So, science is not some automatic guarantee of truth detecting. In fact, science is a lot like Winston Churchill’s famous observation about democracy, “It has been said that democracy is the worst form of government except all the others that have been tried.” Science can be the worst form of truth finding except all the others that have been tried. Like democracy, it’s just hard to do well.

Doing Science Well Or Prove It!

Whenever you encounter any claim about persuasion you can test that claim against any standard, test, or value you like, but you might want to include the two questions and the four forces of science as part of the drill.

Ask first, did the New Thing really cause the performance or might there be rival explanations?

Then ask, if it appears the New Thing did cause the performance in this case, will it generalize to other conditions?

To answer these questions, look for chance, comparison, control, and counting. If the source of the claim is silent, evasive, or nervous about the questions and the forces, you’ve got good reason to look a little more closely at the claim. Throughout the Primer, I am careful to present information about persuasion that has done the best job of answering the two questions with a lot of the four forces. Also note that while I am an academic scientist and have published peer-reviewed research on persuasion, I never cite my own work as the sole example of evidence for or against some persuasion claim. I’m not featuring this point because I’m nervous about the credibility of the Primer – with the quality of information out on the Web and the effectiveness of search engines, you can easily and quickly test virtually every statement in the Primer online yourself. I am alerting you on this point because I want you on your toes, actively thinking about and questioning the concepts in the Primer. Look for flaws, errors, stupidities, and foolishness. Just keep in mind Aristotle’s admonition:

“It is the mark of an educated person to look for precision in each class of things as far as the nature of the subject admits.”

Limits Of Science

In this chapter I’ve presented the basic principles of a scientific approach to the problem of Prove It!  I’ve also argued that you can’t expect the same kind of science to operate on all kinds of situations noting, for example, that one can apply more of the principles of scientific research methods to persuasion than to the study of evolution or climate change. Yet, very few people, especially scientists, would consider persuasion to be a more “scientific” field than evolutionary biology or climatology.

Even when the principles of science are appropriately applied, I’d still caution against too much confidence. While the practical benefits of science are fabulous and no one would argue against the wise application of science, I think it is going wildly too far to trust science as the sole or primary means of understanding the world and human nature. It has been my unfortunate life experience to work with people who place entirely too much faith in their intelligence and scientific skill. And, I’ve observed the same defect in myself. Appeals to intelligence and, especially today, science, can be a siren song for vanity, pride, and arrogance. Once again Aristotle seemed to get it right when he observed: “Thus a master of any art avoids excess and defect, but seeks the intermediate and chooses this.” (Nicomachean Ethics, Book II, section vi.)

References And Recommended Readings

Campbell, D. & Stanley, J. (2005). Experimental and quasi-experimental designs for research. Houghton Mifflin.

Cohen, J. (1977). Power analysis for the behavioral sciences. Revised Edition. Orlando, FL: Academic Press.

Kerlinger, F. & Lee, H. (1999). Foundations of behavioral research, 4th ed. Wadsworth Publishing.

Everyone has their favorites in any field. Each of these books is quite old, which in my opinion makes them more valuable because they have stood the test of time. Cohen got his work on power analysis and effect sizes started in 1962. The Campbell and Stanley monograph appeared first in 1963. The first edition of Kerlinger was 1964. The Kerlinger book is actually quite a bit of fun, especially considering that in its hardback version the book could be classified as a deadly weapon if someone struck you with it. Professor Kerlinger clearly has a sense of humor and expresses it even in something as dry and parched as research methods. His work is also among the best examples of technical writing I’ve ever read, and I have a great weariness of knowledge, my little pretties. If you don’t get Cohen, Kerlinger, or Campbell and Stanley, you don’t get science. That’s okay. I don’t get socialism, veganism, or celibacy. It’s a big world and there are many paths to perfection.