Category Archives: Health

all things morbid and mortal

The Illusion Lingers

You’ll just have to take my word for this part.

I changed physicians recently because my former doc thought that I had a disease from deer ticks.  We live in a heavily wooded area with lots of deer, and I was feeling what I thought were the aches and pains of over-exercising an aging body.  My physician knew otherwise.  She had uncovered evidence of an epidemic from deer ticks, yet the authorities were not taking her seriously, as if she were one of the good guys in the X-Files.  I found a new physician, changed my workout routine, and my aches and pains disappeared.  The deer, and presumably their ticks, remain.

On my last visit to my new physician, I noticed what you’ll now have to accept as true based solely upon my testimony.  She recently moved from a nice small suburban mall building to a newly renovated former beauty and spa building.  As I entered the very nice building for the first time, I noted a small red sign by the door telling Drug Reps not to park in the main parking lot.  Then, as I proceeded into the entrance, I observed that it is actually a double entrance – you go through two sets of doors.  The area between the two sets of doors is arranged with chairs as an exterior waiting room for, you guessed it, Drug Reps.  Another sign directs them to sit there.

Thus, the first message my new physician sends to her clients is about Drug Reps.  And she makes sure that if any are present, you will see them before you see her.  Now, my physician is a very sweet and nice woman who is polite, thorough, and absolutely clueless about social perception, persuasion, and impression formation.  I’ve only had two face-to-face interactions with her, the first when I changed physicians and the second when I had a regular checkup.  Both times she advised me to take a statin for cholesterol even though I am healthy.  Both times she offered to write a prescription for a brand name drug rather than a generic.

Even knowing nothing about me, and thinking only about the proven effects of drugs, a reasonable reader should note two concerns.  First, statins, the cholesterol lowering wonder drugs, have no benefit for healthy people.  Second, generics work as well as branded drugs and cost considerably less.  Yet my sweet physician is recommending drugs for me that I don’t need, that won’t work, and that will cost more money than necessary, even if the drugs were necessary.  And, by the way, if you’re a Drug Rep, park over there and sit here.

Let me tie this little example to the broader case.  The lede makes the point.

A dozen pharmaceutical companies have given doctors and other healthcare providers more than $760 million over the past two years – and those companies’ sales comprise 40 percent of the U.S. market.

The source of this claim is a public watchdog group, ProPublica, which is assembling a database of pharma reports on these transactions.  Realize we are not talking about payments for prescribing pills.  These payments go for other services beyond writing script.  Now, payments from pharmas to physicians for a variety of services are as old as pills and people, but these transactions were never reported.  New Federal law requires this starting in 2013, and some pharmas have gotten ahead of the curve (as you should with bad news – Inoculation, baby) and are posting these documents early, which is how ProPublica got them.

How can we understand this $760 million payout for consulting, speaking, research and expenses to physicians?  Well, there are approximately 700,000 physicians in the US.  Each year, it appears, these reporting pharmas gave about $380 million.  That works out to about $500 per physician.  And, of course, this is data from pharmas with 40% market share.  It’s not unreasonable to assume that the unreporting pharmas with the remaining 60% of market paid out at a similar rate, so as a rough estimate, the average physician gets about a thousand dollars a year from pharmas in direct payments for services beyond simply prescribing pills.
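That back-of-the-envelope arithmetic can be sketched in a few lines of Python. The inputs are the post's own rough figures (two-year total, market share, physician count), not audited numbers:

```python
# Rough per-physician payment estimate from the ProPublica figures quoted above.
# All inputs are the post's own approximations, not audited data.

reported_total = 760_000_000   # dollars over two years, from reporting pharmas
years = 2
market_share = 0.40            # reporting pharmas' share of the US market
physicians = 700_000           # approximate number of US physicians

per_year = reported_total / years                   # ~$380 million per year
per_physician_reported = per_year / physicians      # ~$543 per physician

# If the non-reporting pharmas (the other 60% of the market) pay at a
# similar rate per market share, scale the reported total to the whole market.
per_physician_all = (per_year / market_share) / physicians

print(round(per_physician_reported))  # 543
print(round(per_physician_all))       # 1357
```

Scaled to the full market, the estimate lands a bit above the post's round figure of "about a thousand dollars" a year, but it is the same order of magnitude.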

If this was a 1950s story about illegal union activity among dockworkers and stevedores, you’d call this a kickback.  You’d make the movie in black and white with Marlon Brando, call it something like On The Waterfront, and make a classic.  Here we just call it Health Care Reform We Can All Believe In.  Docs get paid to prescribe drugs, more for branded and less for generic, in direct payment from insurance companies.  And, they also get paid by pharmas in the form of payments for research, speaking, and consulting.

I’ve already narrated my one experience with consulting for a pharma here.  It was essentially a paid vacation that got called work.  Not illegal by any means, but certainly not serious work either.  Melanie accompanied me, enjoyed herself enormously on a white beach while I worked in a beautiful air-conditioned resort hotel conference room, and, yes, they also served prawns on the buffet line.  It’s a dirty job, but when she’s happy, you’re happy.

This is a commonplace example of physician services that pharmas buy.  Whether in San Juan or the nicest restaurant in town, whether as consultant, speaker, or evaluator, the pharmas pay the freight for an enjoyable bit of effort.  Not exactly a seminar with pizza and pop – pop is a poor pairing with prawns; try the Chardonnay instead.

Now, let’s make a big pivot.  If you spend any time reading health and medical research journals, you know how sensitive the community is to money.  You might recall Marion Nestle’s triumphant dismissal of a study because it got funding from a commercial food company.  That’s a standard response.  Hey, if commercial sources are paying for the research, the research is biased.  Nestle’s axiomatic, knee-jerk, Ding Dong response is emblematic, common, and widespread in the health and medical community.

However, when physicians or researchers get paid by commercial sources to consult in San Juan in February, it’s okay.  No bias here.  You can’t buy a physician for a grand.

Of course, as a persuasion maven you know that’s true.  You can’t buy a physician for a thousand bucks.  You own them.

The persuasion plays here are so obvious they are comical.  Pharmas employ small incentives with the Other Guys.  These incentives have many different functions.  Sometimes the incentives work as consequences that produce immediate positive affect and benefits (San Juan, the prawns, and Melanie in her little blue bikini).  Sometimes the incentives produce insufficient justification for the physician’s unethical action, which produces dissonance and the drive for reduction, a drive that usually resolves into innocence, charity, and increased script writing.  Sometimes the incentives buy reception, processing, and response as the pharma Cascades the physician into a new drug.  Of course, this isn’t funny, because your health and wallet are the punch line.

Now, walk with me to the entrance of my physician’s office.  See that red sign telling Drug Reps to park there.  Then enter that little exterior vestibule with seats and a sign telling Drug Reps to sit here.  Then meet my sweet, polite, and thorough doc who wants to write script for a drug you don’t need, won’t work, and costs more.

But, she wasn’t bought.  Even she will tell you that.

There’s a Difference between Persuasion, and Smoke and Mirrors; with Persuasion the Illusion Lingers.

P.S.  It appears that the pharma guys are getting their groove back since the bad times after the 1999 party.  Brilliant persuasion.  Now, invent a pill that makes my knees work like they did when I was 40, not even 20, 40 would be good.  And, if the side effects include an erection lasting longer than four hours, that’s just the price somebody’s gotta pay for good knees.  Let’s party like it’s 1999 (YouTube).

P.P.S.  I stopped running so flat footed and got up on my toes and the balls of my feet.  No more joint problems.  Even in my shoulders and elbows.  Considerably less back pain.  Of course, I look like a ballet dancer prancing down the road.  No wonder everyone waves at me.  Jeté, Etienne, jeté!


CBS Finds the Cure for Heart Disease

Stop the sticks!

CBS News reports an amazing breakthrough with stem cell therapy for heart disease. Read all about it or better still tolerate the 30 second ad and watch it.  Scott Pelley practically fell over himself in the report and the gorgeous Dr. John LaPook provided the depth, nuance, and insight only an MD can offer on experimental clinical trials.

Are we in sweeps week?

You could take LaPook’s word for it. Consider his credentials.

He has done extensive work in the field of medical computing, including helping to develop an electronic textbook of medicine and writing a medical practice management software package that he sold in 1999 to a company that was later acquired by Emdeon Corporation, the parent company of WebMD.

Or you can chase down the science and read it for yourself. Here’s the description of the clinical trial. And here’s what appears to be the presentation slides (pdf) the lead researcher used at the American Heart Association meeting. Noticeably absent is any peer review publication on the trial.

Here’s the only data on the trial I can find, and it’s from the CBS News report.

Milles was one of 25 volunteers in this small study. Seventeen got stem cells and eight control subjects got standard heart care. All the stem cell recipients had their heart attack scars reduced most dramatically — on average almost 50 percent — damaged muscle replaced by new healthy heart tissue. The eight control subjects saw no improvement. Ken Milles had better than 50 percent improvement.

I speculate that “50 percent improvement” may be a ratio, in other words a relative improvement compared to something else. If true (and I cannot be sure), that is a Small Windowpane effect size. While this is an experimental design with participants randomly assigned to treatment or control, the small sample size of 25 people would make this difference not even statistically significant. I appreciate the randomization and the difficulty of the intervention, but if this is truly a RR of 150% with 25 cases, the results fail. If the results, however, mean an absolute 50 percentage point improvement, that’s a Large Windowpane effect – wow – but still probably not SSD and the results still fail. In this case, the tests of statistical significance are important because sampling error is truly a rival explanation and it appears even a Large Windowpane is still within sampling error.
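The relative-versus-absolute distinction is easy to see with made-up numbers. Nothing below comes from the actual trial; the baseline scar size is purely hypothetical:

```python
# Hypothetical illustration of relative vs. absolute improvement.
# A "50 percent improvement" can mean two very different things.

baseline_scar = 0.30   # hypothetical: scar tissue is 30% of the heart muscle

# Relative interpretation: the scar shrinks by half OF ITS OWN size.
relative_after = baseline_scar * (1 - 0.50)   # down 15 points, to 15%

# Absolute interpretation: the scar shrinks by 50 percentage POINTS,
# which is impossible here because the scar is only 30 points to begin with.
absolute_after = baseline_scar - 0.50         # a nonsense negative value

print(round(relative_after, 2))  # 0.15
print(round(absolute_after, 2))  # -0.2
```

Which reading the reporters intended changes the effect size from Small to impossible, which is exactly why the distinction matters before anyone celebrates.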

This is weak science and while the persuasion may be good for CBS News ratings, it is ultimately very bad for the general public. There is nothing in this study to encourage anyone about anything except for a handful of researchers. LaPook should know the limitations of this research and he played the charade on TV providing cover for what is extremely preliminary work. I have no idea how the actual research team contributed to this CBS fantasy, but they clearly were presented as the Cure.

Don’t misunderstand. This is good basic research and is well worth pursuing. Hyping the outcomes of such small studies especially before peer review publication, however, is extremely bad science and dangerous persuasion. It encourages fantasy beliefs about health interventions that benefit only the folks with a megaphone.

Gee. I wonder why US expenditures on health are approaching 20% of GDP?

White Coat Numerical Persuasion

Here’s a nice article that discusses fun with numbers in health and safety.  It employs the writing conceit of a real person with a fake name to protect someone’s identity.

Consider the case of Susan Powell (not her real name), a nurse’s assistant now in her 50s. She had been healthy all her life, but when she turned 45, she decided to see a primary-care doctor. Susan ate healthy foods and was physically active, but she was a bit overweight, and her blood tests showed that she had high cholesterol. Her doctor prescribed a statin drug and asked her to come back in a month.

The article then follows the pseudonymous Powell through the numerical jungle to determine for herself whether she should take the pill.  It demonstrates what I call sophistical statistics whereby you play the guitar to make the numbers sound better than they are.  The article uses a hypothetical with a drug that reduces the risk of heart attacks and employs that relative risk ratio so favored in medicine.  30%!
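The guitar trick is easiest to hear with a worked example. Under a hypothetical baseline risk (my numbers, not the article's), that headline "30%!" shrinks dramatically in absolute terms:

```python
# Relative risk reduction vs. absolute risk reduction, with hypothetical numbers.
# A "30% reduction" sounds big; the absolute change can be tiny.

baseline_risk = 0.02        # hypothetical: 2% chance of a heart attack untreated
relative_reduction = 0.30   # the "30%!" headline figure

treated_risk = baseline_risk * (1 - relative_reduction)   # 1.4%
absolute_reduction = baseline_risk - treated_risk         # 0.6 percentage points
nnt = 1 / absolute_reduction                              # number needed to treat

print(round(treated_risk, 4))        # 0.014
print(round(absolute_reduction, 4))  # 0.006
print(round(nnt))                    # 167
```

Same drug, same data: "cuts your risk by 30%" and "you'd have to treat about 167 people like you for one of them to benefit" are two descriptions of the same arithmetic, and only one of them sells pills.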

What’s interesting to me about the article is its focus on patients and their problems with numbers.  People just can’t count very well and this article explains it.  Yet, who’s the person who starts the problem in the first place?  Not Susan Powell, but her physician.

If you read the article, you’ll note that the physician is persistently portrayed as pushing the pill with those Sophistical Statistics.  While the article focuses upon the numerical challenges for patients, left unremarked is the numerical performance of the expert in this equation, that unnamed physician.  Membership in the medical profession does not confer greater math skill, even though you and physicians alike may think so.  As I’ve repeatedly documented in detail in this Blog, the medical community is more dangerously challenged in this ability precisely because they are the trusted experts who unfortunately can’t count much better than our Susan Powell.  They just think they can.

If Susan Powell makes the smarter decision based on a proper understanding of the numbers in this story, then what should you make of her physician?

Bubble Bubble Toil and Trouble

The trend continues:  Health screening tests do not save lives.

Why this change?  The answer, for the most part, is that more information became available.  New clinical trials were completed, as were analyses of other sorts of medical data.  Researchers studied the risks and costs of screening more rigorously than ever before.

Sounds nice, but it is not true.  The information on the mortality effects of screening was never strong, clear, and definitive with most of the early publication in the form of my beloved Tooth Fairy Tales from Observational Research.  Rigor is one of those Sophistical Statistic terms which sounds both technical and tough while possessing no standard definition.  Look at that early observational work on mammograms, for example, and the statistical analysis is complicated, intricate, and elaborated; certainly a kind of rigor, but always and forever, observational convenience data.  A little good epi news, a new technology, and a lot of fear makes the rush to screen obvious from a persuasion perspective, but never from a scientific one.

Scientists urged caution in this race, but no one listened.  No one wants to die, and people with good intentions, peer review publication, and a new screening machine were here to help with the fear.  Along the way, good science with randomization, control, comparison, and simple counting always showed that screening had no obvious benefit along with obvious, but miscounted, costs.  No one read that science, so we had to wait for reality to deliver the consequences that the cheerleaders and early adopters ignored.

Now, whenever a good scientific study with randomization appears and it shows no positive effect, everyone is willing to listen because reality has been scaring, maiming, and killing for the past 20 years.  You cannot persuade a falling apple and the knife cuts a false positive the same way it does a tumor.  You cannot persuade a falling apple and a tumor detected is not a tumor cured.  You cannot persuade a falling apple and a tumor detected early is not much different from a tumor detected late.

We simply do not understand falling apples as well as the Tooth Fairies told the tale.  Or is it a meme?  Narrative?  Frame?

A tale by any other name tells as incomplete.


The Tea Ritual as Science

You may have read the disappointing news about Vitamin E for men.

Researchers studying vitamin E supplements as a way to reduce men’s risk of prostate cancer found they actually had the opposite effect, increasing the risk slightly, according to a study funded by the National Institutes of Health. The follow-up, however, which tracked the health of about half the trial’s original 35,000-plus participants, found a 17% increase in prostate cancer, compared with men who took a placebo. For every 1,000 men, 76 who took vitamin E supplements got prostate cancer, compared with 65 men who took placebo.

These results come from a randomized controlled trial of over 35,000 men who took a pill that was either a placebo or Vitamin E. In essence, this is a two group experiment, a t-test with over 35,000 degrees of freedom. If you have only the most rudimentary training in statistics you remember that the Degrees of Freedom are important in interpreting the numbers and that 35,000 is a helluva lot of dfs. And that means the test is more sensitive than a MLB relief pitcher, NHL goalie, NFL kicker, or just about anyone in the NBA. Consider.

Compared with the placebo (referent group) in which 529 men developed prostate cancer, 620 men in the vitamin E group developed prostate cancer (hazard ratio [HR], 1.17; 99% CI, 1.004-1.36, P = .008)

A 17% difference between treatment and control and the p value is .008. Good grief, a Small Windowpane is 50%, so the finding here is about a third of Small. A very sensitive t-test, indeed. Pull the shades and turn down the music. Perhaps a cup of chamomile tea?
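You can check just how small this effect is from the 76-versus-65-per-1,000 figures quoted above. A quick sketch using Cohen's h, the arcsine-difference effect size for proportions (my choice of metric, not the study's):

```python
import math

# Effect size for the vitamin E result: 76 vs. 65 prostate cancers per 1,000 men.
p_vitamin_e = 76 / 1000
p_placebo = 65 / 1000

# Risk ratio: matches the reported HR of 1.17 (a 17% relative increase).
rr = p_vitamin_e / p_placebo

# Cohen's h: 2*(asin(sqrt(p1)) - asin(sqrt(p2))). "Small" for h is 0.2.
h = 2 * (math.asin(math.sqrt(p_vitamin_e)) - math.asin(math.sqrt(p_placebo)))

print(round(rr, 2))  # 1.17
print(round(h, 3))   # 0.043
```

By the h yardstick the difference is about a fifth of a Small effect, which is what a 35,000-df test can drag across the significance line.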

If you read the earlier, positive research on vitamin E you find pretty much the same effect size going in the Death-free direction, with about the same sample size. Huge numbers of participants and piddling differences. Yet, because the researchers can shout Statistical Significance they can fool themselves and you into thinking something just happened. I still have a photocopied (remember those good old days?) Vitamin E study in my file drawer published in the New England Journal of Medicine that extolled the virtues of the supplement with Meir Stampfer and Walter Willett both signing off on the observational study. Hey, the Chaji Masters at the Harvard School of Public Health performed the ceremony.

Whether experimental or observational, the effects are so small that we find ourselves in ritual as science. Don’t think about the implications of the act, just act by the tradition. It’s obvious that no one is getting the consequences of their bad behaviors when high reputational researchers are getting money to prove all sides of the question. Just call the ritual, science, and the persuasion machine keeps humming along.

May I pour you another cup of tea?

The Write Stuff

I’ve found a great article that illustrates the tension between persuasion and science.  It reads like parts of the Rationale and Discussion sections of a contemporary research article, yet is published in the Wall Street Journal by a beat journalist, not a scientist.  Melinda Beck is a master of that persuasive style that attracts attention and builds a case.  She launches it into the public arena for comment and discussion, an exemplar of free speech in action.  As journalism, it’s great, but the interesting part here is how much the story reads like a typical health and safety paper from a peer review scientific source.


Beck offers a review of the literature on the relationship between drinking and cancer.  While every statement in her review may be accurate in the sense that it is a sentence or phrase from a peer review publication, the review as a whole is inaccurate because it does not reflect a review of all the evidence.  She weaves together strings of quotations and statistics that pointedly build a case of increasing risk for cancer with increasing drinking.  She ignores or downplays any disconfirming research on this hypothesized link; ignores or downplays any disconfirming research on other benefits of drinking, CVD for example; and ignores or downplays any weaknesses in the supposedly confirming research she cites.  She functions more like an attorney who selects some evidence and omits other evidence to make the strongest case against the defendant.

The most amazing element to me is that this is how most peer review articles on a health topic sound.  If you read outlets ranging from the New England Journal of Medicine and the Journal of the American Medical Association to Health Psychology and Health Communication, you see the same style of selective reporting of evidence, strengths and weaknesses, and standards of judgment that all aim at Biased Processing:  A conclusion in search of congenial arguments.

Scientists sound more like journalists or advocates in that they all seek to make a case rather than pursue generalizable knowledge.  You could take Beck’s article on the cancer perils of drinking and drop many of her sentences and paragraphs word for word into a contemporary health peer review paper and you’d never spot the raccoon.  You could argue that Beck is simply copying the scientific style as a method of improving her journalism, but that reasoning won’t bear scrutiny.  If the scientific approach sold newspapers, we would have been reading it a long time ago and the New York Times would be on par with Science, Nature, and Communication Monographs at the Cool Table.  Science doesn’t sell.  Persuasion does.

Is alcohol the cancer risk Beck proposes?  Consider only the evidence she herself cites:  all Small Effects, all Observational Research, all Biased Samples.  That’s the best scientific case she can offer for the claim of cancer risk from drinking.  Beck, as a journalist and perhaps advocate, does not seriously consider the weaknesses in the evidence and seems to think a lot of bad evidence makes for a good argument, as if a large patchwork made up of colorful tissues is a good quilt.  As a journalist, she can be forgiven and indeed applauded for driving a point of view.  But, scientists drive the same way.

Think about this comparison.  How is it reasonable, effective, and healthy when scientists are writing their research in the same style as a mainstream journalist writing on deadline trying to attract eyes and ears?

All Bad Science Is Persuasive.


P.S.  At least Beck’s article ends with a nice graphic that compares and contrasts alcohol’s presumed risks and benefits more clearly than most peer review science.  Kudos on the close.


The FDA Breaks the Rules and the Law

I said it won’t work and now it’s illegal, a consideration I hadn’t considered! One way or the other, those sincere persuaders at the FDA miss the mark and can blame their failed persuasion on somebody else . . . maybe the bankers. Yeah. The bankers did it.

Recall the FDA’s new warning labels for cigarette packaging. Here’s a shot to refresh your memory.

If you know anything about persuasion, theoretic or applied, you know that warning labels are not the Magic Bullets the health and medical community so adores. They are as likely to elicit reactance as they are to get favorable Central Route attitude change with the attendant and opposite potential outcomes. If You Can’t Succeed, Don’t Try . . . or else join the FDA. Now, there’s another reason the new labels won’t succeed.

They’re probably illegal.

The court blocked the Food and Drug Administration from requiring cigarette makers to put large, graphical warning labels on their packaging. The mandate likely violates the tobacco industry’s First Amendment rights, the court said.

Annoying nitpicker that he is, the judge cites numerous legal standards in this case, none of which it appears anyone at the FDA knew about.

The tobacco warnings don’t meet any of those standards, Judge Richard Leon said. He said the warnings are clearly an appeal to emotion, rather than a cold, factual communication. Arguments in the lawsuit clearly suggest that “the Government’s actual purpose is not to inform, but rather to advocate a change in consumer behavior,” Leon said in his ruling.

You should read the ruling if you are serious about persuasion, science, and government. The judge stomps the FDA repeatedly on my Rule: All Bad Science Is Persuasive. He reveals the twisted dynamic science and persuasion can create. Zealots will bend outcomes to fit existing biases rather than seek arguments and follow them dispassionately to a conclusion. He also proves a warning I offered a long time ago about those enraptured with the Nudge: What happens when you mix persuasion plays into rule, regulation, or law? This judge deems it likely illegal.

You may have noted my qualifiers of “probably illegal” and “likely illegal.” It stems not from any problems in the judge’s finding, but from the legal strategy the tobacco companies are pursuing. They have two cases. The first, this one, sought, and has now received, an injunction to delay the FDA implementation. The second, still pending, is on the merits of the FDA’s actions. The injunction seeks to avoid immediate harm while the second case seeks a judgment on the legality of the FDA’s actions. For the injunction, the judge has only to make a ruling about the likelihood the tobacco companies can succeed in the other case. He provides reasoning and evidence to suggest that, yes indeed, the FDA has serious legal problems.

Of course this judge is a tool of the tobacco interests or a tool of the conservative right or just a tool, except . . .

Finally, as part of its preliminary benefits analysis, the FDA estimated that “the U.S. smoking rate will decrease by 0.212 percentage points” as a result of the Proposed Rule, 75 Fed. Reg. 69,543 (emphasis added), a statistic the FDA admits is “in general not statistically distinguishable from zero.”

The FDA computes that their new warning label regime will reduce smoking by 0.212 percentage points. Three decimal accuracy! But not SSD! Even if the judge is a tool, consider that factual disclosure from the FDA in court. How can you think like this and call yourself reasonable, much less scientific? Your own scientists offer an officiously precise and truly trivial effect size, note that it would probably not be SSD, yet still think they’ve done the job as ordered by Congress?

Many tobacco control advocates see Big Tobacco as Big Evil. In so characterizing, these advocates blind themselves to the law of Laws and the rule of the Rules. The political appointees from the Obama Administration running the FDA hate tobacco companies so much that considerations of competence and Constitutionality are not only irrelevant, but ignored.

P.S. The FDA needs to hire either those Berkeley physicists or the Harvard epidemiologists on making 0.212 percentage points SSD. It’s easier than you think and when you’ve got the Fed’s printing press, money isn’t a problem.

How Breast Cancer Is Like Climate Change

We recently took a look at how some people do research on Climate Change. We noted that you cannot execute gold standard experimental science on Climate Change because the nature of nature does not permit it. We then looked at an observational approach that would be useful. Recall my fishing net and arrow shooting metaphor. Then we determined that no one is doing this yet, but rather instead are accumulating huge databases of convenience data. From these biased samples, researchers offer a kind of science revealed in this quote:

B) the application of a quality control and “correction” framework to deal with erroneous, biased, and questionable data . . .

I asserted that only God can de-bias a biased dataset, but that clearly does not stop some people’s goddish aspirations. At Berkeley some physicists believe they can turn bias into a random or representative sample. And today, so too, with epidemiologists at Harvard.

Consider the recent news reported in JAMA that proves alcohol as a carcinogen, specifically as a proven cause of breast cancer. Researchers used data from the massive and ongoing Nurses Study that has been tracking the lifestyle and health of over 120,000 nurses since 1976 with a biennial self report survey. From this huge database they focus upon the effect of alcohol consumption on breast cancer. The researchers report that heavy drinkers have about 1.5 times the risk of breast cancer compared to nondrinkers.

More importantly, they pursue an interesting biological explanation for this outcome by looking at different characteristics of the cancer cases related to hormones and suggest a plausible mechanism of effect. An editorialist underscores this idea.

The association between alcohol use and increased risk of breast cancer is not a novel finding, but the report by Chen et al provides more detail about the risks associated with different patterns of consumption. Chen et al and other investigators suggest that alcohol probably acts through the modification of the hormonal milieu.

So. We’ve got alcohol as a proven cancer causing agent in breast cancer, plus proof of the plausible biological pathway.

All of this is true the same way that all of the research on Climate Change is true. It begins with a huge, biased dataset that researchers then debias to divine the truth of nature. No experiments. No random sampling. Just convenience sampling of observations that are then de-biased.

Think like a scientist.

1. The Nurses Study is a convenience sample of biased data. People select in and out at their option, not under researcher control. The occupational group was chosen for obvious convenience reasons. Why not teachers? Or secretaries? Or WalMart-ish associates? Why not women working in the home? Why not unemployed women? As with the Berkeley climate database, a big, biased dataset is still biased and you cannot change that.

2. Since 1976 over 30% of the sample has dropped out. They started with 122,000 participants and report results from 85,000 now. Did those 37,000 people drop out for the same reasons and with the same impact on the data? What happens when you compare results from different time periods with different sample sizes and composition? Does anyone doubt that you’d get SSD among all the various comparisons you could make here and that there would be interesting and contradictory findings?

3. Self reports of behavior contain small measurement errors. People cannot exactly and consistently report their alcohol consumption. With effects this small even measurement error is a plausible Rival Explanation.

4. Adjusting the data reduces the error term and makes trivial effects more statistically significant and artificially changes effect sizes. Thus, a math trick makes the effect larger, not the liquor.

5. The adjusted effect size is a Small Windowpane. At 45/55, it is barely detected above random variation. And it is highly adjusted as we’ll see.

6. The tests for the plausible biological pathway are not even statistically significant even though the editorialist seems to believe them as truly true. Really. Just like the weather report.  Here’s the key quote from the research.

Because one potential mechanism for alcohol’s effect on breast cancer risk involves hormonal effects, we examined the association by ER/PR status of the tumor (TABLE 3). For this analysis, we excluded 1620 cases with unknown ER status, PR status, or both.  Alcohol consumption seemed to be more strongly associated with risk of ER-positive status, PR-positive status, or both, but the P value for interaction was not significant.

The presumed hormonal effect based on a huge sample is not even statistically significant, but smart, trained, and motivated editorialists misunderstand this.  Here’s his key quote, again.

The association between alcohol use and increased risk of breast cancer is not a novel finding, but the report by Chen et al provides more detail about the risks associated with different patterns of consumption. Chen et al and other investigators suggest that alcohol probably acts through the modification of the hormonal milieu.

When a hypothesis is interesting, novel, and plausible, but the data aren’t even statistically significant in a sample of 85,000 cases, it appears you can call that hypothesis “probable” in JAMA.  Why confuse the science with facts when you know you are right?

7. Since there is no randomization in these data, tests of sampling error, those tests of statistical significance, are not warranted and are deceptively applied. Smart people constantly misunderstand the meaning of statistical significance not realizing that it quantifies a potential Rival Explanation – sampling error.

Simply put: The de-biased effects are Small and the Rival Explanations are plausible. These data do not support the hypothesis that alcohol causes breast cancer any more than any Climate Change database supports the hypothesis of a change in global temperature.

Now, let’s pivot from the bad science that aims at persuasion to simple science that aims at understanding. Just take the absolute outcomes from this study. The raw truth of the data is expressed in this Table, which is just their Table 1, reformatted.

 

Grams of alcohol per day      0         0.1-4.9    5-9.9     10-19.9    > 20      Totals
cancers                       1669      3143       1063      1091       724       7680
cases                         18967     37703      11559     10212      6192      84630
percent                       .0879     .0833      .0919     .1068      .1169     .0908

Take a minute and think about it. The columns give alcohol consumption in grams per day. The > 20 column translates into 2 or more drinks a day. The rows show the number of breast cancers in each drinking category and the number of cases, with the percent row simply the number of cancers divided by the number of cases. Finally, note the last column provides the totals and the grand average. Let’s start with that grand average.

The average incidence of breast cancer in this biased, convenience sample is .0908 or about 9%. Around this mean we find a range from a low of 8.3% to a high of 11.7%. If you do a simple test for differences between proportions for the nondrinkers and the heavy drinkers (8.8 versus 11.7) you get a highly statistically significant difference (z = 10.74, p < .000000001!!!!) but an h effect size of .096. A Small Windowpane for h would be .2, so this obtained difference is less than half of a Small Windowpane. Small effect sizes are barely detectable over random variation and this is half of that. Sure, it is SSD out the wazoo, but that’s purely a function of that huge sample size, and given that the sample is not random, statistical significance testing is unwarranted and misleading. The raw effect size is trivial even if it is true. And, is this trivial effect even true?

Think more about this. Couldn’t you get that much of a difference because of . . .

. . . convenience sampling?

. . . bias in the drop outs?

. . . error in the self report measurement?

Think about this analysis another way. Compare the nondrinkers to the next category, the 0.1-4.9 grams per day group (about a drink a week). Note there’s a quantitative difference between 8.79% and 8.33%. Sure, the difference is only 0.46 percentage points, but the statistical test reveals z = 2.77, p < .006, a highly significant difference. Does it matter that the h effect is a minuscule 0.016? This comparison meets the same standards of judgment the researchers use for their other tests. So, why aren’t they saying that nondrinking causes cancer?
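You can check these effect sizes yourself from the counts in the Table. This little sketch (plain Python, nothing from the study’s own code) computes Cohen’s h for both comparisons; the z statistics depend on which pooling you choose, so it sticks to h:

```python
from math import asin, sqrt

def cohens_h(p1, p2):
    # Cohen's h: the difference between arcsine-transformed proportions
    return abs(2 * asin(sqrt(p1)) - 2 * asin(sqrt(p2)))

# cancers / cases for each drinking category, straight from the Table
p_none  = 1669 / 18967   # nondrinkers, about 8.8%
p_light = 3143 / 37703   # 0.1-4.9 grams/day, about 8.3%
p_heavy = 724 / 6192     # > 20 grams/day, about 11.7%

print(round(cohens_h(p_none, p_heavy), 3))   # under half a Small Windowpane of .2
print(round(cohens_h(p_none, p_light), 3))   # a rounding error of an effect
```

Both values land well under the .2 threshold for a Small Windowpane.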

Remember that the researchers reported a 150% increase in breast cancer while the data we looked at in the Table show 120%. Why the small discrepancy? Recall again that I looked only at the headline comparison – more drinking, more cancer – but the researchers actually tested a more complex model – more drinking plus all those confounders, more cancer. When you “adjust” or “de-bias” or “correct” the headline data with all those other factors, it serves to artificially increase the effect size.

Please note the verbal trick here. Headline a simple hypothesis – More Drinking Causes More Cancer! But, don’t test those data. Add in other variables, like:

Additional covariates in the model were chosen to represent possible confounders and commonly accepted breast cancer risk factors and included menopausal status, age at menarche, parity, age at first birth, body mass index, family history of breast cancer in a first-degree relative, breastfeeding, cigarette smoking, and self-report of benign breast disease. All variables except age at menarche and breastfeeding were updated from follow-up questionnaires. For postmenopausal women, terms were also included for age at menopause, type of menopause, and duration and type of hormone therapy use.

Find the effect size for drinking + adjustments = cancer, but report the result as drinking = cancer. While researchers act as if “adjusting” a convenience sample makes it a random or representative sample, they are only playing math games to artificially inflate SSD and effect sizes.
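The arithmetic behind that math trick is easy to demonstrate. Here is a toy simulation, not the study’s actual model: the “exposure” effect is fixed and tiny, but adding an independent covariate soaks up residual variance, shrinks the standard error, and makes the same effect look more significant. Every variable name and number here is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
exposure  = rng.normal(size=n)    # stand-in for "drinking"
covariate = rng.normal(size=n)    # an independent adjustment variable
outcome   = 0.05 * exposure + 1.0 * covariate + rng.normal(size=n)  # tiny true effect

def ols(cols, y):
    # ordinary least squares with an intercept; returns (coef, se) for the first predictor
    X = np.column_stack([np.ones(len(y))] + cols)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - X.shape[1])
    cov = sigma2 * np.linalg.inv(X.T @ X)
    return beta[1], float(np.sqrt(cov[1, 1]))

b_raw, se_raw = ols([exposure], outcome)              # outcome ~ exposure
b_adj, se_adj = ols([exposure, covariate], outcome)   # outcome ~ exposure + covariate

# the coefficient barely moves, but its standard error shrinks after "adjustment"
print(b_raw, se_raw)
print(b_adj, se_adj)
```

Same data, same tiny effect; the “adjusted” model just has a smaller error term, so the z statistic grows.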

Finally, realize all the false precision in this report and in most of those observational fairy tales.  Researchers report things like alcohol consumed in grams per day and breast cancer cases down to decimal places as if they are counting all the pennies in the bank and reporting exact population values.  All of these numbers are just estimates of Drinkingness or Cancerness or Temperatureness or whatever the construct under consideration.  The numbers are never the Thing Itself, but only an estimation of the Thing Itself.  Since we cannot directly and correctly measure Nature as She is, we can only estimate and then make decisions based on those estimates.  This research literally eats the menu, thinking it is real food.  When you realize we are dealing with estimates and not true, exact, scientific values, you realize the obtained effects are not real in themselves, but indicators and that those indicators are weak.

When you think about this, you realize the bad science here and you also see the good persuasion. The researchers look precise, objective, and deep when they are just persuasive, saying one thing while meaning something else. It also helps when you get an editorialist who asserts claims you left unsupported. These two articles literally talk out of both sides of the mouth. The original research describes the hormonal hypothesis, but then reports nonsignificant effects. The editorialist also recounts the hormonal hypothesis, but then miscounts the data to imply that hypothesis is supported. If you do not read the Methods and Results, you miss this and are left with a very different and mistaken conclusion.

This study, taken as a whole, offers no support for the claim that drinking causes breast cancer. No matter how you define the effect, it is small and arises from a data collection that encourages belief more in Rival Explanations than the ones provided by the researchers or the editorialist.

See the bad scientific similarities between two very different areas of study, Climate Change and Breast Cancer. In both instances, better science is available, but no one is doing it. They instead assert the ability to do what I claim is impossible. When anyone says she can de-bias the data, she is claiming to be the Queen of Tomorrow who also should be able to corner the stock market, pick Presidential winners, and run the table at Vegas. If anyone knows how to find truth in a convenience sample of data, the world is hers for the taking, easy, ripe, and luscious, to quote a charming thief.

Now. What’s a persuasion maven to do with today’s lesson?

If you are in the numbers business, you’ve got a great and enduring model of how to do sophistical statistics. All the tricks are hiding in plain sight. Just read without your ruby slippers, Dorothy.

If you are in the health business, hire these guys as consultants and away you go. You might design some kind of whiz-bang decision-maker app that allows women to make drinking decisions in a bar. Might even be useful for physicians. If you’ve read any pop press quotes from the clinicians, they are confused as hell over this research, and an overly complex piece of software that essentially and always tells them to do nothing new would be a big seller. Get a recommendation quote from one of the study authors or maybe that editorialist. Be careful about how you offer the money, however. Ethical considerations are important here.

If you are in the Lifestyle Police, get your drum and bugle. Harvard provides all the cover you need. Avoid any echoes of Carry Nation and Prohibition. Or, what the hell, we haven’t had a good Constitutional Amendment fight in a long time. Repeal the 21st Amendment with the 28th Amendment and reinstate the 18th Amendment! Think of the fundraising you could do on this. It could run longer than the ERA fight. Hey, we’re creating jobs here, folks! Better than anything Obama, Bernanke, or Congress has proposed.

But, see the Health Bubble continue to inflate. And see the strain, how hard you have to work now to convince people to give you money to save their lives. You’ve got to sell trivial outcomes as science. The Bubble is so big and so taut right now, it’s really hard to make it bigger. It takes the Crimson reputation and gunfighter fast fingers pulling the statistical trigger. It also helps that no one reads the Methods and Results.

This is what you get when you try to turn bias into truth. Rumpelstiltskin spun straw into gold, and Harvard tries to spin wine into cancer. The persuasion lessons from these fairy tales are more interesting than the scientific ones.

P.S. Here’s a reader exercise. Some folks on this research team employed their skills looking at vitamin supplements many years ago. Using Observational Research they proved beyond the shadow of anyone’s doubt that vitamin supplements protected health, particularly with cancer. Of course, later large scale randomized controlled trials have decisively and repeatedly disconfirmed this proof. It’s your tax dollars at work. They’ve gotten millions of grant dollars and have been consistently and repeatedly proven wrong when somebody does just plain science rather than that scientific science.

P.P.S. What’s going on at Harvard? All these trivial outcomes puffed up like peacocks. Remember one of them proved that drinking pop makes kids into killers. A 109% RR. And remember that bad news with HRT. And now vitamins. As the Harvard don once put it, those who cannot remember the past are condemned to repeat it.

Persuasive Technology

You don’t need to get philosophical about it at all. Energy is the difference between slowly starving off the land or living la vida loca 2.0. Without energy we’re back in the trees or caves or, worse still, the farm, so using resources effectively is at least rational without requiring any values, banners, or causes. Tony Fadell, who helped design the iPod, has a new tool for energy effectiveness called Nest.

. . . his post-Apple debut is a home thermostat. Yes, a thermostat. What’s different is that Nest looks like the kind of slick gadget that you’d want to display on your coffee table. Its brushed metal fascia reflects the colors around it, and the sky blue digital display is elegant, yet high tech. It’s designed to go on your wall and replace that old-fashioned, mercury-filled Honeywell model we all grew up with. But Nest has a rotating push-in dial (sound familiar?) that makes you want to touch it — even though after it learns your habits you may never have to. “I wanted it to be something that draws attention,” Fadell told me over lunch before the launch. Nest certainly does. But it’s more than just a pretty face. It’s smart, too.

Okay, so now it’s Nest, an iGizmo that learns the rhythms of your life and adjusts the temperature in your house without you doing anything except buying it. And, it will reduce energy use! It only costs $250.

Now. Consider this.

Nest has built-in Wi-Fi so that it can connect to a home network. It can then be set or adjusted remotely from anywhere, including an iPhone or Android-based smartphone. To warm up or cool down the house before you get there, simply tap a few settings from your phone before you leave the office — or while you’re on the way there.

Simply tap? Isn’t that what you could be and should be doing right now with your thermostat? Don’t you already have the needed technology to modulate the temperature and energy consumption in your house right now? Whether with smart meter or smart feet, you can change energy usage in your house.

Why do you need to spend $250 on something that merely looks like an iGizmo and does what you’re already doing? As near as I can tell the only benefit here is that you can adjust your thermostat from yet another remote control device in your house. If you happen to have it in your hand or close by, you can save your body heat, but otherwise, you’d have to get up and look for the Nest just like you do with all the other remote controls when you could just walk over to the thermostat.

So, what we have then is persuasive technology: a device that does pretty much what you are already doing, but more attractively or quickly or famously or, in TpB terms, easy, fun, and popular. I’m not sure if this is WWSD, but he’d like the metal fascia thingy. Form follows function used to be the mantra, but now form is the function, the persuasive function.

Is Marc Andreessen in the neighborhood?

P.S.  Let’s do the Macarena (YouTube).  Form is function, but with a beat!

P.P.S.  Oh.  Not Mr. Andreessen.  Mr. Gore.

Soda Pop Kills!!!

Really.

Results. Adolescents who drank more than five cans of soft drinks per week (nearly 30% of the sample) were significantly more likely to have carried a weapon and to have been violent with peers, family members and dates (p<0.01 for carrying a weapon and p<0.001 for the three violence measures). Frequent soft drink consumption was associated with a 9–15% point increase in the probability of engaging in aggressive actions, even after controlling for gender, age, race, body mass index, typical sleep patterns, tobacco use, alcohol use and having family dinners.

This from a professor at the Harvard School of Public Health and a colleague, so you know the result is not only true, but perhaps even persuasive. We can now put soda on the shelf with divorce and bad moms as proven killers in America.

A 9-15% increase?

A Small Windowpane would be 50%, so this is about one quarter of Small. And, yet, these ratios are statistically significant at .01 to .001! Of course, when you have over 1800 cases drawn at convenience, you’ll have plenty of Cohen power, so that even a 9% effect will be SSD and you’re off to the presses at Injury Prevention. So, we have an effect that is barely detectable above random variation, drawn from a convenience sample, and now you need a concealed carry license for Coca-Cola?
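To watch the Cohen power machine at work, run a standard pooled two-proportion z test on a made-up version of their comparison. The group split mimics the study (roughly 30% of 1800 adolescents in the heavy-soda group), but the 35% baseline rate is invented purely for illustration; only the 9-point bump comes from the report.

```python
from math import sqrt, erfc

# hypothetical split: roughly 30% of 1800 cases in the heavy-soda group
n_low, n_high = 1260, 540
p_low, p_high = 0.35, 0.44   # invented baseline plus the reported 9-point increase

# standard pooled two-proportion z test
pooled = (n_low * p_low + n_high * p_high) / (n_low + n_high)
se = sqrt(pooled * (1 - pooled) * (1 / n_low + 1 / n_high))
z = (p_high - p_low) / se
p_value = erfc(z / sqrt(2))   # two-sided normal p value

print(z, p_value)   # a modest 9-point gap comes out "highly significant" at this sample size
```

With samples this big, almost any nonzero difference clears the significance bar; that is a fact about the sample size, not about soda.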

Listen to this.

“It was shocking to us when we saw how clear the relationship was,” he told AFP in an interview.  But he stressed that only further work would confirm — or disprove — the key question whether higher consumption of sweet sodas caused violent behaviour.

This from Harvard prof David Hemenway. I’m not sure what research and stat methods book they use at Harvard, but it appears to be the classic, Persuasion Double Talk: Talking Out of Both Sides of Your Mouth. Yeah, we’ve got a shockingly clear relationship, but we need more research. Do I smell a grant application in the air?

Yeah.  We need to do more research.  More research on the peer review oversight both at the journal, Injury Prevention, and whatever funding source approved this Observational Tooth Fairy Tale.

Hey, kids, you are living in the midst of the next great bubble. Nobody thinks when they read Health Science. They just believe the Discussion section and the researcher interviews in the pop press, with words like shocking and huge and significant. And then everyone opens their wallet and spends more money or effort on Expert Advice while getting no benefit from it. Someday people are gonna wake up and smell the Tulip Bulbs.

She looked over at Tyler sprawled in his dad’s Barcolounger, a spent six pack of Pepsi scattered on the floor around him.  Tiffany shivered with fear.

She’d read David Hemenway’s research, shockingly clear research.  She knew that Tyler was spoiling for a fight and that she was the only one in the room.

As Tyler rose from the lounger, Tiffany wished her parents had divorced so she might have some of the inner rage divorce creates in kids.  Maybe then she could defend herself against the impending soda fueled assault.  But – and she scowled at the thought – her parents loved each other and her; they didn’t even smoke or eat cured meat!

She knew she was at the mercy of Tyler’s uncontrollable, but predictable, rage with nothing but science to protect her.  Tyler burped a long, wet belch and Tiffany closed her eyes tightly.

“When will Harvard save me?” she thought and then the room went black.