Flawed meta-analysis reveals just how limited the evidence is for mapping meditation onto specific regions of the brain

The article put meaningless but reassuring effect sizes into the literature, where these numbers will be widely and uncritically cited.


“The only totally incontrovertible conclusion is that much work remains to be done…”.


Authors of a systematic review and meta-analysis of functional neuroanatomical studies (fMRI and PET) of meditation were exceptionally frank in acknowledging problems relating the practice of meditation to differences in specific regions of the brain. However, they did not adequately deal with problems hiding in plain sight. These problems should have discouraged integrating this literature into a meta-analysis and expressing the strength of the association between meditation and the brain in terms of a small set of moderate effect sizes.


An amazing set of overly small studies with evidence that null findings are being suppressed.

Many in the multibillion-dollar mindfulness industry are naive about or simply indifferent to what constitutes quality evidence. Their false confidence that “meditation changes the brain” can be bolstered by selective quotes from this review seemingly claiming that the associations are well-established and practically significant. Readers who are more sophisticated may nonetheless be misled by this review, unless they read beyond the abstract and with appropriate skepticism.

Read on. I suspect you will be surprised, as I was, by the small quantity and poor quality of the literature relating the practice of meditation to specific areas of the brain. The colored pictures of the brain widely used to illustrate discussions of meditation are premature and misleading.

As noted in another article:

Brightly coloured brain scans are a media favourite as they are both attractive to the eye and apparently easy to understand but in reality they represent some of the most complex scientific information we have. They are not maps of activity but maps of the outcome of complex statistical comparisons of blood flow that unevenly relate to actual brain function. This is a problem that scientists are painfully aware of but it is often glossed over when the results get into the press.

The article is

Fox KC, Dixon ML, Nijeboer S, Girn M, Floman JL, Lifshitz M, Ellamil M, Sedlmeier P, Christoff K. Functional neuroanatomy of meditation: A review and meta-analysis of 78 functional neuroimaging investigations. Neuroscience & Biobehavioral Reviews. 2016 Jun 30;65:208-28.

Abstract.

Keep in mind how few readers go beyond an abstract in forming an impression of what an article shows. More readers “know” what the meta-analysis found solely from reading the abstract than from reading both the article and the supplementary material.

Meditation is a family of mental practices that encompasses a wide array of techniques employing distinctive mental strategies. We systematically reviewed 78 functional neuroimaging (fMRI and PET) studies of meditation, and used activation likelihood estimation to meta-analyze 257 peak foci from 31 experiments involving 527 participants. We found reliably dissociable patterns of brain activation and deactivation for four common styles of meditation (focused attention, mantra recitation, open monitoring, and compassion/loving-kindness), and suggestive differences for three others (visualization, sense-withdrawal, and non-dual awareness practices). Overall, dissociable activation patterns are congruent with the psychological and behavioral aims of each practice. Some brain areas are recruited consistently across multiple techniques—including insula, pre/supplementary motor cortices, dorsal anterior cingulate cortex, and frontopolar cortex—but convergence is the exception rather than the rule. A preliminary effect-size meta-analysis found medium effects for both activations (d = 0.59) and deactivations (d = −0.74), suggesting potential practical significance. Our meta-analysis supports the neurophysiological dissociability of meditation practices, but also raises many methodological concerns and suggests avenues for future research.

The positive claims in the abstract

“…Found reliably dissociable patterns of brain activation and deactivation for four common styles of meditation.”

“Dissociable activation patterns are congruent with the psychological and behavioral aims of each practice.”

“Some brain areas are recruited consistently across multiple techniques”

“A preliminary effect-size meta-analysis found medium effects for both activations (d = 0.59) and deactivations (d = −0.74), suggesting potential practical significance.”

“Our meta-analysis supports the neurophysiological dissociability of meditation practices…”

And the hedges and qualifications in the abstract

“Convergence is the exception rather than the rule”

“[Our meta-analysis] also raises many methodological concerns and suggests avenues for future research.”

Why was this systematic review and meta-analysis undertaken now?

A figure provided in the article showed a rapid accumulation of studies of mindfulness in the brain in the past few years, with over 100 studies now available.

However, the authors’ systematic search yielded “78 functional neuroimaging (fMRI and PET) studies of meditation, and used activation likelihood estimation to meta-analyze 257 peak foci from 31 experiments involving 527 participants.” Only about a third of the studies identified in the search provided usable data.

What did the authors want to accomplish?

Taken together, our central aims were to: (i) comprehensively review and meta-analyze the existing functional neuroimaging studies of meditation (using the meta-analytic method known as activation likelihood estimation, or ALE), and compare consistencies in brain activation and deactivation both within and across psychologically distinct meditation techniques; (ii) examine the magnitude of the effects that characterize these activation patterns, and address whether they suggest any practical significance; and (iii) articulate the various methodological challenges facing the emerging field of contemplative neuroscience (Caspi and Burleson, 2005; Thompson, 2009; Davidson, 2010; Davidson and Kaszniak, 2015), particularly with respect to functional neuroimaging studies of meditation.
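For readers unfamiliar with activation likelihood estimation, here is a minimal, deliberately toy sketch of its core computation. The real method uses sample-size-dependent kernels, anatomical brain masks, and permutation-based thresholding; the grid size, kernel width, and foci below are all invented for illustration.

```python
import numpy as np

def ale_map(experiments, grid_shape=(20, 20, 20), fwhm_voxels=3.0):
    """Toy ALE on a coarse voxel grid.

    experiments: list of (n_foci, 3) arrays of peak coordinates, one per study.
    Each study's foci are smoothed with a Gaussian kernel into a 'modeled
    activation' (MA) map (voxelwise max over its foci); the ALE score is the
    probabilistic union across studies: 1 - prod(1 - MA).
    """
    sigma = fwhm_voxels / 2.355  # convert FWHM to a standard deviation
    grid = np.stack(
        np.meshgrid(*[np.arange(s) for s in grid_shape], indexing="ij"),
        axis=-1,
    )
    union_complement = np.ones(grid_shape)
    for foci in experiments:
        ma = np.zeros(grid_shape)
        for focus in foci:
            d2 = ((grid - focus) ** 2).sum(axis=-1)
            ma = np.maximum(ma, np.exp(-d2 / (2 * sigma ** 2)))
        union_complement *= 1.0 - ma
    return 1.0 - union_complement

# Two toy 'studies' reporting peaks near the same voxel: convergence of foci
# across studies is what drives the ALE score up.
rng = np.random.default_rng(0)
studies = [np.array([10, 10, 10]) + rng.integers(-2, 3, size=(3, 3))
           for _ in range(2)]
print("peak ALE score:", round(float(ale_map(studies).max()), 3))
```

The point to take away is that ALE measures spatial convergence of reported peaks; it inherits every selection bias in which peaks got reported in the first place.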

Said elsewhere in the article:

Our central hypothesis was a simple one: meditation practices distinct at the psychological level (Ψ) may be accompanied by dissociable activation patterns at the neurophysiological level (Φ). Such a model describes a ‘one-to-many’ isomorphism between mind and brain: a particular psychological state or process is expected to have many neurophysiological correlates from which, ideally, a consistent pattern can be discerned (Cacioppo and Tassinary, 1990).

The assumption is that meditating versus non-meditating brains should be characterized by distinct, observable neurophysiological patterns. There should also be distinct, enduring changes in the brains of people who have been practicing meditation for some time.

I would wager that many meditation enthusiasts believe that links to specific regions are already well established. Confronted with evidence to the contrary, they would suggest that links between the experience of meditating and changes in the brain are predictable and are waiting to be found. It is that kind of confidence that leads to the significance chasing and confirmatory bias currently infecting this literature.

Types of meditation available for study

Quantitative analyses focused on four types of meditation. Additional types of meditation did not have sufficient studies and so were examined qualitatively. Some studies of the four provided within-group effect sizes, whereas other studies provided between-group effect sizes.

Focused attention (7 studies)

Directing attention to one specific object (e.g., the breath or a mantra) while monitoring and disengaging from extraneous thoughts or stimuli (Harvey, 1990, Hanh, 1991, Kabat-Zinn, 2005, Lutz et al., 2008b, Wangyal and Turner, 2011).

Mantra recitation (8 studies)

Repetition of a sound, word, or sentence (spoken aloud or silently in one’s head) with the goals of calming the mind, maintaining focus, and avoiding mind-wandering.

Open monitoring (10 studies)

Bringing attention to the present moment and impartially observing all mental contents (thoughts, emotions, sensations, etc.) as they naturally arise and subside.

Loving-kindness/compassion (6 studies)

Loving-kindness involves:

Generating feelings of kindness, love, and joy toward themselves, then progressively extend these feelings to imagined loved ones, acquaintances, strangers, enemies, and eventually all living beings (Harvey, 1990, Kabat-Zinn, 2005, Lutz et al., 2008a).

Similar but not identical, compassion meditation

Takes this practice a step further: practitioners imagine the physical and/or psychological suffering of others (ranging from loved ones to all humanity) and cultivate compassionate attitudes and responses to this suffering.

In addition to these four types of meditation, three others can be identified, but so far have only limited studies of the brain: Visualization, Sense-withdrawal and Non-dual awareness practices.

A dog’s breakfast: A table of the included studies quickly reveals a meta-analysis in deep trouble

[Table of studies included in the meta-analysis]

This is not a suitable collection of studies to enter into a meta-analysis with any expectation that a meaningful, generalizable effect size will be obtained.

Most studies (14) furnish only pre-post, within-group effects for mindfulness practiced by long-time practitioners. Of these 14 studies, there are two outliers with 20 and 31 practitioners; otherwise the sample sizes range from 4 to 14.

There are 11 studies furnishing between-group comparisons between experienced and novice meditators. For the power of between-group effect sizes, the key number is the number of participants in the smaller cell, not the overall sample size. In these 11 studies, this ranged from 10 to 22.

It is well known that one should not combine within- and between-group effect sizes in a meta-analysis. Pre-post, within-group differences capture not only the effects of the active ingredients of an intervention, but also nonspecific effects of the conditions under which data are gathered, including regression to the mean. These within-group differences will typically overestimate between-group differences. Adding a comparison group and calculating between-group differences has the potential to control for nonspecific effects, if the comparison condition is appropriate.
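To make the inflation concrete, here is a minimal simulation, with all numbers invented for illustration, in which meditation has no true effect at all but every participant shifts at retest for nonspecific reasons (habituation to the scanner, regression to the mean, and so on):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 12              # cell size typical of the studies tabled here
nonspecific = 0.5   # shift at retest that has nothing to do with meditation

pre = rng.normal(0, 1, n)
post = 0.7 * pre + rng.normal(0, 0.7, n) + nonspecific
control_post = rng.normal(0, 1, n) + nonspecific  # controls get the same shift

# Within-group d: mean pre-post change scaled by the SD of change scores
change = post - pre
d_within = change.mean() / change.std(ddof=1)

# Between-group d: follow-up difference between groups over the pooled SD
pooled_sd = np.sqrt((post.var(ddof=1) + control_post.var(ddof=1)) / 2)
d_between = (post.mean() - control_post.mean()) / pooled_sd

print(f"within-group d:  {d_within:.2f}")   # absorbs the nonspecific shift
print(f"between-group d: {d_between:.2f}")  # the shift cancels across groups
```

The within-group d absorbs the nonspecific shift and looks impressive; the between-group d, where the shift cancels, hovers around zero. Averaging the two kinds of estimate together, as this meta-analysis does, mixes apples with inflated oranges.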

The effect sizes based on between-group differences in these studies have their own problems as estimates of the effects of meditation on the brain. Participants were not randomized to the groups, but were selected because they were already either experienced or novice meditators. Yet these two groups could differ on many variables that cannot be controlled: meditation could be confounded with other lifestyle variables, such as sleeping better or having a better diet. There might be pre-existing differences in the brain that made it easier for the experienced meditators to commit to long-term practice. The authors acknowledge these problems late in the article, but only after discussing the effect sizes they obtained as having substantive importance.

There is good reason to be skeptical that these poorly controlled between-group differences are directly comparable to whatever changes would occur in experienced meditators’ brains in the course of practicing meditation.

It has been widely appreciated that neuroimaging studies are typically grossly underpowered, and that the result is low reproducibility of findings. Having too few participants in a study will likely yield false negatives because of an inability to detect effects of plausible size. With a small sample, a stronger association is needed to reach statistical significance.

Yet whatever positive (i.e., statistically significant) findings are obtained will of necessity be large, likely exaggerated, and unlikely to be reproducible with a larger sample.
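A small simulation illustrates this “winner’s curse.” Assume a modest true effect of d = 0.3 and suppose only statistically significant results are published; the sample sizes, true effect, and publication rule are all assumptions for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
true_d, sims = 0.3, 5000   # assumed modest true effect; number of simulated studies

for n in (10, 100):
    published = []
    for _ in range(sims):
        a = rng.normal(0.0, 1.0, n)       # comparison group
        b = rng.normal(true_d, 1.0, n)    # 'meditation' group
        if stats.ttest_ind(b, a).pvalue < 0.05:   # only significant results survive
            pooled = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
            published.append((b.mean() - a.mean()) / pooled)
    print(f"n={n:3d}  power={len(published)/sims:.2f}  "
          f"mean published d={np.mean(published):.2f}  (true d = {true_d})")
```

With 10 participants per group, the few results that cross the significance threshold average roughly three times the true effect; with 100 per group, published estimates sit much closer to d = 0.3.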

Another problem with such small cell sizes is that it cannot be ruled out that effects are due to one or more participants’ differences in brain size or anatomy. One outlier, or a small subgroup of outliers, could drive all significant findings in an already small sample. The assumption that statistical techniques can smooth over these interindividual differences depends on having much larger samples.

It has been noted elsewhere:

Brains are different so the measure in corresponding voxels across subjects may not sample comparable information.

How did the samples get so small? Neuroanatomical studies are expensive, but why did Lazar et al. (2000) have 5 rather than 6 participants, or only the 4 participants that Davanger et al. had? Were some participants dropped after a peek at the data? Were studies compromised by authors not being able to recruit the intended number of participants and having to relax entry criteria? What selection bias is there in these small samples? We just don’t know.

I am reminded of the contentious debate that occurred when psychoanalysts insisted on mixing uncontrolled case series with randomized trials in the same meta-analyses of psychotherapy. My colleagues and I showed this introduces great distortion into the literature. Undoubtedly, the same is occurring in these studies of meditation, but there is so much else wrong with this meta-analysis.

The authors acknowledge that in calculating effect sizes, they combined studies measuring cerebral blood flow (positron emission tomography; PET) and blood oxygenation level (functional magnetic resonance imaging; fMRI). Furthermore, the meta-analyses combined studies that varied in the experimental tasks for which neuroanatomical data were obtained.

One problem is that even studies examining a similar form of meditation might be comparing a meditation practice to very different baseline or comparison tasks and conditions. However, collapsing across numerous different baselines or control conditions is a common (in fact, usually inevitable) practice in meta-analyses of functional neuroimaging studies…

So, there are other important sources of heterogeneity between these studies.

A generic forest plot. This article did not provide one.

It’s a pity that the authors did not provide a forest plot [How to read a forest plot] graphically showing the confidence intervals around the effect sizes being entered into the meta-analysis.
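For illustration, here is how a forest plot is built from nothing more than per-study effect sizes and standard errors. The five “studies” below are invented and are emphatically not the article’s data:

```python
import numpy as np
import matplotlib.pyplot as plt

# Made-up effect sizes and standard errors for illustration only.
labels = ["Study A (n=4)", "Study B (n=10)", "Study C (n=14)",
          "Study D (n=22)", "Study E (n=46)"]
d = np.array([1.2, 0.9, 0.7, 0.5, 0.3])
se = np.array([0.85, 0.55, 0.45, 0.33, 0.21])

y = np.arange(len(labels))[::-1]                 # first study at the top
plt.errorbar(d, y, xerr=1.96 * se, fmt="s", color="black", capsize=3)
plt.axvline(0, linestyle="--", color="gray")     # line of no effect
plt.yticks(y, labels)
plt.xlabel("Effect size (d) with 95% CI")
plt.title("Generic forest plot")
plt.tight_layout()
plt.show()
```

Plotted this way, the wide confidence intervals of the smallest studies are impossible to overlook, which may be exactly why a forest plot is unwelcome in a literature like this one.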

But the authors did provide a funnel plot that I found shocking. [Recommendations for examining and interpreting funnel plots] I have never seen one like it, except when someone has constructed an artificial funnel plot to make a point.

[Funnel plot from Fox et al. (2016)]

Notice two things about this funnel plot. Rather than forming a smooth, unbroken distribution, studies with effect sizes between −.45 and +.45 are entirely missing. And studies with smaller sample sizes have the largest effect sizes, whereas the smallest effect sizes all come from the larger samples.
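A quick simulation shows how suppressing null results produces exactly this kind of hollowed-out funnel. The true effect, the range of standard errors, and the publication rule below are all assumptions for illustration:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
true_d = 0.0                               # assume no real effect at all
se = rng.uniform(0.1, 0.6, 300)            # small studies -> large standard errors
d = rng.normal(true_d, se)                 # observed effects scatter with SE
significant = np.abs(d / se) > 1.96        # suppose only p < .05 gets published

plt.scatter(d[significant], se[significant], s=12, color="black",
            label="published (significant)")
plt.scatter(d[~significant], se[~significant], s=12, color="lightgray",
            label="suppressed (null)")
plt.gca().invert_yaxis()                   # convention: precise studies at the top
plt.xlabel("Effect size (d)")
plt.ylabel("Standard error")
plt.legend()
plt.title("Suppressing null results hollows out the middle of the funnel")
plt.show()
```

Under this rule, the surviving studies form two lobes of large positive and large negative effects with an empty band in the middle, much like the plot in the article.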

For me, this adds to the overwhelming evidence that something has gone wrong in this literature and that any effect sizes should be ignored. There must have been considerable suppression of null findings, so the large effects from smaller studies will not generalize. Yet the authors find the differences between small and larger sample studies encouraging:

This suggests, encouragingly, that despite potential publication bias or inflationary bias due to neuroimaging analysis methods, nonetheless studies with larger samples tend to converge on similar and more reasonable (medium) effect sizes. Although such a conclusion is tentative, the results to date (Fig. 6) suggest that a sample size of approximately n = 25 is sufficient to reliably produce effect sizes that accord with those reported in studies with much larger samples (up to n = 46).

I and others have long argued that psychotherapy studies with such small samples should be treated as pilot feasibility studies and not used to generate effect sizes. I think the same logic applies to this literature.

Distinctive patterns of regional activation and deactivation

The first part of the results section is devoted to studies examining particular forms of meditation. In assessing the apparent consistency of results, one needs to keep in mind the small number of studies being examined and the considerable differences among them. For instance, results presented for focused attention combine three between-group comparisons with four within-group studies. Focused attention ranges from pre-post meditation differences in experienced Tibetan Buddhist practitioners to differences between novice and experienced practitioners of mindfulness-based stress reduction (MBSR). In almost all cases, statistically significant differences are found in both activation and deactivation regions that make sense in terms of the functions known to be associated with them. There is a high ratio of significant findings to the number of participants and comparisons, and little noting or discussion of anomalous brain regions identified by significant effects.

Meta-analysis of focused attention studies resulted in 2 significant clusters of activation, both in prefrontal cortex (Table 3; Fig. 2). Activations were observed in regions associated with the voluntary regulation of thought and action, including the premotor cortex (BA 6; Fig. 2b) and dorsal anterior cingulate cortex (BA 24; Fig. 2a). Slightly sub-threshold clusters were also observed in the dorsolateral prefrontal cortex (BA 8/9; Fig. 2c) and left mid-insula (BA 13; Fig. 2e); we display these somewhat sub-threshold results here because of the obvious interest of these findings in practices that involve top-down focusing of attention, typically focused on respiration. We also observed clusters of deactivation in regions associated with episodic memory and conceptual processing, including the ventral posterior cingulate cortex (BA 31; Fig. 2d) and left inferior parietal lobule (BA 39; Fig. 2f).

How can such meaningful, practically significant findings be obtained when so many conditions militate against finding them? John Ioannidis once remarked that in hot areas of research, consistency of positive findings from small studies often reflects only the strength of the bias with which they are sought. The strength of findings will decrease when larger, more methodologically sophisticated studies become available, conducted by investigators who are less committed to obtaining confirmation.

The article concludes:

Many have understandably viewed the nascent neuroscience of meditation with skepticism (Andresen, 2000; Horgan, 2004), but recent years have seen an increasing number of high-quality, controlled studies that are suitable for inclusion in meta-analyses and that can advance our cumulative knowledge of the neural basis of various meditation practices (Tang et al., 2015). With nearly a hundred functional neuroimaging studies of meditation now reported, we can conclude with some confidence that different practices show relatively distinct patterns of brain activity, and that the magnitude of associated effects on brain function may have some practical significance. The only totally incontrovertible conclusion, however, is that much work remains to be done to confirm and build upon these initial findings.

“An increasing number of high-quality, controlled studies that are suitable for inclusion in meta-analyses”? “Conclude with some confidence”? “Relatively distinct patterns”? “Some practical significance”?

In all of this premature enthusiasm about findings relating the practice of meditation to activation of particular regions of the brain and deactivation of others, we should not lose track of some other issues.

Although the authors talk about mapping one-to-one relationships between psychological states and regions of the brain, none of the studies is of sufficient size to document such relationships, given the expected size of the relationship, based on what is typically found between psychological states and other biological variables.

Many differences between techniques could be artifactual, due to a technique altering breathing, involving verbalization, or requiring focused attention. Observed differences in the brain regions activated and deactivated might simply reflect these features without being related to psychological functioning.

Even if an association were found, it would be a long way from establishing that the association reflected a causal mechanism, rather than simply being correlational or even artifactual. Think of the analogy of discovering a relationship between the amount of sweat produced while exercising and weight loss, and concluding that the weight loss was due to sweating it out.

We still have not established that meditation has more psychological and physical health benefits than other active interventions with presumably different mechanisms. After lots of studies, we still don’t know whether mindfulness meditation is anything more than a placebo. While I was finishing up this blog post, I came across a new study:

The limited prosocial effects of meditation: A systematic review and meta-analysis. 

Although we found a moderate increase in prosociality following meditation, further analysis indicated that this effect was qualified by two factors: type of prosociality and methodological quality. Meditation interventions had an effect on compassion and empathy, but not on aggression, connectedness or prejudice. We further found that compassion levels only increased under two conditions: when the teacher in the meditation intervention was a co-author in the published study; and when the study employed a passive (waiting list) control group but not an active one. Contrary to popular beliefs that meditation will lead to prosocial changes, the results of this meta-analysis showed that the effects of meditation on prosociality were qualified by the type of prosociality and methodological quality of the study. We conclude by highlighting a number of biases and theoretical problems that need addressing to improve quality of research in this area. [Emphasis added].


Jane Brody promoting the pseudoscience of Barbara Fredrickson in the New York Times

Journalists’ coverage of positive psychology and health is often shabby, even in prestigious outlets like The New York Times.

Jane Brody’s latest installment of the benefits of being positive on health relied heavily on the work of Barbara Fredrickson that my colleagues and I have thoroughly debunked.

All of us need to recognize that studies of the effects of positive psychology interventions are often disguised randomized controlled trials.

With that insight, we need to evaluate this research in terms of reporting standards like CONSORT and declarations of conflict of interests.

We need to be more skeptical about the ability of small changes in behavior to profoundly improve health.

When in doubt, assume that much of what we read in the media about positivity and health is false or at least exaggerated.

Jane Brody starts her article in The New York Times by describing how most mornings she is “grinning from ear to ear, uplifted not just by my own workout but even more so” by her interaction with toddlers on the way home from where she swims. When I read Brody’s “Turning Negative Thinkers Into Positive Ones,” I was not left grinning from ear to ear. I was left profoundly bummed.

I thought real hard about what was so unsettling about Brody’s article. I now have some clarity.

I don’t mind suffering even pathologically cheerful people in the morning. But I do get bothered when they serve up pseudoscience as the real thing.

I had expected to be served Brody’s usual recipe of positive psychology pseudoscience, concocted to coerce readers into heeding her Barnum advice about how they should lead their lives. “Smile or die!” Apologies to my friend Barbara Ehrenreich for putting the retitling of her book outside North America to use here. I invoke the phrase because Jane Brody makes the case that unless we do what she says, we risk hurting our health and shortening our lives. So we had better listen up.

What bummed me most this time was that Brody was drawing on the pseudoscience of Barbara Fredrickson that my colleagues and I have worked so hard to debunk. We took the trouble of obtaining data sets for two of her key papers for reanalysis. We were dismayed by the quality of the data. To start with, we uncovered carelessness at the level of data entry that undermined her claims. But her basic analyses and interpretations did not hold up either.

Fredrickson publishes exaggerated claims about dramatic benefits of simple positive psychology exercises. Fredrickson is very effective in blocking or muting the publication of criticism and getting on with hawking her wares. My colleagues and I have talked to others who similarly met considerable resistance from editors in getting detailed critiques and re-analyses published. Fredrickson is also aided by uncritical people like Jane Brody to promote her weak and inconsistent evidence as strong stuff. It sells a lot of positive psychology merchandise to needy and vulnerable people, like self-help books and workshops.

If it is taken seriously, Fredrickson’s research concerns the health effects of a behavioral intervention. Yet her findings are presented in a way that does not readily allow their integration with the rest of the health psychology literature. It would be difficult, for instance, to integrate Fredrickson’s randomized trials of loving-kindness meditation with other research, because she makes it almost impossible to isolate effect sizes in a form that could be combined with other studies in a meta-analysis. Moreover, Fredrickson has multiply published contradictory claims from the same data set without acknowledging the duplicate publication. [Please read on. I will document all of these claims before the post ends.]

The need of self-help gurus to generate support for the dramatic claims in their lucrative positive psychology self-help products is never acknowledged as a conflict of interest. It should be.

Just imagine if someone had a contract based on a book prospectus promising that the claims of their last pop psychology book would be surpassed. Such books inevitably paint life too simply, with simple changes in behavior having profound and lasting effects unlike anything obtained in the randomized trials of clinical and health psychology. Readers ought to be informed that the pressure to meet the demands of a lucrative book contract could generate a strong confirmation bias. Caveat emptor, but how about at least informing readers and letting them decide whether following the money influences their interpretation of what they read?

Psychology journals almost never require disclosure of conflicts of interest of this nature. I am campaigning to make that practice routine, with nondisclosure of such financial benefits treated as tantamount to scientific misconduct. I am calling for readers to take to social media when these disclosures do not appear in scientific journals, where they should be featured prominently, and to hold editors responsible for non-enforcement. I can cite Fredrickson’s work as a case in point, but there are many other examples, inside and outside of positive psychology.

Back to Jane Brody’s exaggerated claims for Fredrickson’s work.

I lived for half a century with a man who suffered from periodic bouts of depression, so I understand how challenging negativism can be. I wish I had known years ago about the work Barbara Fredrickson, a psychologist at the University of North Carolina, has done on fostering positive emotions, in particular her theory that accumulating “micro-moments of positivity,” like my daily interaction with children, can, over time, result in greater overall well-being.

The research that Dr. Fredrickson and others have done demonstrates that the extent to which we can generate positive emotions from even everyday activities can determine who flourishes and who doesn’t. More than a sudden bonanza of good fortune, repeated brief moments of positive feelings can provide a buffer against stress and depression and foster both physical and mental health, their studies show.

“Research…demonstrates” (?). Brody is feeding stupid-making pablum to readers. Fredrickson’s kind of research may produce evidence one way or the other, but it is too strong a claim, an outright illusion, to even begin suggesting that it “demonstrates” (proves) what follows in this passage.

Where, outside of tabloids and self-help products, do we find such immodest claims that one or a few poor-quality studies “demonstrate” anything?

Negative feelings activate a region of the brain called the amygdala, which is involved in processing fear and anxiety and other emotions. Dr. Richard J. Davidson, a neuroscientist and founder of the Center for Healthy Minds at the University of Wisconsin — Madison, has shown that people in whom the amygdala recovers slowly from a threat are at greater risk for a variety of health problems than those in whom it recovers quickly.

Both he and Dr. Fredrickson and their colleagues have demonstrated that the brain is “plastic,” or capable of generating new cells and pathways, and it is possible to train the circuitry in the brain to promote more positive responses. That is, a person can learn to be more positive by practicing certain skills that foster positivity.

We are knee-deep in neuro-nonsense. Try asking a serious neuroscientist about the claims that this duo have “demonstrated” that the brain is “plastic,” or that practicing certain positivity skills changes the brain with the health benefits they claim via Brody, or that they are studying “amygdala recovery” associated with reduced health risk.

For example, Dr. Fredrickson’s team found that six weeks of training in a form of meditation focused on compassion and kindness resulted in an increase in positive emotions and social connectedness and improved function of one of the main nerves that helps to control heart rate. The result is a more variable heart rate that, she said in an interview, is associated with objective health benefits like better control of blood glucose, less inflammation and faster recovery from a heart attack.

I will dissect this key claim about loving-kindness meditation and vagal tone/heart rate variability shortly.

Dr. Davidson’s team showed that as little as two weeks’ training in compassion and kindness meditation generated changes in brain circuitry linked to an increase in positive social behaviors like generosity.

We will save discussing Richard Davidson for another time. But really, Jane, just two weeks to better health? Where is the generosity center in brain circuitry? I dare you to ask a serious neuroscientist and embarrass yourself.

“The results suggest that taking time to learn the skills to self-generate positive emotions can help us become healthier, more social, more resilient versions of ourselves,” Dr. Fredrickson reported in the National Institutes of Health monthly newsletter in 2015.

In other words, Dr. Davidson said, “well-being can be considered a life skill. If you practice, you can actually get better at it.” By learning and regularly practicing skills that promote positive emotions, you can become a happier and healthier person. Thus, there is hope for people like my friend’s parents should they choose to take steps to develop and reinforce positivity.

In her newest book, “Love 2.0,” Dr. Fredrickson reports that “shared positivity — having two people caught up in the same emotion — may have even a greater impact on health than something positive experienced by oneself.” Consider watching a funny play or movie or TV show with a friend of similar tastes, or sharing good news, a joke or amusing incidents with others. Dr. Fredrickson also teaches “loving-kindness meditation” focused on directing good-hearted wishes to others. This can result in people “feeling more in tune with other people at the end of the day,” she said.

Brody ends with 8 things Fredrickson and others endorse to foster positive emotions. (Why only 8 recommendations? Why not come up with 10 and make them commandments?) These include “Do good things for other people” and “Appreciate the world around you.” Okay, but do Fredrickson and Davidson really show that engaging in these activities has immediate and dramatic effects on our health? I have examined their research and I doubt it. I think the larger problem, though, is the suggestion that physically ill people facing shortened lives risk being blamed for being bad people. They obviously did not do these 8 things, or else they would be healthy.

If Brody were selling herbal supplements or coffee enemas, we would readily label the quackery. We should do the same for advice about psychological practices that are promised to transform lives.

Brody’s sloppy links to support her claims: Love 2.0

Journalists who talk of “science” and respect their readers will provide links to their actual sources in the peer-reviewed scientific literature. That way, readers who are motivated can independently review the evidence. Especially in an outlet as prestigious as The New York Times.

Jane Brody is outright promiscuous in the links that she provides, often to secondary or tertiary sources. The first link provided for her discussion of Fredrickson’s Love 2.0 is actually to a somewhat negative review of the book. https://www.scientificamerican.com/article/mind-reviews-love-how-emotion-afftects-everything-we-feel/

Fredrickson builds her case by expanding on research that shows how sharing a strong bond with another person alters our brain chemistry. She describes a study in which best friends’ brains nearly synchronize when exchanging stories, even to the point where the listener can anticipate what the storyteller will say next. Fredrickson takes the findings a step further, concluding that having positive feelings toward someone, even a stranger, can elicit similar neural bonding.

This leap, however, is not supported by the study and fails to bolster her argument. In fact, most of the evidence she uses to support her theory of love falls flat. She leans heavily on subjective reports of people who feel more connected with others after engaging in mental exercises such as meditation, rather than on more objective studies that measure brain activity associated with love.

I would go even further than the reviewer. Fredrickson builds her case by very selectively drawing on the literature, choosing only a few studies that fit. Even then, the studies fit only with considerable exaggeration and distortion of their findings. She exaggerates the relevance and strength of her own findings. In other cases, she says things that have no basis in anyone’s research.

I came across Love 2.0: How Our Supreme Emotion Affects Everything We Feel, Think, Do, and Become (Unabridged) that sells for $17.95. The product description reads:

We all know love matters, but in this groundbreaking book positive emotions expert Barbara Fredrickson shows us how much. Even more than happiness and optimism, love holds the key to improving our mental and physical health as well as lengthening our lives. Using research from her own lab, Fredrickson redefines love not as a stable behemoth, but as micro-moments of connection between people – even strangers. She demonstrates that our capacity for experiencing love can be measured and strengthened in ways that improve our health and longevity. Finally, she introduces us to informal and formal practices to unlock love in our lives, generate compassion, and even self-soothe. Rare in its scope and ambitious in its message, Love 2.0 will reinvent how you look at and experience our most powerful emotion.

There is a mishmash of language games going on here. Fredrickson’s redefinition of love is not based on her research. Her claim that love is “really” micro-moments of connection between people, even strangers, is a weird redefinition. Attempt to read her book, if you have time to waste.

You will quickly see that much of what she says makes no sense for long-term relationships that are solid but beyond the honeymoon stage. Ask partners in long-term relationships, and they will undoubtedly report lacking lots of such “micro-moments of connection.” I doubt it is adaptive for people seeking to build long-term relationships to adopt the yardstick that if such micro-moments don’t keep coming all the time, the relationship is in trouble. But it is Fredrickson who is selling the strong claims, and the burden is on her to produce the evidence.

If you try to take Fredrickson’s work seriously, you wind up seeing that she has a rather superficial view of close relationships and can’t seem to distinguish them from what goes on between strangers in drunken one-night stands. But that is supposed to be revolutionary science.

We should not confuse much of what Fredrickson emphatically states with testable hypotheses. Many statements sound more like marketing slogans – what Joachim Kruger and his student Thomas Mairunteregger identify as the McDonaldization of positive psychology. Like a Big Mac, Fredrickson’s Love 2.0 requires a lot of imagination to live up to its advertisement.

Fredrickson’s love as the supreme emotion vs. ’Trane’s A Love Supreme

Where Fredrickson’s selling of love as the supreme emotion is not simply an advertising slogan, it is a bad summary of the research on love and health. John Coltrane makes no empirical claim about love being supreme. But listening to him is effective self-soothing after taking Love 2.0 seriously and trying to figure it out. Simply enjoy, and don’t worry about what it does for your positivity ratio or micro-moments, shared or alone.

Fredrickson’s study of loving-kindness meditation

Jane Brody, like Fredrickson herself, depends heavily on a study of loving-kindness meditation in proclaiming the wondrous, transformative health benefits of being loving and kind. After obtaining Fredrickson’s data set and reanalyzing it, my colleagues – James Heathers, Nick Brown, and Harris Friedman – and I arrived at a very different interpretation of her study. As we first encountered it, the study was:

Kok, B. E., Coffey, K. A., Cohn, M. A., Catalino, L. I., Vacharkulksemsuk, T., Algoe, S. B., . . . Fredrickson, B. L. (2013). How positive emotions build physical health: Perceived positive social connections account for the upward spiral between positive emotions and vagal tone. Psychological Science, 24, 1123-1132.

Consolidated Standards of Reporting Trials (CONSORT) are widely accepted for at least two reasons. First, clinical trials should be clearly identified as such in order to ensure that the results are recognized and available in systematic searches to be integrated with other studies. CONSORT requires that RCTs be clearly identified in their titles and abstracts. Once RCTs are labeled as such, the CONSORT checklist becomes a handy tally of what needs to be reported.

It is only in the supplementary material that the Kok and Fredrickson paper is identified as a clinical trial. Only in that supplement is the primary outcome identified, even in passing. No means are reported anywhere in the paper or supplement. Results are presented in terms of what Kok and Fredrickson term “a variant of a mediational, parallel process, latent-curve model.” Basic statistics needed for its evaluation are left to readers’ imagination. Figure 1 in the article depicts the awe-inspiring parallel-process mediational model that guided the analyses. We showed the figure to a number of statistical experts, including Andrew Gelman. While some elements were readily recognizable, the overall figure was not, especially the mysterious large dot (a causal pathway roundabout?) near the top.

So not only might the study not be detected as an RCT, there isn’t the relevant information that could be used for calculating effect sizes.

Furthermore, when studies are labeled as RCTs, we immediately seek protocols published ahead of time that specify the basic elements of design, analyses, and primary outcomes. At Psychological Science, studies with protocols are unusual enough to get the authors awarded a badge. In the clinical and health psychology literature, protocols are increasingly common, like flushing a toilet after using a public restroom. No one runs up and thanks you: “Thank you for flushing/publishing your protocol.”

If Fredrickson and her colleagues are going to be using the study to make claims about the health benefits of loving kindness meditation, they have a responsibility to adhere to CONSORT and to publish their protocol. This is particularly the case because this research was federally funded and results need to be transparently reported for use by a full range of stakeholders who paid for the research.

We identified a number of other problems and submitted a manuscript based on a reanalysis of the data. Our manuscript was promptly rejected by Psychological Science. The associate editor, Batja Mesquita, noted that two of my co-authors, Nick Brown and Harris Friedman, had co-authored a paper resulting in a partial retraction of Fredrickson’s positivity ratio paper.

Brown NJ, Sokal AD, Friedman HL. The complex dynamics of wishful thinking: The critical positivity ratio. American Psychologist. 2013 Jul 15.

I won’t go into the details, except to say that Nick and Harris, along with Alan Sokal, unambiguously established that Fredrickson’s positivity ratio of 2.9013 positive to negative experiences was a fake fact. Fredrickson had been promoting the number as an “evidence-based guideline,” a ratio acting as a “tipping point beyond which the full impact of positive emotions becomes unleashed.” Once Brown and his co-authors overcame strong resistance to getting their critique published, their paper garnered a lot of attention in social and conventional media. There is a hilariously funny account available at Nick Brown Smelled Bull.

Batja Mesquita argued that the previously published critique discouraged her from accepting our manuscript. To do so, she would be participating in “a witch hunt,” and:

 The combatant tone of the letter of appeal does not re-assure me that a revised commentary would be useful.

Welcome to one-sided tone policing. We appealed her decision, but Editor Eric Eich indicated there was no appeal process at Psychological Science, contrary to the requirements of the Committee on Publication Ethics (COPE).

Eich relented after I shared an email to my coauthors in which I threatened to take the whole issue to social media, where there would be no peer review in the traditional, outdated sense of the term. Numerous revisions of the manuscript were submitted, some of them in response to reviews by Fredrickson and Kok, who did not want the paper published. A year passed before our paper was accepted and appeared on the website of the journal. You can read our paper here. I think you can see that the fatal problems are obvious.

Heathers JA, Brown NJ, Coyne JC, Friedman HL. The elusory upward spiral: A reanalysis of Kok et al. (2013). Psychological Science. 2015 May 29:0956797615572908.

In addition to the original paper not adhering to CONSORT, we noted:

  1. There was no effect of whether participants were assigned to the loving-kindness meditation vs. no-treatment control group on the key physiological variable, cardiac vagal tone. This is a thoroughly disguised null trial.
  2. Kok and Frederickson claimed that there was an effect of meditation on cardiac vagal tone, but any appearance of an effect was due to reduced vagal tone in the control group, which cannot readily be explained.
  3. Kok and Frederickson essentially interpreted changes in cardiac vagal tone as a surrogate outcome for more general changes in physical health. However, other researchers have noted that observed changes in cardiac vagal tone are not consistently related to changes in other health variables and are susceptible to variations in experimental conditions that have nothing to do with health.
  4. No attention was given to whether participants assigned to the loving kindness meditation actually practiced it with any frequency or fidelity. The article nonetheless reported that such data had been collected.

Point 2 is worth elaborating. Participants in the control condition received no intervention. Their assessment of cardiac vagal tone/heart rate variability was essentially a test/retest reliability test of what should have been a stable physiological characteristic. Yet, participants assigned to this no-treatment condition showed as much change as the participants who were assigned to meditation, but in the opposite direction. Kok and Fredrickson ignored this and attributed all differences to meditation. Houston, we have a problem, a big one, with unreliability of measurement in this study.
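A minimal sketch of the problem, assuming for illustration a test-retest reliability of .4 (argued below to be in the right neighborhood for vagal tone under loose conditions) and a sample of 26: with an unreliable measure, a no-treatment group can show sizable apparent “change” in either direction from measurement error alone.

```python
import numpy as np

rng = np.random.default_rng(3)
n, reliability = 26, 0.4   # assumed values for illustration only

trait = rng.normal(0, 1, n)   # each participant's stable 'true' vagal tone

def measure():
    # observed score = true trait plus measurement error, scaled so that
    # the test-retest correlation approximates the assumed reliability
    return np.sqrt(reliability) * trait + rng.normal(0, np.sqrt(1 - reliability), n)

baseline, followup = measure(), measure()   # no intervention in between
change = followup - baseline
print(f"mean 'change' with no treatment: {change.mean():+.2f} "
      f"(SD of change scores: {change.std(ddof=1):.2f})")
```

Run this a few times with different seeds and the untreated group drifts up or down purely by chance, which is all that is needed to manufacture an apparent group difference in a small trial.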

We could not squeeze all of our critique into our word limit, but James Heathers, who is an expert on cardiac vagal tone/heart rate variability, elaborated elsewhere.

  • The study was underpowered from the outset, and the sample size decreased from 65 to 52 due to missing data.
  • Cardiac vagal tone is unreliable except in the context of careful control of the conditions under which measurements are obtained, multiple measurements per participant, and a much larger sample size. None of these conditions were met.
  • There were numerous anomalies in the data, including some participants included without baseline data, improbable baseline or follow-up scores, and improbable changes. These alone would invalidate the results.
  • Despite not reporting basic statistics, the article was full of graphs, impressive to the uninformed but useless to readers attempting to make sense of what was done and with what results.

We later learned that the same data had been used for another published paper. There was no cross-citation and the duplicate publication was difficult to detect.

Kok, B. E., & Fredrickson, B. L. (2010). Upward spirals of the heart: Autonomic flexibility, as indexed by vagal tone, reciprocally and prospectively predicts positive emotions and social connectedness. Biological Psychology, 85, 432–436. doi:10.1016/j.biopsycho.2010.09.005

Pity the poor systematic reviewer and meta-analyst trying to make sense of this RCT and integrate it with the rest of the literature concerning loving-kindness meditation.

This was not our only experience of obtaining data for a paper crucial to Fredrickson’s claims and having difficulty publishing our findings. We obtained data for claims that she and her colleagues had solved the classical philosophical problem of whether we should pursue pleasure or meaning in our lives. Pursuing pleasure, they argue, will adversely affect genomic transcription.

We found we could redo the extremely complicated analyses and replicate the original findings, but there were errors in the original data entry that entirely shifted the results when corrected. Furthermore, we could replicate the original findings when we substituted data from a random number generator for the data collected from study participants. After struggles similar to what we experienced with Psychological Science, we succeeded in getting our critique published.

The original paper

Fredrickson BL, Grewen KM, Coffey KA, Algoe SB, Firestine AM, Arevalo JM, Ma J, Cole SW. A functional genomic perspective on human well-being. Proceedings of the National Academy of Sciences. 2013 Aug 13;110(33):13684-9.

Our critique

Brown NJ, MacDonald DA, Samanta MP, Friedman HL, Coyne JC. A critical reanalysis of the relationship between genomics and well-being. Proceedings of the National Academy of Sciences. 2014 Sep 2;111(35):12705-9.

See also:

Nickerson CA. No Evidence for Differential Relations of Hedonic Well-Being and Eudaimonic Well-Being to Gene Expression: A Comment on Statistical Problems in Fredrickson et al. (2013). Collabra: Psychology. 2017 Apr 11;3(1).

A partial account of the reanalysis is available in:

Reanalysis: No health benefits found for pursuing meaning in life versus pleasure. PLOS Blogs Mind the Brain

Wrapping it up

Strong claims about health effects require strong evidence.

  • Evidence produced in randomized trials needs to be reported according to established conventions like CONSORT, with clear labeling of duplicate publications.
  • When research is conducted with public funds, these responsibilities are increased.

I have often identified health claims in high profile media like The New York Times and The Guardian. My MO has been to trace the claims back to the original sources in peer reviewed publications, and evaluate both the media reports and the quality of the primary sources.

I hope that I am arming citizen scientists for engaging in these activities independent of me and even to arrive at contradictory appraisals to what I offer.

  • I don’t think I can expect to get many people to ask for data and perform independent analyses and certainly not to overcome the barriers my colleagues and I have met in trying to publish our results. I share my account of some of those frustrations as a warning.
  • I still think I can offer some take away messages to citizen scientists interested in getting better quality, evidence-based information on the internet.
  • Assume most of the claims readers encounter about psychological states and behavior being simply changed and profoundly influencing physical health are false or exaggerated. When in doubt, disregard the claims and certainly don’t retweet or “like” them.
  • Ignore journalists who do not provide adequate links for their claims.
  • Learn to identify generally reliable sources and take journalists off the list when they have made extravagant or undocumented claims.
  • Appreciate the financial gains to be made by scientists who feed journalists false or exaggerated claims.

Advice to citizen scientists who are cultivating more advanced skills:

Some key studies that Brody invokes in support of her claims being science-based are poorly conducted and reported clinical trials that are not labeled as such. This is quite common in positive psychology, but you need to cultivate skills to detect that this is what is going on. Even prestigious psychology journals are often lax in labeling studies as RCTs and in enforcing reporting standards. Authors’ conflicts of interest are ignored.

It is up to you to

  • Identify when the claims you are being fed should have been evaluated in a clinical trial.
  • Be skeptical when the original research is not clearly identified as a clinical trial but nonetheless compares participants who received the intervention and those who did not.
  • Be skeptical when CONSORT is not followed and there is no published protocol.
  • Be skeptical of papers published in journals that do not enforce these requirements.

Disclaimer

I think I have provided enough details for readers to decide for themselves whether I am unduly influenced by my experiences with Barbara Fredrickson and her data. She and her colleagues have differing accounts of her research and of the events I have described in this blog.

As a disclosure, I receive money for writing these blog posts, less than $200 per post. I am also marketing a series of e-books, including Coyne of the Realm Takes a Skeptical Look at Mindfulness and Coyne of the Realm Takes a Skeptical Look at Positive Psychology.

Maybe I am just making a fuss to attract attention to these enterprises. Maybe I am just monetizing what I have been doing for years virtually for free. Regardless, be skeptical. But to get more information and get on a mailing list for my other blogging, go to coyneoftherealm.com and sign up.

Complex PTSD, STAIR, social ecology and lessons learned from 9/11: a conversation with Dr. Marylene Cloitre

Dr. Marylene Cloitre is the Associate Director of Research of the National Center for PTSD Dissemination and Training Division and a Research Professor of Psychiatry and Child and Adolescent Psychiatry at the New York University Langone Medical Center in New York City. She is a recipient of several honors related to her service in New York City following 9/11 and was an advisory committee member for the National September 11 Memorial Museum. She has specific expertise in complex PTSD and in the development and dissemination of STAIR (Skills Training in Affective and Interpersonal Regulation), a psychological therapy designed to help survivors of trauma.

Dr. Jain: What exactly is complex PTSD?

Dr. Cloitre:
Complex PTSD has a very long history, really pushed primarily by clinicians who looked at their patients and thought there’s something more going on here than PTSD.
In DSM-IV, complex PTSD was recognized in the additional features, where there is a mix of problems related to emotion regulation, self-concept, and interpersonal relationships. After that, there was really no funding around investigating this further; the research for it has been spotty, and it was sort of dying on the vine.

But with the development of a new version of the ICD, ICD-11, there was an opportunity really to refresh consideration of complex PTSD. I was part of a work group that started in 2012; we looked at the literature and thought there seemed to be enough data to support two different forms of PTSD: the classic fear circuitry disturbance, and then this more general kind of disturbance in the three core areas of emotion regulation, self-concept, and interpersonal relationships.

We proposed that there should be two distinct disorders, PTSD and complex PTSD, and it looks like that has been accepted and will be part of the ICD-11 coming out in 2018.

Since the initial proposal, I’ve been working with many people, mostly Europeans, where the ICD is more prominent than in the United States, and there are now about nine published papers providing supporting evidence for these two distinct disorders.

Dr. Jain:
Can you summarize in which ways they’re distinct? So on a clinical level what would you see in complex PTSD?

Dr. Cloitre: Mostly we’ve been looking at latent class analysis, which is a newish kind of data-analytic technique that looks at how people cluster together based on their symptom profiles. There is a group of people who very distinctly have PTSD in terms of re-experiencing, avoidance, and hyperarousal, and are fine on everything else. Then you have another group of people who have these problems as well as problems in the three other areas. And then there is another group of people who, despite exposure to trauma, do fairly well.

What we’ve been seeing are these three groups in clinical populations as well as in community populations and adults as well as in children.

Overall, these latent class analyses are really showing that people cluster together in very distinctly different ways. I think the important thing about this distinction is: what’s next? Perhaps there are different clinical interventions that we want to look at to maximize good outcomes. Some people may do very well with exposure therapy. I would say the PTSD-cluster folks will do very well and have optimal outcomes, because that’s all that bothers them. The other folks have a lot of other problems that really contribute to their functional impairment.

For me as a clinician as well as a researcher, I’ve always been worried not so much about the diagnosis of the person in front of me but about how well they’re functioning in the world. What I have noticed is you can get rid of the PTSD symptoms, for people with complex PTSD, but they’re still very impaired.
My motivation for thinking about a different diagnosis and different treatment is to identify these other problems and then to provide interventions that target them, with the goal of improving day-to-day life functioning. If you don’t have the ability to relate well to people because you mistrust them or are highly avoidant, or if you think poorly about yourself, these are huge issues, and we need to target them in treatment.

Dr. Jain
Have you noticed that different types of trauma contribute to PTSD v complex PTSD?

Dr. Cloitre: Yes, and it kind of makes sense that people who have had sustained and repeated trauma (e.g., multiple and sustained trauma during childhood) are the ones who have complex PTSD.

Dr. Jain: Can you tell us a little bit about the fundamental philosophy that drove you to come up with STAIR, and what evidence is there for its effectiveness?

Dr. Cloitre: I came to develop STAIR as a result of paying attention to what my patients were telling me they wanted help with; that was the driving force. It wasn’t a theoretical model. It was that patients came and said, “I’m really having problems with my relationships and that’s what I need help with,” or “I really have problems with my moods and I need help with that.”

So I thought, why don’t we start there? That is why I developed STAIR as a sequenced therapy: while respecting the importance of getting into the trauma and doing exposure-based work, I also wanted to engage the patient and respect their presenting needs. That’s what it’s all about for me.
Over time I saw a secondary benefit: an improved sense of self and improved emotion regulation could actually enhance the value of exposure therapy.

In my mind, the real question is: what kinds of treatment work best for whom? There will be some people for whom going straight to exposure therapy is the most effective and efficient way to get them functioning, and they’ll be happy with three or four sessions, just like some 9/11 survivors I saw, who only needed three or four sessions.

Other people might do better with combination therapies.

Dr. Jain: For the studies that you’ve done with STAIR, can you summarize the populations you have used it for?

Dr. Cloitre: I began using STAIR plus exposure with the population I thought would most need it, which is people with histories of childhood abuse. In fact, our data show that the combination of skills training plus exposure was significantly better than skills alone or exposure alone. So that’s very important. It also very significantly reduced dropout as compared to exposure, which is a continuing problem with exposure therapy, especially for this population.

Dr. Jain: Can you speak to social ecology, social bonds and PTSD: what can the research world tell us about the social dimensions of PTSD, and how can we apply this to returning military members and veterans?

Dr. Cloitre: I think that social support is critical to the recovery of people who have been exposed to trauma and who are vulnerable to symptoms. We have enough studies showing that it’s the critical determinant of return to health.

I think we have done a very poor job of translating this observation into something meaningful for returning veterans. There is general recognition that families are part of the solution and communities are part of the solution, but it is vague; there isn’t really a sense of what we are going to do about it.

I think these wars (Afghanistan and Iraq) are very different from Vietnam, where soldiers came back and were called baby killers and had tomatoes and garbage thrown at them. You can really understand why a vulnerable person would spiral downwards into pretty significant PTSD and substance abuse.

I think we need to be more thoughtful and engage veterans in discussions about what’s most helpful in the reintegration process, because there are probably really explicit things, like being welcomed home, but also very subtle things that we haven’t figured out about the experience.
I think on a community or family level there’s a general awareness, but we haven’t really gotten clear or effective thinking about what to do. I think that’s our next step. The parade and the welcome-home signs are not enough.

I’ll give an example of what I witnessed after 9/11. The community around survivors feels awkward and doesn’t know what to do, so it starts moving away. Combine this with a survivor who is sad or irritable, and so not the most appealing person to engage with. I sometimes say to patients: it’s a really unfair and unfortunate circumstance that, in a way, not only are you suffering but you’re also kind of responsible for making the people around you comfortable with you.

I used to do STAIR because patients asked for it, and also because I thought, “oh well, some people never had these social skills in the first place, which is why they are vulnerable to PTSD.” But then I noticed that STAIR was useful for everybody with PTSD, because the traumatized patient has an unfair burden: to actually reach out to people in a process of re-engagement while the community and the family are confused, and others, strangers or, say, employers, are scared. So they have to kind of compensate for the discomfort of others, which is asking a lot.

I think in our therapies we can say: look, it’s not fair, but people feel uncomfortable around the veteran. They don’t know how to act, and in a way you not only have to educate yourself about your circumstance but, in the end, educate others.

Dr. Jain: Survivor perception of social support really matters. If you take a group of disaster survivors, we may feel we’re doing this and that for them, but if the survivors, for whatever reason, don’t perceive it as helpful, it doesn’t matter. When I think about marginalized populations in our society, I don’t think communicating to others about how to help or support you is that simple.

Dr. Cloitre: It’s very complicated because it is a dynamic. I think we need to talk to trauma survivors and understand what their needs are so that the community can respond effectively and be a match. Not everybody wants the same thing. That’s the real challenge. It would also help if survivors could be a little bit more compassionate, not only towards themselves for what they have been through, but towards others who are trying to communicate with them and failing.

Dr. Jain: That can be hard to do when you’re hurting. The social ecology of PTSD is really important, but it’s really complicated, and we are not there yet in terms of harnessing social ecology to improve lives.

Dr. Cloitre: No. I think we’re just groping around in the dark, in a room with a sign that says “the social ecology of PTSD is important.” We don’t know how to translate that observation into actionable plans, whether in our individual therapies, in our family therapies, or in our community actions and policies.
But I do think that, in individual therapy, we can recognize the importance of trying to enhance perceptions of support where support is real; secondly, recognize the unfair burden patients carry and try to enhance their skills for communicating with people; and thirdly, encourage compassion for people out there who are trying to communicate but failing.
I have had a lot of patients who come into therapy and say,
“This is so ridiculous. They’re saying stupid things to me.”
And I say,
“Well, at least they’re trying.”
I think it’s important for the affected community to have a voice and take leadership, instead of people kind of smothering them with social support that they may or may not need.

Dr. Jain:
I know you’re a native New Yorker and you provided a lot of service to New York City following 9/11. Can you speak about that work? In particular, I’m really interested in the body of research that emerged after 9/11, because I feel it has helped us understand so much about disaster-related PTSD.

Dr. Cloitre: We found that most people are very resilient. We were able to get prevalence rates of PTSD following 9/11; that in itself was very important. I think that’s the strongest research that came out.

I think on a social level it broke open awareness, in this country and maybe globally, about the impact of trauma and about PTSD, because it came with very little shame or guilt.
Some people ask: what was so different about 9/11? Well, it happened to the most powerful country and the most powerful city, so if it could happen to them it could happen anywhere. That was the response; there was not the usual marginalization of “well, this is a victim circumstance, it couldn’t happen to me, and they must have done something to bring it on themselves.”
There was a hugely different response, and that was key to the shift in recognition of the diagnosis of PTSD, which then led to more general research about it. I think that was huge.
Before 9/11, I would say I do research in PTSD and people would ask, what is that? Now when I say I do research in PTSD, not a single person ever asks what it is. I’m sure they don’t really know what it is, but they never look confused. It’s a term that is now part and parcel of American society.
9/11 revolutionized awareness of PTSD and also the acceptability of adverse effects resulting from trauma. There was new knowledge gained, and also a transformation in awareness that was national and probably global, because of the impact it had and the ripple effects on other countries.
I think those are the two main things.
I don’t think it has really done very much for our thinking about treatment. We continue to do some of our central treatments, and we didn’t get too far in really advancing or diversifying them.
For me personally, I learned a lot about the diversity of kinds of trauma survivors. Very different people, very different reactions.
I think probably the other important academic or scholarly advance was the recognition of this blend of loss and trauma and how they come together. Our understanding of people’s responses to death, under circumstances of unexpected and violent death, has also advanced. In fact, ICD-11 will now include a traumatic grief diagnosis, which I think moved forward because of 9/11. That’s pretty big.

Talking back to “Talking Therapy Can Literally Rewire the Brain”

This edition of Mind the Brain was prompted by an article in Huffington Post, Talking Therapy Can Literally Rewire the Brain.

The title is lame on two counts: “literally” and any suggestion that psychotherapy does something distinctive to the brain, much less “rewiring” it.

I gave the journalist the benefit of the doubt and assumed that the editor applied the title to the article without the journalist’s permission. I know from talking to journalists that this is a source of enormous frustration when it happens. But in this instance, the odd title came directly from a press release from King’s College London (Study reveals for first time that talking therapy changes the brain’s wiring), which concerned an article published in the Nature Publishing Group journal Translational Psychiatry.

Hmm, authors from King’s College publishing in a Nature journal: that suggests a serious piece of science worth a closer look. In the end, I was reminded not to make too much of authors’ affiliations and where they publish.

I poked fun on Twitter at the title of the Huffington Post article.

The retweets and likes drifted into a discussion among neuroscientists about how little they really know about the brain. Somebody threw in a link to an excellent short YouTube video by NeuroSkeptic on that topic that I highly recommend.

Anyway, I found serious problems with the Huffington Post article that should have been sufficient reason to stop there. Nonetheless, I proceeded, and the problems got compounded when I turned to the press release with its direct quotes from the author. I wasn’t far into the Translational Psychiatry article before I appreciated that its abstract was misleading in claiming that there were 22 patients in the study. That is a small number, but if the abstract had stated the actual number, which was 15 patients, readers would have been warned not to take the complicated multivariate statistics that were coming too seriously.

How did a prestigious journal like Translational Psychiatry allow authors to misrepresent their sample size? I would shortly be even more puzzled about why the article was published in Translational Psychiatry at all, although I formed some unflattering hypotheses about that journal. I’ll end with those hypotheses.

Talking To A Therapist Can Literally Rewire Your Brain (Huffington Post)

The opening sentence would raise the skepticism of an informed reader:

If you can change the way you think, you can change your brain.

If I accept that statement, it’s going to be by stretching it to meaninglessness. “If you can change the way you think…” covers lots of territory. If the statement is going to remain correct, then the phrase “change your brain” has to be similarly broad. If the journalist wants to make a big deal of this claim, she would have to concede that reading my blog changes her brain.

That’s the conclusion of a new study, which finds that challenging unhealthy thought patterns with the help of a therapist can lead to measurable changes in brain activity.

Okay, we now know that at least a specific study with brain measurements is being discussed.

But then

In the study, psychologists at King’s College London show that Cognitive Behavioral Therapy strengthens certain healthy brain connections in patients with psychosis. This heightened connectivity was associated with long-term reductions in psychotic symptoms and recovery eight years later, according to the findings, which were published online Tuesday in the journal Translational Psychiatry.

“Over six months of therapy, we found that connections between certain key brain regions became stronger,” Dr. Liam Mason, a clinical psychologist at King’s College and the study’s lead author, told The Huffington Post in an email. “What we are really excited about here is that these stronger connections lead to long-term improvements in people’s symptoms and overall recovery across the eight years that we followed them up.”

A lot of skepticism is warranted here. The article seems to be claiming that changes in brain function observed in the short term with cognitive behavior therapy for psychosis [CBTp] were associated with long-term changes over an extraordinary eight years.

The problems with this? First, CBTp is not known to be particularly effective, even in the short term. Second, there is a lot of heterogeneity under the umbrella of “psychosis,” and in eight years a person who has had that label appropriately applied will have a lot of experiences: recovery and relapse, and certainly other mental health treatments. In all that noise and confusion, how can a signal be detected showing that a psychotherapy that isn’t particularly effective explains any long-term improvement?

[Skeptical about my claim that CBTp is ineffective? See Effect of a missing clinical trial on what we think about cognitive behavior therapy and the slides about Cochrane reviews from a longer PowerPoint presentation.]

Any discussion of how CBTp works and what long-term improvements it predicts has to get past considerable evidence that CBTp doesn’t work any better than nonspecific supportive treatments. Without short-term effects, how can there be long-term effects?

[Slide: Cochrane review findings on cognitive behavior therapy for psychosis]

There is no acknowledgment in the Huffington Post article of the lack of efficacy of CBTp. Instead, we have a strong assumption that CBTp works and that the scientific paper under discussion is important because it shows that CBTp strongly works, with observable long-term effects.

The journalist claims that the present scientific paper builds on an earlier one:

In the original study, patients with psychosis underwent brain imaging both before and after three months of CBT. The patients’ brains were scanned while they looked at images of faces expressing different emotions. After undergoing CBT, the patients showed marked increases in brain activity. Specifically, the brain scans showed heightened connections between the amygdala, the brain region involved in fear and threat processing, and the prefrontal cortex, which is responsible for reasoning and thinking rationally ― suggesting that the patients had an improved ability to accurately perceive social threats.

“We think that this change may be important in allowing people to consciously re-think immediate emotional reactions,” Mason said.

Readers can click back to my earlier blog post, Sex and the single amygdala: A tale almost saved by a peek at the data. The same experimental paradigm was used to study whether amygdala activity predicted changes in the number of sexual partners over time. In that particular study, p-hacking, significance chasing and selective reporting were used by the authors to create the illusion of important findings. If you visit my blog post, check out the comments that ridiculed the study, including some from two very bright undergraduates.

We don’t need to detour into a technical discussion of functional magnetic resonance imaging (fMRI) data to make a couple of points. The authors of the present study used a rather standard experimental paradigm, and the focus on the amygdala concerned some quite nonspecific psychological processes.

The authors of the present study soon concede this:

There’s a good chance that similar brain changes also occur in CBT patients being treated for anxiety and depression, Mason said.

“There is research showing that some of the same connections may also be strengthened by CBT for anxiety disorders,” he explained.

But wait: isn’t the lead author also saying, in the Huffington Post article and in the title of the press release, that this is a first-ever study?

For the present purposes, we need only dispense with any notion that we’re talking about a rewiring of the brain known to be specifically associated with psychosis, or that there is any reason to expect such “rewiring” to predict the long-term outcome of psychosis.

Reading further, we find that the study only involved following 15 patients from a larger study, unlike the misleading abstract, which claims 22.

Seriously, are we being asked to get worked up about an fMRI study with only 15 patients? Yup.

The researchers found that heightened connectivity between the amygdala and prefrontal cortex was associated with long-term recovery from psychosis. The exciting finding marks the first time scientists have been able to demonstrate that brain changes resulting from psychotherapy may be responsible for long-term recovery from mental illness.

What is going on here? The journalist next gives free rein to the lead author to climb onto a soapbox and proclaim the agenda behind all of these claims:

The findings challenge the “brain bias” in psychiatry, an institutional focus on physical brain differences over psychological factors in mental illness. Thanks to this common bias, many psychiatrists are prone to recommending medication to their clients rather than psychological treatments such as CBT.

But medication has been proven effective for psychosis; CBTp has not.

“Psychological therapy can lead to changes in the mechanics of the brain,” Mason said. “This is especially important for conditions like psychosis which have traditionally been viewed as ‘brain diseases’ that require medication or even surgery.”

“Mechanics of the brain”? Now we have escalated from “literally rewiring” to “changes in the mechanics.” Dude, we are talking about an fMRI study. Do you think we have been transported to an auto repair shop?

“This research challenges the notion that the existence of physical brain differences in mental health disorders somehow makes psychological factors or treatments less important,” Mason added in a statement.

Clicking on the link takes one to a Science Daily article that churnals (plagiarizes) a press release from King’s College London.

The Press Release: Study reveals for first time that talking therapy changes the brain’s wiring

There is not much in this press release that has not been regurgitated in the Huffington Post article, except for some more soapbox preaching:

Unfortunately, previous research has shown that this ‘brain bias’ can make clinicians more likely to recommend medication but not psychological therapies. This is especially important in psychosis, where only one in ten people who could benefit from psychological therapies are offered them.”

But CBTp, the most evaluated psychotherapy for psychosis, has not been shown to be effective by itself. Sure, patients suffering from psychosis need a lot of support, efforts to maintain positive expectations, and opportunities to talk about their experience. But in direct comparisons with such support provided by professionals or by peers, CBTp has not been shown to be more effective.

The researchers now hope to confirm the results in a larger sample, and to identify the changes in the brain that differentiate people who experience improvements with CBT from those who do not. Ultimately, the results could lead to better, and more tailored, treatments for psychosis, by allowing researchers to understand what determines whether psychological therapies are effective.

Sure, we are to give a high priority to examining the mechanism by which CBT, which has not been proven effective, works its magic.

Translational Psychiatry: Brain connectivity changes occurring following cognitive behavioural therapy for psychosis predict long-term recovery

[This will be a quick tour, only highlighting some of the many problems that I found. I welcome readers probing the open access article and posting what they find.]

The Abstract misrepresents the study as having 22 patients, when it actually only had data from 15.

The Introduction focuses largely on previous work by the author group. If you bother to check, none of it involves randomized trials, despite claims of efficacy for CBTp. No reference is made to the large body of literature finding a lack of effectiveness for CBTp. In particular, there is no mention of the Cochrane reviews.

A close reading of the Methods indicates that what are claimed to be “objective clinical outcomes” are actually unblinded, retrospective ratings of case notes by two raters, one of them the first author. Unblinded ratings, particularly by an investigator, are an important source of bias in studies of CBTp and lead to exaggerated estimates of outcome.

An additional measure with inadequate validation was obtained at 7 to 8 year follow-up:

Questionnaire about the Process of Recovery (QPR,31), a service-user led instrument that follows theoretical models of recovery and provides a measure of constructs such as hope, empowerment, confidence, connectedness to others.

All patients came from clinical studies conducted by the author group that did not involve randomization. Rather, assignment to CBTp was based on providers identifying patients “deemed as suitable for CBTp.” There is considerable risk of bias when patient data are treated as if they arose in a randomized trial. I previously raised issues about the inadequacy of the routine care provided to psychotic patients, both in terms of its clinical adequacy and its meaningfulness as a control/comparison group, given its lack of nonspecific factors.

All patients assigned to CBTp were receiving medication and other services. A table reveals that receipt of other services was strongly correlated with recovery status. Yet the authors attempt to attribute any recovery across the eight years to the brief course of CBTp at the beginning. Obviously, the study is hopelessly confounded and no valid inferences are possible. This alone should have gotten the study rejected.

There were data available from control subjects at follow-up, including fMRI data, but they were excluded from the present report. That is unfortunate, because these data would have allowed at least a minimal evaluation of whether CBTp versus remaining in routine care made any difference in outcomes and, importantly, whether the fMRI data similarly predicted the outcomes of patients not receiving CBTp.

The Data Analysis section indicates one-tailed, multivariate statistical tests that are quite inappropriate and essentially meaningless with such a small data set. Bonferroni corrections, which were inconsistently applied, offer no salvation.

With such small samples and multivariate statistics, a focus on p-values is inappropriate, but the authors do just that, reporting p<.04 and p<.06, the latter treated as significant. The hypothesis that this represents significance chasing is supported when the supplementary data tables are examined. When I showed them to a neuroscientist, his first response was that they were painful to look at.
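To see how little it takes to generate such “findings,” here is a minimal simulation, entirely my own construction and not the authors’ analysis: draw a pure-noise “connectivity” predictor for 15 patients, correlate it with ten pure-noise “outcome” measures, and apply one-tailed tests.

```python
# A minimal sketch (my construction, not the authors' analysis) of
# significance chasing with n = 15: all variables are pure noise, yet
# one-tailed tests across ten outcomes yield at least one "significant"
# correlation in a large share of simulated studies.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_patients, n_outcomes, n_sims = 15, 10, 2000
studies_with_a_hit = 0

for _ in range(n_sims):
    predictor = rng.normal(size=n_patients)               # noise "connectivity" measure
    outcomes = rng.normal(size=(n_outcomes, n_patients))  # noise "recovery" measures
    for y in outcomes:
        r, p_two_tailed = stats.pearsonr(predictor, y)
        if r > 0 and p_two_tailed / 2 < 0.05:             # one-tailed, "right" direction
            studies_with_a_hit += 1
            break

print(f"Null studies with at least one 'significant' result: "
      f"{studies_with_a_hit / n_sims:.0%}")               # roughly 40%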

I could go on but…

Why did the authors bother with this study? Why did King’s College London publicize the study with a press release? Why was it published in Nature’s Translational Psychiatry without the editor or the reviewers catching obvious flaws?

The authors had some data lying around, selected out a subset of patients post hoc, and applied retrospective ratings and inappropriate statistics. There is no evidence of a protocol for an a priori hypothesis being pursued, but strong circumstantial evidence of p-hacking, significance chasing and selective reporting. This is not a valid study, not even an experimercial; it is a political, public relations effort.

Statements in the King’s College press release echoed in the Huffington Post indicate a clear ideological agenda. Anyone who knows anything about psychiatry, neuroscience, or cognitive behavior therapy for psychosis is unlikely to be persuaded. Anyone who examines the supplementary statistical tables armed with minimal statistical sophistication will be unimpressed, if not shocked. We can assume that, as a group, these people would quickly leave the conversation about cognitive behavior therapy for psychosis literally rewiring the brain, if they ever got engaged.

The authors were not engaging relevant audiences in intelligent conversation. I can only presume that they were targeting naive, vulnerable patients and their families having to make difficult decisions about treatment for psychosis, and preaching to the anti-psychiatry crowd. One of the authors also appears as an author of Understanding Psychosis, a strongly non-evidence-based advocacy of cognitive behavior therapy for psychosis, delivered with hostility towards medication and psychiatrists (see my critique). I did not know that about this author until I read the materials I’ve been reviewing. It is an important bit of information and speaks to the author’s objectivity and credibility.

Obviously, the press office of King’s College London depends a lot, maybe almost entirely, on the credibility of authors associated with that institution. Maybe next time they should seek an independent evaluation. Or maybe they are just interested in publicity about research of any kind.

But why was this article published in the seemingly prestigious Nature journal, Translational Psychiatry? It should be noted that this journal is open access, but with exceptionally pricey Article Processing Charges (APCs) of £2,400/$3,900/€2,800. Apparently adequate screening and appropriate peer review are not included in these costs. These authors have purchased a lot of prestige. Moreover, if you want to complain about their work in a letter to the editor, you have to pay $900. So the authors have effectively insulated themselves from critics. Of course, there is always blogging, PubMed Commons and PubPeer for post-publication peer review.

I previously blogged about another underpowered, misreported study claiming to have identified a biomarker blood test for depression. The authors were explicitly advertising that they were seeking commercial backers for their blood test. They published in Translational Psychiatry. Maybe that’s the place to go for placing outlandish claims into open access, where anybody can be reached, with a false assurance of prestige protected by rigorous peer review.

 

Trusted source? The Conversation tells migraine sufferers that child abuse may be at the root of their problems

Patients and family members face a challenge obtaining credible, evidence-based information about health conditions from the web.

Migraine sufferers have a particularly acute need because their condition is often inadequately self-managed without access to the best available treatment approaches. Demoralized by the failure of past efforts to get relief, some sufferers may give up consulting professionals and desperately seek solutions on the Internet.

A lot of both naïve and exploitative quackery awaits them.

Even well-educated patients cannot always distinguish the credible from the ridiculous.

One search strategy is to rely on websites that have proven themselves as trusted sources.

The Conversation has promoted itself as such a trusted source, but its brand is tarnished by recent nonsense we will review concerning the role of child abuse in migraines.

Despite some excellent material that has appeared in other articles in The Conversation, I’m issuing a reader’s advisory:

The Conversation cannot be trusted because this article shamelessly misinforms migraine sufferers that child abuse could be at the root of their problems.

The Conversation article concludes with a non sequitur that shifts sufferers and their primary care physicians away from getting consultation with the medical specialists who are most able to improve management of a complex condition.

 

The Conversation article tells us:

Within a migraine clinic population, clinicians should pay special attention to those who have been subjected to maltreatment in childhood, as they are at increased risk of being victims of domestic abuse and intimate partner violence as adults.

That’s why clinicians should screen migraine patients, and particularly women, for current abuse.

This blog post identifies clickbait, manipulation, misapplied buzz terms, and misinformation in The Conversation article.

Perhaps the larger message of this blog post is that persons with complex medical conditions, and those who provide formal and informal care for them, should not rely solely on what they find on the Internet. This exercise focusing on The Conversation article demonstrates why.

Hopefully, The Conversation will issue a correction, as its website promises to do when errors are found.

We are committed to responsible and ethical journalism, with a strict Editorial Charter and codes of conduct. Errors are corrected promptly.

The Conversation article –

Why emotional abuse in childhood may lead to migraines in adulthood

A clickbait title offered a seductive integration of a trending, emotionally laden social issue (child abuse) with a serious medical condition (migraines) for which management is often not optimal. A widely circulating estimate is that 60% of migraine sufferers do not get appropriate medical attention, in large part because they do not understand the treatment options available and may actually stop consulting physicians.

Some quick background about migraine from another, more credible source:

Migraines are different from other headaches. People who suffer migraines experience other debilitating symptoms:

  • visual disturbances (flashing lights, blind spots in the vision, zig zag patterns etc).
  • nausea and / or vomiting.
  • sensitivity to light (photophobia).
  • sensitivity to noise (phonophobia).
  • sensitivity to smells (osmophobia).
  • tingling / pins and needles / weakness / numbness in the limbs.

Persons with migraines differ greatly among themselves in terms of the frequency, intensity, and chronicity of their symptoms, as well as their triggers for attacks.

Migraine is triggered by an enormous variety of factors – not just cheese, chocolate and red wine! For most people there is not just one trigger but a combination of factors which individually can be tolerated. When these triggers occur altogether, a threshold is passed and a migraine is triggered. The best way to find your triggers is to keep a migraine diary. Download your free diary now!

Into The Conversation article: What is the link between emotional abuse and migraines?

Without immediately providing a clicklink so that readers can check sources themselves, The Conversation authors say they are drawing on “previous research, including our own…” to declare that there is indeed an association between past abuse and migraines.

Previous research, including our own, has found a link between experiencing migraine headaches in adulthood and experiencing emotional abuse in childhood. So how strong is the link? What is it about childhood emotional abuse that could lead to a physical problem, like migraines, in adulthood?

In invoking the horror of childhood emotional abuse, the authors imply that they are talking about something infrequent, outside the realm of most people’s experience. If “childhood emotional abuse” is commonplace, how could it be horrible and devastating?

In their pursuit of clickbait sensationalism, the authors have only succeeded in trivializing a serious issue.

Only a minority of people endorsing items concerning past childhood emotional abuse currently meet criteria for a diagnosis of posttraumatic stress disorder. Their needs are not met by throwing them into a larger pool of people who do not meet these criteria and making recommendations based on evidence derived from the combined group.

The Conversation authors employ a manipulative puffer fish strategy [1 and 2]. They take what is presumably an infrequent condition and attach horror to it. But they then wildly inflate the presumed prevalence by switching to a definition that arises in a very different context:

Any act or series of acts of commission or omission by a parent or other caregiver that results in harm, potential for harm, or threat of harm to a child.

So we are now talking about “any act or series of acts” that results in “harm, potential for harm, or threat of harm”? The authors then assert that yes, whatever they are talking about is indeed that common. But the clicklink supporting this claim takes the reader behind a paywall where a consumer can’t venture without a university library account.

Most readers are left with the authors’ assertion as an authority they can’t check. I have access to a med school library, and I checked. The link is to a secondary source. It is not a systematic review of the full range of available evidence. Instead, it is a selective search for evidence favoring particular speculations. Disconfirming evidence is mostly ignored. Yet this article actually contradicts other assertions of The Conversation authors. For instance, the paywalled article says that there is actually little evidence that cognitive behavior therapy is effective for people whose only reason for needing therapy is that they reported abuse in early childhood.

Even if you can’t check The Conversation authors’ claims, know that adults’ retrospective reports of childhood adversity are not particularly reliable or valid, especially in studies relying on checklist responses to broad categories, as this research does.

When we are dealing with claims that depend on adult retrospective reports of childhood adversity, we are dealing with a literature with serious deficiencies. This literature grossly overinterprets common endorsement of particular childhood experiences as strong evidence of exposure to horrific conditions. This literature has a strong confirmation bias. Positive findings are highlighted; negative findings do not get cited much. Serious limitations in methodology and inconsistencies in findings are generally ignored.

[This condemnation is worthy of a blog post or two in itself. But further ahead I will provide some documentation.]

The Conversation authors explain the discrepancy between estimates based on administrative data (one in eight children suffering abuse or neglect before age 18) and the much higher estimates from retrospective adult reports by claiming that much abuse goes unreported.

The discrepancy may be because so many cases of childhood abuse, particularly cases of emotional or psychological abuse, are unreported. This specific type of abuse may occur within a family over the course of years without recognition or detection.

This could certainly be true, but let’s see the evidence. A lack of reporting could also indicate that many experiences never reached a threshold that would prompt reporting. I’m willing to be convinced otherwise, but let’s see the evidence.

The link between emotional abuse and migraines

The Conversation authors provide links only to their own research for their claim:

While all forms of childhood maltreatment have been shown to be linked to migraines, the strongest and most significant link is with emotional abuse. Two studies using nationally representative samples of older Americans (the mean ages were 50 and 56 years old, respectively) have found a link.

The first link is to an article that is paywalled except for its abstract. The abstract shows that the study did not involve a nationally representative sample of adults. The study compared patients with tension headaches to patients with migraines, without a no-headache control group. There is thus no opportunity to examine whether persons with migraines recall more emotional abuse than persons who do not suffer headaches. And any significant associations in a huge sample disappeared after controlling for self-reported depression and anxiety.

My interpretation: there is nothing robust here. Results could be due to crude measurement, or to confounding of retrospective self-reports by current anxious or depressive symptoms. We can’t say much without a no-headache control group.

The second of the authors’ studies is also paywalled, but we can see from the abstract:

We used data from the Adverse Childhood Experiences (ACE) study, which included 17,337 adult members of the Kaiser Health Plan in San Diego, CA who were undergoing a comprehensive preventive medical evaluation. The study assessed 8 ACEs including abuse (emotional, physical, sexual), witnessing domestic violence, growing up with mentally ill, substance abusing, or criminal household members, and parental separation or divorce. Our measure of headaches came from the medical review of systems using the question: “Are you troubled by frequent headaches?” We used the number of ACEs (ACE score) as a measure of cumulative childhood stress and hypothesized a “dose–response” relationship of the ACE score to the prevalence and risk of frequent headaches.

Results — Each of the ACEs was associated with an increased prevalence and risk of frequent headaches. As the ACE score increased the prevalence and risk of frequent headaches increased in a “dose–response” fashion. The risk of frequent headaches increased more than 2-fold (odds ratio 2.1, 95% confidence interval 1.8-2.4) in persons with an ACE score ≥5, compared to persons with and ACE score of 0. The dose–response relationship of the ACE score to frequent headaches was seen for both men and women.

The Conversation authors misrepresent this study. It is about self-reported frequent headaches, not the subgroup of patients reporting migraines. Yet in the first of their own studies just cited, the authors contrast tension headaches with migraine headaches, with no controls.

So the data did not allow examination of the association between adult retrospective reports of childhood emotional abuse and migraines. There is no mention of self-reported depression and anxiety, which wiped out any relationship between childhood adversity and headaches in the first study; I would expect a survey including the ACEs to collect such self-reports. And the ACE score equates parental divorce or separation (two versions of the same common situation, likely to occur together and so be counted twice) with sexual abuse in calculating an overall score.

The authors make a big deal of the “dose-response” they found. But this dose-response could just represent uncontrolled confounding: the more ACEs, the greater the likelihood that respondents faced other social, personal, economic, and neighborhood deprivations. The higher the ACE score, the greater the likelihood that other background characteristics are coming into play.
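Here is a minimal sketch, with made-up numbers of my own and no connection to the actual ACE data, of how a single unmeasured background factor can manufacture a smooth “dose-response” curve:

```python
# A toy demonstration (hypothetical numbers, not the ACE data) that a
# "dose-response" pattern can arise entirely from confounding: deprivation
# raises both the ACE count and headache risk, while ACEs themselves have
# no direct effect at all.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
deprivation = rng.normal(size=n)                      # unmeasured confounder
ace_score = rng.poisson(np.exp(0.5 * deprivation))    # more deprivation -> more ACEs
p_headache = 1 / (1 + np.exp(2 - 0.8 * deprivation))  # deprivation -> headaches
headache = rng.random(n) < p_headache

for k in range(6):
    mask = ace_score == k
    if mask.sum() > 100:
        print(f"ACE score {k}: headache prevalence {headache[mask].mean():.1%}")
# Prevalence climbs steadily with the ACE score despite zero causal effect.
```

Run this and the prevalence rises monotonically across ACE scores, a textbook “dose-response,” even though the ACE count causes nothing.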

The only other evidence the authors cite is yet another one of their papers, available only as a conference abstract. But the abstract states:

Results: About 14.2% (n = 2,061) of the sample reported a migraine diagnosis. Childhood abuse was recalled by 60.5% (n =1,246) of the migraine sample and 49% (n = 6,088) of the non-migraine sample. Childhood abuse increased the chances of a migraine diagnosis by 55% (OR: 1.55; 95% CI 1.35 – 1.77). Of the three types of abuse, emotional abuse had a stronger effect on migraine (OR: 1.52; 95% CI 1.34 – 1.73) when compared to physical and sexual abuse. When controlled for depression and anxiety, the effect of childhood abuse on migraine (OR: 1.32; 95% CI 1.15 – 1.51) attenuated but remained significant. Similarly, the effect of emotional abuse on migraine decreased but remained significant (OR: 1.33; 95% CI 1.16 – 1.52), when controlled for depression and anxiety.

The rates of childhood abuse seem curiously high for both the migraine and non-migraine samples. If you dig a bit on the web for details of the National Longitudinal Study of Adolescent Health, you can find how crude the measurement is. The broad question assessing emotional abuse covers the full range of normal to abnormal situations without distinguishing among them.

How often did a parent or other adult caregiver say things that really hurt your feelings or made you feel like you were not wanted or loved? How old were you the first time this happened? (Emotional abuse).

An odds ratio of 1.33 is not going to attract much attention from an epidemiologist, particularly when it is obtained from such messy data.
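To put that number in perspective, some back-of-envelope arithmetic, using an assumed baseline close to the 14.2% migraine rate reported in the abstract rather than figures from the paper itself:

```python
# Back-of-envelope arithmetic (assumed baseline, not taken from the paper):
# what an odds ratio of 1.33 means in absolute terms.
baseline_prevalence = 0.14                     # assumed, near the reported 14.2%
baseline_odds = baseline_prevalence / (1 - baseline_prevalence)
exposed_odds = 1.33 * baseline_odds            # apply the adjusted odds ratio
exposed_prevalence = exposed_odds / (1 + exposed_odds)
print(f"{baseline_prevalence:.1%} -> {exposed_prevalence:.1%}")  # 14.0% -> 17.8%
```

A shift of a few percentage points, of a size easily produced by residual confounding in retrospective checklist data.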

I conclude that the authors have made only a weak case for their statement: “While all forms of childhood maltreatment have been shown to be linked to migraines, the strongest and most significant link is with emotional abuse.”

Oddly, if we jump ahead to the closing section of The Conversation article, the authors concede:

Childhood maltreatment probably contributes to only a small portion of the number of people with migraine.

But, as we will see, they make recommendations that assume a strong link has been established.

Why would emotional abuse in childhood lead to migraines in adulthood?

This section throws out a number of trending buzz terms, stringing them together in a way meant to impress and intimidate consumers rather than allow them an independent evaluation of what is being said.


The section also sits below a stock blue picture of the brain. In web searches, the picture turns up in social media posts where the brain is superficially brought into discussions to which neuroscience is not relevant.

An Australian neuroscientist commented on Facebook:

[Screenshot: Deborah’s Facebook comment]

The section starts out:

The fact that the risk goes up in response to increased exposure is what indicates that abuse may cause biological changes that can lead to migraine later in life. While the exact mechanism between migraine and childhood maltreatment is not yet established, research has deepened our understanding of what might be going on in the body and brain.

We could get lost in a quagmire trying to figure out the evidence for the loose associations packed into a five-paragraph section. Instead, I’ll make some observations that interested readers can follow up.

The authors acknowledge that no mechanism has been established linking migraines and child maltreatment. The link for this statement takes the reader to the authors’ own paywalled article, which is explicitly labeled an “Opinion Statement.”

The authors ignore a huge literature that acknowledges great heterogeneity among sufferers of migraines, but points to some rather strong evidence for treatments based on particular mechanisms identified among carefully selected patients. For instance, a paper published in The New England Journal of Medicine with well over 1500 citations:

Goadsby PJ, Lipton RB, Ferrari MD. Migraine—current understanding and treatment. New England Journal of Medicine. 2002 Jan 24;346(4):257-70.

Speculations concerning the connections between childhood adversity, migraines and the HPA axis are loose. The Conversation authors treat these connections as obvious, but they need to be better documented with evidence.

For instance, if we try to link “childhood adversity” to the HPA axis, we need to consider the lack of specificity of “childhood adversity” as defined by retrospective endorsement of Adverse Childhood Experiences (ACEs). Whether we rely on individual checklist items or on cumulative scores based on the number of endorsements, we can’t be sure that we are dealing with actual rather than assumed exposure to traumatic events, or that there will be any consistent correlates in current measures derived from the HPA axis.

Any non-biological factor defined so vaguely is not going to be a candidate for mapping into causal processes or biological measurements.

An excellent recent Mind the Brain article by my colleague blogger Shaili Jain interviews Dr. Rachel Yehuda, who had a key role in researching the HPA axis in stress. Dr. Yehuda says endocrinologists would cringe at the kind of misrepresentations being made in The Conversation article.

A recent systematic review concludes that the evidence for specific links between child maltreatment and inflammatory markers is limited and of poor quality.

Coelho R, Viola TW, Walss‐Bass C, Brietzke E, Grassi‐Oliveira R. Childhood maltreatment and inflammatory markers: a systematic review. Acta Psychiatrica Scandinavica. 2014 Mar 1;129(3):180-92.

The Conversation article glosses over gross inconsistencies in the evidence that biological correlates represent biomarkers. There are as yet no biomarkers for migraines, in the sense of a biological measurement that reliably distinguishes persons with migraines from other patient populations with whom they may be confused. See an excellent, funny blog post by Hilda Bastian.

Notice the rhetorical trick in The Conversation authors’ assertion that

Migraine is considered to be a hereditary condition. But, except in a small minority of cases, the genes responsible have not been identified.

Genetic denialists like Oliver James or Richard Bentall commonly phrase questions in this manner, as a matter of hereditary versus non-hereditary. But complex traits like height, intelligence, or migraines involve combinations of variants in a number of genes, not a single gene or even a few genes. For an example of the kind of insights that sophisticated genetic studies of migraines are yielding, see the citation below, along with the toy sketch that follows it:

Yang Y, Ligthart L, Terwindt GM, Boomsma DI, Rodriguez-Acevedo AJ, Nyholt DR. Genetic epidemiology of migraine and depression. Cephalalgia. 2016 Mar 9:0333102416638520.
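To illustrate why “the genes responsible have not been identified” is exactly what a polygenic trait predicts, here is a toy liability model; the numbers are invented for illustration and are not drawn from any migraine study:

```python
# A toy polygenic liability model (illustrative numbers only): many variants
# of tiny effect sum to a substantially heritable trait, yet each variant on
# its own explains a vanishingly small share of the variance.
import numpy as np

rng = np.random.default_rng(7)
n_people, n_variants = 10_000, 500
genotypes = rng.binomial(2, 0.3, size=(n_people, n_variants)).astype(float)
effects = rng.normal(0.0, 0.05, size=n_variants)   # tiny per-variant effects
genetic = genotypes @ effects
liability = genetic + rng.normal(size=n_people)    # add environmental noise

print(f"All variants together: {np.var(genetic) / np.var(liability):.0%} of variance")
single = np.var(genotypes[:, 0] * effects[0]) / np.var(liability)
print(f"A single typical variant: {single:.3%} of variance")
```

Together the variants account for a sizable share of the variance, while any one variant accounts for a fraction of a percent, which is why single-gene hunts for such traits come up empty.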

The Conversation article ends this section with some signature nonsense speculation about epigenetics:

However, stress early in life induces alterations in gene expression without altering the DNA sequence. These are called epigenetic changes, and they are long-lasting and may even be passed on to offspring.

Interested readers can find these claims demolished in Epigenetics Ain’t Magic by PZ Myers, a biologist who attempts to rescue an extremely important developmental concept from its misuse.

Or Carl Zimmer’s Growing Pains for Field of Epigenetics as Some Call for Overhaul.

What does this mean for doctors treating migraine patients?

The Conversation authors startle readers with an acknowledgment that contradicts what they have been saying earlier in their article:

Childhood maltreatment probably contributes to only a small portion of the number of people with migraine.

It is therefore puzzling when they next say:

But because research indicates that there is a strong link between the two, clinicians may want to bear that in mind when evaluating patients.

Cognitive behavior therapy is misrepresented as an established, effective treatment for migraines. A recent systematic review and meta-analysis had to combine migraines with other chronic headaches in order to get ten studies to consider.

The conclusion of this meta-analysis:

Methodology inadequacies in the evidence base make it difficult to draw any meaningful conclusions or to make any recommendations.

The Conversation article notes that the FDA has approved anti-epileptic drugs such as valproate and topiramate for treatment of migraines. However, the article’s claim that the efficacy of these drugs is due to their effects on epigenetics is quite inconsistent with what is said in the larger literature.

Clinicians specializing in treating fibromyalgia or irritable bowel syndrome would be troubled by the authors’ lumping of these conditions with migraines, and by the suggestion that a psychiatric consultation is the most appropriate referral for patients who are having difficulty achieving satisfactory management.

See for instance the links contained in my blog post, No, irritable bowel syndrome is not all in your head.

The Conversation article closes with:

Within a migraine clinic population, clinicians should pay special attention to those who have been subjected to maltreatment in childhood, as they are at increased risk of being victims of domestic abuse and intimate partner violence as adults.

That’s why clinicians should screen migraine patients, and particularly women, for current abuse.

It’s difficult to see how this recommendation is relevant to what has preceded it. Routine screening is not evidence-based.

The authors should know that the World Health Organization formerly recommended screening women in primary care for intimate partner abuse, but withdrew the recommendation because of a lack of evidence that screening improved outcomes for women facing abuse, and a lack of evidence that no harm was being done.

I am sharing this blog post with the authors of The Conversation article. I am requesting a correction from The Conversation. Let’s see what they have to say.

Meanwhile, patients seeking health information are advised to avoid The Conversation.

Is risk of Alzheimer’s Disease reduced by taking a more positive attitude toward aging?

Unwarranted claims that “modifiable” negative beliefs cause Alzheimer’s disease lead to blaming persons who develop Alzheimer’s disease for not having been more positive.

Lesson: A source’s impressive credentials are no substitute for independent critical appraisal of what sounds like junk science and is.

More lessons on how to protect yourself from dodgy claims in press releases of prestigious universities promoting their research.

If you judge the credibility of health-related information based on the credentials of the source, this article is a clear winner:

Levy BR, Ferrucci L, Zonderman AB, Slade MD, Troncoso J, Resnick SM. A Culture–Brain Link: Negative Age Stereotypes Predict Alzheimer’s Disease Biomarkers. Psychology and Aging. Dec 7 , 2015, No Pagination Specified. http://dx.doi.org/10.1037/pag0000062


As noted in the press release from Yale University, two of the authors are from Yale School of Medicine, another is a neurologist at Johns Hopkins School of Medicine, and the remaining three authors are from the US National Institute on Aging (NIA), including NIA’s Scientific Director.

The press release Negative beliefs about aging predict Alzheimer’s disease in Yale-led study declared:

“Newly published research led by the Yale School of Public Health demonstrates that individuals who hold negative beliefs about aging are more likely to have brain changes associated with Alzheimer’s disease.

“The study suggests that combatting negative beliefs about aging, such as elderly people are decrepit, could potentially offer a way to reduce the rapidly rising rate of Alzheimer’s disease, a devastating neurodegenerative disorder that causes dementia in more than 5 million Americans.

The press release posited a novel mechanism:

“We believe it is the stress generated by the negative beliefs about aging that individuals sometimes internalize from society that can result in pathological brain changes,” said Levy. “Although the findings are concerning, it is encouraging to realize that these negative beliefs about aging can be mitigated and positive beliefs about aging can be reinforced, so that the adverse impact is not inevitable.”

A Google search reveals over 40 stories about the study in the media. Provocative titles of the media coverage suggest a children’s game of telephone or Chinese whispers in which distortions accumulate with each retelling.

Negative beliefs about aging tied to Alzheimer’s (Waltonian)

Distain for the elderly could increase your risk of Alzheimer’s (FinancialSpots)

Lack of respect for elderly may be fueling Alzheimer’s epidemic (Telegraph)

Negative thoughts speed up onset of Alzheimer’s disease (Tech Times)

Karma bites back: Hating on the elderly may put you at risk of Alzheimer’s (LA Times)

How you feel about your grandfather may affect your brain health later in life (Men’s Health News)

Young people pessimistic about aging more likely to develop Alzheimer’s later on (Health.com)

Looking forward to old age can save you from Alzheimer’s (Canonplace News)

If you don’t like old people, you are at higher risk of Alzheimer’s, study says (RedOrbit)

If you think elderly people are icky, you’re more likely to get Alzheimer’s (HealthLine)

In defense of the authors of this article, as well as the journalists, it is likely that editors added the provocative titles without obtaining the approval of the authors or even of the journalists writing the articles. So let’s suspend judgment and write off the sometimes absurd titles to editors’ need to establish that they are offering distinctive coverage, when they are not necessarily doing so. That’s a lesson for the future: if we’re going to criticize media coverage, better to focus on the content of the coverage, not the titles.

However, a number of these stories have direct quotes from the study’s first author. Unless the media coverage is misattributing direct quotes to her, she must have been making herself available to the media.

Was the article such an important breakthrough offering new ways in which consumers could take control of their risk of Alzheimer’s by changing beliefs about aging?

No, not at all. In the following analysis, I’ll show that judging the credibility of claims based on the credentials of the sources can be seriously misleading.

What is troubling about this article and its well-organized publicity effort is that information is being disseminated that is misleading and potentially harmful, with the prestige of Yale and NIA attached.

Before we go any further, you can take your own look at a copy of the article in the American Psychological Association journal Psychology and Aging here, the Yale University press release here, and a fascinating post-publication peer review at PubPeer that I initiated as peer 1.

Ask yourself: if you encountered coverage of this article in the media, would you have been skeptical? If so, what were the clues?

The article is yet another example of trusted authorities exploiting entrenched cultural beliefs that the mind-body connection can be harnessed in some mysterious way to combat or prevent physical illness. As Anne Harrington details in her wonderful book, The Cure Within, this psychosomatic hypothesis has a long and checkered history, and gets continually reinvented and misapplied.

We see an example of this in claims that attitude can conquer cancer. What’s the harm of such illusions? If people can be led to believe they have such control, they are set up for blame, from themselves and from those around them, when they fail to fend off and control the outcome of disease by sheer mental power.

The myth of “fighting spirit” overcoming cancer has survived despite the accumulation of excellent contradictory evidence. Cancer patients are vulnerable to blaming themselves, and to being blamed by loved ones, when they do not “win” the fight against cancer. They are also subject to unfair exhortations to fight harder as their health situation deteriorates.

[Composite image from the satirical Onion]

What I saw when I skimmed the press release and the article

  • The first alarm went off when I saw that causal claims were being made from a modest-sized correlational study. This should set off anyone’s alarms.
  • The press release and the discussion section of the article refer to this as a “first ever” study. One does not seek nor expect to find robust “first ever” discoveries in such a small data set.
  • The authors do not provide evidence that their key measure of “negative stereotypes” is a valid measure of either stereotyping or likelihood of experiencing stress. They don’t even show it is related to concurrent reports of stress.
  • Like many measures with negatively toned items, this one is affected by what Paul Meehl calls the crud factor. Whatever is being measured in this study cannot be distinguished from a full range of confounds that were not even assessed in this study.
  • The mechanism by which effects of this self-report measure somehow get manifested in changes in the brain lacks evidence and is highly dubious.
  • There was no presentation of actual data or basic statistics. Instead, there were only multivariate statistics that require at least some access to basic statistics for independent evaluation.
  • The authors resorted to cheap statistical strategies to fool readers with their confirmation bias: reliance on one tailed rather than two-tailed tests of significance; use of a discredited backwards elimination method for choosing control variables; and exploring too many control/covariate variables, given their modest sample size.
  • The analyses that are reported do not accurately depict what is in the data set, nor would they generalize to other data sets.

The article

The authors develop their case that stress is a significant cause of Alzheimer’s disease with reference to some largely irrelevant studies by others, but depend on a preponderance of studies that they themselves have done with the same dubious small samples and dubious statistical techniques. Whether you do a casual search with Google scholar or a more systematic review of the literature, you won’t find stress processes of the kind the authors invoke among the usual explanations of the development of the disease.

Basically, the authors are arguing that if you hold views of aging like “Old people are absent-minded” or “Old people cannot concentrate well,” you will experience more stress as you age, and this will accelerate development of Alzheimer’s disease. They then go on to argue that because these attitudes are modifiable, you can take control of your risk for Alzheimer’s by adopting a more positive view of aging and aging people.

The authors used their measure of negative aging stereotypes in other studies, but do not provide the usual evidence of convergent and discriminant validity needed to establish that the measure assesses what is intended. Basically, we should expect authors to show that a measure they have developed is related, in ways one would expect, to existing measures of similar constructs (convergent validity), and not related to existing measures of distinct constructs from which it is supposed to differ (discriminant validity).

Psychology has a long history of researchers claiming that their “new” self-report measures containing negatively toned items assess distinct concepts, despite high correlations with other measures of negative emotion as well as lots of confounds. I poked fun at this unproductive tradition in a presentation, Negative emotions and health: why do we keep stalking bears, when we only find scat in the woods?
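To make that concrete, here is a minimal sketch in Python, with entirely made-up data and measure names, of the discriminant-validity check the authors never report. If a “new” negatively toned scale correlates this strongly with plain neuroticism, it has little claim to measuring anything distinct:

```python
# Hypothetical illustration of a failed discriminant-validity check:
# a "new" negative-stereotype scale that mostly echoes neuroticism.
import numpy as np

rng = np.random.default_rng(42)
n = 52                                   # sample size of the first study below
neuroticism = rng.normal(size=n)         # stand-in for an established measure
# simulate a new scale that is largely the old construct plus noise
new_scale = 0.8 * neuroticism + rng.normal(scale=0.6, size=n)

r = np.corrcoef(new_scale, neuroticism)[0, 1]
print(f"r with neuroticism = {r:.2f}")   # around 0.8: the scale is not distinct
```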

The article reported two studies. The first tested whether participants holding more negative age stereotypes would have significantly greater loss of hippocampal volume over time. The study involved 52 individuals selected from a larger cohort enrolled in the brain-neuroimaging program of the Baltimore Longitudinal Study of Aging.

Readers are given none of the basic statistics that would be needed to interpret the complex multivariate analyses. Ideally, we would be given an opportunity to see how the independent variable, negative age stereotypes, is related to other data available on the subjects, and so we could get some sense if we are starting with some basic, meaningful associations.

Instead the authors present the association between negative age stereotyping and hippocampal volume only in the presence of multiple control variables:

Covariates consisted of demographics (i.e., age, sex, and education) and health at time of baseline-age-stereotype assessment, (number of chronic conditions on the basis of medical records; well-being as measured by a subset of the Chicago Attitude Inventory); self-rated health, neuroticism, and cognitive performance, measured by the Benton Visual Retention Test (BVRT; Benton, 1974).

Readers cannot tell why these variables and not others were chosen. Adding or dropping a few variables could produce radically different results. And there are just too many variables being considered: with only 52 research participants, spurious findings that do not generalize to other samples are highly likely.

I was astonished when the authors announced that they were relying on one-tailed statistical tests. This is widely condemned as unnecessary and misleading.

Basically, every time the authors report a significance level in this article, you need to double the number to get what would be obtained with the conventional two-tailed test. So, if they proudly declare that results are significant at p = .046, then the results are actually (non)significant, p = .092. I know, we should not make such a fuss about significance levels, but journals do. We’re being set up to be persuaded the results are significant, when they are not by conventional standards.
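Readers can verify the doubling for themselves. Here is a quick check in Python, using the t statistic of 1.71 on 59 degrees of freedom that the authors report for their second study (quoted below):

```python
# One-tailed vs. two-tailed p-values for the same t statistic.
from scipy import stats

t, df = 1.71, 59                        # values reported in the article
p_one = stats.t.sf(t, df)               # probability in the upper tail only
p_two = 2 * stats.t.sf(abs(t), df)      # conventional two-tailed test
print(f"one-tailed p = {p_one:.3f}")    # ~0.046, looks "significant"
print(f"two-tailed p = {p_two:.3f}")    # ~0.092, not significant
```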

So the authors’ sins against proper statistical technique and transparent reporting accumulate: no presentation of basic associations; reporting one-tailed tests; use of multivariate statistics inappropriate for a sample that is so small. Now let’s add another one: in their multivariate regressions, the authors relied on a potentially deceptive backwards elimination:

Backward elimination, which involves starting with all candidate variables, testing the deletion of each variable using a chosen model comparison criterion, deleting the variable (if any) that improves the model the most by being deleted, and repeating this process until no further improvement is possible.

The authors assembled their candidate control/covariate variables and used a procedure that checks them statistically and drops some from consideration, based on whether they fail to add to the significance of the overall equation. This procedure is condemned because the variables that are retained in the equation capitalize on chance. Variables that could be theoretically relevant are eliminated simply because they fail to add anything statistically in the context of the other variables being considered; in the context of a different set of variables, these same discarded variables might have been retained.

The final regression equation had fewer control/covariates than when the authors started. Statistical significance is then calculated on the basis of the small number of variables remaining, not the larger number that were picked over, so results will artificially appear stronger. Again, potentially quite misleading to the unwary reader.
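To see how easily the procedure manufactures apparent signal, here is a small simulation of my own (not the authors’ data or code). The outcome is pure noise, yet backward elimination will often end up “retaining” a covariate or two whose p-values look respectable, because the discarded candidates are never accounted for:

```python
# Backward elimination run on pure noise: any covariates that survive
# do so by capitalizing on chance.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n, k = 52, 8                                # 52 participants, 8 noise covariates
X = rng.normal(size=(n, k))
y = rng.normal(size=n)                      # outcome unrelated to all predictors

retained = list(range(k))
while retained:
    model = sm.OLS(y, sm.add_constant(X[:, retained])).fit()
    pvals = model.pvalues[1:]               # skip the intercept
    worst = int(np.argmax(pvals))
    if pvals[worst] < 0.10:                 # a lax retention criterion
        break                               # everything left looks "significant"
    retained.pop(worst)

print("noise covariates retained:", retained)
if retained:
    print("their p-values:", np.round(model.pvalues[1:], 3))
```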

The authors nonetheless concluded:

As predicted, participants holding more-negative age stereotypes, compared to those holding more-positive age stereotypes, had a significantly steeper decline in hippocampal volume

The second study:

examined whether participants holding more negative age stereotypes would have significantly greater accumulation of amyloid plaques and neurofibrillary tangles.

The outcome was a composite-plaques-and-tangles score and the predictor was the same negative age stereotypes measure from the first study. These measurements were obtained from 74 research participants upon death and autopsy. The same covariates were used in stepwise regression with backward elimination. Once again, the statistical test was one-tailed.

Results were:

As predicted, participants holding more-negative age stereotypes, compared to those holding more-positive age stereotypes, had significantly higher composite-plaques-and-tangles scores, t(1,59) = 1.71 p = .046, d = 0.45, adjusting for age, sex, education, self-rated health, well-being, and number of chronic conditions.

Aha! Now we see why the authors committed themselves to a one-tailed test. With a conventional two-tailed test, these results would not be significant. Given prevailing confirmation bias, aversion to null findings, and obsession with significance levels, this article probably would not have been published without the one-tailed test.

The authors’ stirring overall conclusion from the two studies:

By expanding the boundaries of known environmental influences on amyloid plaques, neurofibrillary tangles, and hippocampal volume, our results suggest a new pathway to identifying mechanisms and potential interventions related to Alzheimer’s disease

PubPeer discussion of this paper: https://pubpeer.com/publications/16E68DE9879757585EDD8719338DCD

Comments accumulated for a couple of days on PubPeer after I posted some concerns about the first study. All of the comments were quite smart; some directly validated points that I had been thinking about, while others took the discussion in new directions, either statistically or because the commentators knew more about neuroscience.

Using a mechanism available at PubPeer, I sent emails to the first author of the paper, the statistician, and one of the NIA personnel inviting them to make comments also. None have responded so far.

Tom Johnstone, a commentator who exercised the option of identifying himself, noted the reliance on inferential statistics in the absence of reporting basic relationships. He also noted that the criterion used to drop covariates was lax. Apparently familiar with neuroscience, he expressed doubts that the results had any clinical significance or relevance to the functioning of the research participants.

Another commentator complained of the small sample size, the use of one-tailed statistical tests without justification, the “convoluted list of covariates,” and the “taboo” strategy for selecting covariates to be retained in the regression equation. This commentator also noted that the authors had examined the effect of outliers, conducting analyses both with and without the inclusion of the most extreme case. While exclusion didn’t affect the overall results, it dramatically changed the significance level, highlighting the susceptibility of such a small sample to chance variation or sampling error.

Who gets the blame for misleading claims in this article?

There’s a lot of blame to go around. By exaggerating the size and significance of any effects, the first author increases the chance of publication and also of further funding to pursue what is seen as a “tantalizing” association. But it’s the job of editors and peer reviewers to protect the readership from such exaggerations, and maybe to protect the author from herself. They failed, maybe because exaggerated findings are consistent with the journal’s agenda of increasing citations by publishing newsworthy rather than trustworthy findings. The study statistician, Martin Slade, obviously knew that misleading, less than optimal statistics were used; why didn’t he object? Finally, I think the NIA staff, particularly Luigi Ferrucci, the Scientific Director of NIA, should be singled out for the irresponsibility of attaching their names to such misleading claims. Why did they do so? Did they not read the manuscript? I will regularly present instances of NIH staff endorsing dubious claims, such as here. The mind-over-disease, psychosomatic hypothesis gets a lot of support not warranted by the evidence. Perhaps NIH officials in general see this as a way of attracting research monies from Congress. Regardless, I think NIH officials have the responsibility to see that consumers are not misled by junk science.

This article at least provided the opportunity for an exercise that should raise skepticism and convince consumers at all levels – other researchers, clinicians, policymakers, those who suffer from Alzheimer’s disease, and those who care for them – that we just cannot sit back and let trusted sources do our thinking for us.

 

Sex and the single amygdala: A tale almost saved by a peek at the data

So sexy! Was bringing up ‘risky sex’ merely a strategy to publish questionable and uninformative science?

My continuing question: Can skeptics who are not specialists, but who are science-minded and have some basic skills, learn to quickly screen and detect questionable science in the journals and media coverage?

“You don’t need a weatherman to know which way the wind blows.” – Bob Dylan

I hope so. One goal of my blogging is to arouse readers’ skepticism and provide them some tools so that they can decide for themselves what to believe, what to reject, and what needs a closer look or a check against trusted sources.

Skepticism is always warranted in science, but it is particularly handy when confronting the superficial application of neuroscience to every aspect of human behavior. Neuroscience is increasingly being brought into conversations to sell ideas and products when it is neither necessary nor relevant. Many claims about how the brain is involved are false or exaggerated not only in the media, but in the peer-reviewed journals themselves.

A while ago I showed how a neuroscientist and a workshop guru teamed up to try to persuade clinicians with functional magnetic resonance imaging (fMRI) data that a couples therapy was more sciencey than the rest. Although I took a look at some complicated neuroscience, a lot of my reasoning [1, 2, 3] merely involved applying basic knowledge of statistics and experimental design. I raised sufficient skepticism to dismiss the neuroscientist and psychotherapy guru’s claims, even putting aside the excellent specialist insights provided by Neurocritic and his friend Magneto.

In this issue of Mind the Brain, I’m pursuing another tip from Neurocritic about some faulty neuroscience in need of debunking.

The paper

Victor, E. C., Sansosti, A. A., Bowman, H. C., & Hariri, A. R. (2015). Differential Patterns of Amygdala and Ventral Striatum Activation Predict Gender-Specific Changes in Sexual Risk Behavior. The Journal of Neuroscience, 35(23), 8896-8900.

Unfortunately, the paper is behind a pay wall. If you can’t get it through a university library portal, you can send a request for a PDF to the corresponding author, elizabeth.victor@duke.edu.

The abstract

Although the initiation of sexual behavior is common among adolescents and young adults, some individuals express this behavior in a manner that significantly increases their risk for negative outcomes including sexually transmitted infections. Based on accumulating evidence, we have hypothesized that increased sexual risk behavior reflects, in part, an imbalance between neural circuits mediating approach and avoidance in particular as manifest by relatively increased ventral striatum (VS) activity and relatively decreased amygdala activity. Here, we test our hypothesis using data from seventy 18- to 22-year-old university students participating in the Duke Neurogenetics Study. We found a significant three-way interaction between amygdala activation, VS activation, and gender predicting changes in the number of sexual partners over time. Although relatively increased VS activation predicted greater increases in sexual partners for both men and women, the effect in men was contingent on the presence of relatively decreased amygdala activation and the effect in women was contingent on the presence of relatively increased amygdala activation. These findings suggest unique gender differences in how complex interactions between neural circuit function contributing to approach and avoidance may be expressed as sexual risk behavior in young adults. As such, our findings have the potential to inform the development of novel, gender-specific strategies that may be more effective at curtailing sexual risk behavior.

My thought processes

Hmm, “sexual risk behavior” – meaning number of partners? How many new partners during a follow-up period constitutes “risky,” and does it matter whether safe sex was practiced? Well, ignoring these issues and calling it “sexual risk behavior” allows the authors to claim relevance to hot topics like HIV prevention….

But let’s cut to the chase: I’m always skeptical about a storyline depending on a three-way statistical interaction. These effects are highly unreliable, particularly in a sample size of only N = 70. I’m suspicious when investigators stake their claims ahead of time on a three-way interaction rather than on something simpler. I will be looking for evidence that they started with this hypothesis in mind, rather than cooking it up after peeking at the data.
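Just how thin does N = 70 get when a three-way interaction carves it up? A back-of-the-envelope calculation, using the sample sizes reported in the paper (the methods section, discussed below, reports only 24 men):

```python
# A 2 x 2 x 2 interaction implicitly sorts participants into eight cells.
n_total, n_men = 70, 24       # totals reported in the paper
cells = 2 * 2 * 2

print(n_total / cells)        # 8.75 participants per cell on average
# gender is one of the three factors, so the 24 men fill only four cells:
print(n_men / (cells // 2))   # 6.0 men per cell on average
```

Six men per cell, on average, is a very slender basis for estimating a gender-specific interaction.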

Three-way interactions involve dividing a sample up into eight boxes, in this case 2 × 2 × 2. Such interactions can be mind-boggling to interpret, and this one is no exception:

Although relatively increased VS activation predicted greater increases in sexual partners for both men and women, the effect in men was contingent on the presence of relatively decreased amygdala activation and the effect in women was contingent on the presence of relatively increased amygdala activation.

And then the “simple” interpretation?

These findings suggest unique gender differences in how complex interactions between neural circuit function contributing to approach and avoidance may be expressed as sexual risk behavior in young adults.

And the public health implications?

As such, our findings have the potential to inform the development of novel, gender-specific strategies that may be more effective at curtailing sexual risk behavior.

Just how should these data inform public health strategies beyond what we knew before we stumbled upon this article? Really, should we stick people’s heads in a machine and gather fMRI data before offering them condoms? Should we encourage computer dating services to post, along with a recent headshot, recent fMRI images showing that prospective dates do not have their risky-behavior center in the amygdala activated? Or encourage young people to get their heads examined with an fMRI before deciding whether it’s wise to sleep with somebody new?

So it’s difficult to see the practical relevance of these findings, but let’s stick around and consider the paragraph that Neurocritic singled out.

The paragraph

The majority of the sample reported engaging in vaginal sex at least once in their lifetime (n = 42, 60%). The mean number of vaginal sexual partners at baseline was 1.28 (SD =0.68). The mean increase in vaginal sexual partners at the last follow-up was 0.71 (SD = 1.51). There were no significant differences between men and women in self-reported baseline or change in self-reported number of sexual partners (t=0.05, p=0.96; t=1.02, p= 0.31, respectively). Although there was not a significant association between age and self-reported number of partners at baseline (r = 0.17, p= 0.16), younger participants were more likely to report a greater increase in partners over time (r =0.24, p =0.04). Notably, distribution analyses revealed two individuals with outlying values (3 SD from M; both subjects reported an increase in 8 partners between baseline and follow up). Given the low rate of sexual risk behavior reported in the sample, these outliers were not excluded, as they likely best represent young adults engaging in sexual risk behavior.

What triggers skepticism?

This paragraph is quite revealing if we just ponder it a bit.

First, notice there is only a single significant correlation (p = .04), and it comes from a subsidiary analysis. Differences between men and women were examined, yielding no significant findings for either baseline or change in number of sexual partners over the length of the observation. However, setting that aside, the authors went on to explore whether younger participants reported a greater increase in partners over time and, bingo, there was their p = .04.

Whoa! Age was never mentioned in the abstract. We are now beyond the 2 x 2 x 2 interaction mentioned in the abstract and rooting through another dimension, younger versus older.

But, worse, getting that significance required retaining two participants with eight new sexual partners each during the follow-up period. The decision to retain these participants was made after the pattern of results was examined with and without the inclusion of these outliers. The authors say so, essentially admitting that they decided because it made a better story.

The only group means and standard deviations reported include these two participants. Even with them included, the average number of new sexual partners over the follow-up was less than one. We have no idea whether that one partner was risky or not. It’s a safer assumption that having eight new partners is risky, but even that we don’t know for sure.

Keep in mind for future reference: Investigators are supposed to make decisions about outliers without reference to the fate of the hypothesis being studied. And knowing nothing about this particular study, most authorities would say if two people out of 70 are way out there on a particular variable that otherwise has little variance, you should exclude them.

It is considered a Questionable Research Practice to make decisions about inclusion/exclusion based on what story the outcome of this decision allows the authors to tell. It is p-hacking, and significance chasing.
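To see why this matters, here is a toy simulation, with entirely hypothetical numbers that merely mimic the structure described above: a low-variance count outcome in n = 70, null by construction, can turn “significant” once two outliers with eight new partners land in convenient spots:

```python
# How two outliers can flip a null result toward "significance" in n = 70.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 70
predictor = rng.normal(size=n)                     # any stand-in predictor
partners = rng.poisson(0.7, size=n).astype(float)  # mostly 0s and 1s

print(stats.pearsonr(predictor, partners))         # null: p typically >> .05

# plant two outliers with 8 new partners at high predictor values
extreme = np.argsort(predictor)[-2:]
partners[extreme] = 8
print(stats.pearsonr(predictor, partners))         # often p < .05 now
```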

And note the distribution of numbers of vaginal sex partners. Twenty-eight participants had none at the end of the study. On average, participants accumulated less than one new partner during the follow-up, and even that mean was distorted by the two with eight partners each. Hmm, it is going to be hard to get multivariate statistics to work appropriately when we get to the fancy neuroscience data. We could go off on discussions of multivariate normal or Poisson distributions, or just think a bit.

We can do a little detective work and determine that one outlier was a male, another a female. (*1) Let’s go back to our eight little boxes of participants that are involved in the interpretation of the three-way interaction. It’s going to make a great difference exactly where the deviant male and female are dropped into one of the boxes or whether they are left out.

And think about sampling issues. What if, for reasons having nothing to do with the study, neither of these outliers had shown up? Or if only one of them had shown up, it would have skewed the results in a particular direction, depending on whether that participant was the male or the female.

Okay, if we were wasting our time continuing to read the article after finding what we did in the abstract, we are certainly wasting more of our time by continuing after reading this paragraph. But let’s keep poking around as an educational exercise.

The rest of the methods and results sections

We learn from the methods section that there was an ethnically diverse sample with a highly variable follow-up, from zero days to 3.9 years (M = 188.72 d, SD = 257.15; range = 0 d–3.19 years). And there were only 24 men in the paper’s sample of 70 participants.

We don’t know whether these two outliers had eight sexual partners within a week of the first assessment or were the ones captured by extending the study to almost 4 years. That matters somewhat, but we also have to worry whether this was an appropriate sample and length of follow-up for such a study – with so few participants in the first place, and even fewer who had sex by the end. The mean follow-up of about six months, with its huge standard deviation, suggests there is not a lot of evidence of risky behavior, at least in terms of casual vaginal sex.

This is all getting very funky.

So I wondered about the larger context of the study, with increasing doubts that the authors had gone to all this trouble just to test an a priori hypothesis about risky sex.

We are told that the larger context is the ongoing “Duke Neurogenetics Study (DNS), which assesses a wide range of behavioral and biological traits.” The extensive list of inclusions and exclusions suggests a much more ambitious study. If we had more time, we could go look up the Duke Neurogenetics Study and see if that’s the case. But I have a strong suspicion that the study was not organized around the specific research questions of this paper (*2). I really can’t tell without preregistration of this particular paper, but I certainly have questions about how much Hypothesizing After the Results are Known (HARKing) is going on here in the refining of hypotheses and measures, and in decisions about which data to report.

Further explorations of the results section

I remind readers that I know little about fMRI data. Put that aside, and we can still discover some interesting things reading through the brief results section.

Main effects of task

As expected, our fMRI paradigms elicited robust affect-related amygdala and reward-related VS activity across the entire parent sample of 917 participants (Fig. 1). In our substudy sample of 70 participants, there were no significant effects of gender (t(70) values < 0.88, p values >0.17) or age (r values < 0.22; p values > 0.07) on VS or amygdala activity in either hemisphere.

Hmm, let’s focus on the second sentence first. The authors tell us absolutely nothing is going on in terms of differences in amygdala and reward-related VS activity in relation to age and gender in the sample of 70 participants in the current study. In fact, we don’t even need to know what “amygdala and reward-related VS activity” is to wonder why the first sentence of this paragraph directs us to a graph not of the 70 participants, but of a larger sample of 917 participants. And when we go to Figure 1, we see some wild, wowie-zowie, hit-the-reader-between-the-eyes differences (in technical terms, intraocular trauma) for women. And claims of p < 0.000001, twice. But wait! One might think significance of that magnitude would have to come from the 917 participants, except that the labeling of the X-axis must come from the substudy of the 70 participants for whom data concerning number of sex partners were collected. Maybe the significance comes from the anchoring of one of the graph lines by the one way-out outlier.

Note that the outlier woman with eight partners anchors the blue line for High Left Amygdala. Without inclusion of that single woman, the nonsignificant trends between women with High Left Amygdala versus women with Low Left Amygdala would be reversed.

The authors make much of the differences between Figure 1, showing results for women, and Figure 2, showing results for men. The comparison seems dramatic except that, once again, the one outlier sends the red line for Low Left Amygdala off from the blue line for High Left Amygdala. Otherwise, there is no story to tell. Mind-boggling, but I think we can safely conclude that something is amiss in these Frankenstein graphs.

Okay, we should stop beating a corpse of an article. There are no vital signs left.

Alternatively, we could probe the section on Poisson regressions and minimally note some details. There is the flash of some strings of zeros in the p values, but it seems complicated, and then we are warned off with “no factors survive Bonferroni correction.” And then in the next paragraph, we get to exploring dubious interactions. And there is the final insult of the authors bringing in a two-way interaction trending toward significance among men, p = .051.
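For readers unfamiliar with the correction, the arithmetic is simple. The number of tests below is my guess, since the paper runs several regressions, but the principle holds for any m:

```python
# Bonferroni correction: with m tests, a result must clear alpha / m.
m = 8                  # hypothetical number of tests in the family
alpha = 0.05
print(alpha / m)       # 0.00625 - even a nominal p = .01 would not survive
```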

But we were never told how all this would lead, as we were promised at the end of the abstract, “to the development of novel, gender-specific strategies that may be more effective at curtailing sexual risk behavior.”

Rushing through the discussion section, we note the disclosure that

The nature of these unexpected gender differences is unclear and warrants further consideration.

So, the authors confess that they did not start with expectations of finding a gender difference. They had nothing to report from a subset of data from an ambitious project put together for other purposes, with an ill-suited follow-up for the research question (and even an ill-suited experimental task). They made a decision to include two outliers, salvaged some otherwise weak and inconsistent differences, and then constructed a story that depended on their inclusion. Bingo, they could play to confirmation bias and get published.

Readers might have been left with just their skepticism about the three-way interaction described in the abstract. However, the authors implicated themselves by disclosing in the article their examination of the distribution and their reasons for including the outliers. Then they further disclosed that they did not start with a hypothesis about gender differences.

Why didn’t the editor and reviewers at Journal of Neuroscience (impact factor 6.344) do their job and cry foul? Questionable research practices (QRPs) are brought to us courtesy of questionable publication practices (QPPs).

And then we end with the confident

These limitations notwithstanding, our current results suggest the importance of considering gender-specific patterns of interactions between functional neural circuits supporting approach and avoidance in the expression of sexual risk behavior in young adults.

Yet despite this vague claim, the authors still haven’t explained how this research could be translated to practice.

Takeaway points for the future

Without a tip from Neurocritic, I might not have zeroed in on the dubious complex statistical interaction on which the storyline in the abstract depended. I also benefited from the authors, for whatever reason, telling us that they had peeked at the data, and telling us further in the discussion that they had not anticipated the gender difference. With current standards for transparency and no preregistration of such studies, it would have been easy to miss what was done, because the authors did not need to alert us. Until more and better standards are enforced, we just need to be extra skeptical of claims about the application of neuroscience to everyday life.

Trust your skepticism.

Apply whatever you know about statistics and experimental methods. You probably know more than you think you do.

Beware of modest-sized neuroscience studies for which authors develop storylines from the patterning they can discover in their data, rather than from a priori hypotheses suggested by theory. If you keep looking around in the scientific literature and media coverage of it, I think you will find a lot of this QRP and QPP.

Don’t go into a default believe-it mode just because an article is peer-reviewed.

Notes

  1. If both the outliers were of the same gender, it would have been enough for that gender to have had significantly more sex partners than the other.
  2. Later, we are told in the Discussion section that the particular stimuli for which fMRI data were available were not chosen for relevance to the research question claimed for this paper.

We did not measure VS and amygdala activity in response to sexually provocative stimuli but rather to more general representations of reward and affective arousal. It is possible that variability in VS and amygdala activity to such explicit stimuli may have different or nonexistent gender-specific patterns that may or may not map onto sexual risk behaviors.

Special thanks to Neurocritic for suggesting this blog post and for feedback, as well as to Neuroskeptic, Jessie Sun, and Hayley Jach for helpful feedback. However, @CoyneoftheRealm bears sole responsibility for any excesses or errors in this post.