Saturday, July 9, 2016

The Evidence-Based Mind of Psychiatry on Display

The following piece was written by Robert Whitaker, author of several books including Mad in America and Anatomy of an Epidemic. The original article can be found here.

Earlier this year, Ronald Pies and Allen Frances wrote a series of blogs that collectively might be titled: “Why Robert Whitaker Is Wrong about Antipsychotics.” In regard to reviewing the “evidence” on that question, Pies did most of the heavy lifting, but he also told of drawing on the expertise of E. Fuller Torrey, Joseph Pierre and Bernard Carroll. Given the prominence of this group, it could be fairly said that Pies’ review reflects, to a large degree, the collective “thoughts” of American psychiatry.

And with that understanding in mind, therein lies an opportunity, one not to be missed.

Over the past 35 years, psychiatry—as an institution—has remade our society. This is the medical specialty that defines what is normal and not normal. This is the medical specialty that tells us when we should take medications that will affect how we respond to the world. And this is the profession that determines whether such medications are good for our children. Given that influence, we as a society naturally have reason to want to know how the leaders in the profession think, and thus how they come to their conclusions about the merits of their drugs. The blogs by Pies and Frances provide us with just that opportunity. We can watch their minds at work and ask ourselves, do we see on display the type of thinking—the openness of mind, the critical thinking, the curiosity, the humility of character, and the devotion to public wellbeing—that we want to see in a medical specialty that has such influence over our lives?

There are two aspects to their review of the “evidence” for antipsychotics. The first is their dismissal of the case against antipsychotics (e.g., that they worsen long-term outcomes in the aggregate), and the second is their making a case that the drugs provide a long-term benefit. In each instance, there are specific studies to be reviewed, and, as a result, a warning to readers is in order: this is going to take some time to do.

The Case Against Antipsychotics

As I first detailed in Anatomy of an Epidemic and have expanded on since (including in a January blog), I believe there is a history of science, stretching across 50 years and composed of many types of studies, that tells of how antipsychotics, in the aggregate, worsen long-term outcomes for schizophrenia patients and those with other psychotic disorders. The case against antipsychotics doesn’t consist of just a few studies, but rather arises from that history of research, and what I find so compelling is how the various kinds of evidence fit together, and how they do so over time. This is not the place to go over that history once again, but if readers are interested, they can see a summary in my January blog, and a review of many of the studies here.

It’s not important, for the purposes of this blog, to come to any pro-or-con conclusion about whether antipsychotics worsen long-term outcomes. What is important is to see how Pies and his colleagues responded to that argument and evaluated it.

Neither Pies nor Frances addressed the larger history I wrote about. Instead, they focused on the recent literature that is part of this “case against antipsychotics”—research by Martin Harrow and Lex Wunderink, and MRI studies that have found that antipsychotics shrink the brain. They also addressed the question of whether antipsychotics induce a dopamine supersensitivity that could increase one’s biological vulnerability to psychosis, which is something I have written about.

Harrow’s Longitudinal Study

In any area of medicine, it is essential to research the “natural” course of an illness. Prognosis is dependent upon that knowledge, and this knowledge also provides the framework for assessing whether a treatment is improving long-term outcomes. Such research is exhausting, difficult, and expensive, which is why the long-term study of schizophrenia and other psychotic disorders by Martin Harrow is so important. It provided information that had been missing from psychiatry’s “evidence base” for so long.

Harrow’s was a prospective study, which means he followed the 200 subjects from early on in their “illness,” mostly at the start of their having been diagnosed with schizophrenia or another psychotic disorder. Their median age was 22 years and nine months, and most were suffering either a first or second hospitalization. At the end of 15 years, he still had 145 of the 200 in his study, which—as any schizophrenia researcher can attest—is an extraordinary result on its own. Harrow had gone to extraordinary lengths to keep track of the subjects enrolled in his study, which made his findings all the more robust.

There is much to be learned from his published results, and if there is one response that I think our society would want to see from the psychiatric establishment, it is just that: a keen curiosity about the results, and a desire to use this information to improve treatment protocols. Harrow’s results belied the common beliefs that have driven societal thinking—and treatment protocols—for the longest while.

The first surprising finding was that, as a group, patients who stopped taking antipsychotic medications notably improved between the 2-year and 4.5-year follow-up. The conventional wisdom is that this group could have been expected to deteriorate and suffer from multiple relapses, but Harrow found the opposite to be true: on the whole, their psychotic and anxiety symptoms abated, and 40 percent of the group were in “recovery” by the end of 4.5 years. And Harrow defined recovery in a robust way: an absence of symptoms; working or in school more than 50% of the time; and a decent social life.

This is exciting news. This is the only place in the research literature where you can see this potential for healing play out over this longer period of time, and it tells of the possibility that a significant percentage of patients, even those diagnosed with schizophrenia, can get better and resume a meaningful life, and do so without suffering the many adverse effects of antipsychotics. This is a finding to be embraced, with the profession challenged, one might expect, to develop protocols to maximize this possibility.

The second surprising finding, however, was quite dispiriting, given the current standard of care. The medicated patients, as a group, worsened during this same time frame (2 to 4.5 years): the percentage of patients who were psychotic increased; the percentage who were highly anxious increased; and the recovery rate decreased, to the point it was only 1/8 the rate for the patients who had stopped taking antipsychotics.

The divergent results also led to an obvious question: Did the medications, for some reason, block or thwart the natural healing process seen in many of the unmedicated patients? Medication is thought to be an essential treatment for schizophrenia, and yet these findings, which remained stable throughout the study, did not support that belief.

Moreover, any researcher digging into Harrow’s data would have found reason for this concern to deepen. In every subgroup of patients, the medicated patients did worse over the long term. The good prognosis schizophrenia patients who got off medication did better than the good prognosis patients who stayed on, and the same was true for the bad prognosis patients, and for those with milder psychotic disorders. In each subset, the off-medication group did much better. Even more compelling, the schizophrenia patients who got off antipsychotics did better than those with milder psychotic disorders who stayed on the medication, as can be seen in the graphic below.

[Graphic: Harrow study long-term outcomes by medication status]

These are the findings that Harrow’s longitudinal study, the best of its kind in the modern drug era, presented to American psychiatry. This was data, it seemed, that should be added to the “evidence base” for antipsychotics. However, Pies did not see it that way:

“Patients in the Harrow study were not randomized. This means those with milder symptoms may have ‘self-selected’ to discontinue medication, whereas those with more severe illness—who would be expected to have a poorer outcome—elected to stay on medication. So the Harrow studies did not prove that long-term antipsychotic treatment per se worsened outcome. It is more likely, as Dr. Pierre notes, that the type or severity of patients’ symptoms determined whether or not they and their doctors decided to continue medication. Thus, in analyzing the Harrow studies, some critics of antipsychotic treatment may have misperceived the ‘arrow of causality.’ ”

The response by Pies has been the same argument voiced by others who have criticized Anatomy of an Epidemic, and fairly sums up how psychiatry, as a field, has responded to Harrow’s study. There isn’t much to be learned from it; the fact it wasn’t randomized provides an explanation for the results; and the likely explanation for the divergent outcomes is that the more severely ill patients were the ones who stayed on an antipsychotic, while psychiatrists helped the less severely ill get off the medication.

This is a response that does protect psychiatry’s belief in its antipsychotics, and its sense of self. But it is one that is absent of any curiosity about the study, and absent of any desire to use this information to improve treatment protocols. Pies’ dismissal also reveals a lack of knowledge about the details of the study, or — and this may be more likely — a turning of a blind eye to them.

First, the idea that a difference in severity of illness explains the divergent outcomes is belied by the group-by-group outcomes, and by the fact that the more severely ill who got off the drugs did better than the less severely ill who stayed on. Perhaps there is a non-drug explanation for the divergent outcomes, but the available facts don’t support the “severity of illness” speculation.

Second, the idea that psychiatrists helped the off-medication group get off their drugs is plucked out of thin air too. Those who stopped taking medication and got better mostly dropped out of care. These were non-compliant patients who did better.

Third, the “correlation not causation” dismissal is also not descriptive of the actual data. A naturalistic study, of course, does have its limitations. However, an antipsychotic is meant to be a causative agent, and thus change the course of the illness, and so what you have in this instance is “correlation with a causative agent,” which is a very different thing. This is a correlation that provides reason to give the outcomes further weight. The findings in Harrow’s study don’t prove causality, but particularly once the group-by-group outcomes are detailed, the study — at the very least — provides reason to worry about possible causality.

But such worry is missing from Pies’ response, and indeed, if his response is carefully parsed, you can see that instead he praises his colleagues for their scientific acumen and their prescribing skills. He and his colleagues, being men and women of science, understand that correlation doesn’t equal causation, unlike psychiatry’s misinformed critics (e.g. me). Psychiatrists could also take credit for the good outcomes of patients off medication in Harrow’s study. Psychiatrists had assessed how well their patients were doing, and in collaboration with their patients, successfully identified those who could discontinue their medication. His profession is doing quite well, thank you.

Wunderink’s Randomized Study

A few years after Harrow reported his findings, Lex Wunderink from the Netherlands published the seven-year results from his study, which included a randomized element in its design. His findings, broadly speaking, replicated Harrow’s.

In Wunderink’s study, 128 first-episode psychosis patients who had remitted on antipsychotics were randomized into two groups: drug treatment as usual, or treatment that involved dose reduction or discontinuation from the drugs. In essence, his was a randomized study that compared long-term outcomes for patients treated, at a single moment in their care, with a different treatment protocol.

At the end of two years, the relapse rate was higher for the low-dose/discontinuation group (43% vs. 21%). However, by the end of seven years, the relapse rate was slightly lower for the low-dose/discontinuation group, and the recovery rate was more than twice as high for this group (40% vs. 18%). This study revealed the long-term benefit of a drug-tapering protocol following initial remission.
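As a quick back-of-the-envelope check on that trade-off (this is my own arithmetic on the percentages just quoted, written as a short Python sketch with illustrative variable names), the dose-reduction arm paid an early price in relapse but ended up with roughly twice the recovery rate:

    # Wunderink figures quoted above (percent).
    relapse_2yr = {"maintenance": 21, "dose_reduction": 43}
    recovery_7yr = {"maintenance": 18, "dose_reduction": 40}

    early_excess_relapse = relapse_2yr["dose_reduction"] - relapse_2yr["maintenance"]  # 22 points
    recovery_ratio = recovery_7yr["dose_reduction"] / recovery_7yr["maintenance"]      # ~2.2x

    print(f"early excess relapse: {early_excess_relapse} points; "
          f"7-year recovery ratio: {recovery_ratio:.1f}x")
    # Prints: early excess relapse: 22 points; 7-year recovery ratio: 2.2x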

If Wunderink’s data is carefully studied, there is one other comparison to be made at the end of seven years. During the follow-up, there were patients in the low-dose/off-med group who ended up on regular doses of antipsychotics, and conversely, patients randomized to regular treatment who stopped their medication, or got down to a low dose. Here are the seven-year results grouped by medication use:

[Graphic: Wunderink study seven-year outcomes grouped by medication use]

Together, Harrow’s and Wunderink’s studies provide evidence—or so it would seem—for psychiatry to rethink its treatment protocols, which today emphasize medication compliance as the key to a successful long-term outcome. These studies argue for a treatment protocol that would try to help first-episode patients get down to a low dose or discontinue antipsychotics altogether. That strategy, these two studies indicate, would produce much better long-term recovery rates.

Pies and those he consulted didn’t see it that way. Much like Harrow’s study, Wunderink’s could be discounted. Pies wrote:

“First of all, most of the subjects in the dose/reduction arm of the study actually remained on antipsychotic medication, although at a reduced dose. Secondly, as Dr. Pierre notes, ‘while the initial treatment allocation was randomized, the subsequent dose changes in both treatment groups were based on clinical response and occurred at the whim of the treating psychiatrists.’ Thus, this was not really a randomized study. And rather than antipsychotic treatment worsening outcome, it seems more likely that patients perceived by their doctors as doing relatively well were, understandably, given lower doses of medication; conversely, patients perceived as doing worse were likely maintained on higher doses . . . this study does not support the claim that long-term antipsychotic maintenance is causally related to poorer outcome.”

Once again, there apparently is little to be learned from Wunderink’s results. There is no drug-tapering protocol to be adopted; it seems psychiatrists are already quite good at helping their “less ill” patients get down to low doses, or off the drugs altogether. Nor is there reason to worry that antipsychotics may be doing long-term harm. As was the case with their review of Harrow’s study, Pies and his colleagues saw Wunderink’s findings as another account of less-ill patients doing better than severely ill patients, with psychiatrists helping their patients find the right medication path. The study, in their view, provides no reason to think that medications played a role in the divergent outcomes.

MRI Research

The advent of MRI technology twenty-five years ago enabled researchers to begin studying brain volume changes in patients diagnosed with schizophrenia and other psychotic disorders, with such changes measured over time. By the late 1990s, investigators doing such research had reported that antipsychotics caused basal ganglia structures and the thalamus to swell, and the frontal lobes to shrink, with these changes in brain volumes “dose related.” In 1998, Raquel Gur, from the University of Pennsylvania, reported that the swelling of the basal ganglia and thalamus was “associated with greater severity of both negative and positive symptoms.” This was disconcerting news: the brain volume changes were associated with a worsening of the very symptoms the drugs were supposed to treat.

Soon Nancy Andreasen, who was then editor-in-chief of the American Journal of Psychiatry, weighed in with her findings from a study of 500 schizophrenia patients. In 2003, she reported that their frontal lobes shrank over time, and that this shrinkage was associated with a worsening of negative symptoms and functional impairment, and after five years, with a significant worsening of their cognitive abilities. She at first attributed this shrinkage to a disease process, but subsequently reported that it was related to antipsychotic usage, and was in fact dose related. “The more drugs you’ve been given, the more brain tissue you lose,” she told the New York Times.

This research appears to tell of a clear iatrogenic process. Antipsychotics cause changes in brain volumes that are associated with a worsening of negative and positive symptoms, and a worsening of functional impairment. Then, once Harrow reported his findings, the pieces of an “evidence-based” puzzle seemed to come together. The MRI studies told of drugs that worsened long-term outcomes, including psychotic symptoms, and that is what Harrow had found. Investigations of different types had pointed to the same conclusion.

In his review of the MRI studies, Pies acknowledged that they “indicate an association between antipsychotic use and reductions in cortical gray matter.” He also acknowledged that such findings “are surely concerning.” But in his review of the MRI literature, he didn’t include reports by Gur and Andreasen that such brain changes were associated with a worsening of symptoms and functional impairment. That salient fact was missing. Instead, he directed his readers to a 2015 study by investigators at the University of California at Davis who, Pies wrote, found that “while short-term treatment with antipsychotics was associated with prefrontal cortical thinning, treatment was also associated with better scores on a continuous performance task.”

The suggestion being made here is this: Although the drugs may cause brain shrinkage, they may still improve functional outcomes.

This finding, at first glance, seems at odds with Andreasen’s and Gur’s. But if you read the UC Davis study, you find that the psychotic patients in this study had been on antipsychotics for a short time (mean duration was 99 days), and had suffered a first episode of psychosis less than one year earlier. The finding that the medicated patients scored better on a continuous performance task at the end of 99 days is consistent with the idea that these drugs provide a short-term benefit. However, Andreasen charted brain shrinkage over a number of years and found that, over this longer period of time, it was associated with worsening symptoms and impairments. The MRI data reviewed here supports the idea that while the drugs may provide a short-term functional benefit, they induce brain-volume changes that are harmful over the long term.

That would be the picture to be presented from a full review of the brain shrinkage evidence. But Pies focused on the UC Davis study, and having done so, he concluded: “In my view, we need more research to sort out this complex issue, while always weighing carefully the neurological risks of antipsychotic treatment (including movement disorders) against their very real benefits.”

In this summary, Pies has recast the brain shrinkage as an adverse event, which needs to be weighed against the drugs’ very “real benefits.” That is a statement that tells of drugs that still reduce symptoms over the long term, rather than make them worse, and it also lessens a sense of immediate alarm. Gur’s study may now be nearly 20 years old, and Andreasen’s not far behind, but this remains today a worry to be sorted out in the future. Until that time, patients prescribed these drugs will just have to hope that Gur’s and Andreasen’s research, linking brain shrinkage to a worsening of symptoms and functional outcomes, is wrong.

The Case For Antipsychotics

Having dismissed the case against antipsychotics, Pies then set forth a summary claim about the evidence for the drugs: “I believe that most randomized, long-term studies of schizophrenia support the benefit of antipsychotics in preventing relapse. Some data also show better ‘quality of life’ with maintenance antipsychotic treatment, compared with drug discontinuation.”

Although this statement makes it appear that there are a great many randomized, long-term studies that tell of better outcomes for the medicated patients, Pies doesn’t cite any such studies. But he does cite a meta-analysis of drug-withdrawal studies by Stefan Leucht, and so it appears he is referring to this body of research, which is the very “evidence” that psychiatry has long pointed to as reason for maintaining its patients on these drugs.

Relapse Studies

In his meta-analysis, Leucht identified 65 drug-withdrawal studies that had been conducted from 1959 to 2011. The median age of the patients was 40.8 years; their mean duration of illness was 13.6 years. More than half of the studies were in hospitalized patients.

In 54 of the 65 studies, the antipsychotic was abruptly withdrawn. In the 11 other studies, the medication was either tapered over at least a 3-week period, or depot treatment was allowed to lapse. In two-thirds of the studies, relapse was based on “clinical judgment” or on the patient being seen as “in need of medication,” as opposed to the use of a scale to measure psychotic symptoms. In other words, relapse in the majority of studies was dependent on an eyeball assessment by the psychiatrist.

At the end of three months, the relapse rate was 12% for the drug-maintained group versus 37% for the drug-withdrawn group. At the end of one year, the relapse rate was 27% for the drug-maintained group versus 65% for the drug-withdrawn patients. Although the superiority of the drug-maintained group lessened over time, there was still a higher percentage of relapses in the drug-withdrawn group from months 7 to 12 (among those patients who didn’t relapse in the first six months).
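To make that narrowing concrete, here is a minimal arithmetic sketch in Python, using only the percentages quoted above from Leucht’s meta-analysis (the variable names and the calculation are mine, for illustration only):

    # Relapse rates quoted above from Leucht's meta-analysis (percent).
    relapse = {
        "3 months": {"maintained": 12, "withdrawn": 37},
        "12 months": {"maintained": 27, "withdrawn": 65},
    }

    for period, rates in relapse.items():
        ratio = rates["withdrawn"] / rates["maintained"]
        print(f"{period}: withdrawn relapse is {ratio:.1f}x the maintained rate")

    # Prints:
    # 3 months: withdrawn relapse is 3.1x the maintained rate
    # 12 months: withdrawn relapse is 2.4x the maintained rate

In relative terms, the advantage of staying on the drug shrinks from roughly threefold at three months to less than two-and-a-half-fold at one year.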

Data on “quality of life” from this relapse literature was “poor.” There were only three studies, out of the 65, that assessed this outcome; in two of the three studies there was “an almost significant trend” favoring the drug-maintained group. There was no significant difference in the third. The data on unemployment was “very poor,” as it had been collected in only two studies, and in those two, there was no difference in employment rates. There was no data at all on “satisfaction of care.”

Although “relapse” rates were lower in the drug-maintained group, 70% of these patients either failed to improve or worsened during the study (versus 88% among the drug-withdrawn group). In the inpatient studies, only 5% of the drug-maintained patients were discharged.

Such is the data from fifty years of relapse studies. If the findings are critically assessed, what conclusions can be drawn?

Now, I have to admit, I had looked at the relapse literature before, but I didn’t realize, until I read Leucht’s study, just how flimsy this literature was. What does “relapse” even mean in most of these studies? Since the determination relied on “clinical judgment” in two-thirds of the studies, rather than a measurement of psychotic symptoms, were insomnia, agitation, or loud behavior symptoms of relapse? Wouldn’t withdrawal symptoms, whether one was abruptly or gradually withdrawn from the drug, often be seen, in the clinical judgment of the psychiatrist, as evidence of “relapse?” In fact, based on patient accounts of withdrawing from antipsychotics, it would seem that nearly everyone withdrawn from the medication would be seen, at some point, to have “relapsed.”

At the same time, what does it mean to be on the drug and to not have relapsed? If 70% of the drug-maintained patients failed to improve or worsened, with only 5% of the hospitalized patients who stayed on medications discharged, what is the state of a “non-relapsed” patient? Is a patient who sits quietly on a ward in a subdued state judged to have been “non-relapsed”? If someone is still too dysfunctional to be released, how does that qualify as a “good outcome,” e.g., non-relapsed?

And how is it that in fifty years of this research, which was designed to assess whether patients should be maintained on antipsychotics, there have been only three studies that even assessed quality of life, or employment?

But such are the questions that occur to my admittedly “untrained” mind. Pies, for his part, found this research to be reassuring. The meta-analysis, he wrote, suggests “that long-term antipsychotic treatment clearly improves outcome in schizophrenia . . . quality of life was also better in participants staying on medication.”

In that phrase, the results have been transformed: a difference in “relapse” rates is equated to “clearly improves outcomes,” and “poor data” on quality of life is remade into a broad conclusion that drug-maintained patients have a better quality of life. (Leucht, in his abstract, similarly turned the “poor data” described in the discussion part of his paper into a broad conclusion.)

No Evidence of Dopamine Supersensitivity

Pies and his colleagues also found comforting evidence of another sort in the relapse literature: it provides reason, they argue, to discount the worry that antipsychotics induce a dopamine supersensitivity that makes patients more biologically vulnerable to psychosis and exposes them to severe relapses upon drug withdrawal.

The “dopamine supersensitivity” concern arose in the late 1970s, after a series of NIMH-funded studies found that relapse rates, over longer periods of time, were higher for medicated schizophrenia patients than for patients never exposed to the drugs. This led two Canadian investigators, Guy Chouinard and Barry Jones, to posit a biological explanation for why this might be so.

Antipsychotics blocked dopamine receptors in the brain (and in particular a subtype known as the D2 receptor). In compensatory response, the brain increased the density of its D2 receptors. The brain was now supersensitive to dopamine, and Chouinard and Jones reasoned that this could have two harmful effects: it could lead to severe relapses upon drug withdrawal, and, if patients stayed on antipsychotics long-term, it raised the risk that a persistent, chronic psychosis would set in. In 1982, they reported that 30% of the 216 patients they studied had signs of “tardive psychosis.” When this happens, they wrote, “the illness appears worse” than ever before. “New schizophrenic or original symptoms of greater severity will appear.”

Chouinard has since written several articles on supersensitivity psychosis, noting that it often appears “with the decrease or withdrawal of an antipsychotic.” This “discontinuation syndrome,” he wrote in 2008, produces “psychiatric symptoms that can be confounded with true relapse of the original illness,” and if clinicians would recognize this, “long-term maintenance treatment could be reduced and avoided in some patients.” This rebound psychosis was “known to occur within 6 weeks following the decrease or withdrawal of an oral antipsychotic or within 3 months for a long-acting injectable antipsychotic.”

In other words, from Chouinard’s perspective, many of the drug-withdrawn patients in the relapse studies were likely suffering drug-withdrawal symptoms, as opposed to a return of the illness, and counting such patients as relapsed leads to a mistaken understanding of the “benefits” of using antipsychotics as a maintenance treatment.

Numerous other researchers have weighed in on these withdrawal risks. Australian researchers surveyed 98 users with varying diagnoses who had stopped taking antipsychotics and found that 78% experienced “negative effects” during withdrawal, which included “difficulty falling or staying asleep, mood changes, increases in anxiety/agitation, increases in hallucinations/delusions/unusual beliefs, difficulty concentrating/completing tasks, increases in paranoia, headaches, memory loss, nightmares, nausea and vomiting.”

Meanwhile, Japanese investigators have reported how drug-induced dopamine supersensitivity leads to “treatment-resistant” schizophrenia in a significant percentage of patients. Philip Seeman, in animal-model experiments, concluded that this was why antipsychotics “fail over time.” Martin Harrow cited drug-induced dopamine supersensitivity as a possible reason why such a high percentage of the medication-compliant patients remained psychotic over the long term, while the majority of off-medication patients became asymptomatic.

Leucht and his colleagues, in their review of the relapse literature, briefly addressed this worry. Although the relapse rate for drug-withdrawn patients was particularly high in the first three months, there was no significant difference in those rates in the meta-analysis depending on whether the drug was abruptly or gradually withdrawn. This argued against the dopamine supersensitivity theory, they said.

In addition, they noted, the relapse rate during months 7 to 12 was still higher for the drug-withdrawn patients. These results, the researchers concluded, did “not support the suggestion that beneficial effects of antipsychotic drugs could be merely because of supersensitivity psychosis.” At the same time, they wrote, it was “possible that supersensitivity psychosis explains a pattern of the decreasing effect sizes in longer trials.”

In essence, Leucht and his colleagues had not put aside the dopamine supersensitivity worry, but had argued that, even apart from that possible confounding factor, the drugs appeared to provide a real benefit in reducing relapse, at least to some extent. However, Pies found reason, in the relapse data, to dismiss the worry that dopamine supersensitivity was a confounding factor.

“Critics sometimes charge that apparent relapse among persons with schizophrenia does not represent a bona fide recurrence of the original illness. Rather, they claim, it is simply a ‘withdrawal effect’ that occurs when antipsychotic medication is rapidly discontinued, owing to a flare-up of ‘super-sensitized’ dopaminergic neurons. Yet when we look at the time course of psychotic relapse, it usually occurs several months after discontinuation of the antipsychotic. This is not consistent with what we know about most drug withdrawal syndromes, which usually occur days to a few weeks after a drug is suddenly stopped. Thus the ‘withdrawal psychosis/super-sensitivity psychosis’ notion remains, at best, a highly speculative hypothesis, in so far as psychotic relapse is concerned.”

Once again, this is a misreading of the data in Leucht’s meta-analysis. In fact, 37% of the drug-withdrawn patients relapsed within the first three months, which belies Pies’ claim that relapse “usually occurs several months after discontinuation of the antipsychotic.” It is this initial post-withdrawal period that creates most of the difference in relapse rates between the two groups. (See graphic below.)

[Graphic: relapse rates over time, drug-maintained vs. drug-withdrawn]
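A rough calculation, again using only the figures quoted above from the meta-analysis (this is my own illustration, not an analysis from the paper), shows how much of the one-year gap is already in place by the end of month three:

    # Relapse rates quoted above (percent), maintained vs. withdrawn.
    maintained_3mo, withdrawn_3mo = 12, 37
    maintained_12mo, withdrawn_12mo = 27, 65

    gap_3mo = withdrawn_3mo - maintained_3mo      # 25 percentage points
    gap_12mo = withdrawn_12mo - maintained_12mo   # 38 percentage points

    share = gap_3mo / gap_12mo
    print(f"{share:.0%} of the one-year gap is present by three months")
    # Prints: 66% of the one-year gap is present by three months

Roughly two-thirds of the difference between the groups at one year, in other words, traces to the first three months after withdrawal.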

But Pies saw this data as reason to downgrade the worry about dopamine supersensitivity to a “highly speculative hypothesis.” Much like the MRI studies, this was another concern that could be punted down the road. Perhaps, in the future, some researchers could look into it a bit more.

Harrow No, China Yes

Although Harrow’s and Wunderink’s longitudinal studies were not seen by Pies and his colleagues as providing useful information about the effects of antipsychotics over the long term, Pies did find one longitudinal study, conducted in rural China, worth citing in this regard.

“Recently, researchers in China carried out a 14-year prospective study of outcome in people with schizophrenia (N=510) who had never been treated with antipsychotic medications and compared outcome with those who were so treated. Consistent with the Leucht findings, the Chinese investigators found that partial and complete remission in treated patients were significantly higher than that in the never-treated group—57.3% vs. 29.8%. Moreover, the authors concluded that . . . ‘never-treated/remaining untreated patients may have a poorer long-term outcome (for example higher rates of death and homelessness) than treated patients.’ ”

Five years ago, when I gave a psychiatry Grand Rounds presentation at Massachusetts General Hospital, Andrew Nierenberg delivered a rebuttal (in which he concluded Anatomy of an Epidemic should come with a black box warning), and he also pointed to this Chinese study as providing evidence of the long-term efficacy of antipsychotics, while similarly dismissing Harrow’s research as meaningless. As such, by examining this study, we can see the type of longitudinal research that psychiatry’s Thought Leaders find worthwhile.

The Hong Kong investigators, in a survey of more than 100,000 people in a rural community in China, identified 510 people who met the criteria for a diagnosis of schizophrenia. In this cohort, there were 156 who had never been treated at the start of the study in 1994, and 354 who had been “treated,” which was defined as having received antipsychotic medication at least once.

The two groups were not at all similar at baseline. On average, the untreated group was 48 years old and had been ill for 14 years. People in the community who had suffered a psychotic episode but then recovered without treatment would not have shown up in this group. In layman’s terms, this was a chronically “crazy” group that researchers had identified. Moreover, compared with the treated patients, they were “significantly older, less likely to be married, more likely to have no family caregiver and to live alone, had a lower education level, and fewer family members.” The “untreated” group also came from families with a significantly lower economic status, and they were more likely to have been abused by their families. In addition, the never-treated group was more severely ill at baseline: they had a “longer duration of illness; higher mean scores on the PANSS positive subscale; and had higher PANSS negative subscale and general mental scores.” Eighty-three percent had “marked symptoms/or were deteriorated,” compared to 53% of those in the “treated” group.

At the end of 14 years, the treated group—which simply meant that they had exposure to antipsychotics at some point in their lives—was still doing better. Fifty-seven percent were now in complete or partial remission, up from 47% at the start of the study (an increase of 10 percentage points). Thirty percent of the untreated group were now in complete or partial remission, and while that was still lower than the treated cohort, it meant that there had been an increase of 13 percentage points in this good-outcomes category.

In sum, the gain in remission for the untreated group was actually greater than for the treated patients, but since they had been so much more severely ill at baseline, with so many worse prognostic factors, their collective outcomes were still worse at the end of the study in 2008.

[Graphic: Chinese study remission rates at baseline and at 14 years, treated vs. never-treated]
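That comparison of gains can be checked with simple arithmetic on the remission figures reported above; the sketch below is only an illustration, and the roughly 17% baseline for the untreated group is implied by the reported 13-point increase rather than stated directly:

    # Remission percentages from the Chinese study, as reported above.
    treated_baseline, treated_14yr = 47, 57
    untreated_14yr, untreated_gain = 30, 13
    untreated_baseline = untreated_14yr - untreated_gain   # ~17%, implied by the text

    treated_gain = treated_14yr - treated_baseline          # 10 percentage points
    print(f"treated: +{treated_gain} points (47% -> 57%), "
          f"untreated: +{untreated_gain} points ({untreated_baseline}% -> 30%)")
    # Prints: treated: +10 points (47% -> 57%), untreated: +13 points (17% -> 30%)

The untreated group gained more percentage points, but from a far lower starting point, which is why its absolute remission rate was still worse at 14 years.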

We can now see, quite clearly, the “mind” of psychiatry at work in its assessment of the research literature. Pies and his colleagues dismiss an NIMH study (Harrow’s) that followed patients from early on in their “illness” and regularly documented their functioning on a number of domains, and the reason they give for their dismissal is that “correlation doesn’t equal causation.” They use this reasoning even though those with a milder diagnosis at baseline who stayed on the drugs did much worse over the long term than those with a more severe diagnosis at baseline who got off the medications. Yet, the profession can find merit in a study that isolates a group of chronically ill, “crazy” people in a rural Chinese community, and compares them to a less ill group that also has many social advantages, and then finds that this latter “treated” group is doing slightly better, in terms of percentage who were in remission, at the end of 14 years. The “correlation doesn’t equal causation” refrain is now suddenly absent.

At this point, I have to confess that I am reminded of how Pies began his piece. Assessing psychiatry’s evidence base, he said, was a task best left up to the profession itself, as critics lacked the necessary expertise.

“This article is by no means a comprehensive review of the voluminous, decades-old literature on AP (antipsychotic) maintenance: rather, it is a commentary on some recent studies and their sometimes controversial interpretation. I would argue that interpreting these complex studies requires an in-depth understanding of medical research design, psychopharmacology, and the numerous confounds that can affect treatment outcome. Unfortunately, a lack of medical training has not stopped a few critics from confidently charging that psychiatrists are harming their patients by prescribing long-term AP treatment.”

It requires a certain medical training, it seems, to see the scientific literature in the way they do, and I have to agree, that is true. Pies’ writings reflected the same assessment of the literature that Andrew Nierenberg made years ago, when he “repudiated” Anatomy of an Epidemic at a Grand Rounds. It is now five years later, and I still cannot see the Harrow study or the Chinese study in the way they do. Perhaps if I went to medical school, and became trained as an academic psychiatrist, I would — and I am not being sarcastic here — see the studies in the way they do. We think of medical training as providing doctors with an expertise, but it also inducts them into a tribe, which has a terrible tendency to think alike.

The Record Has Been Set Straight

Pies’ piece was published in the Psychiatric Times, which means that it was directed at his peers, a post designed to assure them that all was okay in the world of psychiatry, and with its use of antipsychotics. Frances then incorporated Pies’ piece into his blogs on this topic, where, over the course of three posts, he told of “setting the record straight on antipsychotics.” Here are a few salient quotes from his blogs:

  • “Bob’s position that antipsychotics cause more psychosis than they cure is based on his fundamental misreading of the research literature. . . . (in the case of Harrow’s study, he makes) the classic error of confusing correlation with causality.”
  • “There is no real evidence that (drug induced dopamine hypersensitivity) is related to the return of symptoms (upon drug withdrawal.) It is just Bob’s unproven and in a way irrelevant theory.”
  • “Bob’s stubborn insistence on blaming meds for causing psychosis also flies in the face of history and everyday common sense experience.”
  • “Bob’s doctrinaire, ideological, and one sided warnings of medicine’s harms can lead to reckless risk taking.”
  • “My hope is that Bob will present a more balanced and objective view in his future writings and talks.”
  • My two previous blogs show why Whitaker is wrong.
  • Typically, “dissatisfied patients . . . have had a disastrous experience with psychiatric medication that was prescribed in too high a dose and/or for too long and/or in odd combinations and/or for a faulty indication. They are angry for a perfectly understandable reason—meds made them worse and going off meds make them better. Their natural conclusion is that medicine is bad stuff, for everyone. And this is confirmed by the journalist Robert Whitaker’s misreading of the scientific literature, leading him to the extreme position that ‘I think that antipsychotics, on the whole, worsen long-term outcomes . . . people treated with antipsychotics, would be better off if these drugs did not exist.’ “

I think I come off as a little deranged in Frances’ descriptions, and so be it, but the last quote attributed to me caught my attention. I knew that I had never said that “people treated with antipsychotics would be better off if these drugs didn’t exist.” It’s not the type of blanket statement I ever make, and so I wrote Frances, which triggered this email exchange:

Me: Would you please tell me where you are pulling this quote from, that I said, “people treated with antipsychotic drugs, would be better off if these drugs did not exist.”

Frances: Don’t know. I will pull if you like.

Me: I am not asking you to pull it from your published piece. I am asking you to tell me where you got it from. So when I said pull it, I am asking you to tell me where you got it from. You wrote it and so it should stay part of the piece.

Frances: Sorry for screw up, Bob. Didn’t mean to misquote you. Reconstructing it, I started doing this as a shared piece with someone else, who then got too busy and had to drop out. It was in his section and I included it in the final without checking accuracy. My bad. Probably best if I just ask to have it deleted. What do you think. Sorry for inconvenience.

Now I guess I could be upset by this. Many people might find it more than an “inconvenience” when the head of the DSM-IV task force puts words into your mouth to make you look extreme, without any real concern about whether it was true, and then, when asked about it, apologizes for the “inconvenience.” I am not sure if he and his colleagues would find it an inconvenience if I had done the reverse. But then I figured it was all of a piece, and that if the blogs by Pies and Frances gave us an opportunity to watch their minds at work, and we had the opportunity to see their embrace of the China study, while simultaneously dismissing the Harrow study, then this making up of a quote, and shrugging it off as an inconvenience, was just one more little tidbit of information for us mind-watchers to mull over as we contemplate the authority that this profession, with its diagnoses and drug treatments, has over our lives.

A Case Study in Cognitive Dissonance

The writings of Pies and his colleagues, I believe, provide a compelling case study of cognitive dissonance. Cognitive dissonance arises when people are presented with information that creates conflicted psychological states, challenging some belief they hold dear, and people typically resolve dissonant states by sifting through information in ways that protect their self-esteem and their financial interests. It is easy to see that process operating here.

Harrow’s and Wunderink’s studies belie the conventional belief that antipsychotics are an essential long-term treatment for schizophrenia and other psychotic disorders. As such, these studies are bound to provoke conflicted feelings in those who hold such beliefs. In order to resolve that psychological conflict, Pies and his colleagues need to dismiss these studies, and so their minds reach for the “correlation doesn’t equal causation” refrain, which allows them an easy way to do that. Similarly, by focusing on an MRI study that found drug-induced brain shrinkage to be associated with better scores on a cognitive test, they can put aside other MRI research that found that changes in brain volumes were associated with a worsening of symptoms and functional impairment. And so on. Seeing the drug-withdrawal studies as measuring the return of the illness, rather than as research confounded by withdrawal symptoms, allows them to conclude that their protocols are “evidence-based.” The Chinese longitudinal study helps them believe that too — even if it requires ignoring the details of the study. Finally, another regular feature of cognitive dissonance is to see the critic as biased, or uninformed in some way, and voila, here I am, cast in that role.

Indeed, in this review of the “mind of evidence-based psychiatry,” we can see all of the elements of cognitive dissonance at work. Pies and Frances reviewed the evidence in ways that allowed them to remain comfortable with their medicating practices and their professional sense of self. Unfortunately, what we don’t see is a curiosity and openness of mind about findings that challenge their medicating practices, and what we don’t see is a desire to plumb the scientific literature to figure out how to improve treatment protocols. And what that means is that psychiatry, as an institution, isn’t capable of adopting evidence-based practices. The information on psychiatric drugs that is to be found in the research literature is simply too threatening to psychiatry, and it provokes what I like to think of as an institutional cognitive dissonance. When that happens, the institution is going to sort through the scientific literature in ways that protect its power, its prestige, and its products in the marketplace.

Which leads to a challenge for us as a society: How can we yank power from a medical discipline that resides within such a dissonant state, and yet has such an impact on our lives?

About the Author

Robert Whitaker has won numerous awards as a journalist covering medicine and science, including the George Polk Award for Medical Writing and a National Association of Science Writers award for best magazine article. In 1998, he co-wrote a series on psychiatric research for the Boston Globe that was a finalist for the Pulitzer Prize for Public Service. Anatomy of an Epidemic won the 2010 Investigative Reporters and Editors book award for best investigative journalism.




