Timothy Baker has a problem with psychology today. He thinks it bears a dangerous resemblance to the medicine of yesteryear: anecdotal, unscientific, as likely to hurt as help.
"[D]espite compelling research support for the merits of specific interventions for specific problems, clinical psychology, as a field, has failed to embrace these treatments," writes Baker, a professor of medicine at the UW-Madison's School of Medicine and Public Health, in a paper that's generating national attention and controversy.
The paper argues that many clinical psychologists, like most doctors up until a century ago, "value personal clinical experience over research evidence, tend to use assessment practices that have dubious psychometric support, and tend not to use practices for which there is the strongest evidence of efficacy."
Baker, also director of research at the university's Center for Tobacco Research and Intervention, is one of three authors of the paper, which will be published in November in the journal Psychological Science in the Public Interest. Released online in early October, it has already been covered in Newsweek and the Los Angeles Times and drawn notice from the clinical psychologists it indicts.
Some of them think the paper's analysis is flawed.
"The problem is that you cannot necessarily study what happens in psychotherapy in a double-blind methodology like you might use in a drug study," says Gordon Herz, a Madison clinical psychologist. People's psyches are too different from each other, and what works on one patient in a research setting won't necessarily work on another in the real world.
Herz says the new paper has stirred up controversy on psychology email lists and other forums, but "in some ways, it doesn't plow a lot of new ground."
Still, it's fertile ground for discussion.
Baker's study is just the latest salvo in a longstanding dispute between psychological researchers and clinicians.
On the same side as Herz is Bruce Wampold, a professor in the counseling psychology department at UW-Madison's School of Education. Wampold wrote a 2001 book called The Great Psychotherapy Debate: Models, Methods, and Findings, which claims that the cause-and-effect model of psychology Baker and his co-authors advocate is inadequate. His work is cited in the new paper.
The paper anticipates Herz's argument. For example, it agrees that cognitive-behavioral therapy may not always work for every patient suffering from post-traumatic stress disorder. But it says that since this method has been shown to work most of the time, it should always be tried first.
But Wampold, who earned a B.A. in mathematics before studying psychology, says it's not true that cognitive-behavioral therapy, or any particular treatment, has a proven level of effectiveness.
"The problem with mental health science is that for any particular disorder, it's never been shown that one treatment is better than other treatments," he says. "You can find individual studies that might show it, but if you look at meta-analyses, where you take all the studies, the effectiveness is essentially zero."
Baker's response is that treatments do differ. In an email exchange with Isthmus, he maintains that there is empirical evidence that some treatments are especially cost-effective, superior to pharmaceutical avenues, and easily disseminated in real-world settings.
"Some treatments do have much more of this support than do others," he says, "and there are other treatments that will be shown in the future to be equal or superior to those now identified as evidence-based."
The paper acknowledges that one reason clinical psychologists rely on their own experiences is the dearth of research in some areas. But this, it says, points up the need for more research, not the wisdom of using treatments that lack empirical support.
Indeed, the paper says psychologists' failure to employ scientifically validated techniques is bad for patients, who must decide whether to undergo therapy without knowing the chances that a particular treatment will be effective.
And it's bad for psychology, because insurers want proof that the treatments they pay for actually work. If clinical psychologists can't offer compelling evidence to that end, they'll be passed over by insurers in favor of less expensive alternatives, like treatment through primary-care physicians or social workers.
The paper compares the present state of psychology to that of medicine before the early 20th century, when the American Medical Association began requiring medical schools to train future doctors in scientifically validated techniques. It argues that a similar reform movement is needed now.
Such reform, it says, cannot come through an older institution like the American Psychological Association, because such groups have too much invested in the status quo. Instead, it recommends the adoption of a new accreditation system for clinical psychologists.
The Psychological Clinical Science Accreditation System would certify Ph.D. psychology programs at qualifying nonprofit universities as providing "high-quality science-centered education and training," according to the PCSAS website. Eventually, Baker and his co-authors believe, graduates of PCSAS-accredited programs would demonstrate consistent enough success that schools without the accreditation would fall out of favor.
That's essentially what happened to the old for-profit medical schools once the AMA cracked down. And no sane person would say that wasn't a change for the better.
Baker and his co-authors don't deny that contextual aspects of clinical psychology, such as the individual therapist involved, have an impact. But they think this impact is too ineffable to be useful in making a solid case to insurers. And so their focus is on strategies that can make such a case.
Wampold, too, would like the discipline to have more demonstrable evidence of its effectiveness. But he fears that pursuing it at the level the paper calls for would "in the long run probably destroy psychotherapy as a place where people can make sense of their lives and get better."
Grave as that sounds, Wampold isn't suggesting the paper is ill intended or that the authors have an ulterior motive. (Baker and one co-author are officers in the Psychological Clinical Science Accreditation System. But it's a nonprofit organization, and Baker says that even if PCSAS accreditation is widely adopted, he won't make any money off it.)
"We both want the same outcome - the best mental health services possible," Wampold says. "But there's no guarantee that this is going to improve the quality."