
Imagine I receive an invitation to present a keynote at a major therapy conference. Essentially the charge is something along the lines of:

Dr. Van Nuys, over the past 13 years you’ve conducted more than 700 in-depth interviews with a wide variety of movers and shakers across the field of psychology. We think that gives you a unique overview. What significant changes have you seen across that span of time?

I say “imagine” because I’ve not actually received such an invitation, but I have had such reveries in the shower, where I’m prone to such daydreams of glory. With the hot water beating on my head and shoulders and stimulating my fantasies, it occurred to me that the advent of evidence-based practice is one of the most interesting developments in the field.

I guess the reason I find it interesting is that I’m surprised to see that evidence-based practice has taken hold to the extent that it apparently has. I must confess that when I first heard the term I was skeptical. It seemed like the sort of good idea that people would pay lip service to but not much more. Fortunately, I would not have to take it too seriously inasmuch as I hadn’t practiced as a therapist in quite a few years, having embraced an academic career. Also, as a humanistically inclined psychologist, I tended toward the belief that psychotherapy involves processes too varied and subtle to lend themselves to any sort of meaningful quantification. I’m more of a qualitative research guy than a quantitative guy. To keep my confessional honest, I should add that my defensive reaction to the idea of evidence-based practice betrays a certain intellectual laziness on my part.

At the same time, it must be acknowledged that I was not the only doubting Thomas questioning the validity of evidence-based practice. For example, my podcast interviewee, Dr. Jonathan Shedler, has written:

  • The term evidence-based therapy has become a de facto code word for manualized therapy—most often brief, highly scripted forms of cognitive behavior therapy.
  • It is widely asserted that “evidence-based” therapies are scientifically proven and superior to other forms of psychotherapy. Empirical research does not support these claims.
  • Empirical research shows that “evidence-based” therapies are weak treatments. Their benefits are trivial, few patients get well, and even the trivial benefits do not last.
  • Troubling research practices paint a misleading picture of the actual benefits of “evidence-based” therapies, including sham control groups, cherry-picked patient samples, and suppression of negative findings. (Shedler, 2018, p. 319)

To cite one more source in the litany of opposition, Berg and Slaattelid (2017) recently argued:

The notion of research-supported psychological treatments is based on a reductive conceptualization of psychotherapy. Research-supported psychological treatments hinge upon an empirical reduction where psychotherapy schools become conceptualized as mere collections of empirical propositions. However, this paper argues that the different psychotherapy schools have distinct ethoses that are constituted by normative claims. Consequently, the evaluation of the different psychotherapy schools and the practice of psychotherapy should include the underlying normative claims of these ethoses. (p. 1075)

I take this to mean that the evidence-based folks and the therapist folks come from different underlying world views.

Evidence-based practice burst onto the scene in 2005, endorsed by the American Psychological Association (APA), the very same year I retired from academia and began my podcast interviews with all those psychologists and psychotherapists. For the sake of historical accuracy, I might add here that evidence-based practice had already been described as a “hot topic” in the BMJ a decade earlier (see Sackett, Rosenberg, Gray, Haynes, & Richardson, 1996).

So, what constitutes evidence-based practice? At this point, it seems less rigid than I feared. For example, if a double-blind research design were required as the only valid approach, that would be cause for concern. Double-blind studies are typically very expensive to carry out. In medicine, they are generally funded by pharmaceutical companies; however, “big pharma” is hardly likely to fund studies that demonstrate the effectiveness of non-pharmaceutical psychological remedies. Besides, while the double-blind approach is the gold standard in medicine, it’s not necessarily the best approach for research in the psychotherapeutic realm.

In 2005, the APA commissioned a task force to report on evidence-based practice as it relates to health services provided by psychologists. Specifically, the task force was charged with “defining and explicating the principles of evidence-based practice in psychology” (EBP Task Force, 2005, p. 4). After extensive review and public comment, the final report was presented to the APA Council of Representatives in August, 2005, and published the following year (APA Presidential Task Force on Evidence-Based Practice, 2006). A broad definition of evidence-based practice was stated as follows: “Evidence-based practice is the integration of the best available research with clinical expertise in the context of patient characteristics, culture and preferences” (APA Presidential Task Force on Evidence-Based Practice, 2006, p. 273).

On the face of it, this definition strikes me as reasonable and flexible.

They further declared that evidence-based practice be based on the “best” research evidence. This is where the rubber meets the road. I wondered how attainable that standard would be. According to the report:

Best research evidence refers to scientific results related to intervention strategies, assessment, clinical problems, and patient populations in laboratory and field settings as well as to clinically relevant results of basic research in psychology and related fields. A sizeable body of evidence drawn from a variety of research designs and methodologies attests to the effectiveness of psychological practices. Generally, evidence derived from clinically relevant research on psychological practices should be based on systematic reviews, reasonable effect sizes, statistical and clinical significance, and a body of supporting evidence. (p. 274)

I’m somewhat reassured that clinical observations carry at least some weight in the determination of evidence-based practice. They went on to say:

The validity of conclusions from research on interventions is based on a general progression from clinical observation through systematic reviews of randomized clinical trials, while also recognizing gaps and limitations in the existing literature and its applicability to the specific case at hand. (p. 284)

The recognition that there may be gaps and limitations in the existing literature is also reassuring. So is this statement: “It is important not to assume [emphasis added] that interventions that have not yet been studied in controlled trials are ineffective” (p. 284). That’s the kind of flexibility a humanistically oriented codger such as myself can take comfort from.

Ideally, this will be a win-win-win for researchers, clinicians, and patients. In fact, the report urges researchers to seek to do research and develop relevant designs explicitly to fill in the gaps for therapeutic approaches that need more support. At the same time, therapists are urged to actively seek out the research that will help them refine their work. So, researchers and therapists are tasked equally with responsibility to move us all into a meaningful evidence-based practice world.

But who is going to police this? Won’t at least some clinical schools of thought be tempted to exaggerate their claims of efficacy? I don’t see the APA taking on this role, although I do believe that the APA will exert tremendous influence over academic programs and internships to make sure that a commitment to evidence-based practice is understood and emphasized. I also believe the hammer will be wielded most forcefully by the insurance companies. More and more they will insist on evidence-based practice for therapist reimbursement, and part of that process will likely involve them as arbiters of which approaches they will recognize as evidence-based practice worthy. This is something that APA and other relevant professional associations are likely to monitor and, in some cases, resist. So says my own crystal ball.

As I poked around the Web, including a search of the APA online database PsycINFO®, I was not able to discover a single authoritative list of approaches that are currently recognized as meeting the standard of evidence-based practice, though you would think such a list exists.

Certainly, we might expect all approaches based on some version of cognitive behavioral therapy to make the list, since its proponents were quick to claim the research high ground. In fact, New Harbinger Publications, an independent, employee-owned publisher of evidence-based self-help books based in Oakland, CA, lists the following approaches under their evidence-based practice area:

  • cognitive behavioral therapy,
  • acceptance and commitment therapy,
  • dialectical behavior therapy,
  • mindfulness-based stress reduction,
  • mindfulness-based cognitive therapy,
  • compassion-focused therapy, and
  • imago relationship therapy.

Based on podcast interviews I’ve conducted, I would have to add intensive short-term dynamic psychotherapy and accelerated experiential dynamic psychotherapy to that list.

What about depth-oriented, psychodynamic approaches? This was a category I was initially concerned might be swept off the table, but I found that the Pacifica Graduate Institute, a Jungian-oriented program, offers a long list of publication citations affirming evidence-based practice status. I’d be remiss if I didn’t also mention Dr. Jonathan Shedler’s 2010 American Psychologist article, “The Efficacy of Psychodynamic Psychotherapy.” And I’m certain many more approaches could be added to those I’ve listed here.

What about neuropsychotherapy? Should that be considered an evidence-based practice? On the one hand, it derives from detailed scientific investigation of the brain. On the other hand, interviewees whom I’ve pressed for explanation of what the therapy involves seem to be saying that it provides them with a vocabulary that helps their patients to reframe their difficulties in terms of brain processes. Is there evidence that this reframing increases the potency of therapy? Actually, this vocabulary/reframing could be applied to a variety of existing psychotherapy approaches. I’m not sure a distinct and unique “neuropsychotherapy” exists yet—the jury is still out (Rosier, 2015).

So, on the one hand, I’m not sure that my original ambivalence about evidence-based practice is resolved. On the other hand, it is quite clear to me that the train has left the station and is gathering speed.

APA Presidential Task Force on Evidence-Based Practice. (2006). Evidence-based practice in psychology. The American Psychologist, 61, 271–285.
Berg, H., & Slaattelid, R. (2017). Facts and values in psychotherapy: A critique of the empirical reduction of psychotherapy within evidence-based practice. Journal of Evaluation in Clinical Practice, 23, 1075–1080.
EBP Task Force. (2005). Report of the 2005 Presidential Task Force on evidence-based practice. Washington, DC: American Psychological Association.
Rosier, J. (2015, April). What has neuroscience ever done for us? The Psychologist.
Sackett, D. L., Rosenberg, W. M. C., Gray, J. A. M., Haynes, R. B., & Richardson, W. S. (1996). Evidence-based medicine: What it is and what it isn’t. BMJ, 312, 71. doi:10.1136/bmj.312.7023.71
Shedler, J. (2010). The efficacy of psychodynamic psychotherapy. The American Psychologist, 65(2), 98–109. doi:10.1037/a0018378
Shedler, J. (2018). Where is the evidence for “evidence-based” therapy? Psychiatric Clinics of North America, 41, 319–329.