Clinical sessions in the behavioral healthcare world – including both mental health and addiction interventions, and both professional and para-professional encounters – can be analyzed from any number of perspectives, but I want to focus on three critical drivers of therapeutic interactions: traditional interventions that lack any real basis in evidence; therapeutic activities that originate in evidence-based standards; and the interpersonal dynamics between the people in the room.
During my years of clinical training and practice, I have listened to tapes of my own clinical hours and those of others I have supervised. I will never forget my graduate school experience of watching videotaped therapy sessions by three master therapists (Carl Rogers, Fritz Perls, and Albert Ellis) with Gloria, a mildly neurotic woman who then rated each of these clinicians on their value to her. I also remember my training in SUD treatment with a focus on the 12-Step program. This tradition-rich, evidence-free approach has helped many people. Peer counselors with serious mental illness have also amazed me with the power of their empathy, support, and practical wisdom.
However, I also remember the random quality of so many therapy discussions I have witnessed. We are taught as clinicians to have a therapeutic model in mind as we work with people, and this model presumably guides what we say. Yet a clinical encounter is just a discussion between two people, and there is a dynamic, unpredictable element to any discussion. It is driven by the level of comfort and engagement between the participants; by the unexpected answers, statements, and emotions that the clinician elicits and then tries, in the moment, to understand; and by the overall framework of an encounter in which one person is there to help another.
Do things change when we move from the complexity of human relationships to the dynamics of digital self-help tools? These web-based resources are generally based on empirically supported therapeutic models such as cognitive behavioral therapy. There are many such products on the market today, and all of them report significant clinical value in peer-reviewed publications. Should we conclude that the sole reason for the clinical improvement is the evidence-based technique used on the digital platform?
Most marketing claims by these technology companies would endorse that argument, but we know that factors within each person seeking help are critical to outcome (e.g., motivation, existing coping skills, comfort with the tools being introduced), and so there is no simple answer about why people do or do not benefit.
Some people are primed for an intervention, any intervention. Some people are primed for that particular intervention. Some people are closed to any intervention. You will look like an amazing healer if you only get the first type of client. You will seem pretty great if you are selling what clients are buying. You will look pretty awful if you only work with the people not ready to change. The point is that this is an interaction, and a focus just on the therapeutic “secret sauce” being sold is overly simplistic.
Measurement changes the equation
Discussions about clinical encounters, both in person and virtually, are too often stuck in these three dimensions, but clinical measurement can change the discussion and lead us to a new guidance system for tomorrow. Consider this: If therapists using cognitive behavioral therapy or interpersonal therapy or some other validated approach were all achieving an effect size of around 0.8, then why would we spend much time focused on fidelity to a specific model? This is what the evidence shows – no one model outperforms another – and yet many people are willing to die on the hill of promoting specific evidence-based practices.
Even if you accept this last point, there should be no reluctance to encourage master clinicians to articulate new interventions for helping people achieve better mental health and wellbeing. The message of Bruce Wampold’s analyses over the past two decades (The Great Psychotherapy Debate, 2001 and 2015) is that psychotherapy is “remarkably efficacious” as practiced today by clinicians with deep training and commitment to specific models of therapy. His suggestion is that clinicians should adhere to the validated clinical model that best fits their thinking and style.
The message might be stated another way: Trust the validated clinical model that you are using, but verify that each patient is progressing as expected. There is a vast research literature (begun by Michael Lambert) showing that clinicians commonly fail to detect which patients are deteriorating and are likely to drop out of treatment with a poor outcome. Measuring progress with patient self-report outcome assessment tools, continuously throughout the treatment episode, helps identify these high-risk dropouts. Furthermore, it does so in time for the clinician to develop an enhanced treatment plan to keep each patient engaged and on a path to recovery.
We help our patients in every way we can. We rely at times on traditional interventions with little empirical evidence; more commonly, on well-validated clinical practices; and, of course, we can never escape our ongoing judgment about how a complex and dynamic interaction is evolving. Behavioral healthcare is subjective and scientific, personal and prescribed, reactive and routine.
We should enter every therapeutic encounter with a sense of freedom to react as we see fit to every issue that is raised. Years of training prepare us for this sense of freedom. Yet we should also not be too arrogant to measure our results. We should always be asking how this patient is progressing right now. This is more important than the question of how one model of therapy is faring against another.
Let me end with a final analogy related to guidance systems. The person helping another is always making decisions on the spot with whatever guidance they can find. While the historical guidance for clinical work is fundamentally a training model, we might also avail ourselves of a data monitoring model. The pilot of an airplane flies the plane, but she uses all of the data available to make decisions. The clinician is no different. The problem is that we have generally not made valuable data available to clinicians. It would be like telling the pilot to fly at a certain altitude while expecting her training to have prepared her to know her altitude without the assistance of instruments. I don’t want to fly on that plane.
Ed Jones, PhD is senior vice president for the Institute for Health and Productivity Management.