Artificial Intelligence Brings Potential—and Challenges—to Behavioral Health, Addiction Treatment

February 21, 2020

Artificial intelligence offers the potential to reshape behavioral healthcare and addiction treatment in the coming years. Are providers, payers and regulators prepared to keep pace?

Alaap B. Shah, a member of the firm in Epstein Becker Green’s Health Care and Life Sciences practice, recently spoke with BHE about the emerging role of AI in behavioral healthcare and addiction treatment, potential legal and regulatory changes, and how providers can leverage their use of AI to maximize reimbursement.

Editor’s note: This interview has been edited for length and clarity.

How would you describe the role artificial intelligence is currently playing in behavioral healthcare and addiction treatment?

There has obviously been a huge surge in interest in leveraging artificial intelligence to solve problems across healthcare, and behavioral health is certainly not an exception. A lot of times, what people are trying to do with artificial intelligence in behavioral health depends on the segment you are trying to impact. Some folks are of the mindset that we need to be empowering physicians to do their jobs smarter, more efficiently and more effectively to reduce behavioral health issues, whether it’s addiction management, addiction prevention, suicide prevention or other issues people may have. Others are taking the tack of leveraging artificial intelligence more from a direct-to-patient perspective. They’re of the mindset that the old paradigm of how we’ve taken care of people with behavioral health or addiction issues is broken: it’s too slow and too reactive. And when I say "broken" or "reactive," I’m referring to things that are geared toward inpatient settings, where the person has already gone through their traumatic event. They already had their overdose, or they’re having the psychiatric issue that has led them to the hospital door, or it’s something in the recovery phase after they’ve already gone through a traumatic event.

All of this is a little too late in some people’s view. What some people are trying to do with artificial intelligence is get ahead of that process and disrupt it in some manner and say we can discover these issues much earlier. Perhaps we shouldn’t wait until someone shows up at the hospital door or has to go through recovery programs after the fact. Perhaps we can detect whether they have suicidal tendencies earlier or help them track their drug use and alert them that they’re taking too many opioids and are at risk of becoming addicted. There are lots of ways people are trying to inject artificial intelligence through the continuum of care.

Are the laws and regulations around healthcare that are currently in place capable of accounting for the implementation of artificial intelligence? Do you foresee more legislation and regulatory changes on the horizon?

It’s a big question. Generally speaking, there are no direct regulations around artificial intelligence to govern its use in healthcare specifically and certainly not in behavioral health. There are historical laws and regulations that govern use of information in the healthcare context—privacy rules like HIPAA on the federal level or some state laws that govern privacy and security that would be germane to AI operating appropriately. Any providers leveraging patient information in the context of AI would need to navigate those laws.

There are also medical malpractice laws. To the extent that providers are leveraging artificial intelligence to help with clinical decision making, there are still questions about who bears the risk. Is it going to be the provider? The developer of the artificial intelligence? That’s still being sorted out, oftentimes contractually between the vendors and the providers using the technologies.

I think there is still a lot of thought that needs to be put into governing the use of AI in healthcare. Right now, there are people talking about these things on Capitol Hill. There are people thinking about these things at the state level. But nothing has come to fruition, and that’s because the technology development is outpacing lawmakers’ ability to regulate it. There are a few things that have emerged as consensus when it comes to making regulations around this. For one, you need to strike a balance between regulating foreseeable risks and being out of the way enough to spur innovation. Regulators are grappling with that line right now. I don’t think anyone has a good answer quite yet. The American Health Lawyers Association is convening a group of industry experts from the public sector, private sector and academia in March to have a discussion about this specific issue. How do we regulate artificial intelligence in healthcare? What are the risks we see? What can we do to mitigate risk? It’s an ongoing discussion.

Let’s talk reimbursement. Are payers up to speed on how AI is starting to be used specifically around behavioral health and addiction treatment? From the provider perspective, are there strategies you would recommend for maximizing reimbursement?

AI is not a secret to the insurance sector. They’re already using AI on their side of the house. They’re leveraging AI for lots of things like utilization management and prior authorization. When it comes to providers leveraging AI to provide care, payers are not necessarily reimbursing for the use of AI itself. Rather, they’re reimbursing on a fee-for-service basis, or, as the industry moves toward value-based payment models, that’s where AI can make a dent to the extent that payers adopt those models. Artificial intelligence can theoretically improve efficiency and quality, and make the whole healthcare endeavor cheaper. It’s the old adage "better, faster, cheaper: you pick two." The artificial intelligence folks are saying we can get all three with AI.

That’s ambitious.

I think payers want to have that happen. It means their bottom line is better off. But the jury is still out on "better." I think we can get faster and cheaper, but better is contingent on the volume and quality of the data being used to train these algorithms and ultimately to run them.

Are there specific use cases where you have seen AI make an impact in behavioral health or addiction treatment to this point?

One use case I’ve heard about is using AI to help people track their opioid use. It’s a way to say that you’ve gotten a legitimate prescription to deal with a legitimate pain issue. You’re taking the prescriptions, but as we know, opioid use can quickly become overuse or misuse. So, there is AI being developed to help people track those medications, so when they get foggy on the path of taking those meds, they’ll be able to go back and get reports to see how their usage is trending and get alerts to the extent that AI detects the person might be misusing the prescription.
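As a very rough illustration of that kind of tracking logic, the sketch below logs dose times and raises an alert when recorded usage exceeds the prescribed daily maximum. The threshold rule, function names and data shapes are hypothetical assumptions for illustration; a real product would presumably rely on richer, learned usage patterns rather than a single fixed limit.

    from datetime import datetime, timedelta

    # Hypothetical sketch of a prescription-tracking alert.
    # The 24-hour window and dose limit are illustrative assumptions.

    def doses_in_last_24h(dose_times, now):
        """Count recorded doses within the trailing 24-hour window."""
        window_start = now - timedelta(hours=24)
        return sum(1 for t in dose_times if t >= window_start)

    def check_usage(dose_times, prescribed_daily_max, now=None):
        """Return a simple status message comparing usage to the prescription."""
        now = now or datetime.now()
        taken = doses_in_last_24h(dose_times, now)
        if taken > prescribed_daily_max:
            return f"Alert: {taken} doses in the last 24h exceeds the prescribed max of {prescribed_daily_max}."
        return f"OK: {taken} of {prescribed_daily_max} doses taken in the last 24h."

    # Example: a patient prescribed up to 4 doses per day who has logged 5.
    logged = [datetime.now() - timedelta(hours=h) for h in (1, 4, 8, 12, 20)]
    print(check_usage(logged, prescribed_daily_max=4))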

Another one in the addiction space: when people are recovering from addiction, peer groups are often needed to create a stable culture around the person’s recovery. With that comes good stuff—the right peers who can support you through the process. But if you get into the wrong peer group, you may face triggers that lead to relapse, which leads to negative consequences. Striking the right balance of who is in that peer group is always a challenge. AI is being leveraged to solve that problem. Looking at the characteristics of patients in a recovery program and matching people based on certain attributes is something that has been shown to be valuable to patients and their recovery, and it can be powered by data and AI.
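To make the matching idea concrete, here is a minimal, hypothetical sketch that scores pairwise similarity across a few patient attributes and proposes the most compatible peer. The attributes and weights are invented for illustration and are not drawn from any actual program.

    # Hypothetical sketch of attribute-based peer matching; attributes and
    # weights are illustrative assumptions, not a real program's criteria.

    def similarity(a, b, weights):
        """Weighted count of attribute values shared by two patients."""
        return sum(w for key, w in weights.items() if a.get(key) == b.get(key))

    def best_peer(patient, candidates, weights):
        """Return the candidate with the highest similarity score."""
        return max(candidates, key=lambda c: similarity(patient, c, weights))

    weights = {"age_band": 1.0, "substance": 2.0, "recovery_stage": 1.5, "interests": 0.5}
    patient = {"id": "p1", "age_band": "30s", "substance": "opioids", "recovery_stage": "early", "interests": "music"}
    candidates = [
        {"id": "p2", "age_band": "30s", "substance": "opioids", "recovery_stage": "early", "interests": "sports"},
        {"id": "p3", "age_band": "50s", "substance": "alcohol", "recovery_stage": "late", "interests": "music"},
    ]
    print(best_peer(patient, candidates, weights)["id"])  # "p2" under these assumed weights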

And then, of course, there are predictive analytics. Let’s gather a bunch of attributes about a person, often in the context of depression and addiction, and triangulate across those attributes to see if we can predict whether the person is at risk of addiction or relapse. Let’s see if we can predict whether this person could become suicidal. There are studies out there using artificial intelligence to prove this out, and it has been pretty successful. There’s a lot of headway being made. These things have to be scaled over time; it’s small-population research at this point. But certainly, people are making progress on this front. If we can start to mature these AI algorithms and get them to scale, they may enable interventions before someone becomes addicted or attempts suicide and ends up at the hospital. There’s a lot of promise, but we’re still in the early days.
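For a sense of what such a predictive model can look like, here is a minimal sketch that fits a logistic regression to made-up attribute data and estimates relapse risk for a new patient. The features, labels and use of scikit-learn are assumptions for illustration; the studies mentioned above did not necessarily use this approach.

    # Hypothetical sketch of attribute-based risk prediction; the features and
    # training data are fabricated for illustration only.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Each row: [prior_overdoses, depression_score, days_since_last_use, age]
    X = np.array([
        [0, 2, 300, 45],
        [1, 7,  20, 29],
        [0, 4, 120, 37],
        [2, 9,   5, 24],
        [0, 1, 400, 52],
        [1, 6,  15, 31],
    ])
    y = np.array([0, 1, 0, 1, 0, 1])  # 1 = relapsed within some follow-up window

    model = LogisticRegression().fit(X, y)

    # Estimated relapse risk for a new, hypothetical patient.
    new_patient = np.array([[1, 8, 10, 27]])
    print(f"Estimated relapse risk: {model.predict_proba(new_patient)[0, 1]:.2f}")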