Even as the stigma around seeking mental health care begins to subside, one huge barrier remains for many people: actually finding a doctor with time to treat them.

While suicide rates spiked back up in 2021, the United States is expected to be short 15,000 to 30,000 psychiatrists nationwide by 2024, with some estimates projecting a shortfall of more than 30,000. But OM1, a medical data company based in Boston, has built an AI-based platform called PhenOM that could help streamline the process for patients to get the care they need, when they need it.

Dr. Carl Marci, chief psychiatrist and managing director of mental health and neuroscience at OM1, joined GBH’s All Things Considered host Arun Rath to break down the new technology and shine a light on the role artificial intelligence could be playing in the future of mental health care. What follows is a lightly edited transcript.

Arun Rath: First off, give us a broad sense of how this new technology works. What does PhenOM do, and how does it do it?

Dr. Carl Marci: I think the first step in any really good application of artificial intelligence in health care is great data. We get data from a variety of sources — often from medical and pharmacy claims, we get data from the government — but importantly, we also get data from electronic health records of patients. We're very, very careful to de-identify the data, to make sure it's handled in a very secure way, and to organize it in the cloud. Once that's done and the data is in a common data architecture — which means we can actually apply some of these tools — then the fun begins, and we can really begin to leverage that data for patient care.
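
OM1 hasn't published PhenOM's internals, but the pattern Marci describes (strip direct identifiers, replace the patient ID with an irreversible token, and map every source into one shared schema) can be sketched in a few lines of Python. Every field name and value below is invented for illustration; none of it is OM1's actual schema.

```python
import hashlib

# Illustrative sketch only: field names, sources and the salt handling are
# hypothetical, not OM1's actual pipeline or schema.
SALT = "example-secret"  # a real pipeline would manage this secret carefully

def pseudonymize(patient_id: str) -> str:
    """Replace a real patient ID with an irreversible token."""
    return hashlib.sha256((SALT + patient_id).encode()).hexdigest()[:16]

def to_common_record(source: str, raw: dict) -> dict:
    """Map a raw record from any source into one shared schema,
    dropping direct identifiers along the way."""
    return {
        "patient_token": pseudonymize(raw["patient_id"]),
        "source": source,                             # e.g. "claims", "ehr"
        "event_date": raw["date"],
        "code": raw.get("icd10") or raw.get("ndc"),   # diagnosis or drug code
        "note_text": raw.get("note", ""),             # free-text clinical note
    }

claims_row = {"patient_id": "12345", "date": "2023-01-10", "icd10": "F33.1"}
ehr_row = {"patient_id": "12345", "date": "2023-02-02", "note": "Reports low mood..."}

records = [to_common_record("claims", claims_row), to_common_record("ehr", ehr_row)]
# Both rows now share one patient_token and one schema, ready for modeling.
```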

Rath: Is this a diagnostic aid, ultimately?

Marci: Well, there are many applications. One of them is to help identify patients who are at risk for either worsening disease or what's often referred to as treatment resistance.

In the space of depression, you may be aware that often our medications, as good as they are, don’t work for everyone. So we’re using our data to begin to subtype different types of depression and identify patients who may go on to have treatment resistance, and then they could benefit from some of the new treatments that are in our pipeline that we hope will help patients who have failed other types of interventions.

Rath: What are some of the other applications?

Marci: One of my favorite applications, and one I'm very proud of, is our ability to amplify endpoints. What does that mean? Well, in the real world, it's very hard to get patients to fill out surveys, and it's often hard to get clinicians to ask patients to fill them out. One of the challenges in mental health is measurement. We don't have a culture in this country — in psychiatry, in mental health — of assessing patients with the types of tools that are available.

So we've developed a technology that can model a clinician's notes in the chart and estimate what's essentially a disease progression score. People might be familiar with the PHQ-9 [the Patient Health Questionnaire-9]. You've probably filled one out at your primary care doctor's office, where they ask you a series of questions screening you for depression. We're able to do that with our artificial intelligence tool and generate a score to fill in gaps in the patient's journey, and that allows us to do a better job researching our data sets and identifying who's going to respond to what type of treatment.
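
OM1's model is proprietary, but the general technique Marci describes (supervised learning from note text to a severity score) can be illustrated with a minimal scikit-learn sketch. The notes and scores below are invented, and a production system would train on far more data with a far more capable model.

```python
# A minimal sketch of the general technique: learn a mapping from note text
# to a PHQ-9-style severity score (0-27). The notes and scores here are
# invented for illustration; this is not OM1's actual model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

train_notes = [
    "Patient reports improved sleep and mood, engaging at work.",
    "Persistent low mood, anhedonia, poor concentration, early waking.",
    "Mood stable on current dose, denies hopelessness.",
    "Worsening depression, passive suicidal ideation, not eating.",
]
train_scores = [4.0, 18.0, 6.0, 22.0]  # clinician-recorded PHQ-9 values

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), Ridge(alpha=1.0))
model.fit(train_notes, train_scores)

# Estimate a score for a visit where no questionnaire was filled out,
# filling a gap in the patient's measured trajectory.
new_note = "Reports low energy and poor sleep but attending therapy weekly."
estimated_phq9 = float(model.predict([new_note])[0])
print(f"Estimated PHQ-9: {estimated_phq9:.1f}")
```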

"[We're using our data to identify who] could benefit from some of the new treatments that are in our pipeline that we hope will help patients who have failed other types of interventions."
Dr. Carl Marci, chief psychiatrist at OM1

Rath: You reminded me of something a psychiatrist said to me a long time ago about the difficulty of the field: they don't have the diagnostic tools that other doctors do. There are no X-rays, no ways of seeing inside the body, for the kinds of illness they treat. It sounds like the data could provide that kind of window?

Marci: I think that's exactly the right way to think about it. We call it PhenOM because what it really is doing is phenotyping. A phenotype is the behavioral expression of a disease or an illness. So we use our machine learning and artificial intelligence tools to take literally billions of data points from de-identified patients and begin to parse them into different groups.
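
As a toy illustration of phenotyping, one common approach is to cluster patients on structured features. The sketch below uses k-means, though the features, cluster count and algorithm choice are all assumptions for illustration, not necessarily what PhenOM does.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical features per patient: [estimated PHQ-9, failed medications,
# ER visits in the past year, age at first diagnosis]. Values are invented.
patients = np.array([
    [20, 3, 2, 19],
    [18, 4, 1, 22],
    [ 6, 0, 0, 45],
    [ 8, 1, 0, 50],
    [22, 5, 3, 17],
])

# Standardize features so no single scale dominates, then cluster.
X = StandardScaler().fit_transform(patients)
phenotypes = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(phenotypes)  # e.g. [1 1 0 0 1]: an early-onset, treatment-resistant
                   # subgroup versus a milder, later-onset course
```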

Then, the next step is to begin to look and see who responds to what types of treatment so that, in the near future, we can have clinicians sitting at the bedside put a few parameters into the computer based on the history the patient gives us, or even have the tool read the clinical note, and get some intelligent feedback about what that patient is most likely to respond to. We haven't been able to do that in mental health, and that's one of the things I'm most excited about.
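
In its simplest form, that "who responds to what" step could be a classifier trained on historical records of patient features, treatment and outcome. Again, the data and feature names below are invented; this is a sketch of the general idea, not PhenOM's implementation.

```python
from sklearn.ensemble import RandomForestClassifier

# Invented training data: each row is [estimated PHQ-9, failed medications,
# phenotype id, candidate treatment] where treatment 0 = SSRI, 1 = SNRI,
# 2 = augmentation; the label is 1 if the patient responded, else 0.
X_train = [
    [18, 3, 1, 0], [18, 3, 1, 2], [7, 0, 0, 0],
    [20, 4, 1, 1], [9, 1, 0, 0], [21, 5, 1, 2],
]
y_train = [0, 1, 1, 0, 1, 1]

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# At the bedside: score each candidate treatment for a new patient.
new_patient = [19, 3, 1]
for treatment in (0, 1, 2):
    p = clf.predict_proba([new_patient + [treatment]])[0][1]
    print(f"treatment {treatment}: estimated response probability {p:.2f}")
```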

Rath: You know, a lot of people will come into this thinking about AI and thinking of autonomous things — like a psychiatry bot that steps in. But this is more about augmenting what humans are doing.

Marci: In this case, we’re really trying to create tools for the bedside in clinics and within existing care models.

Now, what you're referring to, and what I think everyone is excited about, are these large language models, like ChatGPT, and what kind of role they could have in mental health. There are some exciting possibilities, and for me, the idea of using an artificial bot that has natural language stands out in a couple of applications.

One of them is to fill in the gaps in care. You know, if I'm lucky, when I'm in a clinic, I will see a patient every two weeks — more typically, every six, eight or 12 weeks. So there's a lot of time in between those encounters with me. If we had a tool that we trusted to interact with patients, assess them, give some basic advice, encourage them to change their behaviors, apply some of the skills we've been working on and take their medication, I think we would see outcomes get a lot better.

Rath: So, take that first stage where you're filling out those forms — and we know that people sometimes don't fill them out that well — you could have some kind of interface that asks you questions in a better way.

Marci: You also could have an interface to remind you to fill it out. One of the biggest challenges we have is just reminding people that they haven’t filled it out, and encouraging them and doing that in an engaging and empathic and natural way.

We know that face-to-face interactions create and generate a therapeutic response. So what I'm optimistic about is that computers can get to the place where they can augment and complement human interventions — not replace them — and fill in some of the gaps in care and collect data that a clinician can then use to get better outcomes.

Rath: How soon do you expect that people will be seeing this technology at their own doctor’s offices?

Marci: Well, if history is any guide, I think there’s going to be a lot of hype over the next one to two years, and then reality will set in. What I encourage everyone to think about and realize is that any of these models are really only as good as the data they’re based on.

What I worry about today, with rushing to use large language models like ChatGPT clinically, is that they are trained on both good information and misinformation. God forbid someone who, for example, is thinking about suicide interacts with a tool like this, and it mistakenly gives them instructions for how to commit suicide. That would be a tragedy.

So, as we're developing these models, we also need to work with the government and think about the equivalent of the Food and Drug Administration's evaluation of new medications for evaluating devices and tools like this, or else we probably will see some bad outcomes, and that could spoil it for everyone.