Today, Open Health Policy is launching our inaugural podcast, featuring a conversation on the future of AI in healthcare. Join me and my colleague Matthew Mittelsteadt, with Sam Alburger as host, as we explore how AI's potential extends from revolutionary advancements you wouldn't think possible to less sexy, but just as beneficial, everyday tasks. Enjoy!
Transcript
Note: While transcripts are lightly edited, they are not rigorously proofed for accuracy. If you notice an error, feel free to reach out to mbjoerkheim@mercatus.gmu.edu
Sam: Welcome in. This is Sam Alburger, and I'll be your host today for a conversation on the present and future of AI in healthcare. I am joined by two esteemed scholars who should be able to sort out some of my pressing questions. First, we have Markus Bjoerkheim, a research fellow specializing in healthcare at the Mercatus Center. Also joining us from the Mercatus Center is Matthew Mittelsteadt, a research fellow focused on AI. Gentlemen, welcome.
Markus: Glad to be here.
Matthew: Thank you.
Sam: Let's jump into this. Markus, tell us a little bit about your work as a research fellow in the realm of health care.
Markus: My background is that I did my PhD in economics at George Mason University, where I studied nursing homes: essentially, how regulations and policies can prevent or cause the worst kinds of health outcomes, the accidents and adverse events for patients in nursing homes. Then I came here to Mercatus, started with the Open Health Project, and I've continued studying those kinds of questions in the realm of Medicaid.
Sam: Wonderful. Matt, AI is the most rapidly evolving field in the world right now. Tell us a little bit about your role as a research fellow in the space.
Matthew: As you said, this stuff is rapidly evolving. In terms of my role, in terms of what I'm trying to do, a lot of it's keeping up, because this stuff is changing constantly. A lot of it's also trying to figure out how this technology is transforming the wide, wide variety of industries, like health care, that this general-purpose set of technologies is touching. That breadth is what makes the policy questions hard to work on.
Overall, I would say that my big goal in terms of policy is to try to push policymakers to write good policies that specifically emphasize across-the-board AI diffusion. We have this technology; I don't want to see it sitting on shelves collecting dust in the form of a research paper. I want to see it actually being applied to real-world use cases, so it can start transforming people's lives as we've been talking about for decades now. That should be the emphasis of policy: making sure that this tech is, of course, used safely, but gets into people's hands so it can start transforming the world. That's largely what I'm trying to emphasize in my research.
Sam: Absolutely. Gentlemen, we're truly at ground zero when it comes to AI advancements, so let's start with the early stages of AI in health care. Currently, the craze with AI is in the form of LLMs, such as ChatGPT, and how they can comb through massive amounts of data. Markus, can you tell me about the project you're currently working on that takes advantage of this?
Markus: I can. This is a project where researchers like me usually use a natural experiment to try to find the cause and effect of some regulation or policy on people out in the real world. The great thing, as you mentioned with LLMs, is that they can go over huge amounts of text and find patterns in it. I'm working on a project where we use LLMs to identify the patterns that would allow researchers like me to study a policy and find out, "Hey, is this a policy that is helping people, or is it a costly policy that prevents healthcare providers from delivering good care, one we should look into getting rid of?"
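[For readers curious about the mechanics, here is a rough sketch of the kind of LLM pattern-finding Markus describes: pointing a model at a batch of policy documents and asking it to flag discrete policy changes. Everything below is hypothetical; the prompt, sample documents, and model name are illustrative assumptions, not the project's actual pipeline.]

```python
# Hypothetical sketch: use an LLM to flag policy documents that describe a
# discrete policy change, the raw material for a natural experiment.
# The prompt, sample documents, and model name are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

documents = [
    "Effective July 1, the state raised minimum nurse staffing ratios...",
    "The county renewed its standard facility licensing schedule...",
]

def flag_policy_change(text: str) -> str:
    """Ask the model whether a document describes a discrete policy change."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {
                "role": "system",
                "content": (
                    "You label regulatory documents. Answer YES if the text "
                    "describes a discrete policy change with a clear "
                    "effective date, otherwise NO, then give one sentence "
                    "of justification."
                ),
            },
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

for doc in documents:
    print(flag_policy_change(doc))
```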
Sam: Absolutely. Matt, the first AI-developed drug is currently in clinical trials. Can you tell us a little bit about this drug and why it's a major breakthrough for drug development?
Matthew: Just to give some context here, the same sorts of technologies that can be used to comb through policies and figure out whether or not they're working can, of course, be applied in health care to things like drugs that treat diseases. In recent years, we've seen some massive advancements in what AI can do in terms of figuring out how drugs work and how they impact the human body.
As a result of this, and this is pretty amazing, we're starting to see AI systems both develop drugs and target them at the human body, which is a critical piece of the puzzle. There's this company Insilico Medicine, a Chinese biotech firm, that actually developed the first AI-generated drug: the AI invented the molecule that will go into your body. It's also an AI-targeted drug, meaning the AI figured out, "Okay, we have this molecule. How does it interact with the human body? How does it play with your proteins? How might it impact this specific disease?"
They developed this drug for idiopathic pulmonary fibrosis, or IPF. This is a terminal disease, and it impacts roughly five million people globally at any given time. They invented this drug using artificial intelligence. Not only that, they invented it in three years' time, whereas, from what I understand, it traditionally takes about 6 to 10 years to develop a drug. Three years is really fast. You can imagine the cost savings. Not only that, you can imagine how much more quickly we can get drugs out to people to treat these diseases. There are so many conditions without cures today, and we now have this drug.
What's amazing is that this drug has now entered FDA Phase 2 clinical trials. It's actively being tested on humans. Really, what this should illustrate is not only the promise of artificial intelligence in domains like drug development, but the fact that this is a reality. This is in human trials right now, going into people's arms. We don't know if it's going to be approved or not; this is a test case for the technology's whole potential. But it really should show you that everything we've been calling promise for so long is now reality, and we need to start focusing on policies that can harness that reality.
Sam: Absolutely. Markus, healthcare outcomes are super important to you and your work. That's what companies are doing to develop drugs, but what can anybody do with ChatGPT as an individual? AI can be used to expedite and synthesize information for everyday people, such as building a workout plan or creating a diet. Can you tell me about these and other ways that AI can improve healthcare outcomes?
Markus: Great question, Sam. I think AI can help us a lot with healthcare outcomes, both in the healthcare system and, as you say, in our health behaviors: how we take care of ourselves on a regular basis. Most of my research is in the healthcare sector, so I'll start there. One thing that I'm very excited about is that AI can flag those times when doctors or nurses make potentially deadly mistakes.
In the healthcare sector, about 50,000 to 100,000 deaths occur every year from mistakes that doctors and nurses make. We obviously go to the doctor, to the hospital, to get better, but just like people in any other profession, doctors and nurses make mistakes. Sometimes they're deadly. If we can integrate AI into the workflow of our healthcare providers, it can flag that one time when the mistake is potentially deadly.
I like the analogy to airplanes. Today, they have what's called the ground proximity warning system, which basically tells the pilot, "You have lost track of where you are. In 60 seconds, you're about to fly into a mountain." Most pilots will never hear that, but the one time they do, they know exactly how to respond: pull the plane up. They will get an alarm in their ear that calls out, "Pull up, pull up."
If we integrate AI into the workflow of healthcare providers, it can flag those same instances: "This mistake is potentially deadly. You need to look into this right away. The patient's test result is back. We can save the patient, but the surgery needs to happen as soon as possible." Because when the doctor gets that result, things happen: the power goes out, someone walks in with another incident, their son or daughter calls. These things happen to everyone. The one time that happens, AI can flag it and say, "You need to remember this thing." That's one area where I am very optimistic about AI in healthcare.
Sam: That's wonderful. You touch on the fact that healthcare providers aren't right 100% of the time. It also brings up a tricky question, because AI, as it currently stands, is not right 100% of the time either. There's a tricky situation here regarding the ethics of AI, especially when it comes to healthcare. Either of you, could you tell me a little bit about the future of liability and accountability? Markus, I know we've talked previously about patient empathy and how that applies to AI.
Matthew: The liability question is a tricky one. I think overall, and this is going to be a trial-and-error thing, we have to figure out how to integrate these systems into our workflows and how doctors are going to use them. We need to figure out how to gear the system toward incentivizing people to reduce errors as much as possible. At the end of the day, what's key here is that AI systems, while they can flag cancer in radiological images, make diagnoses, and do X, Y, and Z, are tools. The person applying a tool to a patient needs to know full well how that tool works, what patterns of diagnostic error it has, and any blind spots it has, because the patient is not going to be informed about that. The doctor using the tool will have much more access to information about it. Therefore, the doctor should be reading up on when and when not to use these technologies and taking it upon themselves, in terms of liability, to control those potential failures.
The challenge is some people think that liability should fall onto companies, and the problem is that companies don't know exactly how these technologies are going to be used in the real world. They could design it for a very specific use case, but on the ground, a doctor could still end up using it for a different use case. That's not something the company can control, but that is something the doctor is actively controlling, and therefore that person who chooses when to apply these systems should take on liability.
Sam: Absolutely. And Markus?
Markus: What I would add is that I completely agree: if AI is going to actually make clinical decisions without any oversight by an actual doctor, there's a liability question that we need to think about. What seems to be a key point to me is that there's an awful lot of benefits on the path from where we are today to a stage where AI is making decisions autonomously.
Take self-driving cars. People are worried about what they're going to do when they have to crash. Are they going to crash this way or that way, into the two adults or the four children? Yes, that's a liability question, but there's an awful lot of benefits on the way there. Right now, the cars are braking if you're about to hit the car in front of you. It's the same with AI: it can assist in a lot of ways under our oversight. I think a lot of the benefits are going to come on the path to that state where AI is making decisions by itself. A lot of these liability questions come in down the road. There's a lot we can do with AI before we need to solve some of these difficult liability questions.
Matthew: I just want to add, in terms of the benefits here, that we need to make sure any liability laws we pass keep in mind that a certain amount of error should be acceptable, because we want these systems, in part, to work as a double-check.
The doctor makes a diagnosis, and then the AI reviews that diagnosis and offers its opinion as a second check to make sure the doctor has thought about X, Y, and Z. In that specific circumstance, the AI can be wrong, and we should expect that it will sometimes be wrong; we don't want liability questions to deter people from using it because it might occasionally give a wrong result. A doctor should be able to use this thing as a double-check so we can get better outcomes overall and accept certain mistakes along the way.
Markus: The relevant comparison is how often the doctors are wrong. We shouldn't expect the AI to be right 100% of the time; doctors are not right 100% of the time.
Matthew: Absolutely. On that note, I was actually just reading a story. This woman went in for a mammogram, and two doctors told her she did not have cancer. Then they ran the screens through an AI system, and the AI saw some little gray patch that turned out to be stage two cancer. Two doctors got this wrong. Two doctors, doing their due diligence by getting a second opinion on this very critical screening, still got it wrong. The AI filled in and got it right. That's the type of miss we're trying to prevent here: doctors get stuff wrong all the time, and we just need this double-check.
Sam: Absolutely. Markus, you had a fun tidbit on the empathy piece of AI. We think of AI as this robotic future, and for a lot of people AI is very scary, but you've found some research that may actually point to the contrary.
Markus: Yes. Initially, I think a lot of people thought, "Well, these tools might be helpful, but they're never going to be able to reproduce the human connection a doctor has with their patient." What turns out to be the case is that doctors are often not all that personable. They use a lot of doctor-speak; they talk in a way that's not always that empathetic. One study found that patients actually rate a well-designed chatbot as more empathetic than the doctors. That doesn't mean we should replace the doctor entirely, but it's clear that chatbots can already assist doctors in delivering messages to patients, or talk through questions patients think of after they've seen the doctor and received a diagnosis. "Oh, you have a follow-up question? Maybe a chatbot is the first place you should ask."
Sam: It's a wonderful nugget, and it's especially useful because, as we talked about previously, everybody has a MyChart now. You can receive your results, actually put those PDFs into something like ChatGPT, and have it read them. You just had an experience with this recently.
Markus: Yes, exactly. My beloved dog Uno got himself into a bottle of melatonin and ate the whole thing. The melatonin wasn't dangerous for him, but it was strawberry-flavored and had xylitol in it, which is dangerous for dogs, so we had to take Uno to the emergency vet. With my background, I was obviously worried for Uno, as anyone would be, but I also know how these healthcare systems work. I immediately knew it was worth getting a second opinion: was he being treated appropriately?
I called our regular vet. She didn't call us back; it could be because calling people and giving a second opinion isn't something you can bill for. So I asked the emergency vet for a PDF of all the clinical notes, test results, everything they had, and I gave it to Chat. Chat was perfectly able to guide me through all the questions I had along the way. Initially, it was, "Look, are we giving the appropriate treatment here? These are the results." It was comforting, actually, to hear, "Yes, these results are serious. It should be treated this way, and it is being treated that way."
The other thing I'll mention is that there's asymmetric information between doctors and patients. Doctors know what the results mean and what's best; patients don't. One great thing about LLMs is that they reduce this asymmetric-information problem, because you can say, "Hey, Chat, you are a clinical doctor specializing in this. Is this appropriate?" The next question I started asking Chat was, "Look, are we overtreating Uno? He received three plasma transfusions on the same day." I still don't know how much they cost, but I know there were a lot of zeros. It was also comforting to learn that those values had gone up a lot and needed to be taken care of; he really did need a lot of treatment.
That's one place where we might reduce some wasteful healthcare spending, if patients have this ability to double-check in both directions: are we doing too little? Are we doing too much? And patients going through their patient portal, downloading everything the doctors have, notes and test results, and feeding it to AI can help especially where regulations might prevent AI from being integrated on the provider side, like VA hospitals and other government-run healthcare facilities. My guess is they're going to be the last place.
Matthew: Probably, given current regulations, yes.
Markus: Probably, the last place where AI will be integrated. Patients can be empowered by AI here. I think that's also going to have a lot of benefits.
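[Here is a rough sketch of the patient-side workflow Markus describes: extract the text from a records PDF downloaded from a portal, then ask an LLM for a second read. The file name, prompt, and model are hypothetical placeholders, not what was actually used in Uno's case.]

```python
# Hypothetical sketch: pull the text out of a records PDF from a patient
# portal and ask an LLM for a second read. File name, prompt, and model
# are placeholders.
from openai import OpenAI
from pypdf import PdfReader

# Extract the text of the clinical notes and test results.
reader = PdfReader("clinical_notes.pdf")
notes = "\n".join(page.extract_text() or "" for page in reader.pages)

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "system",
            "content": (
                "You are a veterinary clinician. Review these notes and "
                "test results. Is the treatment described appropriate? "
                "Note anything that looks like over- or under-treatment."
            ),
        },
        {"role": "user", "content": notes},
    ],
)
print(response.choices[0].message.content)
```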
Sam: Absolutely. Is Uno home safe?
Markus: Thank God, he is back. It's like he's 10 years younger. He's powering around, patrolling the yard, chasing squirrels. Uno is back.
Sam: Wonderful. Matt, one final question for you, and it may be an ambitious one; I'm going to ask you to be a fortune teller here. LLMs are currently all the rage. We're finding incredible uses for them, and they're getting more advanced by the day. Beyond LLMs, where does the future of AI lie? Because AI and LLMs are not exactly the same thing, even though the general public very often equates them.
Matthew: Especially within health care, there are so many areas. Obviously, LLMs can help with things we just discussed: patient charts, interpreting them for people, providing second opinions, providing a 24-hour doctor, if you will, that could potentially analyze your test results and whatnot. But there are so many other automation cases this stuff can be used for. Triage, for example, in hospitals, whether it be for a dog or a human. Triage is one area in which stressed-out humans are probably not the best decision makers. ER docs and support staff are being flooded with patients all the time, patients who have different linguistic needs, who are yelling and screaming and doing all these crazy things. That's not a great environment for good decision making about who should receive care first.
AI systems, perhaps, and there's good evidence to suggest this, could do a much better job at this. If you get triage right, if you automate it well, you can save a lot of lives. Just to give you an example, there's a system called eStroke that was recently trialed in the National Health Service in Britain. What they found was that by automating stroke triage, they were able to cut the time to care by about an hour. As a result, the share of people who achieved functional independence after their hospital visit for the stroke increased from 16%, which is exceedingly low, to 48%, about half. That's crazy.
And this isn't some fantastical technology. This isn't an LLM or any of this cutting-edge stuff; it's more traditional technology focused on automation. But by integrating those types of things, we can really change people's lives. We should be thinking about that, too, in this conversation. It's not just ChatGPT. It's analyzing studies to improve policies. It's automating triage. It's developing new drugs. It's detecting cancer in diagnostic images. It's all of these things. We really need to think about AI in this holistic manner, because it's so diverse. If we take advantage of that full diversity, that's how we get this transformation.
Markus: One thing I would add is that some areas where we'll see a lot of benefits might not sound that sexy when we talk about them on a podcast. One example I'm excited about: doctors have to write pre-authorization requests to get the insurance company to pay for a lot of services. On average, doctors write 37 of these per week, at about five minutes each; that's roughly three hours a week. AI can write those, and the doctor can just have a quick look: "Yes, this is what it should say," and send them to the insurance company. If AI can save doctors even one minute on each of those, that's more time they can spend actually doing what they're good at and more time with patients.
Similarly, if AI writes these pre-authorization forms and sends them to the insurance company, the insurance company can have AI that reads and reviews them, and in a second the request can be approved. 80% of them are approved anyway. Instead of waiting three to four weeks for the request to reach an actual human's desk before it gets approved, this can be much quicker, so patients can get that service, that treatment, faster.
I think there are a lot of use cases, and some of them might not sound that sexy, but they will produce a lot of benefits by giving doctors, the most highly skilled people, more time to do what their skills are needed for, not menial tasks that virtually anyone could do.
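[Here is a rough sketch of the pre-authorization drafting step Markus describes. The field names, prompt, and model are hypothetical; as he notes, the draft goes to the doctor for a quick review before anything is sent.]

```python
# Hypothetical sketch: draft a pre-authorization request with an LLM for a
# clinician to review before sending. Field names, prompt, and model are
# illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

request_fields = {
    "patient": "Jane Doe",
    "diagnosis": "Idiopathic pulmonary fibrosis",
    "requested_service": "High-resolution chest CT",
    "justification": "Progressive dyspnea despite current therapy",
}

prompt = (
    "Draft a concise pre-authorization letter to an insurer using these "
    f"fields: {request_fields}. Flag any missing information instead of "
    "inventing it."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)

draft = response.choices[0].message.content
print(draft)  # the doctor reviews and approves the draft before it is sent
```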
Matthew: To really emphasize this: I've talked about things like drug development, but it is the boring stuff that really will matter, especially with doctors, as Markus suggested. We actually have a pretty large labor shortage right now. If paperwork eats up half of a doctor's time and we can unlock it, that's effectively a doubling of clinical labor. I mention to people frequently: when you think of what a doctor does, you think of someone at the bedside injecting things and whatnot. What a doctor actually does is paperwork, and we need to change that. AI could go a long way toward fixing that problem.
Sam: Absolutely. Thank you both so much for enlightening us on the present and future of AI in healthcare. I look forward to seeing more great work from both of you in the future.
Markus: Thank you, Sam.
Matthew: Thank you.