ICYMI: At Hearing, Warren Calls Out Medicare Advantage Insurers for Using AI to Deny Care and Boost Profits
Warren: “Without significant guardrails in place, these (AI) algorithms will continue to harm patients while padding the private insurers’ profits.”
Washington, D.C. — At a hearing of the Senate Committee on Finance, U.S. Senator Elizabeth Warren (D-Mass.) questioned Dr. Michelle Mello, professor of Health Policy and of Law at Stanford University, and Dr. Ziad Obermeyer, Blue Cross of California Distinguished Professor at the University of California, Berkeley, on the use of AI algorithms by private Medicare Advantage (MA) insurers to make coverage decisions. Senator Warren expressed deep concerns about findings from recent investigations revealing that insurance companies in MA are using AI algorithms to systematically deny medically necessary care to patients in need, while simultaneously boosting corporate profits.
In response to Senator Warren’s questions, the witnesses affirmed that federal law requires insurance companies participating in MA to comply with Medicare coverage rules, even if they use AI algorithms. The witnesses also discussed how AI algorithms may be reinforcing structural inequalities and preventing patients, many from underserved populations, from accessing the lifesaving care they need.
Senator Warren concluded by calling on the Centers for Medicare & Medicaid Services (CMS) to ban companies from using AI in their MA plans until they can verify these algorithms comply with Medicare coverage guidelines.
Transcript: Artificial Intelligence and Health Care: Promise and Pitfalls
U.S. Senate Finance Committee
Thursday, February 8, 2024
Senator Warren: Alright, thank you.
So, over 31 million Americans are enrolled in Medicare Advantage, or M-A, the program that allows private, for-profit insurers to offer Medicare coverage. Now, under federal law, these private insurers are required to cover all Medicare Part A and Part B services.
But in recent years, government watchdogs have found that private insurers are routinely delaying and denying care because doing so boosts their profits. In 2019, the Health and Human Services Inspector General found that nearly one in five payment denials by insurers in M-A violated Medicare coverage rules, meaning seniors were unlawfully denied access to services that they were, by law, entitled to.
Some of the largest insurers that offer M-A are now relying on flawed artificial intelligence tools, like predictive algorithms, to scale up their efforts to deny coverage to seniors. These algorithms sift through millions of medical records to determine the level of care that the algorithm thinks patients need.
So, Professor Mello, you’re an expert on AI and health policy. Let’s start with an easy question: does federal law require all insurance companies to follow Medicare coverage guidelines, even if they are using AI algorithms to determine coverage?
Michelle M. Mello, JD, PhD, Professor of Health Policy and of Law, Stanford University: Yes.
Senator Warren: Yes. So the law is not suspended just because you used AI. I just want to underscore that. Because it is clear that these companies are not playing by the rules.
So, take UnitedHealthcare, which covers more beneficiaries in M-A than any other insurance company. In 2020, UnitedHealthcare bought NaviHealth, a company that sells its AI services to insurance companies in order to help them make these coverage decisions.
Last year, an investigation revealed that UnitedHealthcare had pressured employees to strictly follow the NaviHealth algorithm’s determinations, leading these human beings to systematically deny care at skilled nursing facilities, even when those decisions were against doctors’ orders.
Dr. Obermeyer, you’ve conducted extensive research on how algorithms are used in health care. Can you talk just a little bit about the dangers that come from solely relying on AI algorithms to make these coverage decisions?
Ziad Obermeyer, MD, Associate Professor and Blue Cross of California Distinguished Professor, University of California, Berkeley: In the case you mentioned, like in all other cases, AI learns from historical data. So it trawls through those millions of records and it sees, for example, that there are some privileged people with great insurance who probably stay in nursing homes for longer than they should. And there are also vulnerable, underinsured people who are often kicked out too early. And rather than undoing that problem, the AI reinforces it and encodes it as policy. And I think that’s very contrary to the spirit of medical utilization review.
And it’s also a huge missed opportunity, because I think well-designed AI could do much better. It could look at the patient's x-ray. It could look at the public transportation in their neighborhood. It could look at the layout of their house. It could integrate all those things into a far better judgment than a doctor is able to make about who needs to be in that nursing home and who doesn’t.
Senator Warren: So it’s a really important point that you make about how it takes the information that tells us about bad practices and accelerates it. You know, according to one investigation, for some seniors these AI denials led to quote “amputations, fast-spreading cancers, and other devastating diagnoses.”
I appreciate that CMS has now finalized a rule to increase transparency requirements on insurers’ AI systems and require doctors to verify coverage decisions – but I think we need a lot more to protect patients here.
So, if I could come back to you, Professor Mello, in addition to the rule that the agency has just finalized, what measures do you think that CMS should take to ensure that private insurers are not leveraging AI tools to unlawfully deny care?
Professor Mello: Thank you for the question.
I think it’s really important to look, to see whether they are given the incentives involved. And I was very heartened to see in the FAQs released this week on that final rule.
CMS plans to beef up its audits in 2024 and specifically look at these denials. That seems extremely important. But beyond that, I think additional clarification is needed to the plans about what it means to use algorithms properly or improperly. For example, for electronic health records, it didn’t just say “make meaningful use of those records,” it laid out standards for what meaningful use was.
Senator Warren: So, I think the point here is we need guardrails. And without significant guardrails in place, these algorithms, as you put it Dr. Obermeyer, are going to accelerate the problems that we’ve got and pad private insurers’ profits, which gives them even more incentive to use AI in this way.
Until CMS can verify that AI algorithms reliably adhere to Medicare coverage standards, as required by law, my view on this is: CMS should prohibit insurance companies from using them in their MA plans for coverage decisions. They’ve got to prove they work before they put them in place.
Thank you, Mr. Chairman.
###