
Inhumane Algorithm-Driven Hospital Care Could Kill You

How insurance companies are using artificial intelligence (AI) to deny care for elderly patients.

A devastating report from STAT News details how AI-powered decision-making tools are being used by Medicare Advantage plans to deny life-saving medical care for older patients. Patients who fight these decisions often die before they get through the appeal process. The tragic stories of patients unable to walk or stand being kicked out of recovery programs at nursing homes are starting to mount, providing another dire warning about how we integrate AI systems into our society.

Consider the story of Frances Walter, an 85-year-old Wisconsin woman who shattered her left shoulder. Her Medicare Advantage insurer, Security Health Plan, used an algorithm to predict that Frances would make a rapid recovery, needing just 16.6 days in the nursing home. Security Health Plan diligently cut off payments on day 17. Yet medical notes showed that Frances’ pain was at a maximum, and she could not move or dress herself without help. It took more than a year and a significant chunk of Frances’ life savings for her to be paid the thousands of dollars she was owed for three more weeks of treatment.

Medicare Advantage plans are offered by private companies; Medicare pays these companies to cover a patient’s Medicare benefits. These plans have become highly profitable for insurers because more and more Medicare patients are seeking plans that offer lower premiums and prescription drug coverage. The downside is that insurers have more leeway to deny or restrict services.

AI and machine learning have allowed these insurers to nickel and dime patients at a whole new level. In fact, as STAT reports, a whole industry has formed around developing algorithms to predict how many hours of therapy patients will need and exactly when they will be able to leave a hospital or nursing home. The AI tools used to make these predictions have become so important that the insurers are acquiring the companies that make them. UnitedHealth Group, Elevance, Cigna, and CVS Health (which owns Aetna) have all made such purchases in recent years.

Yet STAT’s investigation concluded that, “for all of AI’s power to crunch data, insurers with huge financial interests are leveraging it to help make life-altering decisions with little independent oversight.” Healthcare providers report that medical care routinely covered in traditional Medicare is increasingly being denied in Medicare Advantage plans.

NaviHealth, owned by UnitedHealth Group, created one of the more notorious AI models, nH Predict. It uses a person’s diagnosis, age, living situation, and physical function to find similar individuals in a massive database. It then generates an assessment of the patient’s mobility and cognitive capacity, along with a “down-to-the-minute” prediction of their medical needs, estimated length of stay, and target discharge date.
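
STAT describes nH Predict as a similarity-based tool: it matches a new patient against comparable cases in a large historical database and projects their care needs from those matches. To make the mechanics concrete, here is a minimal, purely illustrative sketch of that kind of approach, written in Python as a toy nearest-neighbors estimate. The patient fields, scores, and records are invented, and nH Predict’s actual inputs, data, and model are proprietary, so this should be read as a rough analogy rather than the real algorithm.

```python
# Purely illustrative sketch: a toy k-nearest-neighbors length-of-stay estimate.
# The fields, scores, and records below are invented for illustration; this is
# NOT NaviHealth's nH Predict, whose data and internals are not public.

from math import dist

# Hypothetical historical records: (age, mobility_score, cognition_score, days_in_facility)
HISTORY = [
    (82, 3, 4, 18),
    (85, 2, 3, 24),
    (79, 4, 5, 12),
    (88, 1, 2, 31),
    (84, 2, 4, 21),
    (90, 1, 3, 28),
]

def predict_length_of_stay(age, mobility, cognition, k=3):
    """Average the stays of the k most similar historical patients."""
    ranked = sorted(HISTORY, key=lambda rec: dist((age, mobility, cognition), rec[:3]))
    nearest = ranked[:k]
    return sum(rec[3] for rec in nearest) / k

if __name__ == "__main__":
    # An 85-year-old with a low mobility score and a middling cognition score.
    estimate = predict_length_of_stay(85, 2, 3)
    print(f"Estimated stay: {estimate:.1f} days")  # a statistical composite, not the patient
```

The STAT investigation’s point is not that averaging over similar cases is inherently wrong, but that when an estimate like this becomes a hard payment cutoff—as with Frances’ 16.6 days—the actual patient’s condition no longer drives the decision.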

The STAT article details some of the results we get when we rely on algorithms to decide on our medical care:

Patients with stroke complications whose symptoms were so severe they needed care from multiple specialists were getting blocked from stays in rehabilitation hospitals. Amputees were denied access to care meant to help them recover from surgeries and learn to live without their limbs. And efforts to reverse what seemed to be bad decisions were going nowhere.

The authors continue, stating that between 2020 and 2022, the number of appeals filed to contest Medicare Advantage denials increased 58 percent, per a federal database.

A couple of years ago, we pointed out some of the dangers of relying on “medicine by algorithm.” While AI systems are incredibly powerful and have the potential to revolutionize certain aspects of our lives, they are far from panaceas and, as we see in the above examples, can produce appalling outcomes.

For example, the University of Pittsburgh Medical Center used an AI model to evaluate the risk of death of patients arriving in its emergency department. The model told researchers at the center that mortality decreased when patients were 100 years old or had a diagnosis of pneumonia. Sound ridiculous? It is. Rather than these patients actually being at low risk of death, their risk was so high that they were immediately given antibiotics before they were even registered in the electronic medical record, throwing off the AI’s analysis and producing an absurd conclusion. Not to mention the unnecessary use of antibiotics, a key driver of antimicrobial resistance.
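
The failure here is a classic case of confounding by treatment: the sickest patients were treated before their data ever reached the record, so the data made them look safe. The short, hypothetical Python simulation below, with invented numbers rather than UPMC data, shows how a naive comparison of recorded outcomes can make the highest-risk group appear to be the lowest-risk one.

```python
# Hypothetical simulation (invented numbers, not UPMC data) of confounding by
# treatment: the sickest patients are treated immediately, before registration,
# so a naive look at the recorded outcomes makes them appear LOW risk.

import random

random.seed(0)

def simulate_patient():
    severe = random.random() < 0.3            # assume 30% arrive severely ill
    treated_on_arrival = severe               # the sickest get antibiotics at once
    base_risk = 0.40 if severe else 0.05      # assumed untreated mortality risks
    risk = base_risk * (0.05 if treated_on_arrival else 1.0)  # assumed treatment effect
    died = random.random() < risk
    return treated_on_arrival, died

cohort = [simulate_patient() for _ in range(10_000)]

def mortality(group):
    return sum(died for _, died in group) / len(group)

sickest = [p for p in cohort if p[0]]
others = [p for p in cohort if not p[0]]

print(f"Recorded mortality, sickest (treated on arrival): {mortality(sickest):.1%}")
print(f"Recorded mortality, everyone else:                {mortality(others):.1%}")
# The sickest group shows the LOWER recorded mortality, because immediate treatment
# masks their true risk -- exactly the trap a naive model falls into.
```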

An infamous 2019 study found that a commercial algorithm widely used by health organizations was biased against black patients. The algorithm was designed to determine which patients were eligible for extra care, and it gave higher priority to white patients. The developers had removed race from the data used by the AI to make decisions, but the algorithm selected healthcare spending as one of the factors it used. Less money is spent on black patients with the same level of need as white patients, causing the algorithm to conclude that black patients were less sick.
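
The mechanism behind that bias is worth spelling out: the algorithm treated healthcare spending as a stand-in for sickness, so any group that historically has less money spent on its care looks healthier than it is, even with race removed from the inputs. The toy Python sketch below, with invented patients and numbers rather than anything from the actual commercial algorithm, shows how ranking by a spending proxy and ranking by a direct measure of need can select different patients.

```python
# Simplified, hypothetical illustration of the proxy problem described above:
# ranking patients by healthcare spending instead of by actual health need.
# All labels and numbers are invented; this is not the commercial algorithm.

patients = [
    # (label, chronic_conditions, annual_spending_usd)
    ("Patient A (white)", 4, 9_000),
    ("Patient B (black)", 4, 5_500),   # same need, but less money spent on their care
    ("Patient C (white)", 2, 6_000),
    ("Patient D (black)", 2, 3_000),
]

def rank_by_spending(records, slots=2):
    """Mimic a 'race-blind' model that uses past cost as a stand-in for sickness."""
    return sorted(records, key=lambda r: r[2], reverse=True)[:slots]

def rank_by_need(records, slots=2):
    """Rank by a direct measure of need (here, the number of chronic conditions)."""
    return sorted(records, key=lambda r: r[1], reverse=True)[:slots]

print("Selected for extra care by spending proxy:", [r[0] for r in rank_by_spending(patients)])
print("Selected for extra care by actual need:   ", [r[0] for r in rank_by_need(patients)])
```

Removing race from the inputs does nothing here, because the bias rides in on the proxy itself.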

Compounding these problems, the more sophisticated these AI models become, the less transparent they are. Our inability to see how deep learning systems reach their decisions is referred to as the “black box problem.” When an AI system can make decisions affecting your life, but no one can explain exactly how or why it arrived at that decision, we have a massive issue.

The AI genie cannot be put back into the bottle, but we must carefully consider how we integrate these systems into our society. This is why the work being done by our sister organization, ANH-International, to develop a new framework for ethics in health and care is so important and timely: there are many powerful forces, like AI and the censorship industrial complex, that are undermining ethics in healthcare and the doctor-patient relationship.

Find out more:

ANH International Health and Ethics framework launched (April 28, 2023)

The Harvard Gazette: AI Revolution in Medicine (November 11, 2020) – risks and benefits 
