By B.N. Frank
Artificial Intelligence (AI) technology operates using algorithms, and algorithms don’t guarantee accuracy. In fact, earlier this month, Activist Post reported on an “Artificial Intelligence Hall of Shame.” The following systems seem to qualify.
An Algorithm That Predicts Deadly Infections Is Often Flawed
A study found that a system used to identify cases of sepsis missed most instances and frequently issued false alarms.
A complication of infection known as sepsis is the number one killer in US hospitals. So it’s not surprising that more than 100 health systems use an early warning system offered by Epic Systems, the dominant provider of US electronic health records. The system throws up alerts based on a proprietary formula tirelessly watching for signs of the condition in a patient’s test results.
But a new study using data from nearly 30,000 patients in University of Michigan hospitals suggests Epic’s system performs poorly. The authors say it missed two-thirds of sepsis cases, rarely found cases medical staff did not notice, and frequently issued false alarms.
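Claims like “missed two-thirds of sepsis cases” and “frequently issued false alarms” map onto two standard evaluation metrics for alerting systems: sensitivity (the share of real cases the system catches) and precision (the share of alerts that turn out to be real). A minimal sketch, using hypothetical counts chosen only to be consistent with the reported two-thirds miss rate, not the study’s actual data:

```python
# Sketch of how an alert system's performance can be scored.
# All counts below are hypothetical and illustrative only.

def score_alerts(true_pos, false_neg, false_pos):
    """Return sensitivity (cases caught) and precision (alerts that were real)."""
    sensitivity = true_pos / (true_pos + false_neg)
    precision = true_pos / (true_pos + false_pos)
    return sensitivity, precision

# Hypothetical example: 1,200 sepsis cases, of which the alerts catch
# only 400, while the system also fires 5,000 false alarms.
sens, prec = score_alerts(true_pos=400, false_neg=800, false_pos=5000)
print(f"sensitivity: {sens:.0%}")  # share of sepsis cases the alert caught
print(f"precision: {prec:.0%}")    # share of alerts that were real cases
```

With these made-up numbers, sensitivity is about 33% (two-thirds of cases missed) while precision is in the single digits, which is what a clinician experiences as constant false alarms.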
Karandeep Singh, an assistant professor at University of Michigan who led the study, says the findings illustrate a broader problem with the proprietary algorithms increasingly used in health care. “They’re very widely used, and yet there’s very little published on these models,” Singh says. “To me that’s shocking.”
The study was published Monday in JAMA Internal Medicine. An Epic spokesperson disputed the study’s conclusions, saying the company’s system has “helped clinicians save thousands of lives.”
Epic’s is not the first widely used health algorithm to trigger concerns that technology supposed to improve health care is not delivering, or even actively harmful. In 2019, a system used on millions of patients to prioritize access to special care for people with complex needs was found to lowball the needs of Black patients compared to white patients. That prompted some Democratic senators to ask federal regulators to investigate bias in health algorithms. A study published in April found that statistical models used to predict suicide risk in mental health patients performed well for white and Asian patients but poorly for Black patients.
A 2019 study revealed that 82% of Americans believed AI technology was more hurtful than helpful. Think they’re onto something?
Activist Post reports regularly on unsafe technology. For more information, visit our archives.