from the I'm-sorry-I-can't-do-that,-Dave dept

‘AI’ Nets 26% Reduction In Unexpected Canadian Hospital Deaths

by · Techdirt

Automation and large language models aren’t inherently bad. But when half-cooked automation is layered on top of very broken systems by greedy, incompetent people, it certainly can be. See, for example, the way brunchlords are using “AI” to cut corners or undermine labor in journalism, or the way insurance companies are using it to automatically and unfairly deny elderly Medicare benefits.

Where automation is probably most helpful is in areas that often aren’t going to generate a lot of headlines. Such as in complicated scientific data analysis. Or in this new study in the Canadian Medical Association Journal, which found that the use of automation led to a 26 percent drop in the number of unexpected deaths among hospitalized patients.

Researchers looked at data tethered to 13,000 admissions to St. Michael’s general internal medicine ward — an 84-bed unit that cares for many of the hospital’s most complicated patients. Some of those patients were monitored by the hospital’s in-house automation system, CHARTWatch, which continuously tracks 100 different key health metrics to watch for potential complications.

The system then used that data to predict when patients might take a turn for the worse, helping health care folks get out ahead of potential problems. Patients tethered to the system were substantially less likely to die. That said, researchers were quick to point out the study was limited (it was conducted during peak COVID in a unique hospital during severe healthcare staffing shortages) and more research is needed:

“Our study was not a randomized controlled trial across multiple hospitals. It was within one organization, within one unit,” [Dr. Amol] Verma said. “So before we say that this tool can be used widely everywhere, I think we do need to do research on its use in multiple contexts.”

In this case, folks carefully studied the potential of automation, took years to develop a useful tool, and are taking their time understanding the impact before expanding its use. AI’s greatest potential lies in spotting real-world patterns beyond the limited attention span of humans and supplementing human expertise, whether through predictive analytics or by easing administrative burdens.

The problem, again, is that folks primarily looking at the technology as a path to vast riches (aka a majority of people) are rushing untested, half-baked technology into adoption, or viewing it not as a way to assist and supplement human labor, but as a lazy replacement for it.

But as we’ve seen already across countless fronts, simply layering automation on top of already broken sectors is a recipe for disaster. Such as over at health insurance companies like UnitedHealth, where the company’s sloppy “AI” was found to have a whopping 90 percent error rate when automatically determining when vulnerable elderly patients should be kicked out of rehabilitation programs.

It would be nice if we had patient, intelligent, competent regulators and politicians capable of drafting quality regulatory guardrails that could protect consumers and patients from the sort of systemic, automated negligence that’s clearly coming down the road; but courtesy of recent Supreme Court rulings, lobbying, corruption, and a whole lot of greed, we seem deadly intent on doing nothing of the sort.