The Doctor is AI
Forget that we cannot trust self-driving cars and that those flying ones we were promised remain elusive. Two random items this morning reinforce concerns about AI.
This one:
Followed by this one:
Science is an imperfect process, The Hill opinion notes. “Since 1980, more than 40,000 scientific publications have been retracted. They either contained errors, were based on outdated knowledge or were outright frauds.” The problem is that those zombie studies do not disappear simply because they’ve been retracted. They continue to be cited “unwittingly”:
Just by citing a zombie publication, new research becomes infected: A single unreliable citation can threaten the reliability of the research that cites it, and that infection can cascade, spreading across hundreds of papers. A 2019 paper on childhood cancer, for example, cites 51 different retracted papers, making its research likely impossible to salvage.
AI relying on undigitized medical knowledge from 1853 may seem unlikely. But relying on 40,000 retracted studies still floating around?
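The “infection” metaphor maps neatly onto graph reachability. Here is a minimal sketch, with hypothetical paper names and a toy citation map, of how a single retracted paper can taint everything downstream of it:

```python
from collections import defaultdict, deque

# Hypothetical citation map: paper -> list of papers it cites.
citations = {
    "paper_A": ["retracted_1"],
    "paper_B": ["paper_A"],
    "paper_C": ["paper_B", "paper_D"],
    "paper_D": [],
}
retracted = {"retracted_1"}

def tainted_papers(citations, retracted):
    """Return every paper that cites a retracted paper, directly or transitively."""
    # Invert the map: cited paper -> papers that cite it.
    cited_by = defaultdict(list)
    for paper, refs in citations.items():
        for ref in refs:
            cited_by[ref].append(paper)
    # Breadth-first propagation outward from each retracted paper.
    tainted = set()
    queue = deque(retracted)
    while queue:
        current = queue.popleft()
        for citer in cited_by[current]:
            if citer not in tainted:
                tainted.add(citer)
                queue.append(citer)
    return tainted

print(tainted_papers(citations, retracted))  # {'paper_A', 'paper_B', 'paper_C'}
```

One retraction contaminates three papers here, and only paper_D, which never touches the chain, stays clean. Scale that to 40,000 retractions and the cascade the Hill piece describes looks inevitable.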
I dredged this up from the summer based on a comment in the Bluesky thread:
Cigna is using an algorithm to review — and often reject — hundreds of thousands of patient health insurance claims, a new lawsuit claims, with doctors rubber-stamping those denials without individually reviewing each case.
[…]
The litigation highlights the growing use of algorithms and artificial intelligence to handle tasks that were once routinely handled by human workers. At issue in health care is whether a computer program can provide the kind of “thorough, fair, and objective” decision that a human medical professional would bring in evaluating a patient’s claim.
“Relying on the PXDX system, Cigna’s doctors instantly reject claims on medical grounds without ever opening patient files, leaving thousands of patients effectively without coverage and with unexpected bills,” the suit alleges.
The Doctor is AI:
“The chatbot is here to see you” (Politico):
Right now, no fewer than half of health care organizations are planning to use some kind of generative AI pilot program this year, according to a recent report by consulting firm Accenture. Some could involve patients directly. AI could make it easier for patients to understand a provider’s note, or listen to visits and summarize them.
But what about… you know, actual doctoring? So far, in limited research, chatbots have proven decent at answering simple health questions. Researchers from Cleveland Clinic and Stanford recently asked ChatGPT 25 questions on heart disease prevention. Its responses were appropriate for 21 of the 25, including on how to lose weight and reduce cholesterol.
But it stumbled on more nuanced questions, including, in one instance, “firmly recommending” cardio and weightlifting, which could be dangerous for some patients.
Steven Lin, a physician and executive director of the Stanford Healthcare AI Applied Research Team, said the models are fairly solid at getting things like medical school test questions right. In the real world, however, patients’ questions are often messy and incomplete, Lin said, unlike the structured questions on exams.
I remain skeptical.