AI shouldn’t decide who dies. It’s neither human nor humane

Artificial intelligence (AI) will certainly change the practice of medicine. As we write this, PubMed (the online database of biomedical research literature) indexes 4,018 publications with the keyword “ChatGPT.” Indeed, researchers have been using AI and large language models (LLMs) for everything from reading pathology slides to answering patient messages. However, a recent paper in the Journal of the American Medical Association suggests that AI can act as a surrogate in end-of-life discussions. This goes too far.
The authors of the paper propose creating an AI “chatbot” to speak for an otherwise incapacitated patient. To quote, “Combining individual-level behavioral data—inputs such as social media posts, church attendance, donations, travel records, and historical health care decisions—AI could learn what is important to patients and predict what they might choose in a specific circumstance.” Then, the AI could express in conversational language what that patient “would have wanted,” to inform end-of-life decisions.
We are both neurosurgeons who routinely have these end-of-life conversations with patients’ families, as we care for those with traumatic brain injuries, strokes and brain tumors. These gut-wrenching experiences are a common, challenging and rewarding part of our job.  
Our experience teaches us how to connect and bond with families as we guide them through a life-changing ordeal. In some cases, we shed tears together as they navigate their emotional journey and determine what their loved one would tell us to do if they could speak.  
AI is changing medicine and aiding doctors. But it’s not human enough to handle end-of-life decisions. (iStock)
Never once would we think it appropriate to ask a computer what to do, nor could a computer ever take the role of physician, patient or family in this situation. 
The primacy and sanctity of the individual are at the heart of modern medicine. Philosophical individualism underlies the chief “pillars” of medical ethics: beneficence (do good), non-maleficence (do no harm), justice (be fair), and – our emphasis – autonomy. Medical autonomy means a patient is free to choose, informed but uncoerced. Autonomy often trumps other values: a patient can refuse an offered treatment; a physician can decline to perform a requested procedure.  
But it is the competent individual who decides, or a designated surrogate when the patient cannot speak for themselves due to disability. Critically, the surrogate is not merely someone appointed to recite the patient’s will, but rather someone entrusted to judge and decide. True human decision-making, in an unexpected circumstance and with unforeseeable knowledge, should remain the sacred and inviolate standard in these most weighty moments.  
Even a tech zealot must acknowledge several limitations of AI technology that should give any reasonable observer pause.
The “garbage in, garbage out” principle of computer science is self-explanatory: the machine only sees what it is given and will produce an answer accordingly. So, do you want a computer deciding about life support based on a social media post from years ago? But even if we stipulate perfect reliability and accuracy in the data feeding this algorithm, we are more than our past selves, and certainly more than even hours of recorded speech. We ought not reduce our identities to such paltry “content.”
Having addressed incompetence, we turn to malice. First and simplest: this year alone, multiple hospital systems have fallen victim to cyberattacks by criminal hackers. Should an algorithm purporting to speak and decide for an actual human exist on those same, vulnerable servers?  
More worrisome: who would make and maintain the algorithms? Would they be funded or operated by large health systems, insurers or other payors? Could physicians and families stomach even the consideration that these algorithms may be weighted to “nudge” human decision-makers down a more affordable path?  
The opportunities for fraud are many. An algorithm programmed to favor withdrawal of life support might save money for Medicare, while one programmed to favor expensive life-sustaining treatments may be a revenue generator for a hospital. 
Air Force Secretary Frank Kendall after a test flight of the AI-controlled X-62A VISTA aircraft at Edwards Air Force Base, Calif., on May 2, 2024. (AP Photo/Damian Dovarganes)
The appearance of impropriety is, itself, cause for alarm. Nor should we ignore the challenge of patient groups facing linguistic or cultural barriers, or harboring a baseline distrust of institutions (medical or otherwise). We doubt that consulting a mysterious computer program would inspire greater faith in these scenarios.
The large and still-growing role of computers in modern medicine has been a source of massive frustration and disaffection for physicians and patients alike, perhaps felt most acutely in the replacement of patient-physician face-to-face time with burdensome documentation and “clicks.”
These countless computational catastrophes are exactly where AI should be deployed in healthcare: not to supplant humans in our most humane roles, but to cut down on electronic busywork, so that in times of greatest moment doctors can turn away from screens, look people in the eye, and give wise counsel.
The lay public would be astonished at how small a fraction of a doctor’s day involves practicing medicine, and how much time we instead invest in billing, coding, quality metrics and so many technical trivialities. This low-hanging fruit would seem a better target for AI while the technology is still in its infancy, before we hand the reins of end-of-life decisions to a nescient machine.
Fear can be paralyzing. Fear of death, fear of decision, fear of regret – we do not envy the surrogate decision-maker, haunted by possibilities. But abdication of that role is no solution; the only way out is through.
We physicians help patients, families, and surrogates navigate this terrain with eyes open. Like most fundamental human experiences, it is a painful but deeply rewarding journey. As such, this is no occasion for autopilot. To paraphrase the Bard: the answer, dear reader, lies not in our computers, but in ourselves.
Anthony DiGiorgio, DO, MHA is an assistant professor of neurosurgery at the University of California, San Francisco and a senior affiliated scholar with the Mercatus Center at George Mason University.
John Paul Kolcun, MD, is a resident in neurosurgery at Rush University Medical Center and the co-host of the “Neurosurgery Podcast.”
