iDoctor: Some Preliminary Thoughts on AI Liability in Health Environments

By Cameron Simmons and Aaron Brantly

In the middle of the night you awake with a sudden stomach pain. You feel like you can't move, so you instinctively grab your phone and call 911. An ambulance arrives and rushes you to the hospital, where you begin to receive treatment from an emergency room doctor. You might not know everything about the doctor treating you. You don't know, for example, where they attended medical school or their grade point average upon graduation. But you are certain that the medical field has strict guidelines and procedures, and that to treat you that physician must have passed a medical licensing exam. But what if that doctor were replaced with a computer? Would you still feel comfortable? Computers are used in almost every aspect of our daily lives, but would you trust one with yours?

Before answering that question, let's take a step back. If a medical procedure goes wrong due to malpractice, the patient or the patient's relatives may sue the provider. Medical malpractice suits are quite common in the United States: in 2020 there were 9,295 cases that resulted in malpractice payments, according to the National Practitioner Data Bank. While there are no federal laws mandating that physicians maintain malpractice insurance, most states require physicians to purchase it. That insurance covers the physician and is meant to protect both the physician and the patient by providing a means of legal recourse for acts of malpractice. In cases where the physician is at fault, the insurance pays out and legal recourse is pursued. Within the existing legal system, it is the physician, the practice, the hospital, or another similar entity that is legally liable in cases of malpractice. Yet in the absence of a physician, a human caregiver, the legal liability structure of malpractice is far more uncertain.

When considering the legal and ethical issues associated with AI implementation, it is first important to recognize that AIs are being implemented in medicine for two primary reasons. First, they excel at pattern matching and recognition, enabling them to detect abnormalities that might otherwise go unnoticed by medical professionals. Pattern recognition is useful across a number of medical subdisciplines, from general practice to radiology, oncology, and many others. Patterns can be image-based or test-based, or reside in lengthy patient records or across multiple records. The second reason AI is being implemented is to reduce the burden of tedious tasks on medical professionals. Where AIs excel is in accomplishing specialized, complicated, but generally highly specified tasks that lend themselves to automation.

A common feature of many visits to a general practitioner's (GP's) office is a series of lengthy forms about one's health, physiological tests (blood pressure, heart rate, etc.), and blood and urine analysis. These tests provide data points that inform the GP's decisions about the health of a given patient. The GP is then responsible for communicating to that patient how the data led them to arrive at care decisions. By automating the decision process, or augmenting it with data from similar patients, wearables, or a host of other sensors, AIs might provide care options that fall outside the scope of a normal GP's cognitive modeling framework (beyond their medical training). Decisions made by an AI could be complex and might need to be communicated either to a physician who arbitrates a final health decision or to a technician who can explain how care recommendations were arrived at. A large part of the patient-physician relationship, and of the success of care interventions, revolves around what is referred to as a patient-centric approach. This approach is rooted in two-way communication between the physician and the patient. While eliminating the physician or substituting an AI in some instances might reduce certain human capital costs, it is likely to undermine communication, the critical component of care that results in more effective and optimal outcomes.

The result of AI use in care might be quite different from what AI implementers hope. Rather than eliminating physicians to reduce costs, physicians might be augmented by AIs that help them arrive at better care options, which they can then communicate to patients. Augmentation also challenges the legal regime currently in place in the United States concerning malpractice. If a decision made by an AI and subsequently communicated by a physician results in improper care, it is uncertain who should be liable. If the augmented decision is transparent, and the physician fully understands how it was derived, then malpractice liability should fall on the physician. But if the physician is leveraging an AI that is not transparent, yet presents care options that result in malpractice, is the physician still liable?

This is a brave new world, one in which physicians are expected to work through care solutions in ways often not addressed in their formal training. What would be required of physicians in medical school to prepare them for AI decision augmentation, to minimize potential malpractice, and to ensure they provide quality care? In the short term there is little likelihood of AIs replacing physicians in most care environments. Yet it is conceivable that care environments will experience increasing automation. Most physicians are simply not trained to deal with the complex computer systems or advanced statistical models that might be used to augment their care processes and decisions. This hints at a problem for liability as well. If hospitals and medical practices implement new modalities of care for which physicians were not trained, is it reasonable to continue to hold those physicians liable for certain care decisions?

Answering the question of legal liability involving AI doctors, or even AI augmentation, opens the door to a new range of legal and ethical issues. If an AI doctor were implicated in malpractice, who would be liable? Would it be the hospital that is using the AI program? Would it be the doctor, for not double-checking the AI's diagnosis? Would it be the developers of the program, for selling a faulty product? Each of these questions, and likely many more, must be confronted in the years to come as AI is increasingly incorporated into medical environments.

What we likely do not want is a license agreement that operates in the same way as the agreements we sign with social media and other online digital providers today. Under such circumstances, at a time of medical need and with limited ability to weigh costs and benefits, having a vulnerable party, the patient, sign away liability rights concerning an AI doctor is likely both unethical and outside the bounds of most state legal statutes. Efforts to confront issues of technology and healthcare, specifically the use of AI in healthcare, date to the 1990s and early examinations of tort liability for AI and expert systems. The liability issues associated with incorporating AI remain a point of discussion in the medical community and have been raised regularly in a variety of journals, including the Journal of the American Medical Association. Building on this dialogue in the coming years will be critical as AI becomes increasingly prevalent in care environments. Addressing issues such as liability is only part of a broader set of issues associated with the inclusion of AI in medical care.