The use of artificial intelligence (AI) in healthcare is not “coming soon”; it is already here. AI is being used to diagnose diseases such as cancer, prepare chart notes, obtain pre-authorization for services and generate treatment plans for patients, among other things. For tasks like these, AI can reduce providers’ administrative burdens and improve diagnosis and treatment.
As with any other tool, however, AI has its pros and cons. From a legal perspective, practitioners need to be aware of the risks of using AI. The following discussion is not intended to discourage practitioners from implementing AI in their practices, but implementation should be done in a way that reduces legal risk.
First, the improper use of AI may create liability for violations of privacy and security. Healthcare providers must avoid entering patient information into any public-facing AI program, like ChatGPT. Patient information that is put into public-facing AI is not private. There have been well-publicized instances of serious security breaches where AI was used on unsecured platforms. Providers who use AI must ensure that it meets HIPAA privacy and security requirements.
Second, risk can exist when AI programs play a part in the informed consent process. These programs can present the consent form to the patient and even interactively answer questions and give explanations about consent issues. Here, however, as with most uses of AI in providing health care, a human must be in the loop. In my opinion, the failure of a doctor to personally engage in the informed consent process likely constitutes malpractice in most situations. Although AI is good at providing opinions and conclusions, its ability to explain how it arrived at those conclusions is limited. Thus, while AI may recommend that a patient undergo a particular procedure, it may not do well at giving the rationale for that recommendation. In addition, if the doctor relies on AI in diagnosing the patient or making treatment recommendations, this must be disclosed to the patient in the informed consent process, preferably in writing.
Third, there is uncertainty regarding the circumstances in which the use of AI may constitute malpractice. For example, suppose a doctor relies upon AI to diagnose a condition and propose a surgical procedure, but another treatment was actually more appropriate. May the doctor absolve himself of malpractice liability because he relied upon AI? Though few cases exist as of this writing, the answer is probably no. The doctor’s use of a tool, such as AI, does not absolve the doctor of responsibility. That said, the doctor may bring the AI vendor or manufacturer into the litigation as a third party and potentially shift some or all of the liability to them. The use of AI thus adds a new element of complexity to malpractice cases.
On the other hand, what if AI is available but the doctor chooses not to use it and as a result commits malpractice? For example, in some situations, AI is more accurate than humans at certain radiological interpretations. If AI is available, and the radiologist or treating doctor fails to use it and a tumor is missed, is there liability? Again, no cases yet address this specific situation, but liability could exist. Time will tell.
Fourth and finally, the use of AI can create liability for coding and billing errors. Suppose that a doctor uses AI in her billing software package and the AI recommends the wrong CPT billing code over an extended period. This may expose the doctor to a large potential overpayment or, worse, to allegations of submitting false claims. If an audit occurs, the doctor cannot simply point to AI as an excuse: reliance on AI may help absolve the doctor of liability for billing fraud, but it is not a defense to having to refund the overpayment.
AI in healthcare has opened up a new horizon with limitless possibilities to benefit both patients and doctors. With these advantages, however, also comes the risk of misuse and legal liability. As with any complex tool, AI must be used with training, care, caution and human supervision. Failure to do so could subject the healthcare practitioner to liability.
Kevin West is a shareholder and chairperson of the firm’s health law practice. He is a trusted attorney who represents healthcare providers in a broad variety of legal matters such as audits, investigations, regulatory compliance and licensure discipline. He also is experienced in employment matters and litigation. To speak to Kevin about this or related matters, call (208) 562-4900 or send an email to kwest@parsonsbehle.com.