This month, the American Medical Association (AMA) reported that 81% of medical providers are using AI in their practices, including a significant increase in the use of AI-generated clinical documentation. That figure has more than doubled over the last three years. There is good reason for the increase: charting is one of the most burdensome and time-consuming parts of practice, and keeping up with ever-evolving Medicare and Current Procedural Terminology (CPT) coding guidelines is wearying, often contributing to provider burnout.
AI is also helpful in other areas of practice. Beyond chart note creation, it can assist with patient intake, coding and billing, medication reviews, prescription writing, and care coordination with other providers. It has also become particularly useful for medical research. While the technology offers clear benefits, its use introduces legal risks that providers should not ignore.
AI charting exposes providers to all of AI’s familiar pitfalls: hallucinations, exaggerations, and inherent biases. From a legal perspective, the primary concern is that AI-generated charting may invent clinical details that were never actually observed or performed. Even subtle inaccuracies, such as overstated exam findings or inflated time entries, can have serious consequences when those records are submitted to Medicare for reimbursement.
The Trump administration has recently initiated a self-described “crackdown” on healthcare fraud, to be headed by Vice President JD Vance and Secretary of Health and Human Services (HHS) Robert F. Kennedy, Jr. In January of this year, for example, the Centers for Medicare & Medicaid Services (CMS) announced a new initiative to “expand and significantly accelerate” Risk Adjustment Data Validation (RADV) audits. This “crackdown” is concerning not only for providers subject to government audits; AI-generated documentation errors can also implicate the False Claims Act. Under that statute, submitting false or misleading claims to government payors, such as Medicare contractors, may expose providers to substantial liability, including treble damages and civil penalties. Importantly, liability does not require intentional fraud; “reckless disregard” for the truth is sufficient.
Using public-facing AI is another potential legal tripwire. Providers should not use public-facing AI tools, such as ChatGPT, in their practices. Entering identifying patient information into a public-facing AI system risks running afoul of HIPAA, because such systems store prompts and outputs. Once a patient’s protected health information (PHI) is entered into one of these systems, the user loses control over how that information is used, and public-facing tools do not guarantee the encryption that PHI requires.
Additionally, reliance on AI technology does not insulate providers from responsibility. A provider cannot simply argue that AI made the mistake or caused the charting inaccuracy; the U.S. Department of Justice has consistently taken the position that providers are accountable for the accuracy of their own records.
That said, the age of AI has arrived, and providers can, and should, embrace it. With appropriate safeguards, AI can be used safely. Providers should implement clear policies and procedures governing its use. This includes thorough human review of all AI-generated documentation: each note must be scrupulously reviewed before it is signed or finalized. It also includes periodic audits to identify inaccuracies or concerning trends in AI-generated charting. Finally, safeguards should include using only closed systems stored securely on the provider’s network or in the cloud.
AI-generated documentation is not inherently problematic, but providers must implement it thoughtfully, with an active oversight plan that ensures compliance with all state and federal regulations. Such oversight is essential to ensuring that efficiency gains do not come at the expense of Medicare compliance.

