This article was authored by Amy Hill (Research Consultant).

Healthcare professionals are finding AI to be a genuine asset for producing efficient communication and data organization on the job.  Clinicians use AI to manage medical records and patient medications and to handle a variety of medical-writing and data-organization tasks.  AI can provide clinical-grade language processing and time-saving strategies that simplify ICD-10 coding and help clinicians complete clinical notes faster.

While AI’s advancements have been game-changers for workday efficiency, clinicians must be cognizant of the perils of using AI chatbots to communicate with patients.  As background, AI chatbots are computer programs designed to simulate conversations with humans.  In principle, these tools facilitate communication between patients and healthcare providers by offering continuous access to medical information, automating processes such as appointment scheduling and medication reminders, assessing symptoms, and recommending care and treatment.

When patient medical records and sensitive information are involved, however, how do clinicians strike a balance between using AI chatbots to their benefit and exercising discretion with sensitive patient data to avoid HIPAA violations?  Given AI’s numerous data collection mechanisms, including its tracking of browsing activity and its ability to access individual device information, what can be done to ensure that patient information is not exposed by even a short-lived bug or breach?  Can AI companies assist clinicians in preserving patient confidentiality?

First, opt-out features and encryption protocols are two ways AI tools can protect user data, but tech companies collaborating with healthcare providers to create HIPAA-compliant AI software would be even more beneficial to the medical field.  Second, it is imperative that healthcare professionals obtain patient consent and anonymize any patient data before enlisting the help of an AI chatbot.  Legal safeguards, such as requiring patients to sign releases consenting to the use of their medical records for research, combined with proper anonymization of the patient data used for that research, may mitigate the legal risks associated with HIPAA compliance.
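The anonymization step above can be sketched in code.  The following Python snippet is a minimal, illustrative example of redacting a few common identifiers from a clinical note before it reaches a third-party chatbot; the field names and regex patterns are assumptions for demonstration and fall well short of a full HIPAA Safe Harbor de-identification workflow.

```python
import re

# Illustrative patterns for a handful of common identifiers.
# A production de-identification pipeline would cover all 18
# HIPAA Safe Harbor identifier categories; this sketch does not.
PATTERNS = {
    "MRN": re.compile(r"\bMRN[:\s]*\d+\b", re.IGNORECASE),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "DATE": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def anonymize(note: str) -> str:
    """Replace matched identifiers with bracketed placeholders."""
    for label, pattern in PATTERNS.items():
        note = pattern.sub(f"[{label}]", note)
    return note

note = "Pt MRN: 483920, DOB 04/12/1967, phone 555-867-5309, reports chest pain."
print(anonymize(note))
# → Pt [MRN], DOB [DATE], phone [PHONE], reports chest pain.
```

Regex-based redaction of this kind is a first line of defense, not a guarantee; free-text notes can carry identifiers in forms no pattern anticipates, which is why the consent and release safeguards described above remain essential.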

For further assistance in managing the risks associated with AI, healthcare providers can turn to the National Institute of Standards and Technology (NIST) AI Risk Management Framework (AI RMF) to evaluate risks related to AI systems.  NIST, a non-regulatory federal agency within the U.S. Department of Commerce, published this voluntary guidance to help entities manage the risks of AI systems and promote responsible AI development.

Leveraging the vast capabilities of artificial intelligence, alongside robust data encryption and strict adherence to HIPAA compliance protocols, will enhance the future of healthcare for patients and healthcare providers alike.