
Under the Health Insurance Portability and Accountability Act (HIPAA), the use of generative AI systems introduces additional considerations when protected health information (PHI) is involved. Depending on how these systems are used and configured, PHI may be processed, transmitted, or handled in ways that require safeguards under HIPAA.
This matters because of the scope of the law itself. HIPAA is a United States federal law that governs how protected health information is used, disclosed, and safeguarded.
It applies to covered entities, including health plans, healthcare clearinghouses, and healthcare providers that transmit health information electronically. It also establishes obligations for business associates that process PHI on behalf of covered entities.
HIPAA requires that PHI be protected through administrative, physical, and technical safeguards under the HIPAA Privacy Rule and Security Rule, and that it be used or disclosed only as permitted.
In generative AI workflows, the use of third-party systems does not remove responsibility for protecting PHI.
Under HIPAA, organizations handling PHI are responsible for implementing administrative, physical, and technical safeguards, limiting uses and disclosures to permitted purposes, honoring individuals' rights over their information, and putting required agreements in place with third parties that process PHI on their behalf. These responsibilities apply regardless of whether PHI is processed internally or through third-party systems.
Generative AI systems may introduce additional considerations in how PHI is handled:

- PHI must only be used or disclosed for permitted purposes, and AI usage may introduce new contexts of use.
- Users may share more PHI than necessary when interacting with AI systems.
- AI systems may require additional controls to ensure PHI is protected during processing.
- Use of AI systems may involve external providers, which may require appropriate safeguards and agreements.

These factors can make it more complex to ensure that PHI is handled in accordance with HIPAA safeguards.
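The concern that users may share more PHI than necessary can be illustrated with a short sketch of a "minimum necessary" control: prompts are built only from an explicit allow-list of fields per task. The field names, task names, and policy table below are hypothetical, not part of any real system.

```python
# Illustrative "minimum necessary" control: build prompts only from an
# explicit allow-list of fields per task. All names here are hypothetical.

ALLOWED_FIELDS = {
    # A scheduling reminder needs no diagnosis or identifiers.
    "appointment_reminder": {"first_name", "appointment_date"},
    # A visit summary may need more, but still not every field.
    "visit_summary": {"first_name", "visit_notes"},
}

def build_prompt(task: str, record: dict) -> str:
    """Return a prompt containing only the fields permitted for this task."""
    allowed = ALLOWED_FIELDS.get(task)
    if allowed is None:
        raise ValueError(f"No field policy defined for task: {task}")
    kept = {k: v for k, v in record.items() if k in allowed}
    dropped = sorted(set(record) - allowed)
    if dropped:
        print(f"Withheld fields for '{task}': {dropped}")
    details = "; ".join(f"{k}: {v}" for k, v in sorted(kept.items()))
    return f"Task: {task}. Details: {details}"

record = {
    "first_name": "Ana",
    "appointment_date": "2024-06-12",
    "ssn": "000-00-0000",         # never needed for a reminder
    "diagnosis": "hypertension",  # not needed for a reminder
}
prompt = build_prompt("appointment_reminder", record)
```

The design choice is that sharing is opt-in per task: a field reaches the prompt only if a policy explicitly allows it, rather than relying on users to leave sensitive fields out.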
In practice, PHI may be used in generative AI workflows as part of routine tasks, for example summarizing records, drafting patient communications, or answering administrative questions. These actions are often performed for efficiency. However, they may involve sharing PHI with external systems, retaining it in conversation histories, or disclosing more information than necessary.
Under the HIPAA Privacy Rule, individuals have rights over their health information, including the right to access and obtain copies of their records, to request amendments, and to receive an accounting of certain disclosures. In AI workflows, fulfilling these rights may require additional consideration; for example, organizations may need to determine whether PHI shared with an AI system can be located, corrected, or accounted for.
HIPAA requires organizations to assess risks to PHI and implement safeguards accordingly.
Generative AI may require additional evaluation depending on the use case, particularly where PHI is transmitted to external providers, retained by the system, or used in new contexts. Organizations may need to assess whether additional safeguards or agreements are required before using AI systems with PHI.
Individually, these considerations may be manageable. In combination, they can create situations where PHI is shared with external systems, used in new contexts, or disclosed beyond what is necessary, without clear visibility into how it is handled. This can make it more complex to consistently ensure alignment with HIPAA requirements.
When PHI is included in prompts, it may involve transmission to third-party systems, retention in conversation histories, or processing outside the organization's direct control. Without appropriate safeguards, these interactions may be difficult to monitor, control, or document.
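One way to make such interactions easier to monitor and document is to route every AI call through an audit-logging wrapper. A minimal sketch, assuming a hypothetical send_to_model stand-in for the real API call and an in-memory log; a real deployment would persist these records securely.

```python
import datetime

# Sketch of an audit trail for AI interactions: every prompt is recorded
# with who sent it and when, so usage can be reviewed later.

audit_log: list[dict] = []

def send_to_model(prompt: str) -> str:
    """Stand-in for a real generative AI API call (hypothetical)."""
    return f"[model response to {len(prompt)} chars]"

def audited_completion(user: str, prompt: str) -> str:
    response = send_to_model(prompt)
    audit_log.append({
        "user": user,
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "prompt_chars": len(prompt),    # log metadata, not the PHI itself
        "response_chars": len(response),
    })
    return response

reply = audited_completion("dr_smith", "Summarize this visit note ...")
```

Note that the log records only metadata (user, time, sizes) rather than the prompt text itself; storing full prompts would create a second copy of any PHI they contain, which would itself need safeguarding.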
To support alignment with HIPAA requirements, organizations may implement controls that operate before and during AI usage. These may include policies that define what may be shared with AI tools, technical controls that detect or redact PHI before it leaves the organization, and monitoring of AI interactions. Such measures can help organizations manage how PHI is handled in AI workflows.
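One such control is a filter that redacts identifiers from prompts before they are sent. A minimal sketch; the patterns and placeholder format are illustrative assumptions, and real PHI detection requires far more than a few regular expressions.

```python
import re

# Illustrative pre-send filter: redact obvious identifiers before a prompt
# leaves the organization. The patterns are deliberately simple examples.

PHI_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "MRN": re.compile(r"\bMRN[:#]?\s*\d{6,}\b", re.IGNORECASE),
    "PHONE": re.compile(r"\b\d{3}-\d{3}-\d{4}\b"),
}

def redact(prompt: str) -> tuple[str, list[str]]:
    """Replace matched identifiers with placeholders; report what was found."""
    found = []
    for label, pattern in PHI_PATTERNS.items():
        if pattern.search(prompt):
            found.append(label)
            prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt, found

safe_prompt, findings = redact(
    "Patient MRN: 12345678, SSN 123-45-6789, call 555-867-5309."
)
```

Returning the list of findings alongside the cleaned prompt lets the same filter feed an audit trail or block the request outright when certain identifier types appear.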
Wald provides controls that can be used to manage how PHI is handled in generative AI workflows. These capabilities can support organizations in applying governance controls to AI usage.
1. Is generative AI HIPAA compliant?
No AI tool is HIPAA compliant by itself. Generative AI can be used in a HIPAA-aligned way depending on how it is configured, whether appropriate safeguards are implemented, and whether required agreements such as a Business Associate Agreement (BAA) are in place.
2. Can PHI be entered into AI tools like ChatGPT?
PHI should only be shared with systems that implement HIPAA-required safeguards and, where applicable, are covered by a Business Associate Agreement (BAA).
3. Do AI providers need a Business Associate Agreement (BAA)?
If an AI provider creates, receives, maintains, or transmits PHI on behalf of a covered entity, it is acting as a business associate and a BAA is required.
4. Why is AI governance important for HIPAA?
AI governance helps organizations control how PHI is used, ensure safeguards are applied, and maintain visibility into data handling.