HIPAA and AI Governance


Is Generative AI Compliant with HIPAA?

Under the Health Insurance Portability and Accountability Act (HIPAA), the use of generative AI systems introduces additional considerations when protected health information (PHI) is involved. Depending on how these systems are used and configured, PHI may be processed, transmitted, or handled in ways that require safeguards under HIPAA.

This matters because:

  • AI systems may process sensitive health data outside controlled environments
  • PHI shared in prompts may be transmitted to third-party systems
  • HIPAA applies to how PHI is used, disclosed, and protected

What the Health Insurance Portability and Accountability Act (HIPAA) Regulates

The Health Insurance Portability and Accountability Act (HIPAA) is a United States law that governs how protected health information (PHI) is used, disclosed, and safeguarded.

It applies to:

  • healthcare providers
  • health plans
  • healthcare clearinghouses

It also establishes obligations for business associates that process PHI on behalf of covered entities.

HIPAA requires that:

PHI must be protected through administrative, physical, and technical safeguards under the HIPAA Privacy Rule and Security Rule, and only used or disclosed as permitted.

Key Terms (Simplified)

  • Protected Health Information (PHI)
    Any individually identifiable information related to an individual’s health status, treatment, or payment for healthcare, when held or transmitted by a covered entity or business associate
  • Covered Entity
    A healthcare provider, insurer, or organization directly subject to HIPAA
  • Business Associate
    A third party that processes PHI on behalf of a covered entity

In generative AI workflows:

  • the organization typically acts as a covered entity or business associate
  • the AI provider may act as a business associate if it processes PHI on behalf of a covered entity and appropriate agreements, such as a Business Associate Agreement (BAA), are in place

Use of third-party systems does not remove responsibility for protecting PHI.

Responsibilities of Organizations

Under HIPAA, organizations handling PHI are responsible for:

  • limiting use and disclosure of PHI to permitted purposes
  • implementing administrative, physical, and technical safeguards
  • ensuring appropriate agreements (such as Business Associate Agreements) are in place where required
  • monitoring access to PHI
  • reporting certain breaches involving PHI

These responsibilities apply regardless of whether PHI is processed internally or through third-party systems.
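The monitoring responsibility above can be sketched in code. The example below is a minimal, hypothetical audit-logging helper, not a complete HIPAA audit-control implementation: it records who submitted a prompt to which AI tool and which categories of PHI were detected, without storing the prompt text itself. All names (`log_ai_interaction`, the category labels) are illustrative assumptions.

```python
import hashlib
import json
from datetime import datetime, timezone

def log_ai_interaction(user_id: str, tool: str, phi_categories: list[str]) -> dict:
    """Build a minimal audit record for a prompt submitted to an AI tool.

    The prompt text itself is NOT stored; only the metadata needed for
    access monitoring (who, when, which tool, which kinds of PHI).
    """
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        # Hash the user ID so the log itself does not directly expose identity.
        "user": hashlib.sha256(user_id.encode()).hexdigest()[:16],
        "tool": tool,
        "phi_categories": sorted(phi_categories),
    }

entry = log_ai_interaction("clinician-42", "chat-assistant", ["mrn", "dob"])
print(json.dumps(entry))
```

Keeping prompt text out of the log is a deliberate choice: an audit trail that copies PHI into yet another system would itself expand the surface the Security Rule must cover.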

Why Generative AI Changes Risk

Generative AI systems may introduce additional considerations in how PHI is handled:

  • data may be transmitted to external systems
  • inputs may be processed or retained depending on provider configuration and contractual terms
  • processing may occur in systems that are not configured to meet HIPAA safeguard requirements
  • visibility into how PHI is handled may be limited or vary

These factors can make it more complex to ensure that PHI is handled in accordance with HIPAA safeguards.

Where AI Interacts with HIPAA Requirements

Permitted Use and Disclosure

PHI may only be used or disclosed for purposes permitted under the HIPAA Privacy Rule. Entering PHI into an AI tool can constitute a new use or disclosure, which must still fit within one of those permitted purposes.

Minimum Necessary Standard

HIPAA's minimum necessary standard requires limiting PHI to the minimum needed for a given task. When interacting with AI systems, users may paste entire records into a prompt and share far more PHI than the task requires.

Safeguards

The Security Rule's technical safeguards, such as access controls, audit controls, and transmission security, must extend to AI systems that process PHI, which may require additional controls beyond those applied to traditional applications.

Third-Party Processing

Use of AI systems may involve external providers, which may require appropriate safeguards and agreements.

What Teams Actually Do (and Where Risk Starts)

In practice, PHI may be used in generative AI workflows as part of routine tasks:

  • a clinician drafts notes using patient data in an AI tool
  • a support team summarizes patient communications
  • an operations team analyzes healthcare data for reporting
  • a billing team processes records containing patient identifiers

These actions are often performed for efficiency. However, they may involve:

  • sharing PHI with systems that are not configured to meet HIPAA requirements
  • processing data in contexts where safeguards may not be clearly defined or consistently enforced
  • limited visibility into how PHI is handled after submission

Patient Rights vs Generative AI

Under the HIPAA Privacy Rule, individuals have rights over their health information, including:

  • access to their records
  • ability to request corrections

In AI workflows, fulfilling these rights may require additional consideration:

  • identifying where PHI appears across systems
  • ensuring data can be corrected or updated where applicable
  • maintaining visibility into processing activities

Risk Assessment and Generative AI

HIPAA requires organizations to assess risks to PHI and implement safeguards accordingly.

Generative AI may require additional evaluation depending on the use case, particularly where:

  • PHI is processed outside internal systems
  • third-party services are involved
  • visibility into data handling is limited

Organizations may need to assess whether additional safeguards or agreements are required before using AI systems with PHI.

Why AI Usage Becomes Difficult to Govern

Individually, these considerations may be manageable. In combination, they can create situations where:

  • PHI is shared without consistent controls
  • processing is not fully visible
  • responsibilities are distributed across systems and teams

This makes it harder to demonstrate consistent alignment with HIPAA requirements.

The Core Problem: Prompts May Involve PHI Processing

When PHI is included in prompts, it may involve:

  • transmission of PHI to external systems
  • processing of that data for specific tasks
  • potential access by third-party providers

Without appropriate safeguards, these interactions may be difficult to monitor, control, or document.

How AI Governance Supports HIPAA Alignment

To support alignment with HIPAA requirements, organizations may implement controls that operate before and during AI usage.

These may include:

  • identifying and limiting PHI shared with AI systems
  • enforcing policies on permitted use
  • maintaining visibility into AI usage
  • implementing safeguards to protect PHI

Such measures can help organizations manage how PHI is handled in AI workflows.
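The first two controls above, identifying and limiting PHI before it reaches an AI system, can be sketched as a pre-submission sanitization step. This is a deliberately simplified illustration under stated assumptions: the regex patterns and function name are hypothetical, and a production system would rely on a dedicated PHI-detection service rather than a handful of regular expressions.

```python
import re

# Hypothetical patterns for illustration only; real PHI detection is far
# broader (names, addresses, dates, device IDs, free-text identifiers).
PHI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "mrn": re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE),
}

def sanitize_prompt(prompt: str) -> tuple[str, list[str]]:
    """Replace likely PHI with category placeholders before the prompt
    leaves the organization's boundary. Returns (sanitized_text, findings)."""
    findings = []
    for category, pattern in PHI_PATTERNS.items():
        if pattern.search(prompt):
            findings.append(category)
            prompt = pattern.sub(f"[{category.upper()}]", prompt)
    return prompt, findings

clean, found = sanitize_prompt("Patient MRN: 12345678, callback 555-867-5309.")
print(clean)   # identifiers replaced with [MRN] and [PHONE] placeholders
print(found)
```

The `findings` list doubles as input to the monitoring control: it records which PHI categories were detected without retaining the identifiers themselves.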

Where Wald.ai Fits

Wald provides controls that can be used to manage how PHI is handled in generative AI workflows.

This includes:

  • detection of sensitive health data in prompts and inputs
  • contextual redaction and sanitization before data is sent to AI systems
  • enforcement of usage policies across teams
  • visibility into AI interactions for monitoring and review

These capabilities can support organizations in applying governance controls to AI usage.

FAQs

1. Is generative AI HIPAA compliant?

Generative AI can be used in a HIPAA-aligned way depending on how it is configured, whether appropriate safeguards are implemented, and whether required agreements such as a Business Associate Agreement (BAA) are in place.

2. Can PHI be entered into AI tools like ChatGPT?

PHI should only be shared with systems that implement HIPAA-required safeguards and, where applicable, are covered by a Business Associate Agreement (BAA).

3. Do AI providers need a Business Associate Agreement (BAA)?

If an AI provider processes PHI on behalf of a covered entity, it is acting as a business associate, and HIPAA generally requires a BAA to be in place before PHI is shared with it.

4. Why is AI governance important for HIPAA?

AI governance helps organizations control how PHI is used, ensure safeguards are applied, and maintain visibility into data handling.
