What is HIPAA (Agent Context)?

HIPAA in an agent context refers to the application of the Health Insurance Portability and Accountability Act to AI agents — specifically how agents accessing protected health information (PHI) through MCP tools must enforce access controls, audit logging, and data protection.

WHY IT MATTERS

HIPAA governs the protection of PHI in the United States. It applies to covered entities (healthcare providers, health plans, healthcare clearinghouses) and their business associates — any organisation that handles PHI on their behalf. When AI agents access health data through MCP tools, HIPAA's Security Rule and Privacy Rule apply in full.

The Security Rule requires administrative, physical, and technical safeguards. For AI agents, the technical safeguards are most directly relevant: access controls (§164.312(a)) requiring unique user identification and role-based access, audit controls (§164.312(b)) requiring recording and examination of system activity, and transmission security (§164.312(e)) requiring encryption of PHI in transit.
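To make the mapping concrete, the three technical safeguards above could be expressed as proxy-level policy rules. The sketch below is illustrative only — the schema, keys, and role names are assumptions for explanation, not a real Intercept configuration:

```yaml
# Hypothetical policy sketch — schema and key names are illustrative,
# not a documented product syntax.
policies:
  - name: unique-agent-identity          # §164.312(a): access controls
    require:
      agent_id: present                  # every session carries a unique identifier
      role: [clinician_assistant, billing_assistant]   # role-based access

  - name: audit-all-phi-access           # §164.312(b): audit controls
    on: tool_call
    log:
      fields: [agent_id, tool, patient_id, timestamp, decision]

  - name: encrypt-in-transit             # §164.312(e): transmission security
    deny_when:
      transport: http                    # block tool endpoints that are not TLS
```

Each rule corresponds to one safeguard: identity and role checks before a tool call, a structured audit record for every decision, and a transport check that refuses unencrypted endpoints.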

AI agents amplify HIPAA risks in specific ways. An agent with access to an electronic health records (EHR) system via MCP could, in a single session, access hundreds of patient records — something a human clinician would rarely do. This pattern of access would likely violate the minimum necessary standard (§164.502(b)), which requires that PHI access be limited to the minimum necessary to accomplish the intended purpose.

HIPAA violations carry penalties of up to $1.5 million per violation category per year, and can include criminal penalties. The Office for Civil Rights (OCR) has shown increasing willingness to enforce against technology-enabled breaches. An AI agent that accesses PHI without proper controls is not a hypothetical risk — it is an active compliance liability.

HOW POLICYLAYER USES THIS

Intercept enforces HIPAA technical safeguards at the MCP proxy layer. YAML policies implement the minimum necessary standard by restricting which health data fields an agent can access and under what conditions. Access controls ensure agents can only reach PHI through authorised tool calls with proper context. Every access decision is logged with full audit detail — satisfying §164.312(b) audit control requirements. Policies can enforce encryption requirements by blocking tool calls that would transmit PHI over unencrypted channels.
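As a sketch of how the minimum necessary standard might look at the proxy layer — again with hypothetical keys and field names, not Intercept's actual schema — a policy could allow-list the data fields a task needs, cap bulk access, and log every decision:

```yaml
# Hypothetical sketch of a minimum-necessary policy; all keys are illustrative.
policy:
  name: minimum-necessary-ehr
  applies_to:
    tool: ehr.get_patient_record         # assumed tool name for illustration
  allow:
    fields: [name, date_of_birth, current_medications]   # only what the task needs
  deny:
    fields: [ssn, psychotherapy_notes, billing_history]  # explicitly out of scope
  limits:
    max_records_per_session: 25          # surface bulk-access patterns early
  audit:
    log_decision: true                   # record every allow/deny for §164.312(b)
```

The design intent is default-deny: the agent reaches only the fields the policy names, and every decision leaves an audit trail.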

FREQUENTLY ASKED QUESTIONS

Does HIPAA apply to AI agents directly?
HIPAA applies to your organisation as a covered entity or business associate. Your AI agent is a system component — like a database or application. You are responsible for ensuring it meets HIPAA requirements, including access controls, audit logging, and the minimum necessary standard.
What is the minimum necessary standard for AI agents?
The minimum necessary standard (§164.502(b)) requires that access to PHI be limited to what's needed for the specific task. For agents, this means YAML policies should restrict tool calls to only the patient records and data fields required for the agent's purpose — not broad database access.
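For instance, a policy scoping an appointment-reminder agent to exactly the data that task needs might look like the following (illustrative schema only, not documented syntax):

```yaml
# Illustrative only — not a real policy schema.
policy:
  name: appointment-reminders-scope
  applies_to:
    tool: ehr.list_appointments          # assumed tool name
  allow:
    fields: [patient_name, appointment_time, clinic_location]
  default: deny                          # anything not listed above is blocked
```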
Do I need a Business Associate Agreement for an AI agent?
If a third-party AI service processes PHI on your behalf, yes — a BAA is required. If you run the agent on your own infrastructure and it accesses PHI through your systems, the BAA requirement applies to the infrastructure and service providers involved, not the agent itself.

Enforce policies on every tool call

Intercept is the open-source MCP proxy that enforces YAML policies on AI agent tool calls. No code changes needed.

npx -y @policylayer/intercept
github.com/policylayer/intercept →