Claude for healthcare - making HIPAA compliance work without enterprise budgets
Mid-size healthcare organizations face an impossible choice between modern AI tools and HIPAA compliance. Claude works in healthcare, but you need a Business Associate Agreement and proper safeguards. Here is how to implement defensible controls without enterprise budgets or dedicated compliance staff.

The development team wants Claude. The compliance officer has questions. And the budget cannot support enterprise healthcare platforms.
This is a real problem, and the answer isn’t to avoid AI entirely.
The compliance question nobody prepares for
Here’s what usually happens. A developer discovers Claude can draft clinical documentation in seconds. They start pasting patient notes into Claude.ai to test a workflow. A few months later, compliance finds out. Now you have a HIPAA incident on your hands.
The frustrating part? This is entirely avoidable with a couple of hours of upfront work.
HIPAA classifies cloud services that handle protected health information as business associates. When patient data goes to Claude, Anthropic legally becomes your business associate. That triggers specific obligations on both sides, and it kicks in the moment PHI touches the API.
The standard Claude.ai chat interface can’t be used with PHI. Your team can’t paste patient notes into it to test a workflow or debug a feature. BAAs are available only for Anthropic’s API and Enterprise products, not consumer or Pro plans. Any PHI that touches a non-covered product is a violation.
Full stop.
Business associate agreements are non-negotiable
Anthropic offers Business Associate Agreements for its API products. The HHS Office for Civil Rights (OCR) enforces missing BAAs aggressively, and multimillion-dollar settlements are common for organizations that skip this step. Most violations OCR finds involve missing BAAs or inadequate vendor oversight.
The faster path for most mid-size organizations is going through a cloud platform that already has HIPAA infrastructure in place. AWS Bedrock, Google Cloud Vertex AI, and Microsoft Azure all sign BAAs and handle the technical safeguards HIPAA demands. Claude is currently the only frontier AI model available across all three major cloud platforms, which gives you flexibility to work within vendor relationships you already have.
Going directly to Anthropic involves a review process. It works better for organizations with clear use cases and real technical capability. Either way, the BAA has to come before any PHI touches the API.
Not after.
Protected health information is harder to define than it sounds
Most healthcare organizations assume de-identification means removing names. It doesn’t.
HIPAA’s Safe Harbor method requires eliminating all 18 specific identifiers. Names, yes, but also all dates except year, geographic areas smaller than state level, phone numbers, email addresses, medical record numbers, device identifiers, and biometric data. Miss one and your data is still PHI.
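A handful of these identifiers can at least be caught mechanically. Here's a rough pre-flight scanner in Python, a sketch with illustrative patterns, that flags obvious identifiers before text leaves a development environment. Treat it as a smoke test for accidental leaks, not as Safe Harbor de-identification; regexes cannot cover all 18 categories.

```python
import re

# Hypothetical pre-flight scanner: flags a few obvious Safe Harbor
# identifiers. This is a smoke test, NOT de-identification -- names,
# rare-condition fingerprints, and most of the 18 categories are
# beyond what pattern matching can catch.
PHI_PATTERNS = {
    "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "date": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),  # full dates, not just years
    "mrn": re.compile(r"\bMRN[:#\s]*\d{6,10}\b", re.IGNORECASE),
}

def scan_for_phi(text: str) -> list[str]:
    """Return the identifier categories detected in text."""
    return [name for name, pattern in PHI_PATTERNS.items() if pattern.search(text)]

hits = scan_for_phi("Pt called from 314-555-0188, MRN: 4471123, seen 03/14/2024")
# flags "phone", "date", and "mrn"
```

A check like this belongs in CI or a pre-commit hook for any repository that handles clinical text, as a tripwire rather than a control.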
The problem gets genuinely tricky with rare conditions and small populations. A 47-year-old patient in rural Montana with a rare genetic disorder is identifiable even without a name. Age plus location plus diagnosis creates a unique fingerprint. HHS guidance acknowledges that properly de-identified data still carries some re-identification risk. Does “some re-identification risk” count as de-identified enough to skip HIPAA requirements entirely? That’s a question your compliance officer will have strong feelings about.
I think for most mid-size healthcare organizations, the practical answer is simpler: don’t bother trying to de-identify at all. Keep PHI as PHI, implement proper controls, get your BAA, and document everything. Trying to strip data down to avoid HIPAA requirements usually creates more risk than it removes.
There is an alternative worth knowing about: Expert Determination, where a qualified statistician assesses re-identification risk and documents that it’s very small. Costs more than Safe Harbor, but it’s the right call when you need richer data for AI training or analysis.
Three controls that hold up under scrutiny
HIPAA Security Rule requirements cover administrative, physical, and technical safeguards. For AI tools specifically, three areas actually matter.
Access controls come first. Who can send PHI to Claude? Which tools are authorized? This can’t be a policy document sitting on a shared drive somewhere. The latest HIMSS guidance on AI governance reinforces this: strong access controls and audit trails are the foundation of compliant AI usage. Create separate development environments that never touch real PHI. Synthetic test data should be the default for all development work. Any function that accesses patient information needs code review before it ships.
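One lightweight way to enforce the environment split is a gate in code. Here's a minimal sketch, assuming an APP_ENV variable your ops team controls; the names and structure are hypothetical, not part of any Anthropic SDK:

```python
import os

# Hypothetical environment gate: refuse to send anything flagged as PHI
# unless the deployment is explicitly approved for it. Assumes an
# APP_ENV variable set by ops; all names here are illustrative.
PHI_APPROVED_ENVS = {"production"}  # environments covered by the BAA and safeguards

class PHIEnvironmentError(RuntimeError):
    pass

def guard_phi_call(payload: str, contains_phi: bool) -> str:
    """Pass the payload through only if this environment may handle PHI."""
    env = os.environ.get("APP_ENV", "development")
    if contains_phi and env not in PHI_APPROVED_ENVS:
        raise PHIEnvironmentError(
            f"PHI may not be sent from the '{env}' environment; use synthetic data."
        )
    return payload  # in real code, hand off to the model client here
```

The point of the design is that a developer in a dev environment gets a loud, immediate failure instead of silently shipping patient data to a test workflow.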
Training comes second, and generic HIPAA videos won’t cut it. Your developers need to recognize PHI in your specific systems. What does a patient identifier look like in your EHR database? Which API endpoints return protected information? At what point does aggregated data still count as PHI? Build runbooks for situations your team actually encounters. What do you do when troubleshooting requires reviewing a patient’s record? What approval is needed before deploying code that touches patient data?
Logging comes third. The HIPAA Security Rule requires audit controls to record access to electronic PHI. For Claude usage, this means capturing which users sent what types of queries, when, and what data those queries involved. You don’t need expensive SIEM platforms. Basic API logging with retention policies and periodic review meets the requirement.
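A minimal version can be a few lines of Python. The schema below is an assumption, not a mandated format; what matters is that each call leaves a timestamped, attributable record you can review later:

```python
import json
from datetime import datetime, timezone

# Assumed audit schema -- illustrative, not a prescribed HIPAA format.
# Each model call appends one JSON line recording who sent what type
# of query, when, and whether PHI was involved.
AUDIT_LOG = "claude_audit.jsonl"

def log_model_call(user: str, query_type: str, involved_phi: bool) -> dict:
    """Append an audit entry and return it for inline inspection."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "query_type": query_type,  # e.g. "clinical-summary", "coding-help"
        "involved_phi": involved_phi,
    }
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry
```

Pair this with a retention policy and a periodic review; rotating the file into inexpensive object storage is usually sufficient at mid-size scale.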
Two things worth flagging clearly: a signed BAA doesn’t make your usage compliant. It makes the service eligible for your compliant usage. And cloud doesn’t mean compliant by default. Encryption, access controls, and logging all require active configuration. Default settings are almost always wrong.
What defensible compliance actually looks like
HIPAA doesn’t require perfection. It requires reasonable safeguards based on your size, complexity, and budget. There’s a good breakdown from 360 Advanced on why smaller healthcare organizations struggle: not because they lack sophistication, but because they try to implement enterprise controls they can’t sustain.
The risk assessment comes first. HHS provides guidance, but the practical questions are direct: Where is your PHI? Who needs access? What happens if it leaks? Which controls reduce risk most for your budget?
Document your decisions. When OCR audits you, they want to see thoughtful risk management. Why did you choose AWS Bedrock over direct API access? How did you determine your logging approach provides adequate audit trails? OCR enforcement actions focus heavily on organizations that skipped risk assessments entirely or ignored identified risks without explanation.
Test your controls. Can you reconstruct a patient query from your audit logs? Can you identify when someone accessed PHI inappropriately? Try using your own systems in ways that should be blocked or logged, then verify the controls actually worked. This doesn’t require penetration testing or formal audits. It requires curiosity about whether your own safeguards function as designed.
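Here's what that kind of self-check might look like, assuming a JSON-lines audit log with user and involved_phi fields; the schema and roster are made up for illustration:

```python
import json

# Hypothetical control self-test: replay the audit log and flag any
# PHI-involving call from a user outside the approved roster. The
# field names ("user", "involved_phi") are an assumed log schema.
AUTHORIZED_PHI_USERS = {"dr.patel", "nurse.lee"}  # illustrative roster

def find_violations(log_lines: list[str]) -> list[dict]:
    """Return audit entries where PHI was sent by an unauthorized user."""
    violations = []
    for line in log_lines:
        entry = json.loads(line)
        if entry.get("involved_phi") and entry.get("user") not in AUTHORIZED_PHI_USERS:
            violations.append(entry)
    return violations

sample = [
    '{"user": "dr.patel", "involved_phi": true}',
    '{"user": "intern.jones", "involved_phi": true}',
]
# find_violations(sample) flags the intern.jones entry
```

Run a check like this quarterly, seed it with a deliberate unauthorized access in a test environment, and confirm it gets flagged; that's the "curiosity about your own safeguards" in practice.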
A mid-size clinic with 200 employees faces different requirements than a hospital system with 10,000 staff. Find the approach that fits your actual scale, not somebody else’s.
Using Claude through a HIPAA-compliant platform with a proper BAA isn’t a regulatory gamble. It’s a documented decision about improving patient care while handling PHI responsibly. Which, if you read the regulation carefully, is exactly what it was designed to enable.
About the Author
Amit Kothari is an experienced consultant, advisor, coach, and educator specializing in AI and operations for executives and their companies. With 25+ years of experience and as the founder of Tallyfy (raised $3.6m), he helps mid-size companies identify, plan, and implement practical AI solutions that actually work. Originally British and now based in St. Louis, MO, Amit combines deep technical expertise with real-world business understanding.
Disclaimer: The content in this article represents personal opinions based on extensive research and practical experience. While every effort has been made to ensure accuracy through data analysis and source verification, this should not be considered professional advice. Always consult with qualified professionals for decisions specific to your situation.