Healthcare · AI

HIPAA-Aware AI Workflows for Medical Practices

Medical practices want AI productivity for the same reason every other business does: drafting, summarization, search, and pattern matching cost more human hours than they should. But healthcare carries a specific constraint most other industries don't. Protected Health Information (PHI) is regulated under HIPAA's Privacy and Security Rules, vendor relationships need Business Associate Agreements (BAAs), and a careless prompt can become a documented breach. The good news: a well-designed AI program for a medical practice keeps PHI out of generative AI entirely while still capturing 80 percent of the productivity benefit on the administrative side.

Quick answer:

A HIPAA-aware AI program for a medical practice has three rules: (1) PHI is excluded from generative AI scope by sensitivity-label policy, not by employee discretion. (2) Sanctioned AI tools (Microsoft 365 Copilot, Anthropic Claude, OpenAI Enterprise) deploy with BAAs in place where the vendor offers them, and with explicit DPAs where they don't. (3) Administrative AI workflows (vendor management, contract review, internal communications, intake form processing) are explicitly approved; clinical workflows that touch PHI are explicitly excluded until vendor-side capabilities catch up to the regulatory expectations.

HIPAA Security Rule Applied to AI Tooling

HIPAA does not specifically address AI. It addresses ePHI (electronic Protected Health Information), and the Security Rule requires covered entities and their business associates to maintain administrative, physical, and technical safeguards over ePHI. AI tools that process ePHI are subject to the same requirements as any other software handling that data. In practice this means three concrete obligations:

  • Vendor risk. Any AI vendor that may process ePHI must be a business associate under HIPAA, with a signed BAA covering the data flow. Most consumer-tier AI products are explicitly not BAA-eligible.
  • Technical safeguards. Access control, audit logging, and integrity controls apply to ePHI inside AI tooling the same way they apply to your EMR.
  • Workforce training. Employees handling ePHI need explicit training on which AI tools are sanctioned and what data may not be entered.

The simpler version: if a tool isn't covered by a BAA and isn't operated under your existing technical safeguards, it can't legally process your patients' health information. The breach risk shifts from theoretical to documented the first time an employee pastes a chart note into a public AI tool.

PHI Scope-Out: What Stays In vs Out of AI

The cleanest design for a medical practice's AI program is to declare PHI explicitly out of scope for generative AI. This is not a "no AI" position. It's a "no PHI in generative AI" position, which is genuinely defensible to auditors, insurance carriers, and patients.

Out of scope (no AI processing)

  • Patient charts, encounter notes, lab results, imaging interpretations
  • Insurance claims and clearinghouse data with patient identifiers
  • Patient-identified intake forms (the form is fine; the patient identifier is not)
  • Pharmacy interactions and medication histories
  • Any document that combines a patient identifier with a clinical fact

In scope (sanctioned AI processing acceptable)

  • Vendor contracts and procurement documents
  • Internal communications and policy memos
  • Patient-facing communications drafted in the abstract (templates, FAQs, education content)
  • Administrative workflows where PHI is not present (staffing, scheduling templates, supplies)
  • Marketing copy, training content, and public-facing material

This split is enforced through sensitivity labeling at the document level, not through employee discretion. When a document carries a "PHI - High Sensitivity" label, sanctioned AI tools refuse to process it. When a document carries an "Internal - Administrative" label, AI tools work normally. The discipline lives in the labeling, not in the moment-by-moment judgment of busy administrative staff.
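The gating rule above reduces to a simple policy lookup. This is an illustrative sketch, not Microsoft Purview's actual API: the label names follow the schema described in this article, and `ai_processing_allowed` is a hypothetical helper.

```python
# Illustrative model of label-gated AI access (hypothetical helper,
# not the Microsoft Purview API). Label names follow this article's schema.

# Map each sensitivity label to whether sanctioned AI tools may process it.
LABEL_POLICY = {
    "Public": True,
    "Internal - Administrative": True,
    "Confidential": True,             # allowed, but with audit logging
    "PHI - High Sensitivity": False,  # generative AI refuses to process
}

def ai_processing_allowed(label: str) -> bool:
    """Return True if a sanctioned AI tool may process a document with this label.

    Unknown or missing labels fail closed: no AI processing.
    """
    return LABEL_POLICY.get(label, False)
```

The important design choice is the default: a document with a missing or unrecognized label fails closed, so the burden is on labeling a document correctly, not on staff remembering which documents are off-limits.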

Four Legitimate Administrative AI Workflows

Even with PHI scope-out, medical practices have plenty of administrative work where AI delivers real productivity. The four highest-value workflows we see across our healthcare client conversations:

1. Vendor and contract review

Practices receive constant vendor proposals: EMR upgrades, billing platforms, supplier contracts, equipment leases, BAA negotiations. Reading and summarizing these is slow human work. A sanctioned AI tool can extract key terms, flag unusual clauses, and compare against your standard terms in seconds. PHI is not present in these documents, so the workflow is in scope.

2. Internal communications drafting

Staff memos, policy updates, training reminders, internal newsletters: communication overhead that consumes the practice manager's day. Microsoft 365 Copilot integrated with Outlook and Word can draft and refine these in a fraction of the time. As long as the content stays administrative and avoids referencing specific patients, the workflow is in scope.

3. Patient-facing template generation

FAQs, pre-visit instructions, post-procedure care guides, education content: AI can draft these in the practice's voice, then your clinical team reviews and finalizes. The drafts are abstract and patient-agnostic. The clinical team adds the medical accuracy review. This is one of the highest-ROI AI workflows we see in healthcare practices because the templates get re-used hundreds of times per month.

4. Insurance verification workflow improvements

Insurance verification involves multi-system check-and-confirm work that touches PHI. The verification itself stays inside your existing PHI-cleared systems. But the meta-workflow (intake form design, exception handling procedures, staff training material) can absolutely use AI. We see practices reclaim 8-12 staff hours per week by using AI to improve the verification process design without putting any PHI into AI tooling.

BAA-Gated AI Vendor Checklist

If a workflow does need to touch PHI, the AI vendor needs to be a business associate. The checklist most healthcare practices need before signing:

  • BAA available. The vendor explicitly offers a BAA for the relevant tier of service. Microsoft offers a BAA for Microsoft 365 (covering Copilot at the appropriate tier). OpenAI offers a BAA for ChatGPT Enterprise (separate from consumer ChatGPT). Anthropic offers a BAA for Claude on enterprise contracts.
  • BAA scope covers the data flow. The BAA explicitly names the AI tool, the data scope, and the processing activities. A generic Microsoft 365 BAA may or may not cover Copilot's specific processing; verify.
  • Subprocessor list reviewed. AI vendors use subprocessors (cloud infrastructure, data centers, etc.). The BAA should list them. Each should be acceptable to your compliance program.
  • Data residency. If your practice has specific residency requirements (state laws, payer contracts), the AI vendor must commit to keeping data in compliant jurisdictions.
  • Training opt-out. The vendor must commit, in writing, that your prompts and outputs are not used to train models. Enterprise tiers typically include this; consumer tiers typically don't.
  • Audit rights. Your BAA should preserve your right to audit the vendor's controls or to receive third-party audit attestations (SOC 2, HITRUST, etc.).

Sensitivity Labels and DLP for PHI Containment

Microsoft Purview sensitivity labels are the practical enforcement mechanism for PHI scope-out. The minimum viable label schema for a medical practice:

  • Public. Marketing material, public website content. AI tools may process freely.
  • Internal. Internal communications, vendor contracts, policy documents. Sanctioned AI tools may process. Employees can apply this label manually or by default for documents in administrative folders.
  • Confidential. Sensitive internal material: HR documents, financial records, signed contracts. Sanctioned AI tools may process with audit logging. DLP prevents external sharing.
  • PHI - High Sensitivity. Anything containing protected health information. AI tools refuse to process. DLP blocks paste, share, and copy outside designated PHI-cleared locations.

The labeling is partially automated (auto-classification based on content patterns) and partially manual (clinical staff applying labels at document creation). The DLP rules enforce the labels: when a user attempts to paste PHI-labeled content into a generative AI tool, the action is blocked or warned per your policy.
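A rough sketch of what pattern-based auto-classification looks like. In practice, Purview auto-labeling is configured with built-in sensitive information types in the admin portal rather than custom code; the patterns below are simplified illustrations (an SSN-shaped identifier and a hypothetical MRN format), and `suggest_label` is an assumed helper name.

```python
import re

# Simplified content patterns that might indicate PHI (illustrative only;
# Purview uses configurable sensitive information types, not ad-hoc regexes).
PHI_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),          # SSN-shaped identifier
    re.compile(r"\bMRN[:#]?\s*\d{6,10}\b", re.I),  # hypothetical MRN format
    re.compile(r"\bDOB[:#]?\s*\d{1,2}/\d{1,2}/\d{2,4}\b", re.I),  # date of birth
]

def suggest_label(text: str) -> str:
    """Suggest a sensitivity label from content patterns.

    Auto-classification only escalates toward higher sensitivity;
    lowering a PHI label should require an explicit, audited override.
    """
    if any(p.search(text) for p in PHI_PATTERNS):
        return "PHI - High Sensitivity"
    return "Internal - Administrative"
```

A memo mentioning an MRN or a date of birth gets escalated to the PHI label automatically; a supply order does not. The manual labeling by clinical staff then covers the cases pattern matching misses, such as a clinical fact paired with a patient name.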

This is the same architecture we deploy in our Microsoft Copilot deployment for healthcare practices. The discipline is in the labels and the DLP rules; the AI tool itself respects them.

What This Looks Like Operationally

For a medical practice with 25 to 75 staff, a HIPAA-aware AI program has these moving parts:

  • An AI Acceptable Use Policy that explicitly addresses PHI scope-out, sanctioned tools, and BAA requirements. Distributed annually with HIPAA training.
  • Microsoft 365 Copilot deployed with sensitivity labels and DLP rules that enforce PHI scope-out. Or, if Microsoft Copilot is not the right fit, a comparable enterprise AI tool with a signed BAA.
  • Browser-layer policy (via tools like Island Browser) that blocks access to unsanctioned AI tools from work devices.
  • Quarterly governance review covering AI tool usage, BAA renewals, sanctioned-tool list updates, and any incidents or near-misses.
  • An incident response procedure for the case where PHI does end up in an AI tool: notification, scope assessment, remediation, and breach analysis under the HIPAA Breach Notification Rule.

None of this requires the practice to hire dedicated AI staff. It does require a partner who can run the program day-to-day. The structure is exactly what our Managed AI Agreement covers, with the healthcare-specific overlay built in.

Where to Start

The right entry point is a 30-minute AI Readiness Assessment. We pull a 90-day AI Usage Report from your security stack to see which AI tools your team is currently using. We review your BAA inventory against AI vendors. We score your AI maturity across six dimensions including HIPAA-specific factors. The output is a 90-Day Plan that's specific to your practice and that you can act on regardless of who runs it.

Healthcare practices we work with frequently find on the AI Usage Report that staff are already using consumer ChatGPT or Claude for administrative tasks, often with PHI references that nobody flagged. The assessment turns invisible HIPAA exposure into a documented inventory in one conversation.

Run a HIPAA-Aware AI Program at Your Practice

Unió Digital delivers a free 30-minute AI Readiness Assessment for Arizona medical practices. Includes the AI Usage Report, BAA inventory review, and a 90-day plan that respects your HIPAA perimeter. No commitment.

Book the Free Assessment