If you work in healthcare, you've probably heard a vendor pitch that included some version of this line: "Don't worry, we're HIPAA compliant."

And maybe that made you feel better. It probably shouldn't have.

Here's the thing — "HIPAA compliant" is not a certification. There is no federal agency handing out gold stars to software companies after an audit. When a vendor tells you they're HIPAA compliant, what they really mean is they believe they meet the requirements. That's a very different thing, and understanding that difference could save your practice from a very expensive, very embarrassing data breach.

Let's talk about what healthcare businesses — especially small and mid-size practices here in Tampa Bay — actually need to know before they start adding AI tools to their workflow.


HIPAA Compliance Is Your Responsibility, Not Your Vendor's

This is the part nobody wants to hear. Even if you're using a tool that genuinely does everything right, the compliance responsibility lives with you. You are the covered entity. Your vendors are business associates. And that relationship has to be formalized.

Before you plug any AI tool into a workflow that touches patient data, you need a Business Associate Agreement (BAA). This is a legally required contract that outlines how your vendor will handle Protected Health Information (PHI), what they'll do if there's a breach, and what limits exist on how they can use that data.

No BAA? You're already out of compliance — regardless of how "secure" the software is.

Some major AI platforms — including the standard consumer versions of tools like ChatGPT — do not offer BAAs. That means if you're copying and pasting patient notes into a free AI tool to help draft a summary, you may be creating a HIPAA violation right now. Not hypothetically. Right now.


What Actually Counts as PHI (And Why It Matters for AI)

Protected Health Information isn't just someone's diagnosis or medication list. PHI is any information that could be used to identify a patient in connection with their health data. That includes:

  • Names
  • Dates (including appointment dates, birthdates, admission dates)
  • Phone numbers and email addresses
  • Geographic data more specific than a state
  • Photos
  • Account numbers
  • Any other identifier that could link back to a specific person

Why does this matter for AI specifically? Because AI tools are hungry for context. The more specific the information you feed them, the better they perform. That creates a natural tension — your instinct is to give the tool everything it needs to do a good job, but the more specific you get, the more likely you are to be handing over PHI.

A front-desk coordinator at a St. Pete dermatology office might think nothing of asking an AI chatbot to help draft a follow-up message that mentions a patient's upcoming Mohs surgery. But if that tool doesn't have a BAA in place, that single interaction is a potential violation.


The Tools That Actually Work in Healthcare Settings

Let me give you some concrete direction here, because this isn't just about what not to do.

There are AI platforms built specifically for healthcare, and there are general-purpose platforms that have created HIPAA-eligible tiers with proper BAAs. Here's what the landscape looks like:

Microsoft Azure OpenAI Service offers enterprise-grade AI capabilities and will sign a BAA. Several larger health systems in the Tampa Bay area are already building internal tools on this infrastructure.
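If you're curious what this looks like under the hood, here's a minimal Python sketch of calling a chat model through Azure OpenAI. The endpoint, key, and deployment name are placeholders for whatever your IT team provisions, and the BAA coverage comes from your Microsoft agreement, not from anything in the code itself.

    # Minimal sketch: drafting a non-PHI administrative document through
    # Azure OpenAI. Endpoint, API version, and deployment name are
    # placeholders -- swap in the values from your own Azure resource.
    import os
    from openai import AzureOpenAI  # pip install openai

    client = AzureOpenAI(
        azure_endpoint="https://your-resource.openai.azure.com",  # placeholder
        api_key=os.environ["AZURE_OPENAI_API_KEY"],
        api_version="2024-02-01",
    )

    response = client.chat.completions.create(
        model="your-deployment-name",  # your deployment, not a model family
        messages=[
            {"role": "system", "content": "You draft patient education materials."},
            {"role": "user", "content": "Draft a general handout on wound care after a minor skin procedure."},
        ],
    )
    print(response.choices[0].message.content)

Notice the prompt contains no PHI. Even on a BAA-covered platform, keeping identifiers out of prompts unless they genuinely need to be there is a habit worth building.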

Google Workspace (under Google's HIPAA Business Associate Addendum and with the right configuration) can support HIPAA-compliant use cases, including AI-assisted drafting through its enterprise tools.

Nuance DAX (now part of Microsoft) is a well-established ambient clinical documentation tool that's designed from the ground up for healthcare compliance. It's not cheap, but it's purpose-built.

AWS HealthLake is Amazon's HIPAA-eligible data service that lets healthcare organizations store, transform, and analyze health data in the standard FHIR format, with machine learning features built in.
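For a flavor of what "HIPAA-eligible" infrastructure means on the developer side, here's an illustrative boto3 sketch that provisions a HealthLake FHIR data store. The names are made up, and the real compliance work (your AWS BAA, encryption configuration, IAM access policies) happens outside this snippet.

    # Illustrative sketch: provisioning a HIPAA-eligible FHIR data store
    # with AWS HealthLake via boto3. The datastore name is a placeholder;
    # your AWS BAA, encryption, and IAM policies are configured separately.
    import boto3  # pip install boto3

    healthlake = boto3.client("healthlake", region_name="us-east-1")

    datastore = healthlake.create_fhir_datastore(
        DatastoreName="example-practice-fhir-store",  # placeholder name
        DatastoreTypeVersion="R4",  # HealthLake stores data as FHIR R4 resources
    )
    print(datastore["DatastoreId"], datastore["DatastoreStatus"])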

For smaller practices — a two-provider family medicine office in Clearwater, a behavioral health group in Brandon — these enterprise solutions might be more than you need or can budget for. That's where working with someone who knows the landscape helps: there are lighter-weight options that still take compliance seriously, and knowing which vendors have actually signed BAAs matters a lot.


Common AI Use Cases and Their Compliance Risk Level

Not all AI use cases carry the same risk. Here's a quick breakdown:

Lower risk (with proper agreements in place):

  • Drafting general patient education materials (not patient-specific)
  • Internal staff training content
  • Administrative templates that don't include PHI
  • Analyzing de-identified, aggregate data for operational improvements

Medium risk (requires careful setup and BAAs):

  • AI-assisted clinical documentation
  • Appointment reminder automation
  • Insurance pre-authorization workflows
  • Transcription of patient calls or visits

Higher risk (needs serious vetting):

  • Any AI tool that stores or processes patient records
  • Chatbots that interact directly with patients about their care
  • Diagnostic support tools
  • Anything integrated directly with your EHR

The higher the risk level, the more you need to verify — not just that a vendor claims compliance, but how they store data, where it lives, who has access, and how they respond to a breach.


Questions to Ask Any AI Vendor Before You Sign Anything

I'm going to give you a short list you can actually use. Take it to your next vendor demo.

  • Will you sign a BAA? (If they hesitate or say it's not necessary, walk away.)
  • Where is patient data stored, and is it encrypted at rest and in transit?
  • Do you use customer data to train your models? (This one is huge — some tools use your inputs to improve their AI, which creates serious PHI exposure.)
  • What is your breach notification process and timeline?
  • Have you completed a third-party security audit? Can I see the results?
  • What access controls exist to limit who on your team can see our data?

A vendor that's actually doing this right will not be bothered by these questions. They'll have answers ready. The ones who fumble or redirect are telling you something important.


De-identification Is More Complicated Than You Think

One common workaround people try is de-identifying data before feeding it to an AI tool. The logic makes sense — if there's no PHI, the HIPAA rules don't apply.

The problem is that HIPAA has a very specific definition of de-identification. You can't just remove a name and call it good. Under the Safe Harbor method, all 18 specified identifiers must be removed (the alternative, Expert Determination, requires a qualified expert to formally certify that the re-identification risk is very small), and even then, if there's any reasonable basis to believe the information could be re-identified, it still counts as PHI.
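To make that concrete, here's a deliberately naive scrubbing function of the kind people actually attempt. It's shown to illustrate the trap, not as something to use: it catches two obvious patterns and misses most of the 18 identifiers.

    # A deliberately naive "de-identification" pass -- an example of what
    # NOT to rely on, not a production approach.
    import re

    def naive_scrub(text: str) -> str:
        text = re.sub(r"\b\d{3}-\d{3}-\d{4}\b", "[PHONE]", text)       # US phone numbers
        text = re.sub(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b", "[DATE]", text)  # slash-format dates only
        return text

    note = "Jane Doe, seen 3/14/2025 at our St. Pete office, MRN 448812, re: Mohs surgery."
    print(naive_scrub(note))
    # Output: "Jane Doe, seen [DATE] at our St. Pete office, MRN 448812, re: Mohs surgery."
    # The name, geographic detail, and medical record number all survive,
    # and every one of them is a Safe Harbor identifier.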

For most small practices, building a reliable de-identification process is harder than just using a compliant tool with a BAA. Don't create a false sense of security by half-removing information.


The Honest Bottom Line

AI can absolutely help healthcare practices — with documentation burden, administrative overhead, patient communication, operational efficiency. I've seen it make a real difference for practices that implement it thoughtfully.

But the healthcare space is not one where you want to move fast and figure out compliance later. The HHS Office for Civil Rights (OCR) enforces HIPAA with real penalties, and small practices are not exempt; fines have ranged from tens of thousands of dollars for small providers into the millions for larger organizations. A breach doesn't just cost money — it costs patient trust, and in a market like Tampa Bay where people have real choices about their providers, that matters.

The good news is that doing this right isn't as complicated as the vendor landscape makes it seem. You don't need to be a compliance attorney or a cybersecurity expert. You need a clear picture of what tools touch PHI, agreements in place with every vendor that does, and a basic understanding of where your risks are.


If you're a healthcare practice in the Tampa Bay area trying to figure out where AI fits — and where it doesn't — I'd genuinely enjoy talking it through with you. I offer a free consultation, no pitch, no pressure. We'll just look at what you're actually dealing with and whether AI makes sense for your situation.

[Schedule a free consultation here] — or reach out directly if you've got a specific question you want to talk through first.