HIPAA-Compliant AI: A Guide for Tampa Bay Medical Practices

AI is no longer an experimental technology for Tampa Bay medical practices. It is a practical tool that forward-thinking practices are deploying for clinical documentation, patient communication, medical coding, and administrative efficiency. But for every practice using AI effectively and safely, there are several more where physicians and staff are using AI tools informally, without compliance review, without Business Associate Agreements, and without the organization's knowledge.

That gap between AI reality and AI governance represents serious regulatory exposure. The Office for Civil Rights (OCR) has made clear that AI systems handling protected health information (PHI) are subject to full HIPAA requirements. The question for Tampa Bay medical practices is not whether to comply, but how to implement AI in a way that delivers its efficiency benefits while meeting all applicable legal requirements.

This guide provides a step-by-step framework for Tampa Bay practices adopting AI in 2026. It covers the compliance requirements, the deployment decisions that determine your compliance posture, the technical safeguards required, staff training obligations, and a practical checklist you can use to assess your current AI compliance status.

Step 1: HIPAA Risk Assessment for AI Systems

Every HIPAA-covered entity is required to conduct a comprehensive risk assessment, and that assessment must include all systems that handle PHI — including AI systems. If your practice has not yet updated its risk assessment to include AI, that is the first priority.

The AI-specific risk assessment should document every AI tool in use (sanctioned or not), the categories of PHI each one touches, how data flows to and from each system, the threats and vulnerabilities associated with each, and the safeguards in place to mitigate those risks.

The risk assessment should be completed before deploying any new AI systems and updated whenever you add, modify, or decommission an AI system. HIPAA compliance requires that the risk assessment be a living document, not a one-time exercise.

Step 2: Data Classification — Know What Is PHI

Effective AI governance requires clear data classification. Staff cannot make correct decisions about which AI tools to use for which tasks if they do not know what constitutes PHI and what does not.

What is PHI in the context of AI use. Protected health information is any individually identifiable health information. In the AI context, this means: any text, image, audio, or document that contains a patient's name, date of birth, address, phone number, Social Security number, account number, MRN, or any other identifier combined with health information. This includes clinical notes, patient messages, lab results, imaging reports, billing records, and appointment information.

What is not PHI. De-identified information — health information with all identifiers removed — is not PHI and can be used with cloud AI services without HIPAA concern. Generic medical questions with no patient identification are not PHI. Staff can ask an AI tool "what is the first-line treatment for uncomplicated UTI in a non-pregnant adult?" without any HIPAA concern because no patient is identified.

The gray area. The tricky category is information that seems general but contains identifying details. A question like "I have a 67-year-old male patient with Type 2 diabetes, hypertension, and CKD stage 3 who presents with..." does not include a name, but depending on your practice size and patient population, it may be possible to re-identify this patient. The safe rule: if the scenario describes a real patient's situation, treat it as PHI.

Publish a simple, one-page data classification guide for staff. Create clear categories: PHI (must use approved private AI only), potentially re-identifiable (use approved AI only, remove specific identifiers), and non-PHI (approved cloud AI acceptable). Train staff on these categories and test their understanding in your annual HIPAA training.
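To make the classification concrete, here is a minimal sketch of an automated pre-screen that flags prompts containing common identifiers before they reach a cloud AI tool. The patterns and category names are illustrative assumptions, not a complete de-identification method — a real screen needs far broader coverage (names, addresses, dates, facility-specific MRN formats) and should always err toward human review.

```python
import re

# Illustrative identifier patterns only -- NOT an exhaustive PHI screen.
IDENTIFIER_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "mrn": re.compile(r"\bMRN[:#]?\s*\d{6,10}\b", re.IGNORECASE),
    "dob": re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b"),
}

def classify_prompt(text: str) -> str:
    """Return 'phi' if any identifier pattern matches, else 'review'.

    The absence of a pattern match never proves text is safe for a
    non-BAA cloud service, so the non-matching label is 'review',
    not 'safe' -- a human still applies the data classification guide.
    """
    for label, pattern in IDENTIFIER_PATTERNS.items():
        if pattern.search(text):
            return "phi"
    return "review"
```

A screen like this belongs in front of any staff-facing AI interface as a guardrail, not as a substitute for the training and classification guide described above.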

Step 3: BAA Requirements and Vendor Evaluation

If your practice uses any cloud AI service that handles PHI, you must have a Business Associate Agreement with that vendor before any PHI is shared. This is non-negotiable.

Evaluating AI vendor BAAs. Not all BAAs provide equal protection. When reviewing an AI vendor's BAA, verify that it covers the specific AI service you are using, permits your intended uses of PHI, prohibits the vendor from using your PHI to train its models, requires breach notification within defined timelines, flows HIPAA obligations down to the vendor's subcontractors, and specifies return or destruction of PHI when the relationship ends.

The private LLM alternative eliminates the cloud vendor BAA requirement entirely. When the AI runs on your own hardware within your network, there is no third party processing the PHI, and no third-party BAA is required for the AI processing itself. This simplifies compliance and eliminates the ongoing risk that a vendor's BAA terms may change.

Step 4: Technical Safeguards — Building the Secure AI Environment

HIPAA's Security Rule requires technical safeguards for any system handling PHI. For AI systems, the required technical safeguards include:

Access controls. Only authorized users and systems should be able to interact with the AI. Implement role-based access control integrated with your existing identity management system (Active Directory, Microsoft Entra ID). Physicians may have full access to clinical AI features. Front desk staff may access only administrative AI functions. Access should require authentication — at minimum username and password, ideally multi-factor authentication for any AI access from outside the practice network.
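The role-based model above can be sketched as a simple deny-by-default permission map. The role and feature names here are assumptions for illustration; in practice the role lookup would come from your directory service (Active Directory or Entra ID group membership) rather than a hard-coded dictionary.

```python
# Hypothetical role-to-feature map for a clinical AI deployment.
# In production, roles come from AD / Entra ID groups, not a dict.
ROLE_PERMISSIONS = {
    "physician": {"clinical_documentation", "coding", "patient_messaging"},
    "billing": {"coding"},
    "front_desk": {"scheduling", "patient_messaging"},
}

def can_use_feature(role: str, feature: str) -> bool:
    """Authorize an AI feature for a role; unknown roles get nothing.

    Deny-by-default: any role or feature not explicitly listed
    is refused, which is the safe posture for PHI-handling systems.
    """
    return feature in ROLE_PERMISSIONS.get(role, set())
```

The design choice worth copying is the default: an unrecognized role returns an empty permission set rather than raising an error or falling back to broad access.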

Encryption in transit. All communication between clinical workstations and the AI system must be encrypted using TLS 1.2 or higher. This applies to both private on-premises AI (traffic within your network) and cloud AI (traffic to the vendor's servers). Verify that your AI implementation enforces encrypted connections and does not allow unencrypted fallback.
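For a client that calls a private AI server's API, the "no unencrypted fallback" requirement can be enforced in code. This sketch uses Python's standard library to build a TLS context that refuses anything below TLS 1.2 while keeping certificate verification on; it assumes your client code accepts a custom SSL context, as most HTTP libraries do.

```python
import ssl

def make_tls_context() -> ssl.SSLContext:
    """Client-side TLS context that refuses anything below TLS 1.2.

    create_default_context() already enables certificate verification
    and hostname checking; setting minimum_version removes any
    legacy-protocol fallback, satisfying the safeguard described above.
    """
    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    return ctx
```

Pass the returned context to your HTTP client so every connection to the AI endpoint is checked against this policy rather than the library's defaults.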

Encryption at rest. If the AI system stores any PHI locally — cached prompts, conversation logs, output documents — that storage must be encrypted. For private on-premises AI servers, encrypt the server's storage volumes using AES-256. For cloud AI, verify the vendor's at-rest encryption specifications in the BAA and technical documentation.

Audit logging. Every interaction with the AI that involves PHI must be logged. The audit log should capture: user identity, timestamp, type of request (documentation, coding, messaging), and a reference to which patient or encounter the AI was used for. Under HIPAA's documentation retention standard, audit logs should be kept for a minimum of six years. Audit logs serve as evidence of appropriate use during OCR investigations and AHCA inspections.
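A single audit entry covering those fields can be as simple as one JSON line per interaction. This is a sketch under the assumption that patient references are internal encounter IDs; keeping free-text PHI out of the log keeps the log itself easier to protect.

```python
import json
from datetime import datetime, timezone

def audit_record(user_id: str, request_type: str, patient_ref: str) -> str:
    """Build one audit-log line for an AI interaction involving PHI.

    Captures who, when, what kind of request, and which encounter --
    the fields named above. patient_ref should be an internal
    reference (e.g. an encounter ID), not free-text PHI, so the log
    stays minimal and easier to retain securely for six years.
    """
    entry = {
        "user": user_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "request_type": request_type,  # e.g. "documentation", "coding"
        "patient_ref": patient_ref,
    }
    return json.dumps(entry)
```

Write these lines to append-only storage with restricted access, so the log can serve as evidence rather than becoming another system to secure after the fact.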

Automatic session termination. AI interfaces should automatically terminate sessions after a defined period of inactivity. This prevents unauthorized access to a logged-in AI session on an unattended workstation. A 15-minute inactivity timeout is typical for clinical AI interfaces.
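The inactivity-timeout logic is straightforward to implement; this sketch shows the core of it, assuming a 15-minute window as described above. A real AI interface would wire this into its request-handling layer and force re-authentication when a session expires.

```python
import time

# 15-minute inactivity window, typical for clinical AI interfaces.
INACTIVITY_TIMEOUT_SECONDS = 15 * 60

class Session:
    """Track last activity and expire after the inactivity window."""

    def __init__(self) -> None:
        self.last_activity = time.monotonic()

    def touch(self) -> None:
        """Record activity (called on every authenticated request)."""
        self.last_activity = time.monotonic()

    def is_expired(self) -> bool:
        """True once the inactivity window has elapsed with no activity."""
        return time.monotonic() - self.last_activity > INACTIVITY_TIMEOUT_SECONDS
```

Using a monotonic clock (rather than wall-clock time) keeps the timeout correct even if the workstation's system time is adjusted.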

Network segmentation (for private AI). If you deploy a private on-premises AI server, place it on a dedicated network segment (VLAN) accessible only from authorized clinical workstations. This limits lateral movement risk if another system on your network is compromised.

Step 5: Administrative Safeguards — Policies, Procedures, and Governance

Technical controls alone are not sufficient for HIPAA compliance. The Security Rule requires administrative safeguards including written policies, workforce training, and designated responsibility.

AI Acceptable Use Policy. Every Tampa Bay practice deploying AI needs a written AI Acceptable Use Policy. The policy should specify: which AI tools are approved for use with PHI, which are approved for non-PHI use only, and which are prohibited entirely. It should define who can approve new AI tools (typically the Privacy Officer and IT), require that all approved AI tools be documented in the risk assessment, and prohibit the use of unapproved AI tools with any patient information.

Designated AI compliance responsibility. Assign clear ownership for AI compliance. In smaller practices, this is typically the Privacy Officer (often the Practice Administrator). In larger practices, consider a dedicated AI Governance role or committee. This person is responsible for maintaining the AI tool inventory, reviewing new AI tool requests, monitoring compliance, and leading the annual AI compliance review.

Incident response procedures for AI. Update your incident response plan to include AI-specific scenarios: What happens if the AI system is compromised and PHI is exposed? What if a staff member is found to have been using an unapproved AI tool with patient data? What if the AI vendor notifies you of a breach? Each scenario requires a documented response procedure with defined timelines and responsible parties.

Change management for AI. Any material change to an AI system — upgrading to a new model version, changing integration architecture, adding new PHI data feeds — should trigger a review of the risk assessment and, if significant, an update to the BAA and compliance documentation.

Step 6: Staff Training on AI and HIPAA

Staff are the most common source of HIPAA AI violations, not through malicious intent but through lack of awareness. A physician who uses standard ChatGPT to help draft a patient note is not trying to violate HIPAA — they are trying to save time and simply do not know the compliance implications.

Effective AI HIPAA training covers: what constitutes PHI in an AI prompt or upload, which tools are approved for which data categories, why consumer AI services without a BAA cannot be used with patient information, how to handle the gray area of potentially re-identifiable scenarios, and how to report suspected misuse or exposure.

Training should be role-specific. Clinical staff need to understand documentation AI and patient communication AI. Billing staff need to understand coding AI and the PHI handling requirements for billing-related AI tools. Administrative staff need to understand approved vs unapproved AI for the administrative tasks they perform.

Document all training completion for HIPAA records. Annual refresher training should include updates on any new AI tools deployed and any changes to AI use policies since the last training cycle.

Step 7: Notice of Privacy Practices Update

HIPAA requires that patients receive notice of how their information is used. If you are using AI to process patient information for treatment, payment, or healthcare operations purposes, your Notice of Privacy Practices (NPP) should be updated to reflect this use.

Most AI use in clinical settings falls under HIPAA's permitted "healthcare operations" uses, which allow PHI use for activities like quality assessment, staff training, and administrative management. AI-assisted clinical documentation and coding fall within this category and typically do not require individual patient consent beyond the updated NPP.

Add language to your NPP such as: "We may use artificial intelligence tools to assist with clinical documentation, administrative processing, and quality improvement. All AI tools used with patient information comply with applicable HIPAA requirements and operate under Business Associate Agreements or within our secure private network infrastructure." Have your HIPAA counsel review this language before implementation.

HIPAA AI Compliance Checklist for Tampa Bay Medical Practices

Use this checklist to assess your current AI compliance posture. Every item should be in place before deploying AI with PHI.

- Risk assessment updated to cover every AI system that touches PHI
- One-page data classification guide published and staff trained on it
- Signed BAA in place with every cloud AI vendor that handles PHI
- Access controls, encryption in transit and at rest, audit logging, session timeouts, and network segmentation implemented
- Written AI Acceptable Use Policy distributed and acknowledged
- Named owner for AI compliance (Privacy Officer or AI governance role)
- Incident response plan updated with AI-specific scenarios
- Role-specific AI training completed and documented for all staff
- Notice of Privacy Practices updated and reviewed by HIPAA counsel

Tampa Bay medical practices that work through this checklist systematically are in a strong position for both AHCA facility inspections and OCR investigations. The practices that struggle with AI compliance are those that deployed AI tools first and thought about compliance second. Starting with this framework ensures that your AI investments deliver their intended efficiency benefits without creating regulatory liability that can quickly exceed the value of the efficiency gains.

For practices that want expert guidance, AI governance services from a partner experienced in both healthcare compliance and AI deployment provide the fastest path to a fully compliant AI environment. A well-structured AI governance engagement covers the risk assessment, policy development, technical implementation, and staff training — everything on this checklist — in a coordinated eight-to-twelve-week engagement that leaves your practice with documented, audit-ready AI compliance. See our dedicated page for AI services in Tampa for how we approach this work with Tampa Bay healthcare clients.

Get Your Tampa Practice AI-Compliant

BluetechGreen provides end-to-end HIPAA AI compliance for Tampa Bay medical practices. We cover risk assessment, BAA review, private AI deployment, policy development, and staff training — everything you need to use AI safely and legally.

HIPAA AI Compliance Services

Anthony Harwelik

Principal Consultant & Founder at BluetechGreen with 25+ years in enterprise IT. Specializes in Microsoft Intune, Entra ID, endpoint security, and cloud migrations. Based in St. Petersburg, FL, serving Tampa Bay and Northern NJ.

Connect on LinkedIn