AI GOVERNANCE

Configure AI governance. Don't build it.

AnyBio is the governance layer for clinical AI — PHI-safe routing, compliance gates, audit trails, human-in-the-loop review. Configured to your prompts and policies. Designed for the new FDA rules.

THE REGULATORY REALITY

The rules have changed.

Two recent shifts — FDA's QMSR (now in effect) and the 2026 HIPAA Security Rule update — are the biggest health-data-compliance change in a decade. AI systems operating on protected health information must now demonstrate:

  • Auditability

    Every AI decision must be traceable to its inputs, model, and outputs.

  • Access controls

    Approval workflows for AI-generated content.

  • PHI safety

    Protected health information must never be exposed to non-compliant models or services.

  • Human oversight

    AI-suggested actions require human-in-the-loop review before execution.

  • Provenance

    Full chain of custody from data source to AI output to clinical action.

THE AI SAFETY IMPERATIVE

Trust is not optional.

An AI hallucination in a consumer app is an inconvenience. In clinical care, it can cause harm. AnyBio's governance layer is built into the platform — not because regulators require it (they do), but because patients are downstream of every AI output.

COMPLIANCE GATES

Three gates on every AI output.

Every AI output enters the governance layer (PHI-safe model routing, human-in-the-loop review, LLM cost and usage transparency, compliance gates, agent types with scoped authority, full audit trail) and exits through one of three gates:

  • Allow

    Meets all criteria. Output delivered as-is.

  • Rewrite

    Clinical claims rewritten to wellness-appropriate language. Both versions preserved.

  • Block

    PHI leak or harmful content. Surfaced for human review.

GOVERNANCE IN PRACTICE

What governance looks like in practice.

PHI-safe model routing

The platform automatically routes AI inference to BAA-covered, HIPAA-compliant models selected by policy. Protected health information never touches non-compliant providers. Model routing is policy-driven and auditable.

Patient data only reaches models we have a BAA with. Period.
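AnyBio's routing internals are not public; as an illustrative sketch only, policy-driven routing of this kind can be expressed as an allow-list check, where the provider names and the `contains_phi` flag are hypothetical stand-ins for a real PHI classifier and a real BAA registry:

```python
from dataclasses import dataclass

# Hypothetical policy: PHI may only reach BAA-covered providers.
BAA_COVERED = {"azure-openai-hipaa", "bedrock-hipaa"}
GENERAL = {"openai-public", "anthropic-public"} | BAA_COVERED

@dataclass
class Request:
    prompt: str
    contains_phi: bool

def route(request: Request, preferred: str) -> str:
    """Return a provider permitted by policy; never a non-compliant one for PHI."""
    allowed = BAA_COVERED if request.contains_phi else GENERAL
    if preferred in allowed:
        return preferred
    # Fall back deterministically to a compliant provider (and, in a real
    # system, record the override in the audit trail).
    return sorted(allowed)[0]

phi_req = Request("Summarize HR trend for patient 4287", contains_phi=True)
print(route(phi_req, preferred="openai-public"))  # a BAA-covered provider, never the public one
```

The key property is that the PHI flag selects the allow-list before any provider preference is considered, so a misconfigured caller cannot push patient data to a non-compliant model.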

Compliance gates

Before any AI-generated content reaches a patient or clinical record, it passes through compliance gates. Claims that exceed the agent's authority are automatically rewritten to wellness-appropriate language. Both versions are preserved in the audit trail.

Claims that exceed scope are rewritten — both versions logged.
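The three-way gate decision (allow, rewrite, block) can be sketched as follows. This is a minimal illustration, not AnyBio's implementation: the keyword lists stand in for real PHI detectors and claim classifiers.

```python
from enum import Enum

class GateDecision(Enum):
    ALLOW = "allow"
    REWRITE = "rewrite"
    BLOCK = "block"

# Hypothetical keyword checks standing in for real classifiers.
PHI_MARKERS = ("patient id", "mrn", "ssn")
CLINICAL_CLAIMS = {"diagnoses": "may indicate", "cures": "may support", "treats": "may help with"}

def gate(output: str) -> tuple[GateDecision, str]:
    """Run one AI output through the gate; return the decision and the deliverable text."""
    text = output.lower()
    if any(marker in text for marker in PHI_MARKERS):
        return GateDecision.BLOCK, output          # surfaced for human review
    rewritten = output
    for claim, softer in CLINICAL_CLAIMS.items():  # soften out-of-scope clinical claims
        rewritten = rewritten.replace(claim, softer)
    if rewritten != output:
        return GateDecision.REWRITE, rewritten     # caller preserves both versions
    return GateDecision.ALLOW, output
```

Returning both the decision and the text lets the caller log the original and rewritten versions side by side, as the audit-trail requirement demands.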

Full audit trail

Every agent run is logged: what data it saw, which model it used, what it produced, whether a human approved it, and what actions were taken. This is not optional logging — it is structural. The audit trail is the product.

Every AI step, every encounter — auditor-ready in seconds.

Human-in-the-loop review

AI agents in AnyBio suggest — they do not act. Suggested actions, clinical summaries, and triage recommendations require human review and approval before execution. Approve, reject, or modify with notes. Every review decision is tracked.

AI triages. Clinicians decide. Every review is logged.

Agent types with scoped authority

Each AI agent type (summarizer, advisor, triage assistant, coach) has explicitly scoped authority. An agent cannot exceed its defined capabilities. Prompts are version-controlled and tied to specific clinical contexts.

Each agent has a role. None can exceed it.
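Scoped authority amounts to a capability check before any agent action. A minimal sketch, with made-up capability names (the real scopes would be policy-defined):

```python
# Hypothetical capability sets per agent type.
AGENT_SCOPES = {
    "summarizer": {"read_episode", "write_summary"},
    "triage_assistant": {"read_episode", "suggest_priority"},
    "coach": {"read_trends", "send_wellness_message"},
}

def authorize(agent_type: str, capability: str) -> None:
    """Raise if an agent attempts an action outside its declared scope."""
    scope = AGENT_SCOPES.get(agent_type, set())
    if capability not in scope:
        raise PermissionError(f"{agent_type} may not {capability}")
```

An unknown agent type gets the empty scope, so the check fails closed: no declared role, no authority.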

LLM cost & usage transparency

Every LLM call is tracked: model, provider, tokens consumed, cost, latency. Organizations see exactly which models are running, how much they cost, and what they produced. Budget limits prevent runaway AI spend.

Per-model cost caps. Track every token, every call.
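A per-model ledger with a hard cap can be sketched as below; the model name and dollar figures are illustrative, and a production version would persist the ledger rather than hold it in memory:

```python
from collections import defaultdict

class UsageLedger:
    """Track per-model LLM spend and enforce a hard per-model budget cap."""

    def __init__(self, caps_usd: dict[str, float]):
        self.caps = caps_usd
        self.spend = defaultdict(float)
        self.calls = []

    def record(self, model: str, tokens: int, cost_usd: float, latency_ms: float) -> None:
        # Refuse the call before it breaches the cap: no runaway spend.
        if self.spend[model] + cost_usd > self.caps.get(model, float("inf")):
            raise RuntimeError(f"budget cap exceeded for {model}")
        self.spend[model] += cost_usd
        self.calls.append({"model": model, "tokens": tokens,
                           "cost_usd": cost_usd, "latency_ms": latency_ms})
```

Checking the cap before committing the spend is the design choice that turns transparency into a control: the ledger does not just report overruns after the fact, it prevents them.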

PHI-SAFE MODEL ROUTING

PHI never leaves the safe zone.

PHI requests

A request carrying protected health information (e.g., “Heart Rate: 106 bpm · Patient ID: 4287”) is routed to the HIPAA tier: BAA-covered, HIPAA-compliant model providers only.

Non-PHI requests

A request with no patient data (e.g., “Summarize ECG trends”) is routed to the general tier, where broader AI models are available.

EVERY DECISION, DOCUMENTED

The audit trail is the product.

What auditors actually see — every step, every decision, every reviewer.

AnyBio · Audit Trail · Patient_01 · Verified

  1. Inputs seen · ECG stream · episode metadata · patient context

     source: ring_0000 · BLE · 250Hz
     episode: ep_0000 · 13:30–14:00
     patient: pt_0000 · PHI tagged
     hash: 0000x00 · verified

  2. Model called · anyBio-clinical-v2 · HIPAA tier · BAA-covered · 14:08:01

  3. Output produced · “Resting tachycardia detected. HR sustained above 100 bpm...” · 14:09:01

  4. Compliance gate · PHI check · scope · confidence · decision: Human Review · 14:10:07

  5. Human reviewer · dr_JohnDoe · flagged for review · 14:10 · 14:14:01

  6. Final action · FHIR synced · EHR updated · episode closed · 14:31:01
CERTIFICATIONS

Built for regulated environments.

HIPAA Compliant

BAA executed. PHI encryption at rest and in transit.

SOC 2 Type II

Audit in progress. Comprehensive scope: all 5 Trust Services Criteria — Security, Availability, Confidentiality, Processing Integrity, Privacy.

FDA QMSR

Designed for the new QMSR requirements — audit trails, governance, and human oversight for AI systems.

FHIR-Native

Standards-based interoperability with any FHIR R4-compliant EHR.

FOR BUILDERS AND PROVIDERS

Governance is what turns a prototype into production.

For builders

AI tools build the prototype. Without governance, it stays a prototype — vaporware that fails IT review. With AnyBio underneath, it becomes a product that ships to clinical care.

For providers

Your innovation team can say “yes” to AI without your legal and IT teams saying “no.” Every agent runs under policy — PHI-safe routing, compliance gates, full audit trail. They review the policy, not the prototype.

Define a policy. We handle the rest.

PHI-safe routing, compliance gates, audit trail — ready to deploy.

Governed AI in clinical environments