A standardized framework for responsible AI deployment, regulatory compliance, and bias accountability across client engagements.
AI regulation is accelerating. State laws in California, Colorado, and Texas, along with industry-specific requirements such as HIPAA, already impose real obligations on businesses deploying AI systems, and pending federal legislation (the draft AMERICA AI Act) would add more. This framework ensures every Coastal AI engagement meets or exceeds current regulatory requirements from day one.
Every AI system built or recommended by Coastal AI is classified at project kickoff. Classification determines the compliance requirements that apply.
| Risk Level | Definition | Examples | Requirements |
|---|---|---|---|
| Standard | AI that assists with general business tasks, content, or internal tooling. No consequential decisions about individuals. | Content generation, chatbot FAQ, internal analytics dashboards | Transparency label, basic documentation |
| Elevated | AI that influences business decisions involving customer data, lead prioritization, or service delivery. | Lead scoring, AI-driven outreach, customer segmentation, appointment prioritization | Bias audit, transparency disclosures, data handling review, documentation |
| High-Risk | AI that makes or directly influences consequential decisions about healthcare, employment, lending, housing, or criminal justice. | Patient intake scoring, hiring screeners, care recommendation engines, insurance qualification | Full bias audit, regulatory filing, impact assessment, legal review, ongoing monitoring |
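The classification rule above is mechanical enough to encode directly. Below is a minimal sketch, not Coastal AI's internal tooling: the `RiskLevel` enum, `REQUIREMENTS` mapping, and `classify` function are hypothetical names, and the decision logic simply mirrors the table's definitions.

```python
from enum import Enum

class RiskLevel(Enum):
    STANDARD = "standard"
    ELEVATED = "elevated"
    HIGH_RISK = "high-risk"

# Compliance requirements per risk level, mirroring the table above.
REQUIREMENTS = {
    RiskLevel.STANDARD: ["transparency label", "basic documentation"],
    RiskLevel.ELEVATED: ["bias audit", "transparency disclosures",
                         "data handling review", "documentation"],
    RiskLevel.HIGH_RISK: ["full bias audit", "regulatory filing",
                          "impact assessment", "legal review",
                          "ongoing monitoring"],
}

# Domains the table treats as consequential-decision areas.
CONSEQUENTIAL_DOMAINS = {"healthcare", "employment", "lending",
                         "housing", "criminal justice"}

def classify(domains: set[str], influences_business_decisions: bool) -> RiskLevel:
    """Apply the table's definitions: consequential-domain systems are
    High-Risk; systems influencing business decisions about customers
    are Elevated; everything else is Standard."""
    if domains & CONSEQUENTIAL_DOMAINS:
        return RiskLevel.HIGH_RISK
    if influences_business_decisions:
        return RiskLevel.ELEVATED
    return RiskLevel.STANDARD
```

A patient intake scorer, for instance, touches healthcare, so `classify({"healthcare"}, True)` returns `HIGH_RISK` and pulls in the full requirement set.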
All Elevated and High-Risk AI systems undergo bias audits before deployment and on a recurring basis; Standard systems receive a lighter annual review.
| Risk Level | Audit Frequency | Scope |
|---|---|---|
| Standard | Annual | Documentation review, output spot-check |
| Elevated | Semi-annual | Full output analysis, demographic variance check, data review |
| High-Risk | Quarterly | Full audit with third-party review option, regulatory documentation update |
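The cadence above translates directly into a recurring schedule. The sketch below is a stdlib-only illustration of computing the next audit due date; the `AUDIT_INTERVAL_MONTHS` mapping and `next_audit_due` function are hypothetical names that simply encode the table.

```python
import calendar
from datetime import date

# Months between audits, per the cadence table above.
AUDIT_INTERVAL_MONTHS = {"standard": 12, "elevated": 6, "high-risk": 3}

def next_audit_due(risk_level: str, last_audit: date) -> date:
    """Return the date the next audit is due for a system."""
    total = last_audit.month - 1 + AUDIT_INTERVAL_MONTHS[risk_level]
    year, month = last_audit.year + total // 12, total % 12 + 1
    # Clamp the day for shorter months (e.g., Jan 31 + 3 months -> Apr 30).
    day = min(last_audit.day, calendar.monthrange(year, month)[1])
    return date(year, month, day)

# A High-Risk system audited on 2026-01-15 is due again on 2026-04-15.
print(next_audit_due("high-risk", date(2026, 1, 15)))
```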
Transparency is non-negotiable across all AI systems. Requirements scale with risk level.
Any AI system processing Protected Health Information (PHI) must comply with HIPAA. At a minimum, this includes:

- A signed Business Associate Agreement (BAA) with any AI vendor that receives PHI
- Encryption of PHI in transit and at rest
- Role-based access controls and audit logging for every system that touches PHI
- Limiting AI inputs to the minimum necessary PHI for the task
- Breach notification procedures that cover AI components

A minimal input-scrubbing sketch follows the list.
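One practical control worth illustrating is stripping direct identifiers before text ever reaches a third-party model. The sketch below is deliberately naive: it covers only three identifier patterns, whereas HIPAA's Safe Harbor de-identification method spans 18 identifier categories, and `scrub_phi` is a hypothetical helper, not a substitute for a full de-identification pipeline.

```python
import re

# Naive patterns for a few common identifiers; NOT exhaustive.
PHI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scrub_phi(text: str) -> str:
    """Replace matched identifiers with typed placeholders before
    the text is sent to an external AI model."""
    for label, pattern in PHI_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

print(scrub_phi("Call the patient at 555-867-5309, SSN 123-45-6789."))
# -> "Call the patient at [PHONE REDACTED], SSN [SSN REDACTED]."
```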
For clients operating across multiple states, Coastal AI maps applicable AI regulations by location. The strictest applicable law governs each location.
| Jurisdiction | Law | Effective | Key Requirements | Applies To |
|---|---|---|---|---|
| Federal | AMERICA AI Act (draft) | TBD (draft March 2026) | Bias audits for high-risk AI, protected class protections including political affiliation | High-Risk |
| California | SB 942 (AI Transparency) | Jan 1, 2026 | AI content disclosure, detection tools for providers with 1M+ monthly users | Elevated, High-Risk |
| California | AB 2013 | Jan 1, 2026 | Training data disclosure for generative AI developers | Elevated, High-Risk |
| California | AB 489 | 2026 | AI cannot imply it holds a healthcare license | High-Risk |
| Colorado | SB 24-205 (AI Act) | Feb 1, 2026 | Impact assessments, risk management, consumer notice for high-risk AI in consequential decisions | High-Risk |
| Texas | TRAIGA | Jan 1, 2026 | Written disclosure before AI is used in patient diagnosis or treatment | High-Risk |
If your business operates in multiple states, each location must comply with its own state's laws; a franchise in Colorado has different obligations than one in Florida. Coastal AI builds this per-location map as part of every engagement and updates it as new laws take effect.
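Because each location answers to its own state's laws plus any federal requirements, the per-location map reduces to a lookup whose union defines the engagement's total obligations. The sketch below is a minimal illustration; `STATE_LAWS` and `obligations_for` are hypothetical names seeded from the table above, not a legal database.

```python
# Hypothetical per-state obligation map, seeded from the table above.
STATE_LAWS = {
    "CA": ["SB 942 AI content disclosure", "AB 2013 training data disclosure",
           "AB 489 healthcare licensing limits"],
    "CO": ["SB 24-205 impact assessments and consumer notice"],
    "TX": ["TRAIGA patient-facing disclosure"],
}

# Federal obligations apply everywhere (none enacted yet; the AMERICA
# AI Act is still a draft).
FEDERAL: list[str] = []

def obligations_for(locations: list[str]) -> dict[str, list[str]]:
    """Map each client location to its applicable obligations; the
    engagement as a whole must satisfy the union of all of them."""
    return {state: FEDERAL + STATE_LAWS.get(state, []) for state in locations}

# A client with offices in Colorado and Florida: the CO office carries
# SB 24-205 obligations, while the FL office currently has none listed.
print(obligations_for(["CO", "FL"]))
```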
This framework is applied to every Coastal AI client engagement. The process:

1. Classify the system's risk level at project kickoff.
2. Map the applicable federal, state, and industry regulations for each client location.
3. Run the pre-deployment reviews the risk level requires (bias audit, data handling review, impact assessment, legal review).
4. Deliver transparency disclosures and compliance documentation alongside the system.
5. Schedule recurring audits and monitoring at the cadence set by the risk level.
Every engagement that includes AI components delivers these compliance artifacts, scaled to the risk level of your system:

- Risk classification record
- Transparency disclosures and labels
- Bias audit report (Elevated and High-Risk)
- Data handling review (Elevated and High-Risk)
- Impact assessment and regulatory filings (High-Risk)
- System documentation and a recurring audit schedule
For clients operating across multiple states or industries with heightened regulatory exposure, additional deliverables are available on request:

- A per-location regulatory map, updated as new laws take effect
- Coordination of third-party bias audits
- Coordination of outside legal review
- HIPAA compliance review for systems that touch PHI