Financial Services - Overview
Financial services is the most governed industry in the dataset -- 55% have formal AI policies. Activity is only 29%. This is not a gap problem. It is an optimization problem. The compliance infrastructure -- Federal Reserve/OCC model risk management guidance (SR 11-7), BSA/AML, Dodd-Frank, Basel III -- was built for pre-AI workflows. Extending it to generative AI outputs without re-architecting from scratch is the challenge. Every customer-facing disclosure, every risk narrative, every regulatory filing that touches a model needs the same auditability that existing frameworks demand. The industry does not need to be convinced that governance matters. It needs tooling that maps AI outputs to the evidentiary standards it already enforces -- without adding another approval layer to an already dense compliance stack.
This industry includes 6 segments in the Ontic governance matrix, spanning risk categories from Category 1 — Assistive through Category 2 — Regulated Decision-Making. AI adoption index: 6/5.
Financial Services - Regulatory Landscape
The financial services sector is subject to 38 regulatory frameworks and standards across its segments:
- ACA
- AMLA 2020
- BSA/AML
- Basel III/IV
- CFPB UDAAP
- CFTC (if derivatives)
- CMS/HHS guidance
- CRA
- Cross-border: MiFID II, PSD2
- DOL fiduciary rules
- Dodd-Frank
- ERISA
- FFIEC IT Examination Handbook
- GLBA
- GLBA/Reg SP (privacy)
- HIPAA
- HIPAA (if health insurance)
- MHPAEA
- Multi-state insurance codes
- NAIC model laws
- NAIC model laws and accreditation standards
- No Surprises Act
- OCC heightened standards
- OCC/FDIC/Fed supervisory guidance
- ORSA (Own Risk and Solvency Assessment)
- Reg E
- SEC/FINRA (if broker-dealer)
- SEC/FINRA rules
- SR 11-7 (model risk)
- State DOI examination authority
- State DOI market conduct examination standards
- State banking regulations
- State insurance codes
- State money transmitter laws
- State prompt-pay and claims-handling statutes
- State rate filing requirements
- State unfair claims practices acts
- UDAAP
The specific frameworks that apply depend on the segment and scale of deployment. Cross-industry frameworks (GDPR, ISO 27001, EU AI Act) may apply in addition to sector-specific regulation.
Financial Services - Fintech Startup
Risk Category: Category 1 — Assistive
Scale: SMB
Applicable Frameworks: SEC/FINRA (if broker-dealer), State money transmitter laws, CFPB UDAAP, GLBA/Reg SP (privacy)
Moving fast does not exempt fintech from CFPB examination standards.
The Governance Challenge
Fintech startups use AI to draft investor memos, synthesize market data, and generate customer-facing communications. The speed advantage is real. The regulatory surface is also real — SEC/FINRA obligations attach if broker-dealer activity is involved, CFPB UDAAP standards apply to consumer-facing outputs, and GLBA governs data handling. Most fintechs discover regulatory exposure at the Series B diligence stage, when it becomes a valuation problem.
Regulatory Application
CFPB UDAAP enforcement does not distinguish between human-authored and AI-generated customer communications. SEC/FINRA rules apply to any AI-assisted investment communication. State money transmitter laws add jurisdiction-specific requirements. GLBA/Reg SP governs customer data in AI systems. Regulatory exposure scales with the product, not the headcount.
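The kind of lightweight disclosure check described above can be sketched as a pre-send gate on AI-drafted customer messages. This is an illustrative sketch only: the trigger phrases and required disclosure language below are invented placeholders, not CFPB-mandated text.

```python
# Hypothetical pre-send check: an AI-drafted customer message that
# mentions a fee-bearing product must also carry the matching
# disclosure. Triggers and disclosures here are illustrative only.

REQUIRED_DISCLOSURES = {
    "overdraft": "Overdraft fees may apply",
    "wire transfer": "Wire transfer fees vary by institution",
}

def missing_disclosures(draft: str) -> list[str]:
    """Return the disclosures a draft triggers but does not contain."""
    text = draft.lower()
    return [
        disclosure
        for trigger, disclosure in REQUIRED_DISCLOSURES.items()
        if trigger in text and disclosure.lower() not in text
    ]

draft = "Your wire transfer was initiated today."
print(missing_disclosures(draft))  # -> ['Wire transfer fees vary by institution']
```

A non-empty result blocks release to the customer until a reviewer resolves the gap -- the point is that the check runs before the message leaves the building, not after an examiner finds it.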
AI Deployment Environments
- Studio: Investor and board memo drafting | Market scan & synthesis | Internal metrics commentary
- Refinery: Customer email / chat drafting governance | Lightweight disclosure and fee explanation checks
Typical deployment path: Studio → Studio → Refinery
Evidence
- CFPB issued AI-specific UDAAP guidance in 2023
- 55% of financial services firms have formal AI policies; fintechs lag significantly
- Series B diligence increasingly includes AI governance risk assessment
Financial Services - Banking -- Regional / Mid-Market
Risk Category: Category 2 — Regulated Decision-Making
Scale: Mid-Market
Applicable Frameworks: BSA/AML, Reg E, UDAAP, FFIEC IT Examination Handbook, GLBA, CRA, State banking regulations, OCC/FDIC/Fed supervisory guidance
The examiner will ask how AI-generated disclosures are governed. The answer needs to exist before the exam.
The Governance Challenge
Regional banks deploy AI for advisor productivity, customer-facing disclosures, product explanations, and KYC/AML screening summaries. The compliance infrastructure is mature — BSA/AML, UDAAP, and FFIEC standards are deeply embedded. But that infrastructure was built for human-authored outputs. When a model drafts a fee disclosure that omits a material term, the existing workflow catches it only if a human reviewer notices. The OCC and FDIC are signaling that AI-generated customer content will be examined under the same standards as human-authored content.
Regulatory Application
OCC Bulletin 2023-17 and FFIEC IT Examination Handbook updates signal that AI-generated content will be examined under existing UDAAP and disclosure standards. SR 11-7 model risk management applies to models influencing customer-facing decisions. BSA/AML examination procedures require that screening decisions be documentable and reproducible. State banking regulators follow federal guidance with jurisdiction-specific requirements.
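One way to make a screening decision documentable and reproducible is to tie the recorded summary to a stable hash of its inputs and the model version that produced it. This is a minimal sketch under that assumption -- the field names and the sample data are illustrative, not an examination format.

```python
# Illustrative sketch: capture a KYC/AML screening summary so the
# decision is documentable and reproducible -- same inputs, same model
# version, same recorded output. Not an official record layout.
import hashlib
import json

def screening_record(inputs: dict, model_version: str, summary: str) -> dict:
    """Build a record tying a screening summary to its exact inputs."""
    canonical = json.dumps(inputs, sort_keys=True)  # stable serialization
    return {
        "input_hash": hashlib.sha256(canonical.encode()).hexdigest(),
        "model_version": model_version,
        "summary": summary,
    }

rec = screening_record(
    {"customer_id": "C-1001", "watchlists": ["OFAC"]},
    model_version="screener-v2.3",
    summary="No match; name similarity below threshold.",
)
# Re-running with identical inputs yields the same input_hash, which is
# what lets an examiner verify the decision is reproducible.
```

The design choice worth noting is `sort_keys=True`: without canonical serialization, the same logical inputs could hash differently, and the reproducibility claim collapses.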
AI Deployment Environments
- Studio: Advisor productivity tools | Internal knowledge search | Draft recommendations
- Refinery: Customer-facing disclosures | Product and fee explanations | KYC/AML screening summaries
- Clean Room: Board and regulator briefing packs with chain-of-custody for scenarios and stress tests
Typical deployment path: Refinery → Refinery → Clean Room
Evidence
- OCC Bulletin 2023-17 (interagency third-party risk management) governs AI vendors as critical third parties
- FFIEC IT Examination Handbook updated for AI/ML risk assessment
- SR 11-7 model risk management applies to GenAI customer-facing outputs
- UDAAP enforcement actions typically cost $5-50M in penalties and remediation
Financial Services - Banking -- Global / Systemic
Risk Category: Category 2 — Regulated Decision-Making
Scale: Enterprise
Applicable Frameworks: BSA/AML, AMLA 2020, Dodd-Frank, Basel III/IV, SEC/FINRA rules, CFTC (if derivatives), OCC heightened standards, SR 11-7 (model risk), Cross-border: MiFID II, PSD2
SR 11-7 was written for quantitative models. Examiners are applying it to every AI-generated regulatory narrative.
The Governance Challenge
Global banks deploy AI across internal research, policy drafting, regulatory filings, supervisory reporting, and model documentation. The regulatory stack is the densest in any industry — BSA/AML, AMLA 2020, Dodd-Frank, Basel III/IV, SEC/FINRA rules, OCC heightened standards. SR 11-7 model risk management now applies to generative AI outputs influencing customer decisions or regulatory reporting. Cross-border operations add MiFID II and PSD2 obligations. The challenge is not building a new governance framework — it is extending the existing one to AI outputs without creating a parallel compliance bureaucracy.
Regulatory Application
SR 11-7 applies to any model influencing material decisions — including generative AI producing regulatory narratives and client communications. BSA/AML and AMLA 2020 require reproducible screening decisions. Dodd-Frank stress testing narratives that touch AI require the same auditability as quantitative model outputs. SEC/FINRA rules apply to AI-assisted investment communications. Cross-border operations trigger MiFID II best execution documentation and PSD2 strong customer authentication requirements.
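The auditability requirement above can be sketched as a hash-linked chain of custody for a generated narrative: every version is hashed, every revision and approval records who acted, and each link commits to its predecessor. This is a minimal sketch, assuming SR 11-7-style auditability reduces to an ordered, tamper-evident review chain; the actor names and field layout are invented.

```python
# Minimal chain-of-custody sketch for an AI-drafted regulatory
# narrative. Each link hashes its content together with the previous
# link's hash, so any tampering breaks the chain. Illustrative only.
import hashlib

def entry(text: str, actor: str, action: str, prev_hash: str = "") -> dict:
    """One link in the custody chain; each link commits to the prior one."""
    digest = hashlib.sha256((prev_hash + actor + action + text).encode()).hexdigest()
    return {"actor": actor, "action": action, "hash": digest, "prev": prev_hash}

chain = []
e1 = entry("Stress-test narrative v1", "model:narrator-v1", "generated")
chain.append(e1)
e2 = entry("Stress-test narrative v2", "analyst:jdoe", "revised", e1["hash"])
chain.append(e2)
e3 = entry("Stress-test narrative v2", "mrm:asmith", "approved", e2["hash"])
chain.append(e3)

def verify(chain: list[dict]) -> bool:
    """Check that each link points at the hash of its predecessor."""
    return all(
        chain[i]["prev"] == chain[i - 1]["hash"] for i in range(1, len(chain))
    )
```

The point of the back-pointing hashes is that the review chain itself becomes evidence: an examiner can confirm the approved text is the text the reviewer actually saw.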
AI Deployment Environments
- Studio: Internal research copilots | Draft policy and product documents
- Refinery: Regulatory filings | Supervisory reporting narratives | Model documentation
- Clean Room: Algorithmic trading governance | SEC/FINRA-defensible output | Systemic risk reporting with audit trail
Typical deployment path: Refinery → Refinery → Clean Room
Evidence
- OCC heightened standards apply to AI at systemically important institutions
- Basel III/IV model risk requirements extend to AI-influenced capital calculations
- Algorithmic trading governance under SEC/FINRA requires output-level audit trails
- Cross-border regulatory fragmentation multiplies compliance surface
Financial Services - Insurance -- Regional Carrier
Risk Category: Category 2 — Regulated Decision-Making
Scale: Mid-Market
Applicable Frameworks: State insurance codes, NAIC model laws, State DOI market conduct examination standards, HIPAA (if health insurance), State unfair claims practices acts
The DOI examiner will ask how AI influenced the claims determination. The policy file needs the answer.
The Governance Challenge
Regional carriers deploy AI for underwriting assistance, agent productivity, customer-facing coverage explanations, claims determination rationales, and renewal decision narratives. State insurance codes and NAIC model laws govern every customer-facing output. State DOI market conduct examiners review AI-generated claims determinations under the same standards as human-authored ones. When an AI-generated claims denial rationale is challenged, the carrier needs to produce the reasoning chain — not just the outcome.
Regulatory Application
State insurance codes govern every AI-generated customer communication. NAIC model laws provide the governance baseline. State DOI market conduct examination standards apply to AI-assisted underwriting and claims decisions. HIPAA applies to health insurance AI workflows. State unfair claims practices acts apply to AI-generated claims determinations and denial rationales.
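The "reasoning chain, not just the outcome" point can be made concrete with a record format that refuses to file a denial unless every reasoning step cites the policy provision it relied on. A minimal sketch, assuming a simple step-plus-citation structure; the field names and sample provisions are invented.

```python
# Illustrative sketch: a claims file carries the reasoning chain behind
# an AI-assisted denial, and every step must cite a policy provision.
# Field names and the sample provisions are placeholders.

def denial_record(claim_id: str, outcome: str, steps: list[dict]) -> dict:
    """Build a claims record; reject any reasoning step without a citation."""
    uncited = [s for s in steps if not s.get("provision")]
    if uncited:
        raise ValueError("every reasoning step must cite a policy provision")
    return {"claim_id": claim_id, "outcome": outcome, "reasoning": steps}

rec = denial_record(
    "CLM-2024-0187",
    "denied",
    [
        {"finding": "Loss occurred during lapse in coverage",
         "provision": "Section 4.2 (policy period)"},
        {"finding": "No applicable grace-period extension",
         "provision": "Section 4.5 (grace period)"},
    ],
)
```

The validation runs at record-creation time, so an uncited rationale can never reach the policy file in the first place -- which is exactly the state a DOI examiner will probe for.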
AI Deployment Environments
- Studio: Underwriting assistant | Agent / broker productivity copilots
- Refinery: Customer-facing coverage explanations | Claims determination rationales | Renewal decision narratives
- Clean Room: Regulatory examination packages | Portfolio-level risk narrative governance
Typical deployment path: Refinery → Refinery → Clean Room
Evidence
- 55% of financial services firms have formal AI policies; regional carriers lag
- State DOI market conduct examinations incorporating AI governance questions
- NAIC model bulletin on AI governance adopted in December 2023
- Claims determination challenges are the most common DOI examination trigger
Financial Services - Insurance -- National Carrier
Risk Category: Category 2 — Regulated Decision-Making
Scale: Enterprise
Applicable Frameworks: Multi-state insurance codes, NAIC model laws and accreditation standards, State rate filing requirements, ORSA (Own Risk and Solvency Assessment), State DOI examination authority
Multi-state rate filings generated by AI require the same actuarial defensibility as human-prepared ones.
The Governance Challenge
National carriers deploy AI across internal model commentary, multi-state regulatory filings, actuarial model explanations, and rate justification narratives. Operations span dozens of state jurisdictions, each with its own insurance code, rate filing requirements, and DOI examination authority. ORSA requirements add enterprise risk governance. When an AI-generated rate filing narrative is challenged by a state DOI, the carrier must demonstrate the actuarial basis, the model's contribution, and the human review chain — across every jurisdiction simultaneously.
Regulatory Application
Multi-state insurance codes create jurisdiction-specific AI governance requirements. NAIC model laws and accreditation standards provide the baseline. State rate filing requirements apply to AI-generated justification narratives. ORSA requirements demand enterprise-level AI risk governance. State DOI examination authority extends to AI-assisted underwriting and pricing decisions.
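The jurisdiction-by-jurisdiction problem described above can be sketched as a per-state requirements map checked against the sections a filing actually contains. The states and required elements below are invented examples, not actual DOI filing requirements.

```python
# Hypothetical sketch: each state's DOI requires certain narrative
# elements in a rate filing; report, per state, what a filing is
# missing. States and elements are invented examples.

STATE_REQUIREMENTS = {
    "TX": {"actuarial_basis", "loss_trend"},
    "NY": {"actuarial_basis", "loss_trend", "disparate_impact_analysis"},
}

def gaps_by_state(filing_sections: set[str]) -> dict[str, set[str]]:
    """Return, per state, the required elements the filing is missing."""
    return {
        state: required - filing_sections
        for state, required in STATE_REQUIREMENTS.items()
        if required - filing_sections
    }

filing = {"actuarial_basis", "loss_trend"}
print(gaps_by_state(filing))  # -> {'NY': {'disparate_impact_analysis'}}
```

Because the check is set arithmetic over a declarative map, adding a jurisdiction is a data change, not a code change -- which is what keeps dozens of state codes manageable.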
AI Deployment Environments
- Studio: Internal model and portfolio commentary drafting
- Refinery: Multi-state regulatory filings | Actuarial model explanations
- Clean Room: Rate filing defensibility | DOI examination readiness | SIU fraud investigation output
Typical deployment path: Refinery → Refinery → Clean Room
Evidence
- NAIC model bulletin on AI governance adopted in December 2023
- ORSA requirements now include AI risk assessment in multiple states
- Multi-state DOI examination coordination increasing for AI-related issues
- Rate filing challenges are the highest-cost regulatory event for national carriers
Financial Services - Insurance & Benefits Admin -- TPAs / PBMs
Risk Category: Category 2 — Regulated Decision-Making
Scale: Mid-Market-Enterprise
Applicable Frameworks: ERISA, ACA, MHPAEA, No Surprises Act, State insurance codes, CMS/HHS guidance, HIPAA, State prompt-pay and claims-handling statutes, DOL fiduciary rules
An AI-generated denial letter that omits an appeal right is still a fiduciary breach.
The Governance Challenge
TPAs and PBMs deploy AI for benefit explanation drafting, plan comparison summaries, member-facing coverage explanations, and denial/appeal letter generation. ERISA fiduciary obligations apply to every benefit determination communication. ACA, MHPAEA, and the No Surprises Act impose specific disclosure requirements. State prompt-pay and claims-handling statutes add jurisdiction-specific standards. When an AI-generated denial letter omits a required appeal right, mischaracterizes a coverage limitation, or fails to meet MHPAEA parity disclosure requirements, the TPA faces DOL fiduciary liability and state regulatory action simultaneously.
Regulatory Application
ERISA imposes fiduciary standards on AI-generated benefit determination communications. ACA requires specific essential health benefit disclosures. MHPAEA parity requirements apply to AI-assisted mental health and substance use coverage determinations. No Surprises Act governs AI-generated billing communications. State insurance codes and prompt-pay statutes add jurisdiction-specific requirements. DOL fiduciary rules apply to all plan administration communications. HIPAA governs member health data in AI systems.
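A denial letter that omits a required appeal right should never leave the system, and that gate can be sketched as a required-elements scan before release. Illustrative only: the element names and phrases below are placeholders, not DOL or plan-document language.

```python
# Illustrative pre-send gate: refuse to release an AI-generated denial
# letter missing elements an ERISA-style review would expect. The
# element names and phrases are invented placeholders.

REQUIRED_ELEMENTS = {
    "appeal_right": "you have the right to appeal",
    "appeal_deadline": "within 180 days",
    "specific_reason": "this determination was based on",
}

def release_blocked(letter: str) -> list[str]:
    """Return the names of required elements the letter does not contain."""
    text = letter.lower()
    return [name for name, phrase in REQUIRED_ELEMENTS.items() if phrase not in text]

letter = (
    "This determination was based on plan exclusion 7(b). "
    "You have the right to appeal within 180 days of this notice."
)
# release_blocked(letter) is empty, so this letter may be released;
# a bare "Claim denied." would be blocked on all three elements.
```

Phrase matching is deliberately crude here; the structural point is that the gate sits between generation and the member, so the fiduciary-breach scenario in the pull quote cannot occur silently.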
AI Deployment Environments
- Studio: Internal benefit explanation drafting | Plan comparison summaries
- Refinery: Member-facing coverage explanations | Denial / appeal letter governance
- Clean Room: Regulator and large-client evidence packs for disputed benefit determinations
Typical deployment path: Refinery → Refinery → Clean Room
Evidence
- DOL fiduciary breach penalties for benefit determination failures are significant
- MHPAEA enforcement actions for parity violations increasing
- No Surprises Act disclosure requirements expanding
- Large employer clients increasingly require AI governance evidence from TPAs