EU AI Act: General-Purpose AI Obligations Locked. 2025 Compliance Timelines Now Board-Accountable.
SEC Update: AI-Washing Enforcement Active. Material AI Risk Disclosures Now Scrutinized.
NIST AI RMF: Generative AI Controls Expand Expected Risk Management Baselines.
ISO/IEC 42001: AI Management Systems Emerge as Enterprise Governance Baseline.
Regulatory Reference 2025-2026

Board-Level AI Governance Glossary

Board-defensible AI governance terms paired with the evidence expectations regulators and auditors demand.

Each term shows boards and risk leaders exactly what documentation demonstrates governance when regulators, auditors, or shareholders examine AI decisions.

Grounded in EU AI Act, SEC enforcement trends, NIST AI RMF, and ISO/IEC 42001.


Tier 1: Foundation controls

Defines ownership, classification, and risk boundaries. This is the minimum viable structure for defensible oversight.

AI accountability and role assignment

GC ISO/IEC 42001

Definition

Clear assignment of responsibility and decision rights for AI systems across design, deployment, monitoring, and incident response, so liability does not default to ambiguity.

Board-Defensible Evidence

  • Role and responsibility model (RASCI or equivalent) naming accountable owners for business outcomes, technical operation, risk oversight, compliance review, and security controls, with documented decision rights.
  • System-level ownership records showing named individuals and back-ups, last reviewed dates, and evidence of acceptance of responsibility for required control obligations.
  • Approval workflow documentation showing who can authorize production deployment, who can approve exceptions, and who can stop or roll back a model when risk thresholds are breached.
  • Meeting minutes or governance committee charters showing escalation pathways, quorum requirements, and how disputed decisions are resolved and recorded.
  • Incident accountability evidence showing who is on the response roster, who must be notified, what timelines apply, and how post-incident corrective actions are assigned and tracked.
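
In practice, the role model in the first bullet reduces to a small, reviewable data structure. A minimal sketch of what a machine-readable RASCI record might look like, assuming hypothetical role names and fields:

```python
from dataclasses import dataclass, field

# Hypothetical RASCI record for one AI system; field names are illustrative,
# not drawn from any published schema.
@dataclass
class RasciAssignment:
    system_id: str
    responsible: str          # operates the system day to day
    accountable: str          # one named person who answers for outcomes
    supports: list[str] = field(default_factory=list)
    consulted: list[str] = field(default_factory=list)
    informed: list[str] = field(default_factory=list)
    last_reviewed: str = ""   # ISO date of the most recent ownership review

credit_model = RasciAssignment(
    system_id="credit-scoring-v3",
    responsible="ml-platform-team",
    accountable="jane.doe@example.com",   # a person, never a group alias
    consulted=["legal", "model-risk"],
    informed=["audit-committee"],
    last_reviewed="2025-06-01",
)
assert credit_model.accountable, "every system needs a named accountable owner"
```

The deliberate design choice is that `accountable` holds exactly one name; shared accountability is the ambiguity the definition warns against.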

Why This Matters

When a regulator asks who was accountable, undefined roles convert a technical failure into a governance and liability failure.

AI management system (AIMS)

CRO ISO/IEC 42001

Definition

A structured management system that defines AI governance policies, roles, processes, and controls across the AI lifecycle, aligned to an auditable standard.

Board-Defensible Evidence

  • AIMS scope statement defining which business units, geographies, and AI systems are covered, who approves scope changes, and how exclusions are justified and documented.
  • Policy and procedure set covering risk classification, approval gates, documentation requirements, monitoring, incident response, vendor governance, and exceptions, including version history and approval dates.
  • Management review artifacts showing periodic review by executive leadership, decisions made, resourcing actions taken, and tracked corrective actions with owners and deadlines.
  • Internal audit or control testing records showing that AIMS controls were tested, deficiencies were logged, remediation was verified, and repeat findings were escalated.
  • Training and competency records showing who is authorized to develop, deploy, approve, or monitor AI systems, with completion dates and role-based training requirements.

Why This Matters

When scrutiny hits, a defensible management system is how you show governance is operational, not a slide deck.

AI risk appetite and tolerance

CRO NIST AI RMF

Definition

Board-approved boundaries that define which AI risks are acceptable, which are constrained, and which are not permitted, with measurable thresholds for decision-making.

Board-Defensible Evidence

  • Board or committee-approved risk appetite statement that explicitly addresses AI harms (safety, discrimination, privacy, security, financial reporting impact), including approval date and reviewing body.
  • Operational tolerance thresholds translated into measurable limits (error rates, drift thresholds, false positive rates, bias metrics, latency or uptime requirements), including who set each threshold and the rationale.
  • Decision logs showing how appetite and tolerance were applied to approve, reject, or constrain specific AI use cases, including escalation paths when risks exceeded tolerances.
  • Exception register documenting each override, who approved it, what compensating controls were required, and when the exception expires or is re-evaluated.
  • KRIs and reporting pack evidence showing how leadership monitors adherence to appetite over time, including trend reporting and remediation when limits are breached.
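
Tolerance thresholds only become operational when they are executable. A minimal sketch of how board-approved limits might be encoded and checked, assuming hypothetical metric names, owners, and values:

```python
# Illustrative translation of an appetite statement into measurable limits.
TOLERANCES = {
    "false_positive_rate":    {"limit": 0.05, "direction": "max", "set_by": "model-risk"},
    "demographic_parity_gap": {"limit": 0.10, "direction": "max", "set_by": "compliance"},
    "uptime_pct":             {"limit": 99.5, "direction": "min", "set_by": "operations"},
}

def breaches(observed: dict[str, float]) -> list[str]:
    """Return the metrics whose observed values fall outside tolerance."""
    out = []
    for name, rule in TOLERANCES.items():
        if name not in observed:
            continue  # a missing metric is itself a monitoring gap to report
        value = observed[name]
        if rule["direction"] == "max" and value > rule["limit"]:
            out.append(name)
        elif rule["direction"] == "min" and value < rule["limit"]:
            out.append(name)
    return out

# 0.07 exceeds the 0.05 limit, so this run would trigger the escalation path.
print(breaches({"false_positive_rate": 0.07, "uptime_pct": 99.9}))
```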

Why This Matters

In enforcement or litigation, the gap between stated appetite and actual approvals becomes discoverable evidence of oversight failure.

AI system risk classification

CRO EU AI Act

Definition

A formal method for categorizing AI systems by risk level, so controls, approvals, and obligations match the impact of the use case.

Board-Defensible Evidence

  • Risk taxonomy and written criteria defining tiers (prohibited, high-risk, limited-risk, minimal-risk) and the specific triggers that move a system into each category, including jurisdictional mappings where applicable.
  • Classification decision record for each system showing who classified it, when it was classified, what evidence was reviewed (use case, data types, user population, environment), and the documented rationale for the final tier.
  • Governance workflow proof showing required approvers by tier (risk, legal, compliance, security) and the timestamps of approvals, exceptions, or escalations to an AI governance committee.
  • Change-control linkage showing how reclassification happens when the system changes (model updates, new features, new data sources, new deployment context), including who authorized the change and why.
  • Independent challenge or second-line review evidence (risk, internal audit, or external advisor) confirming the taxonomy is applied consistently and that edge cases are handled through a documented exception process.
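
A written taxonomy becomes auditable when its triggers are explicit enough to mechanize. The sketch below assumes a handful of illustrative triggers; the real criteria come from the written taxonomy and jurisdictional mappings in the first bullet:

```python
from enum import Enum

class Tier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high-risk"
    LIMITED = "limited-risk"
    MINIMAL = "minimal-risk"

def classify(use_case: dict) -> Tier:
    """Assign a tier from documented triggers; the order encodes severity."""
    if use_case.get("social_scoring"):
        return Tier.PROHIBITED
    if use_case.get("affects_eligibility") or use_case.get("safety_component"):
        return Tier.HIGH
    if use_case.get("interacts_with_public"):
        return Tier.LIMITED
    return Tier.MINIMAL

record = {"system": "resume-screener", "affects_eligibility": True}
print(classify(record))   # Tier.HIGH -- record who classified it, when, and why
```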

Why This Matters

In enforcement or litigation, an inconsistent or undocumented classification process becomes evidence that the organization did not exercise credible oversight over AI risk.

High-risk AI use case inventory

CCO EU AI Act

Definition

A centralized register of high-risk AI systems and use cases, including where they are used, who owns them, and what obligations and controls apply.

Board-Defensible Evidence

  • Inventory schema with required fields (system name, business owner, technical owner, purpose, impacted stakeholders, geography, vendor involvement, data types, model type, deployment location) and a defined completeness standard.
  • System-of-record exports showing when each entry was created and last reviewed, who attested to accuracy, and the workflow used to onboard new AI systems before production use.
  • Cross-reference evidence tying inventory entries to procurement records, vendor contracts, model cards, DPIAs or AI impact assessments, and security risk assessments, including traceable IDs or links.
  • Controls coverage report showing which required controls apply to each inventory entry (testing, monitoring, incident response, human oversight) and which are pending, waived, or implemented with compensating controls.
  • Periodic governance review minutes or attestation logs showing that risk, legal, and compliance reviewed the inventory on a defined cadence and addressed gaps, duplicates, or shadow AI discoveries.
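
The inventory schema in the first bullet lends itself to a typed record plus a measurable completeness standard. A minimal sketch, with hypothetical field names:

```python
from dataclasses import dataclass, fields
from typing import Optional

@dataclass
class InventoryEntry:
    system_name: str
    business_owner: str
    technical_owner: str
    purpose: str
    geography: str
    data_types: str
    model_type: str
    deployment_location: str
    vendor: Optional[str] = None
    last_reviewed: Optional[str] = None   # ISO date of the last attestation

def completeness(entry: InventoryEntry) -> float:
    """Fraction of fields populated -- one possible completeness standard."""
    values = [getattr(entry, f.name) for f in fields(entry)]
    return sum(1 for v in values if v) / len(values)

entry = InventoryEntry(
    system_name="resume-screener", business_owner="hr-director",
    technical_owner="ml-team", purpose="candidate triage", geography="EU",
    data_types="CV text", model_type="classifier", deployment_location="eu-west",
)
print(f"{completeness(entry):.0%}")   # 80%: vendor and last_reviewed are empty,
                                      # so this entry fails a 100% standard
```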

Why This Matters

If you cannot produce a complete, current inventory on demand, regulators and auditors will assume governance is reactive and incomplete.

Tier 2: Evidence-grade controls

What auditors and regulators expect to see operating in the real world. These terms map to repeatable proof.

AI auditability and logging

CRO EU AI Act

Definition

The ability to reconstruct what an AI system did, when it did it, who approved it, and what data and model version were involved.

Board-Defensible Evidence

  • Logging standard defining required events (inputs, outputs, confidence scores, overrides, model version, feature set, user actions) and retention periods, with alignment to legal holds and regulatory timelines.
  • System logs and immutable audit trails showing who accessed the system, who changed configurations, who approved releases, and when governance gates were completed.
  • Traceability evidence linking a specific decision to the exact model version, dataset version, and configuration at the time, including unique identifiers and time synchronization controls.
  • Audit access procedures showing who can retrieve logs, how requests are approved, how chain-of-custody is preserved, and how log integrity is validated.
  • Periodic log review and control testing evidence showing that logging is complete, tamper-resistant, monitored for gaps, and remediated when failures occur.
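
The logging standard above implies a structured, tamper-evident event format. A minimal sketch of one audit event tying a decision to its exact model and dataset versions; the field names and the content-hash approach are assumptions, not a published standard:

```python
import hashlib
import json
from datetime import datetime, timezone

def audit_event(decision_id, model_version, dataset_version, output, actor):
    """Build one structured log record with a content hash for integrity checks."""
    event = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "decision_id": decision_id,
        "model_version": model_version,      # exact version, not "latest"
        "dataset_version": dataset_version,
        "output": output,
        "actor": actor,
    }
    payload = json.dumps(event, sort_keys=True)
    # Hashing the canonical payload lets a later review detect tampering.
    event["sha256"] = hashlib.sha256(payload.encode()).hexdigest()
    return event

print(json.dumps(audit_event(
    "dec-00042", "model-2.3.1", "ds-2025-05-01", "approve", "svc-credit")))
```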

Why This Matters

If you cannot produce an audit trail quickly, your defense collapses into assertions instead of evidence.

AI impact assessment (AIIA / risk assessment)

CCO EU AI Act

Definition

A structured assessment of how an AI system could affect people, operations, and compliance, including mitigations and approvals before deployment.

Board-Defensible Evidence

  • Assessment template capturing intended use, foreseeable misuse, impacted stakeholders, harm scenarios, severity and likelihood ratings, and who performed the assessment with dates.
  • Mitigation plan linking each identified risk to a control, owner, due date, and validation method, including how residual risk was determined and approved.
  • Stakeholder input records showing consultation with legal, risk, security, privacy, and business owners, including documented disagreements and how they were resolved.
  • Approval workflow evidence showing the decision gate to proceed, defer, redesign, or reject the use case, with escalation criteria to governance committees.
  • Post-deployment review evidence showing the assessment was revisited after real-world operation, including incident learnings, monitoring results, and updated mitigations.

Why This Matters

Impact assessments are a primary artifact regulators look for when evaluating whether harms were foreseeable and whether mitigations were responsibly implemented.

AI model monitoring and drift management

CRO Model Risk

Definition

Continuous oversight to detect performance degradation, data shifts, and unintended outcomes, with defined actions when thresholds are breached.

Board-Defensible Evidence

  • Monitoring plan defining what is monitored (accuracy, drift, bias signals, latency, error types), how often, and the thresholds that trigger investigation or rollback, with named owners.
  • Dashboards and alerting evidence showing real-time or periodic monitoring outputs, alert history, and ticket links documenting investigations and corrective actions taken.
  • Drift analysis records showing what changed (data distribution, population, environment), when the change began, and how impact to outcomes was measured and reported to oversight teams.
  • Operational playbooks showing what happens when thresholds are breached, including escalation paths, decision rights to pause the model, and communication requirements to stakeholders.
  • Periodic performance review and revalidation records showing sign-off to continue operation, including documented rationale when operating near tolerance limits.
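
Drift thresholds need a concrete metric behind them. One widely used choice is the Population Stability Index (PSI); the sketch below uses hypothetical bin distributions and the common rule-of-thumb threshold of 0.2 for a major shift:

```python
import math

def psi(expected: list[float], actual: list[float]) -> float:
    """Population Stability Index over pre-binned distributions.

    Both lists hold bin proportions that each sum to 1."""
    eps = 1e-6   # guard against empty bins
    return sum((a - e) * math.log((a + eps) / (e + eps))
               for e, a in zip(expected, actual))

baseline = [0.25, 0.25, 0.25, 0.25]   # distribution at validation time
current  = [0.10, 0.20, 0.30, 0.40]   # distribution observed in production
score = psi(baseline, current)
if score > 0.2:                        # rule of thumb; set per monitoring plan
    print(f"PSI {score:.3f}: open a drift investigation per the playbook")
```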

Why This Matters

Post-incident questions are simple: what did you monitor, when did you know, and what did you do about it.

AI model validation and independent review

CRO Model Risk

Definition

Independent confirmation that an AI model is fit for its intended purpose, performs as claimed, and meets control expectations before and after deployment.

Board-Defensible Evidence

  • Validation plan defining scope, test methodology, acceptance criteria, and who performs validation versus who built the model, including the independence standard applied.
  • Validation report documenting performance results, limitations, sensitivity analysis, stress testing, and known failure modes, with explicit sign-off by the independent reviewer and the accountable owner.
  • Data and feature review evidence showing training and testing data provenance, representativeness concerns, leakage checks, and documented decisions about inclusion or exclusion of sensitive variables.
  • Issue log and remediation tracker showing validation findings, severity ratings, corrective actions, retesting results, and the date the model was cleared for production use.
  • Periodic re-validation triggers and records showing when validation must be repeated (drift, data changes, code changes, new populations, new geography), and who approved continuing operation.

Why This Matters

When outcomes are challenged, validation records are how you prove the model was reviewed independently and approved based on evidence, not optimism.

AI-related data governance and lineage

CPO NIST AI RMF

Definition

Control of data sources, quality, permissions, and traceability, so AI outputs can be tied back to the data used to train and operate the system.

Board-Defensible Evidence

  • Data lineage maps showing source systems, transformations, feature engineering steps, and where data is stored and accessed across training, testing, and production environments.
  • Data access and consent evidence showing legal basis or permissioning for data use, including purpose limitation, retention rules, and approvals for sensitive data categories.
  • Data quality controls showing validation checks, missingness thresholds, label quality reviews, and documented decisions when data quality is insufficient but the model proceeds with mitigations.
  • Dataset versioning and provenance evidence showing when datasets changed, who approved changes, and which model versions were trained on which dataset versions.
  • Third-party data governance artifacts (if applicable) showing vendor data sourcing assurances, contractual restrictions, audit rights, and compliance attestations tied to the data supply chain.
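
Dataset versioning and provenance are, at bottom, a traversable graph from model versions to dataset versions to raw sources. A minimal sketch with hypothetical identifiers:

```python
# Illustrative lineage store; identifiers and fields are hypothetical.
LINEAGE = {
    "model-2.3.1": {
        "trained_on": ["ds-2025-05-01"],
        "approved_by": "data-governance-board",
    },
    "ds-2025-05-01": {
        "sources": ["crm_export_q1", "clickstream_raw"],
        "transforms": ["dedupe", "pii_scrub", "feature_set_v7"],
        "legal_basis": "contract",   # purpose limitation documented elsewhere
    },
}

def provenance(model_version: str) -> list[str]:
    """Walk one hop of lineage: model -> training datasets -> raw sources."""
    datasets = LINEAGE.get(model_version, {}).get("trained_on", [])
    return [src for ds in datasets
            for src in LINEAGE.get(ds, {}).get("sources", [])]

print(provenance("model-2.3.1"))   # ['crm_export_q1', 'clickstream_raw']
```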

Why This Matters

When outputs are challenged, lineage is how you prove what data was used, whether it was permitted, and whether it was fit for purpose.

AI transparency and documentation

CCO EU AI Act

Definition

Complete, consistent documentation that makes AI design choices, limitations, and operational controls reviewable by auditors, regulators, and internal oversight teams.

Board-Defensible Evidence

  • Documentation baseline defining what must exist per system (purpose, intended use, data sources, model type, training approach, testing results, monitoring plan), including who is responsible for maintaining each artifact.
  • Version-controlled artifacts such as model cards, system cards, risk assessments, and control mappings, each with dates, approvers, and change summaries that explain what changed and why.
  • Traceability evidence connecting documentation to real systems (repository links, ticket IDs, deployment IDs, monitoring dashboards), so documentation can be tied to the exact model version in production.
  • Disclosure and communications review records showing that external statements (marketing, investor relations, customer documentation) were reviewed against actual capabilities and limitations.
  • Audit readiness pack checklists showing periodic completeness checks, gap remediation, and a defined process for producing documentation quickly during an inquiry.

Why This Matters

In regulatory review, the absence of documentation is treated as the absence of control.

Bias, fairness, and non-discrimination controls

CCO NIST AI RMF

Definition

Documented measures to detect, prevent, and remediate discriminatory outcomes in AI systems, especially where decisions affect people, access, or eligibility.

Board-Defensible Evidence

  • Defined fairness objectives tied to the use case, including which protected classes and stakeholder groups are in scope, and why the chosen fairness approach is appropriate for the business context.
  • Testing artifacts showing pre-deployment bias evaluation, what datasets were used, what metrics were computed, and how results were reviewed and approved by risk and compliance.
  • Ongoing monitoring records showing fairness metrics over time, alert thresholds, and evidence of investigation when drift or disparate impact signals appear.
  • Remediation logs showing what changes were made (data rebalancing, threshold adjustments, model changes, policy changes), who authorized them, and what post-fix validation confirmed improvement.
  • External or second-line review evidence (legal, compliance, internal audit, or independent reviewer) confirming testing and monitoring are consistent with policy and that exceptions are justified and time-bound.
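
Many fairness metrics are straightforward to compute once groups and outcomes are defined. The sketch below computes a disparate impact ratio against the familiar four-fifths heuristic, with illustrative data; which metric is appropriate depends on the fairness objectives in the first bullet:

```python
def selection_rate(outcomes: list[int]) -> float:
    """Share of positive outcomes (1 = approved) in a group."""
    return sum(outcomes) / len(outcomes)

group_a = [1, 1, 0, 1, 0, 1, 1, 0]   # 62.5% approved (illustrative data)
group_b = [1, 0, 0, 0, 1, 0, 0, 0]   # 25.0% approved

ratio = selection_rate(group_b) / selection_rate(group_a)
print(f"impact ratio: {ratio:.2f}")
# 0.40 falls below the 0.8 four-fifths heuristic, which would trigger the
# investigation and remediation steps documented above.
```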

Why This Matters

Discrimination claims convert quickly into regulatory scrutiny and litigation, and the defense depends on what you tested, what you monitored, and what you fixed.

Explainability and stakeholder rationale

GC NIST AI RMF

Definition

The ability to provide a human-understandable explanation for how an AI system influences outcomes, tailored to regulators, customers, and affected individuals.

Board-Defensible Evidence

  • Explainability standard defining what must be explainable for the use case (inputs, decision factors, confidence, limitations), who the explanation is for, and the minimum acceptable explanation format.
  • Evidence of explanation artifacts such as model cards, decision rationale templates, feature importance summaries, and user-facing disclosures, each versioned and approved for use.
  • Testing records showing that explanations are accurate, consistent, and not misleading, including review by legal and compliance for consumer or stakeholder communications.
  • Operational procedures showing how explanations are delivered on request (customer support workflow, regulator response workflow), including response SLAs and required approvers.
  • Exception documentation for models that cannot be fully explained, including risk justification, compensating controls, and approval by risk and legal with periodic re-evaluation dates.

Why This Matters

If you cannot explain outcomes credibly, the narrative will be written by regulators, plaintiffs, or the press, not your governance team.

Model governance and lifecycle controls

CRO ISO/IEC 42001

Definition

End-to-end controls that govern how AI models are proposed, built, tested, approved, deployed, changed, and retired.

Board-Defensible Evidence

  • Lifecycle policy defining required gates (proposal, design review, testing, approval, deployment, monitoring, retirement), with required artifacts and accountable approvers for each gate.
  • Release management and change-control records showing model versioning, what changed, who reviewed it, testing completed, and production deployment approvals with timestamps.
  • Access control and segregation-of-duties evidence showing who can modify training data, code, and deployments, and how privileged actions are logged and reviewed.
  • Retirement and decommission procedures showing how models are removed, data retention is handled, downstream dependencies are identified, and stakeholders are notified.
  • Governance reporting showing lifecycle compliance rates, outstanding control gaps, overdue reviews, and escalation actions taken for non-compliance.
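
Gate enforcement is testable when each gate must carry a named approver before release. A minimal sketch of such a check, with hypothetical gates and approvers:

```python
from enum import Enum

class Gate(Enum):
    PROPOSAL = 1
    DESIGN_REVIEW = 2
    TESTING = 3
    APPROVAL = 4
    DEPLOYMENT = 5

# Hypothetical release record mapping each completed gate to its approver.
release = {
    Gate.PROPOSAL: "product-owner",
    Gate.DESIGN_REVIEW: "architecture-board",
    Gate.TESTING: "qa-lead",
    # Gate.APPROVAL has no sign-off yet, so deployment must be blocked.
}

def may_deploy(record: dict) -> bool:
    """Deployment requires every earlier gate to have a named approver."""
    required = [g for g in Gate if g is not Gate.DEPLOYMENT]
    return all(record.get(g) for g in required)

print(may_deploy(release))   # False: the approval gate is missing its approver
```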

Why This Matters

Without lifecycle controls, "one small model update" becomes an ungoverned change that is hard to defend after a failure.

Technical and organizational measures (TOMs) for AI

CCO EU AI Act

Definition

Practical controls, both technical and procedural, that reduce AI risks and demonstrate responsible operation across people, process, and technology.

Board-Defensible Evidence

  • Control catalog mapping TOMs to specific risks (privacy, security, discrimination, safety, integrity), including which TOMs are mandatory by risk tier and who owns implementation.
  • Implementation evidence such as configuration baselines, security control settings, access controls, monitoring configurations, and documented procedures for approvals and reviews.
  • Control testing results showing whether TOMs operate effectively, including sampling approach, testing dates, identified gaps, and remediation confirmation.
  • Organizational readiness records such as training completion, role-based access approval, and documented operational procedures for incident response and change control.
  • Exception process evidence showing compensating controls, approval authority, expiry dates, and periodic reassessment of exceptions that weaken baseline TOMs.

Why This Matters

Regulators do not grade intentions; they evaluate whether controls were implemented, tested, and enforced.

Tier 3: Safety and resilience controls

Hardening, oversight, and incident readiness. This is where "trust" becomes operational survival.

AI in safety-critical systems

CRO NIST AI RMF

Definition

AI used in environments where failures can cause injury, loss of life, major infrastructure disruption, regulatory shutdown, or contract termination.

Board-Defensible Evidence

  • Safety case or hazard analysis documenting plausible failure modes, severity ratings, and mitigations, including who approved the safety assumptions and when they were last reviewed.
  • Operational constraints documentation defining where AI is allowed to act autonomously, where it must defer to humans, and what conditions force a safe state.
  • Verification and validation evidence showing testing under representative operating conditions, edge-case testing, and documented results reviewed by safety, engineering, and risk leadership.
  • Fail-safe and rollback procedures showing how the system is halted or reverted, who can trigger it, how quickly it must happen, and how it is tested periodically.
  • Incident reporting and post-event review artifacts showing how near misses and failures are logged, investigated, escalated, and used to update controls and operating limits.

Why This Matters

In safety-critical contexts, governance must prove that controls were designed for harm prevention, not just performance optimization.

AI incident detection and response

CRO EU AI Act

Definition

Defined capability to detect, triage, contain, and remediate AI-related failures or harms, including governance escalation and documented post-incident corrections.

Board-Defensible Evidence

  • Incident taxonomy defining AI incident types (harm events, bias events, model failures, data leakage, security misuse) and severity levels, including escalation thresholds to leadership and legal.
  • Runbooks showing detection sources, triage steps, containment actions, notification requirements, and who has authority to pause or roll back a model.
  • Incident logs showing event timelines, who was notified when, decisions made, actions taken, and evidence preserved for internal review or external inquiry.
  • Root-cause analysis and corrective action records showing what failed, why it failed, what was changed, and what validation confirmed the fix, including governance sign-off.
  • Post-incident governance review minutes showing lessons learned, policy updates, control improvements, and tracked follow-ups with owners and deadlines.

Why This Matters

When harm happens, response quality and documentation determine whether the story is "controlled event" or "governance breakdown."

AI-related security and model abuse risk

CRO NIST AI RMF

Definition

Security threats unique to AI systems, including prompt injection, data leakage, model theft, training data poisoning, and abusive use of model outputs.

Board-Defensible Evidence

  • Threat model documenting AI-specific risks, attack surfaces, and controls, including who approved the threat model and how often it is updated.
  • Security control evidence showing access controls, secrets management, environment isolation, and model endpoint protections, with configuration baselines and review logs.
  • Testing evidence showing prompt-injection and data-exfiltration testing, abuse case testing, and documented mitigations validated through retesting.
  • Monitoring and alerting evidence showing detection of anomalous usage patterns, abuse signals, and data leakage indicators, with incident tickets and response timelines.
  • Vendor security due diligence records showing how third-party models or platforms were assessed, what security assurances were obtained, and what contractual safeguards are in place.

Why This Matters

AI security incidents often trigger both cyber response and governance questions about why foreseeable abuse paths were not controlled.

Human oversight and human-in-the-loop controls

GC EU AI Act

Definition

Design and operational controls that ensure humans can understand, intervene, override, and stop AI decisions when risk thresholds or safety conditions require it.

Board-Defensible Evidence

  • Oversight design documentation defining where humans review outputs, what they are expected to check, and what conditions require mandatory override or escalation.
  • Role-based training and competency records showing that humans assigned to oversight have the authority, knowledge, and procedures to intervene effectively.
  • Override and intervention logs showing when humans intervened, who intervened, what decision was changed, why it was changed, and what follow-up actions were taken.
  • UI and workflow evidence showing that humans receive meaningful context (confidence, rationale, limitations) and that "rubber-stamping" risk is addressed through controls.
  • Periodic effectiveness reviews showing oversight performance metrics (override rates, error catch rates, escalation timelines) and documented improvements when oversight fails.
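
Override logs are only as defensible as their structure. A minimal sketch of one intervention record, with hypothetical fields mirroring the evidence bullets above:

```python
import json
from dataclasses import asdict, dataclass

@dataclass
class OverrideEvent:
    ts: str                # ISO timestamp of the intervention
    system_id: str
    reviewer: str          # who intervened
    original_output: str   # what the model decided
    final_decision: str    # what the human changed it to
    reason: str
    follow_up: str         # e.g., ticket opened for a retraining review

event = OverrideEvent(
    ts="2025-06-12T14:03:00Z",
    system_id="claims-triage-v1",
    reviewer="analyst-417",
    original_output="deny",
    final_decision="escalate",
    reason="model lacked context on a prior approved claim",
    follow_up="TICKET-8812",
)
print(json.dumps(asdict(event)))
# An append-only store of these events feeds the periodic effectiveness
# review of override rates and error catch rates.
```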

Why This Matters

When an AI-driven harm occurs, oversight is judged by whether humans could realistically prevent it, not whether a policy said they could.

Robustness, resilience, and adversarial testing

CRO NIST AI RMF

Definition

Testing and controls designed to confirm an AI system remains reliable under stress, unusual inputs, attacks, and changing real-world conditions.

Board-Defensible Evidence

  • Adversarial test plan defining threat scenarios, misuse cases, stress conditions, and acceptance criteria, including who approved the plan and who executed testing.
  • Red-team or penetration testing outputs showing findings, severity, reproducibility steps, and documented remediation with retest results and closure dates.
  • Resilience controls evidence showing input validation, rate limiting, monitoring for anomalous prompts or inputs, and fail-safe behaviors when the system encounters unknowns.
  • Business continuity and recovery artifacts showing how the AI system is maintained during outages, model failures, or dependency failures, including tested recovery procedures.
  • Continuous improvement records showing that new threats, incidents, or near misses update the testing program, not just the documentation.

Why This Matters

Robustness is a defensibility issue because predictable failure modes that were never tested look like negligence after the fact.

Tier 4: Board and market exposure controls

Procurement, disclosures, and third-party risk. This is where governance failures become market, investor, and regulatory events.

AI procurement standards and contractual safeguards

GC EU AI Act

Definition

Procurement requirements and contract clauses that set minimum AI governance expectations, evidence access, and liability protections for AI suppliers.

Board-Defensible Evidence

  • Standard AI procurement checklist defining required evidence (documentation, logging, monitoring, security controls, testing results) and who must approve exceptions, including legal and risk sign-off.
  • Contract templates or clause libraries covering audit rights, incident notification timelines, data use restrictions, model change notice, subcontractor controls, and termination rights tied to governance failures.
  • Negotiation and exception records showing where clauses were modified, who approved deviations, what compensating controls were required, and how residual risk was accepted.
  • Delivery acceptance evidence showing the vendor provided required artifacts before go-live (model documentation, security attestations, evaluation results), with acceptance sign-off dates and owners.
  • Post-contract monitoring evidence showing ongoing compliance with contractual governance obligations, including periodic evidence requests and documented follow-up on gaps.

Why This Matters

Contracts are the enforceable layer of governance, and weak safeguards turn vendor risk into your liability.

AI-related financial and risk disclosures

CRO SEC Rules

Definition

Disclosure of AI-related risks and impacts that could influence financial performance, operations, legal exposure, or strategic outcomes.

Board-Defensible Evidence

  • Materiality assessment records showing how AI risks were evaluated for financial reporting and disclosure, including who performed the assessment, what criteria were used, and approval dates.
  • Risk factor drafting files showing how AI risks were described, what evidence informed the language, and legal review sign-off, including version history and rationale for changes.
  • ERM integration evidence showing AI risks mapped into enterprise risk registers, with owners, controls, KRIs, and periodic reporting to audit or risk committees.
  • Incident and loss event records showing AI-related operational disruptions, remediation costs, claims, regulatory inquiries, or contractual impacts, and how they informed disclosures.
  • Board oversight documentation showing AI risk disclosures were reviewed at the appropriate level, including minutes and management responses to committee questions.

Why This Matters

Disclosures are judged on whether leadership had a defensible basis for what was said and what was omitted.

AI-washing and misleading AI disclosures

GC SEC Rules

Definition

Overstating AI capabilities, maturity, or controls in external statements, creating legal and reputational exposure when claims do not match evidence.

Board-Defensible Evidence

  • Claims inventory capturing AI-related statements across marketing, sales, investor materials, and public filings, including who authored each claim and when it was published.
  • Substantiation pack for each material claim showing the supporting evidence (system documentation, performance results, control status), with legal review sign-off and retention of prior versions.
  • Disclosure review workflow showing required reviewers (legal, risk, product, security), approval timestamps, and criteria for rejecting or revising claims that exceed evidence.
  • Change-control linkage showing how public claims are updated when AI systems change, limitations are discovered, or control status shifts, including a record of retractions or corrections.
  • Training and guardrail guidance for communications teams defining prohibited language, required qualifiers, and escalation paths for high-risk claims.

Why This Matters

When regulators examine AI claims, the question becomes whether your statements were controlled communications backed by evidence.

Board oversight of AI governance

GC ISO/IEC 42001

Definition

Board-level structure and reporting that ensures AI risk is governed like other enterprise risks, with clear accountability, cadence, and escalation.

Board-Defensible Evidence

  • Board or committee charter language defining oversight responsibilities for AI risk, including which committee owns which decisions and how AI is included in enterprise risk oversight.
  • Board reporting packs showing AI inventory status, risk tier distribution, incidents, exceptions, and key risk indicators, with a consistent cadence and documented follow-ups.
  • Decision minutes showing board or committee engagement with AI risk, including questions raised, actions requested, and management responses with due dates.
  • Escalation protocols showing when AI issues must be brought to the board, what constitutes a material AI event, and who is responsible for notification and documentation.
  • Periodic effectiveness review evidence showing the board assessed whether oversight is sufficient, including training, external briefings, or independent review where needed.

Why This Matters

Oversight is judged by records of attention and action, not by claims that the board was "aware."

Vendor and third-party AI risk management

CCO NIST AI RMF

Definition

Controls to evaluate, contract for, and monitor AI vendors and third-party AI systems so external dependencies do not become ungoverned risk.

Board-Defensible Evidence

  • Third-party AI intake questionnaire capturing model purpose, training data constraints, evaluation results, security controls, logging support, and incident notification terms, with completed responses retained.
  • Risk assessment records documenting vendor AI risks, who assessed them, what evidence was reviewed (SOC reports, documentation, testing), and what risk rating and mitigations were assigned.
  • Ongoing monitoring evidence showing periodic reassessments, breach or incident notices, control changes, and documented decisions to continue, constrain, or exit the relationship.
  • Shadow AI detection evidence showing how unauthorized third-party AI tools are identified, addressed, and either approved under governance or blocked.
  • Governance reporting to leadership showing third-party AI exposure, critical vendor concentration, and open risk items with owners and deadlines.

Why This Matters

When a vendor AI failure harms customers or operations, the organization is judged on due diligence and ongoing oversight, not on who built the model.

Need this evidence documented and board-ready?

Board Packet Readiness Review

We organize your AI governance evidence into a board-ready packet in 7 days, aligned to EU AI Act and emerging SEC expectations.