
5 Questions Every Board Should Be Asking About AI Right Now

Boards are approving AI deployment without approving a governance framework. Here are the five questions that close that gap before a regulator does.

Grounded in EU AI Act, SEC disclosure review signals, and active U.S. litigation.

[Image: XTO Energy logo on a Simplex control panel screen.]
Caption: The building where I started. The system is still running. Governance kept it that way.

Most boards I talk to have approved AI deployment. Almost none have approved a governance framework to go with it. That gap is where the legal and financial exposure lives.

I sit in those rooms. As an IEEE CertifAIEd Lead Assessor, part of a small global cohort certified to evaluate real-world AI systems, I have assessed AI across industries and advised directors on what defensible governance looks like when a regulator shows up.

These are the five questions every board needs to answer. Not eventually. Now.

Question 01

Can you name every AI system your organization is running right now, including the ones your vendors are running on your behalf?

Most boards cannot.

Regulators and auditors now assume shadow AI exists in every organization. The only question is whether you have a documented way to identify and control it. At RSAC 2026, CrowdStrike CEO George Kurtz warned that most organizations deploy AI agents with less governance than they'd give an intern. Shadow agents outside audit visibility are the new shadow IT.

You cannot govern what you have not inventoried. That list is step one. Nothing else is defensible without it.

If a regulator, enforcement counsel, or your external auditor asked, "Show us your AI system inventory and how you classify risk," could you produce a current, approved document in the next 24 hours? If not, AI risk is effectively outside your enterprise risk management system, even if you have an AI strategy slide.

Question 02

Who is the named, accountable owner of AI governance?

Not a committee. A person with a name and a title who answers to the board when something goes wrong.

When AI governance is everyone's job, it survives no audit. A title without authority is not governance. Name the person before a regulator asks who it is.

Traceability is not enough. Boards need explainability.

Most of the engineering work and regulation right now is aimed at traceability: logs that pin a decision to the exact dataset and model version that produced it. That is table stakes. It is not governance.

Under the EU AI Act, providers of high-risk systems must give deployers the transparency and information needed to interpret outputs and use the system appropriately. (EU AI Act, Article 13) Boards should treat that as a governance floor: high-consequence decisions cannot run as unexplained black boxes.

For boards, the minimum line is this: no magical AI black-box algorithms in high-consequence decisions. If management cannot explain in plain language how a system reaches its decisions, who constrained it, and how its outputs are reviewed, you do not have oversight. You have liability.

Question 03

What does your AI disclosure in your SEC filings say, and can you prove it?

The SEC has been signaling that companies must be precise about what AI is actually doing in the business, and must avoid "AI washing." A review of SEC disclosure comments issued since 2021 found at least 92 separate comments addressing AI-related disclosures. (Summary of comment-letter trends)

Meanwhile, CFOs report that AI is improving productivity even when financial outcomes lag, a classic "productivity paradox" dynamic. (Fortune coverage)

That gap between the executive narrative and auditable measurement is exactly where disclosure risk lives. If your GC cannot trace your AI efficiency claims back to verifiable data before the filing goes out, you do not just have a messaging problem. You have a potential AI washing and material misrepresentation problem.

That review process needs to exist before your next 10-K.

Question 04

How are you governing your vendors' AI?

Your vendors are running AI on your data. Their failures become your liability.

Mobley v. Workday tests whether an AI vendor can be treated as an “agent” and held directly liable in an employment discrimination context. Harper v. Sirius XM challenges algorithmic discrimination tied to AI-enabled hiring workflows.

If your organization touches EU data or markets, AI Act high-risk system requirements make vendor AI governance a board-level compliance question, not just a procurement question.

Ask your AI vendors to produce the documentation you would need in a regulatory exam or in discovery. What they send back tells you exactly where your governance gaps are.

Question 05

What is your incident response plan when an AI system produces a harmful output?

Most organizations have a cybersecurity incident response plan. Almost none have an AI-specific incident response plan.

Deepfake and AI impersonation attacks are now board-level risk. The Thales 2026 Data Threat Report found that nearly 60% of companies experienced deepfake-driven incidents, and that 48% reported reputational damage tied to AI-generated misinformation or impersonation campaigns. (Thales release)

When, not if, an AI system makes a harmful decision, investigators will ask to see your AI incident playbook alongside your cyber runbooks. If it does not exist, the question becomes why the board allowed high-consequence AI decisions to run without one.

Someone approved your existing security controls without accounting for what AI can now do to them. That someone answers to your board.

The director who asks what autonomous systems are acting on the organization's behalf and who owns governance for each one is the most valuable person in the boardroom right now.

That is a fiduciary question. Not a technical one.

I started my career installing SCADA systems in XTO Energy's oil and gas fields. I recorded this session looking out the window at that building. When those systems failed at 2am, people got hurt. Governance was how you kept the lights on.

AI governance is the same problem at a different scale. The boards that build it now make decisions from a position of strength. The ones that wait are calling lawyers at midnight.

Marian Newsome is the Founder and CEO of ETM Consulting and an IEEE CertifAIEd Lead Assessor, part of a small global cohort certified to evaluate real-world AI systems. She advises boards and C-suite executives on defensible AI governance frameworks for high-consequence systems.

Ready to assess where your organization actually stands?

A 60-minute briefing. Board-ready answers.

Book a Briefing →
Tags: AI Governance · Board Liability · SEC AI Disclosure · EU AI Act · AI Risk Management · Enterprise AI Oversight · AI Governance for Executives · Board AI Oversight · AI Fiduciary Responsibility