
What Board Members Actually Need to Understand About AI Risk

  • Writer: Daniel Perkins
  • Apr 16
  • 4 min read

Updated: 6 days ago

Not the Technology. The Governance.

Here's a question worth sitting with. If someone asked your board right now to name every AI system your organization is currently running, how confident are you in the answer?

Not the ones IT formally approved. All of them. The vendor tool that added an AI feature six months ago. The department that built something internal over a long weekend. The productivity plugin three people on the sales team are using to write emails.

Most boards couldn't answer that question accurately. That's not an indictment of those boards. It's a signal that AI governance hasn't been positioned as a board-level conversation yet. That needs to change.

Why this is a board issue, not an IT issue

There's a common assumption in organizations that AI risk belongs to the technology team. Let the CTO handle it. Let the CISO handle it. The board reviews the summary at the end of the quarter.

That framing is wrong, and it's worth being direct about why.


When an AI system makes a bad decision at scale, whether that means approving the wrong credit applications, screening out qualified job candidates, or generating inaccurate medical guidance, the organization is accountable for the outcome. Not the vendor. Not the software. The organization.

The board's job is to ensure the organization is managing risk within bounds that leadership has actually thought through and agreed to. That's true for financial risk, operational risk, and reputational risk. AI risk is no different.


The COSO Enterprise Risk Management framework is direct about this. Boards don't just receive risk reports. They're responsible for ensuring the organization's approach to risk is sound, that significant risks are escalated appropriately, and that management's responses are consistent with what the board has authorized. A board that delegates all AI decisions to IT without retaining oversight responsibility hasn't exercised governance. It has stepped back from it.

The difference between understanding the technology and understanding the risk


Board members don't need to understand how a large language model works. They don't need to know the difference between supervised and unsupervised learning. What they need to understand is how AI changes their organization's risk profile and what governance is in place to manage that change.


Those are two very different conversations. One is about technical architecture. The other is about accountability, oversight, and organizational decision making. The second conversation is squarely in the board's lane.


Think about it this way. A board member doesn't need to understand how a loan origination system calculates interest to oversee credit risk. They need to know what controls are in place, who's accountable, what the exposure looks like, and when it gets escalated. The same logic applies to AI.

Four questions every board should be asking right now

If your organization is deploying AI in any capacity, these four questions aren't optional. They're the baseline of responsible oversight.


What AI systems are we running and what decisions are they influencing? This seems basic. In most organizations, it isn't. A complete inventory of AI systems in use, including AI features embedded in tools the organization didn't buy specifically for AI, is the foundation of any governance conversation. You can't manage what you haven't named.


Who is accountable when something goes wrong? This is where most organizations have a real gap. Is it the CTO? The CISO? The Chief Risk Officer? The business unit that deployed the system? If the answer isn't immediate and unambiguous, accountability is effectively diffuse, which means it belongs to no one. Your governance structure needs to name the person, not just the department.


Are the risks these systems introduce within the boundaries we've set? This is the risk appetite question. Every board already wrestles with it for financial risk, regulatory risk, and operational risk. AI introduces a new category of risk that needs the same explicit treatment. What's the acceptable error rate for an AI system making consequential decisions? What happens when outputs are biased? Who decides when to shut a system down? These aren't rhetorical questions. They require answers.


What would we have to disclose if an AI system caused harm? Regulatory expectations are moving fast. The EU AI Act is creating compliance obligations for organizations doing business with European customers. The FTC launched its Operation AI Comply enforcement sweep in 2024. State-level AI transparency laws are expanding. Boards need to understand what their current AI deployments would require them to disclose if something went wrong, and whether they could actually meet that disclosure standard today.

What governance actually looks like in practice

Governance isn't a policy document. It's a set of active structures and accountabilities that get exercised before something goes wrong.


For AI, that means your organization should have clear answers to who approves AI deployments before they go live, who monitors AI systems after they're running, and who has authority to stop a system that's performing outside acceptable bounds. It means AI risk should appear in your enterprise risk reporting in terms the board can act on, not buried in a technical appendix.


It also means your vendor contracts need to keep pace with what your vendors are actually doing. If a software vendor you've used for years has added an AI capability to its platform, that's a new data processing relationship. Your existing contract almost certainly doesn't address it.


None of this requires the board to become technically proficient in machine learning. It requires the board to apply the same oversight discipline to AI that it already applies to every other material risk the organization carries.

The bottom line

AI governance isn't a technology problem. It's a leadership problem. The organizations that get this right aren't the ones with the most sophisticated AI systems. They're the ones where the board has asked the right questions, named the right accountabilities, and insisted that governance is built into the process rather than added after something breaks.


We do not rise to the level of our AI capabilities. We fall to the level of our governance.
