AI "Governance” Might Be the Wrong Word
And that could be why the right role never gets built
In a recent LinkedIn post, I said that AI governance is a power grab. A lot of people agreed privately, even if they wouldn’t say it out loud.
But something has been bugging me like crazy since I wrote it...
If everyone privately agrees the real issue is about power, why does the conversation keep defaulting to frameworks and checklists? Why does the role keep getting misplaced, underpowered, handed to the wrong people?
And then I had a weird thought. What if part of the problem is this word we’re all using, “governance”?
Words make decisions before we do
What comes to mind when you hear the word “governance”?
For most people, it’s committees. Policy documents. Compliance checklists. Someone in a sensible blazer presenting a risk matrix to a board that has seventeen other things on its agenda.
That’s not random. Words carry histories. “Governance” carries decades of bureaucratic weight. Before anyone has a chance to think about it, the word has already signaled that this role belongs in a compliance function. It conveys that the whole thing is procedural, that “governance” operates within existing authority structures.
Not above them.
So organizations do the logical thing. They hear “governance” and respond with what’s familiar. Maybe this role is given to the GRC team. Or the legal department. Or a newly appointed VP with an impressive title and no real power.
Makes sense, right? Of course it does. That's exactly the problem.
The real conversation hasn’t started yet. And the wrong decision is already made.
That’s what I mean when I say the word itself misleads. Organizations are just following the placement instructions the word is quietly giving them.
So what is this function actually doing?
If we scrap the word for a moment, what does this role actually require?
It’s deciding which AI systems get deployed and which don’t. It’s setting the conditions under which AI can touch customer data, automate decisions, influence hiring, assess credit, flag fraud, generate content under the company’s name. It’s the function that can stop an initiative already championed by the CFO because the risk profile doesn’t hold up.
That last part is the one that makes people uncomfortable. It should.
This function has to sit above the competing agendas of every other department and hold the line when commercial pressure pushes toward shortcuts. And it has to do that without flinching. Without being reassigned or quietly defunded after the first time it says no to someone important.
That’s not governance in any traditional sense.
That’s authority. It’s a type of institutional authority that most organizations haven’t had to create before. It’s new.
Call it AI Authority. Call it AI Integrity. Call it Fred. Call it whatever you like. The name almost doesn’t matter. What matters is that the function shifts from procedural oversight to genuine organizational power. Because that’s what actually makes it work. Without it, you just have very busy people producing documentation that nobody is required to act on.
Here’s what it looks like when that authority is missing
A mid-market financial services company, around 800 employees, has been using an AI-powered applicant screening tool from a third-party vendor for eighteen months. The CHRO pushed for it. The CFO approved it: reduced time-to-hire, lower costs, more consistent candidate evaluation. Solid ROI case. Done.
The tool is working exactly as sold. Hiring cycles are faster. The recruiting team loves it.
Eighteen months in, the CAIO flags a problem.
An internal audit shows a statistically significant pattern. Candidates over 40 are being screened out at a rate disproportionate to their share of the applicant pool. The pattern holds across multiple roles. The tool isn’t explicitly using age as a variable. It’s using proxies. Graduation years. Tenure patterns. Career timelines. Correlations with age that the vendor never disclosed and the company never tested for.
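To make “a statistically significant pattern” concrete, here is a minimal sketch of the kind of disparate-impact check such an audit might run. Everything in it is an illustrative assumption: the made-up screening data, the column names, and the 0.8 threshold (the EEOC’s four-fifths rule of thumb). It isn’t the audit from this scenario, just the shape of one.

```python
# Sketch of a disparate-impact check on AI screening outcomes.
# All data, column names, and the 0.8 threshold are illustrative assumptions.
import pandas as pd
from scipy.stats import fisher_exact

# Hypothetical outcomes: 1 = advanced past the AI screen, 0 = screened out
applicants = pd.DataFrame({
    "over_40":  [True] * 200 + [False] * 800,
    "advanced": [1] * 40 + [0] * 160 + [1] * 320 + [0] * 480,
})

# Selection rate for each age group
rates = applicants.groupby("over_40")["advanced"].mean()
impact_ratio = rates.loc[True] / rates.loc[False]

# EEOC "four-fifths rule" of thumb: a ratio below 0.8 is a common red flag
print(f"Selection rate, over 40:  {rates.loc[True]:.1%}")
print(f"Selection rate, under 40: {rates.loc[False]:.1%}")
print(f"Impact ratio: {impact_ratio:.2f} (flag if below 0.80)")

# Significance test on the same 2x2 table of group vs. outcome
table = pd.crosstab(applicants["over_40"], applicants["advanced"])
_, p_value = fisher_exact(table)
print(f"Fisher exact p-value: {p_value:.4f}")
```

On this toy data the impact ratio is 0.50 with a vanishingly small p-value: exactly the kind of result that would land on a CAIO’s desk, even though age never appears as an input variable.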
The CAIO brings this to the executive team and recommends suspending the tool pending a full bias audit.
And here's where it gets predictable.
The CHRO pushes back. After all, the tool is delivering great results. Reverting to manual means slower hiring and more subjective decisions. The CEO is focused on a product launch and wants more people onboarded quickly. The CFO wants to quantify the legal exposure before disrupting something with positive ROI.
Which, to be fair, is exactly what CFOs are supposed to do. Quantify before you disrupt. It’s rational and completely reasonable.
The problem is that AI bias liability doesn’t arrive as a clean number. In May 2025, a federal court preliminarily certified a nationwide collective action in Mobley v. Workday, litigation that revealed 1.1 billion applications had been rejected through the platform’s screening software. The potential class runs into the hundreds of millions of members. No spreadsheet model captures that exposure at the point when a CAIO would be raising the internal flag. The CFO wants a number. But this kind of liability doesn’t arrive as a number until it arrives as a lawsuit.
Nobody in that room is acting in bad faith. They’re doing exactly what their incentives tell them to do. The CHRO is protecting a win. The CFO is waiting for a number. The CEO wants bodies hired before the launch.
Of course.
And every person raising those concerns outranks the CAIO. That’s the whole problem.
If the CAIO (or whatever you call this role) doesn’t have the organizational standing to stop the tool pending audit, nobody does.
The tool keeps running. The pattern compounds. When the lawsuit arrives, and based on current litigation trends it will, the board asks: did anyone know?
The answer is yes. The CAIO knew. And didn’t have the authority to act on it.
That’s not a framework failure. No policy was missing. No one messed up. The failure was structural. The person who identified the risk didn’t have the power to stop it, because the organization never decided to give them that standing.
What usually happens without this role
In the absence of a CAIO (or whatever you want to call it) with real authority, this situation doesn’t resolve dramatically. Nobody storms out. Nobody refuses to cooperate.
It resolves slowly.
The issue gets handed to legal. Legal flags it as a concern and recommends a review. The review gets commissioned. The tool keeps running because nobody has the unilateral authority to stop it and the review isn’t finished yet. The CHRO agrees to add some human oversight, which technically addresses the concern without actually stopping anything. A memo gets written. The committee meets. Everyone feels like something is being done.
Eventually the tool gets quietly modified, or the vendor relationship gets reviewed at contract renewal, or a complaint gets filed externally and suddenly everyone moves very fast and the question becomes who knew and when.
Everyone cooperated. Nobody made the decision that actually mattered. And the liability kept compounding quietly in the background while the process ran its course.
That’s almost worse than open refusal. At least open refusal produces a decision.
Why this missing role is unlike anything organizations have built before
Most senior functions exist within clearly bounded territory. The CFO owns finance. The CHRO owns people. The CTO owns technology. They have real authority, but it’s contained to their own lane.
This role has no lane. It has to operate everywhere AI operates, which is increasingly everywhere. Whoever holds it has to have authority over decisions that other senior leaders already consider their own. They have to be willing to be unpopular, slow things down, say no to initiatives with executive sponsorship, all without the institutional protection that comes with a well-understood role.
There’s no clean precedent for this.
And that makes everyone uncomfortable. Which means everyone has an incentive to pretend the precedent already exists and just hand it to whoever manages the compliance function.
The closest analogies are roles that emerged in similarly chaotic moments. The CISO role took hold in the early 2000s when organizations realized cybersecurity couldn’t live inside IT anymore and needed cross-functional authority. The Data Protection Officer came from GDPR, a position requiring independence, cross-functional reach, and the organizational courage to challenge decisions already made.
Both were initially resisted, misplaced, underpowered, and treated as compliance functions. Until organizations learned, often through genuinely painful experience, that they needed to be something more. Obviously. Because that’s what happens when you create a role that threatens existing authority without giving it enough power to defend itself. Organizations do this every time. It’s not stupidity. It’s just incentives doing what incentives do.
They figured it out eventually. Usually after something went badly enough wrong that the cost of denial finally exceeded the cost of actually building the role properly.
AI governance is in that same early, costly, poorly understood moment right now. The question is just how much has to go wrong before organizations stop pretending they’ve already solved it.
The hard step that is usually skipped over
There are plenty of conversations going on about AI governance, but most of them focus on frameworks, certifications, and technical literacy.
Most of what’s written assumes the hard organizational decision has already been made. Someone has been authorized, the function exists, and now the question is execution. The conversation moves straight to the importance of knowing the EU AI Act, ISO 42001, model risk frameworks, and so on.
These are useful tools for a role that's already been given real authority. But a lot of organizations are grabbing these practitioner frameworks before they've resolved the upstream question. Before anyone has decided who actually holds the power to govern AI here. Frameworks don't answer that question. They assume it's already been answered.
Knowledge of frameworks is important, sure. But it’s not what determines whether someone can actually do this role well.
It's a bit like buying an expensive set of surgical tools before deciding whether to build the hospital. The tools are fine. The sequencing is the problem.
So who is the best candidate for this type of role?
This question deserves a moment of serious consideration. The answer isn’t as obvious as people might assume.
A solid GRC background builds competence inside an established system. This role requires competence in spite of the system.
Sometimes in direct opposition to it.
The people who will be genuinely effective in this role are the ones who have navigated institutional chaos without losing their footing. Who have held an unpopular position inside a politically complex organization and survived it. Who understand that the real work isn’t writing the policy. It’s getting the policy to mean something inside a system full of competing incentives and people who would very much prefer it didn’t apply to them.
That experience tends to come from data protection under active regulatory pressure. Safeguarding in high-stakes environments. Crisis management. Major organizational restructuring. Anywhere that required someone to hold authority without perfect clarity, push back against more senior people when the situation demanded it, and maintain integrity under conditions that actively rewarded compromise.
Which also explains, by the way, why “governance” keeps attracting the wrong candidates. The word signals compliance. Compliance functions recruit for compliance skills. And then everyone is surprised when the person in the role can’t hold the line against a CFO who has already approved the initiative.
This is a tall order. And it has nothing to do with whether someone knows the EU AI Act. It has everything to do with whether they can walk into a room, say something nobody wants to hear, and come back the next day and do it again.
So what does the right structure actually look like?
I lean toward a single CAIO role. It has to be positioned with genuine cross-functional authority and a reporting line that reflects that. One person, not a committee. And they need to be given real decision rights, not a carefully worded mandate.
The CAIO title is emerging and evolving in some organizations. I’m involved in some of those conversations. But adoption is uneven and it’s still early. And the title alone doesn’t do anything. A CAIO buried at the wrong level with no real authority is just Schrödinger’s Power Grab.
So it doesn’t matter what you call the role. It’s about what authority it actually carries. Who it reports to and whether the organization is genuinely willing to let it function.
Because if the person in that role can’t stop a bad AI deployment championed by someone more senior, the governance isn’t real. It’s suspended in permanent superposition, appearing to exist without ever quite landing.
The bigger question underneath all of this
I think using the word “governance” makes it easier for organizations to feel like they’re doing the thing without confronting what the thing actually requires. I also know that just changing the word won’t fix things. But I wonder if doing so might make the structural conversation harder to avoid. I’m all for the little things that can make big change easier.
All of this is a seismic shift from how organizations have been run. Reaching for what’s familiar first is deeply human. So is protecting your turf. So is waiting for a cleaner number before making a hard call. None of this is surprising.
But at some point, the conversation has to move past which framework to adopt or which certification to pursue. It has to land on something harder: are we actually willing to create the conditions for this function to work?
This stuff is heavy. I know. But I’d rather name it clearly than write another piece about frameworks.
If it landed for you, forward it to one person who needs to read it. And if you want to think through what it means for your organization specifically, I work with senior leaders on exactly this. Start here: https://adelewang.com/discovery/ It’s a conversation worth having before you invest further in frameworks, certifications, or hiring.