AI Governance Is Broken. Here's How to Fix It.

An Executive Point-of-View

BLOG

Rick Hamilton, Naresh Nayar, and Jaswant Singh

1/29/2026
4 min read

I. The Problem: AI Governance as It Exists Today Is Failing

Organizations are deploying AI faster than they are learning to govern it, and the cracks are showing. With the last few years’ explosion of generative AI solutions, what began as organizational experimentation has increasingly become operational dependence. Across a variety of industries, we see AI now shaping underwriting decisions, clinical workflows, hiring pipelines, customer interactions, and strategic planning. Despite this, governance practices have not evolved at the same pace.

From our vantage point, many organizations still treat AI governance like traditional IT governance, with centralized control, technical oversight, and compliance checklists. Policies are drafted by senior committees, implemented by technical teams, and reviewed periodically for regulatory alignment. But this is not enough.

Our perspective is direct: this approach is fundamentally misaligned with how AI actually works, and with how AI fails in operational scenarios.

AI systems, particularly agentic systems, are probabilistic and adaptive, and they are increasingly embedded across diverse business workflows. Their risks arise not only from code, but from context. How outputs are interpreted, which exceptions are ignored, where incentives distort behavior, and how small failures quietly accumulate all shape real-world outcomes. Traditional enterprise governance models assume predictability and linear cause-and-effect and, as a result, they systematically overlook the risks that matter most in AI-driven systems.

A further distinction between AI governance and prior IT governance lies in decision authority. Organizations must explicitly define which decisions AI may inform, which it may recommend, and which it may execute autonomously. These boundaries are not merely technical, but are organizational, ethical, and operational choices that evolve over time.
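As an illustration, those decision boundaries can be captured in an explicit decision-rights register that systems and reviewers consult before an AI acts. This is a minimal sketch, not a prescription from the article; the decision names and the most-restrictive-by-default policy are hypothetical examples.

```python
from enum import Enum


class Authority(Enum):
    INFORM = "inform"        # AI supplies analysis only
    RECOMMEND = "recommend"  # AI proposes an action; a human decides
    EXECUTE = "execute"      # AI may act autonomously within set bounds


# Hypothetical register; in practice, entries would be set and revisited
# by the organization as these boundaries evolve over time.
DECISION_RIGHTS = {
    "summarize_customer_history": Authority.EXECUTE,
    "draft_underwriting_memo": Authority.RECOMMEND,
    "final_hiring_decision": Authority.INFORM,
}


def may_act_autonomously(decision: str) -> bool:
    # Unregistered decisions default to the most restrictive authority.
    return DECISION_RIGHTS.get(decision, Authority.INFORM) is Authority.EXECUTE
```

Making the register explicit, rather than leaving authority implicit in system design, is what turns these boundaries into auditable organizational choices.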

Effective AI governance must move at the same cadence as AI itself. Annual policy cycles and episodic reviews are misaligned with systems that learn, adapt, and act continuously. For agentic systems in particular, governance must extend into runtime operation, incorporating continuous supervision, real-time escalation signals, and the ability to pause, constrain, or override agents as conditions change.
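One way to picture runtime supervision of an agent is a lightweight guardrail that reviews each step and decides whether to allow it, escalate it to a human, or pause the agent entirely. The thresholds below (a confidence floor and an anomaly budget) are hypothetical, offered only as a sketch of the pattern.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Verdict(Enum):
    ALLOW = auto()     # the agent may proceed
    ESCALATE = auto()  # require human approval before proceeding
    PAUSE = auto()     # halt the agent until conditions are reviewed


@dataclass
class RuntimeSupervisor:
    """Illustrative guardrail; threshold values are assumptions."""
    confidence_floor: float = 0.80  # below this, escalate to a human
    anomaly_budget: int = 3         # consecutive anomalies before pausing
    anomalies: int = 0

    def review(self, confidence: float, anomaly: bool) -> Verdict:
        # Track consecutive anomalous steps; a clean step resets the count.
        self.anomalies = self.anomalies + 1 if anomaly else 0
        if self.anomalies >= self.anomaly_budget:
            return Verdict.PAUSE
        if confidence < self.confidence_floor:
            return Verdict.ESCALATE
        return Verdict.ALLOW
```

Real deployments would wire these verdicts into the agent's control loop, so that authority reverts to humans automatically rather than by after-the-fact review.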

Why Current Approaches Fall Short

1. Top-down governance is blind governance

Executive committees and centralized policy bodies operate far from where AI meets reality. They approve principles and frameworks, but they rarely see what matters most, including edge cases that only appear under real-world pressure; workarounds employees invent to “make the system work”; and the quiet failures that don’t trigger alerts but erode trust over time.

Eventually, when those problems surface to the top, the damage has often already been done.

2. Technical oversight alone misses the point

Accuracy, precision, drift detection, and model documentation are necessary, but not sufficient on their own. AI is not just a technical system; its successes and shortcomings have a strong behavioral element. Data scientists can tell you whether a model performs well on a test set. But they cannot always tell you:

  • Whether an AI model’s outputs are appropriate in a sensitive context.

  • Whether its users are over-trusting or under-trusting the AI model.

  • Whether its use subtly shifts responsibility or accountability, and if so, in what way.

Thus, governance that focuses exclusively on technical control confuses correctness with business suitability. Suitability to accomplish business objectives is the foundational capability that must be kept top of mind.

3. Compliance-driven governance is reactive and shallow

Regulatory compliance is essential, but it typically represents the bare minimum, not the standard of a successful and advanced business operation. Laws lag AI’s capabilities, so checklists reflect yesterday’s risks, not tomorrow’s needs.

Organizations that equate compliance with governance tend to react after public failures, employee backlash, or, in some cases, regulator intervention. This approach conflates governance with damage control rather than stewardship of business processes.

The Cost of Getting This Wrong

Regardless of the failure mechanism, when AI governance falls short, the consequences can be significant. These include:

  • Reputational damage when AI misbehaves publicly.

  • Employee distrust that slows adoption and encourages “shadow AI.”

  • Regulatory exposure, particularly as global AI laws tighten.

  • Most pervasively, wasted investment when promising AI initiatives stall or collapse.

AI governance setbacks are rarely catastrophic all at once. More often, they are cumulative, as small misalignments compound until the organization loses control of its own systems.

II. Our Point of View: The Three-Pillar Framework

Our core thesis is that effective AI governance requires distributed accountability across three interconnected pillars:

  1. First-line employee involvement in project selection, and in defining and monitoring proper AI behavior.

  2. A cross-functional oversight committee that reviews KPIs, outcomes, and risks.

  3. An independent audit function that red-teams AI use and challenges assumptions.

No single pillar is sufficient on its own. Together, these three functions form a system of checks and balances that reflects how AI operates inside organizations. In this context, governance defines decision rights, accountability, and escalation paths, while risk management implements controls and mitigations within the structure which governance establishes. For agentic AI, this system must also define bounded autonomy: clear thresholds for when agents may act independently, when human approval is required, and when authority must automatically revert to human control.

Why This Works

This framework deliberately combines:

  • Ground truth from the people closest to AI use

  • Strategic alignment from cross-functional leadership

  • Independent scrutiny from those empowered to question assumptions

It avoids the two most common governance failures: concentrating authority where visibility is weakest, and delegating responsibility without accountability.

This approach is not about slowing AI adoption; rather, it is about making AI adoption durable. Importantly, the parties entrusted with these multilayered responsibilities should each be action-minded and accountable; each pillar earns its place. Finally, this framework complements rather than replaces technical AI safety practices, and should not be treated as a substitute for pre-deployment evaluation, sufficient observability, or strong data security and privacy controls.

We explore this important topic more thoroughly in the full Substack article, including pillar definitions, conditions for framework success, and implications for leadership. Read the full article here.