AI adoption is outpacing organizations’ ability to govern it. Every team is testing tools. Vendors are baking AI into every product. Innovation is accelerating, but often without oversight.
Traditional security, privacy, and compliance controls weren’t built for AI systems that learn, adapt, and act in real time. They can’t keep up with businesses’ appetite for AI speed and scale.
A 2025 Forrester Consulting study, commissioned by Tines, surveyed more than 400 IT leaders. It found that 54% of IT leaders see governance, regulatory standards, and privacy as their top priority for the next 12 months. At the same time, 38% cite governance and security as the biggest barrier to scaling AI. Regulations are tightening, threats are escalating, and governance has become a board-level issue. CIOs who step up to lead orchestrated governance will create a strategic advantage.
Weak governance is a business liability
AI governance is more than a compliance checklist. It’s a framework to ensure AI systems are ethical, compliant, and secure. Done right, it enables innovation instead of slowing it down.
ISACA research shows that 83% of firms are already using AI, but only 31% have an AI policy. While many companies have started to draft plans, few have moved to execution. This gap leaves organizations exposed in three critical ways:
Rising regulatory pressure
AI regulation is ramping up worldwide, and the cost of weak governance is rising with it. The EU AI Act sets penalties of up to €35 million or 7% of global annual turnover, whichever is higher, for non-compliance. In the U.S., recent executive orders and state-level laws are beginning to set similar expectations for transparency, accountability, and risk management.
Escalating security threats
Governance gaps make it harder to prevent data leakage, model manipulation, and poisoned training inputs. Shadow AI, where teams adopt tools without approval, adds even more blind spots that attackers can exploit. No wonder the Forrester study found that 73% of IT leaders say end-to-end visibility across workflows is a top consideration.
Eroded trust and reputation
Nearly 70% of consumers say they have little or no trust in companies to make responsible decisions about AI. And 40% of IT leaders in the Forrester study admit they don't fully trust AI-generated outcomes. Weak governance doesn't just risk penalties; it undermines confidence with employees, customers, partners, and boards, and leaves lasting reputational scars.
The four stages of AI governance maturity
AI governance maturity varies widely, based on an organization’s size, resources, risk tolerance, and strategic priorities. Directionally, most organizations fall into one of four stages:
Exploratory
Many large organizations are experimenting with AI governance pilot frameworks, including ethics committees, bias checklists, or draft policies. These efforts raise awareness but don’t scale beyond small groups.
Emerging
A smaller set of leaders are taking a more systematic approach. They've built model registries (for example, MLflow Model Registry), audit logs, integrated risk controls, and cross-functional governance councils. These frameworks are promising, but often limited to specific teams or use cases.
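In practice, a model registry pairs each model version with ownership metadata and an append-only audit trail, so every registration and promotion is traceable. A minimal sketch in Python of that pattern, using a toy in-memory store (the class and field names are illustrative, not MLflow's actual API):

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ModelVersion:
    name: str
    version: int
    owner: str
    stage: str = "staging"  # lifecycle: staging -> production

class ModelRegistry:
    """Toy in-memory registry with an append-only audit log."""

    def __init__(self):
        self._models = {}    # (name, version) -> ModelVersion
        self.audit_log = []  # every change, who made it, and when

    def _log(self, action, name, version, actor):
        self.audit_log.append({
            "ts": datetime.now(timezone.utc).isoformat(),
            "action": action,
            "model": name,
            "version": version,
            "actor": actor,
        })

    def register(self, name, owner):
        # Next version number for this model name
        version = 1 + sum(1 for (n, _) in self._models if n == name)
        mv = ModelVersion(name, version, owner)
        self._models[(name, version)] = mv
        self._log("register", name, version, owner)
        return mv

    def promote(self, name, version, actor):
        # Promotion is recorded, so approvals are auditable
        mv = self._models[(name, version)]
        mv.stage = "production"
        self._log("promote", name, version, actor)
        return mv

registry = ModelRegistry()
registry.register("fraud-model", owner="risk-team")
registry.promote("fraud-model", 1, actor="governance-council")
print(len(registry.audit_log))  # 2 entries: one register, one promote
```

The design choice worth noting is the append-only log: remediation and compliance reviews depend on knowing who changed what and when, which a plain key-value store of models cannot answer on its own.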
Fragmented
Advanced firms at this stage often have registries, audit logs, and governance councils in place, but efforts remain siloed and disconnected, with no enterprise-wide standards. Tools may exist, yet they're often manual and lack orchestration. When problems arise, such as biased outputs, data drift, or shadow AI, remediation is slow, inconsistent, and reliant on human effort.
Transformative
The most mature organizations orchestrate governance end-to-end. They have board-level support, coordinated tools, and clear ownership across functions. Human-in-the-loop oversight is built in. Governance is tied to strategic goals, with automated checks enforcing transparency, fairness, and security. The system is flexible, continuously improving, and evolving to meet new regulations. Regulatory tailwinds, from the EU AI Act to evolving U.S. guidance, are pushing more enterprises toward this model. But only a small minority have reached this stage today.
CIOs are primed to close the gap
Half of IT leaders in the Forrester study say that ensuring ethical, transparent AI without orchestration is a top challenge. And 86% believe IT is uniquely positioned to orchestrate AI across workflows, systems, and teams. Orchestration connects people, processes, and technology. Done right, it bakes governance in from the start and gives leaders visibility across every initiative.
CIOs sit at the intersection of systems, data, and governance. That vantage point makes them well placed to balance security with innovation. But oversight alone isn’t enough. CIOs must shape how employees adopt AI. That means securing executive backing, defining which tools are safe, creating a cross-functional governance council, and training employees on where human judgment still matters.