AI is getting into the business faster than governance can catch up
If recent AI research tells us anything, it is this: adoption is winning the race, and governance is still trying not to lose a shoe.
McKinsey’s 2026 AI Trust Maturity Survey found that only about 30% of organisations had reached higher levels of maturity in strategy, governance and agentic AI controls. Nearly 60% said knowledge and training gaps were the main barrier to implementing responsible AI, and organisations with explicit ownership were materially more mature than those without it. EY’s 2025 pulse survey adds a similarly awkward truth: 72% of executives said AI had been integrated and scaled in most or all initiatives, and 99% were at least in the process of doing so, yet only a third said their organisations had the right protocols across all facets of responsible AI. On average, organisations had strong controls in only three of nine governance facets (EY, 2025a; McKinsey & Company, 2026).
That is the ‘so what’. AI is getting into organisations through formal projects, vendor tools, embedded platform features and everyday experimentation. But too many leadership teams are still treating governance as something they will tidy up afterwards. History says that rarely ends well. We have seen versions of this before with shadow IT, poor SaaS control and unmanaged digital transformation. The difference now is that AI can affect content, decisions, operations, customer experience, privacy, security and regulatory exposure all at once.
The practical warning signs are already there. McKinsey found that security and risk concerns are the top barrier to scaling agentic AI, while inaccuracy and cyber security are among the most cited AI risks. EY reported that among organisations that allow citizen developers, only 60% have formal, organisation-wide frameworks and only half have high visibility into actual activity. That is how governance debt builds up: not because leaders do not care, but because the business is moving faster than its own control environment (EY, 2025b; McKinsey & Company, 2026).
The answer is not to ban everything until the dust settles. It is to govern AI as seriously as anything else that can create material risk. That means leadership ownership, clear policy, a live inventory of AI use cases, sensible supplier oversight, defined risk assessment criteria, human oversight where it matters, workforce guidance, monitoring, incident handling and assurance. In short: management, not magic.
That is why ISO/IEC 42001 is worth attention. ISO describes it as the international standard for AI management systems and says it helps organisations govern AI use, manage risks, support compliance and build trust in AI-driven processes. It can be used as a framework for structured compliance, or as the basis for certification where independent assurance is required. Better still, it fits naturally beside ISO/IEC 27001. ISO management system standards using the Harmonized Structure are designed to work together, and ISO itself offers a combined 42001/27001 package. For organisations that already take information security seriously, that makes AI governance much easier to embed into something real (ISO, 2022; ISO, 2026a; ISO, 2026b; ISO, 2026c).
The truth is that AI governance is now a leadership test. Not because leaders need to understand every technical detail, but because they need to create enough structure, accountability and challenge to let the business use AI safely and well. Adoption without governance is not ambition. It is exposure. And this is too important to get wrong.
– Ellie Hurst, Commercial Director, Advent IM
Discover our new whitepaper, ‘AI Adoption Without Governance Maturity: why risk appetite, leadership and human-first controls matter’. Download your free copy here.
References
EY, 2025a. ‘EY survey: AI adoption outpaces governance as risk awareness among the C-suite remains low’. Available at: https://www.ey.com/en_gl/newsroom/2025/06/ey-survey-ai-adoption-outpaces-governance-as-risk-awareness-among-the-c-suite-remains-low (Accessed: 7 April 2026).
EY, 2025b. ‘How responsible AI translates investment into impact’. Available at: https://www.ey.com/en_uk/insights/ai/how-can-responsible-ai-bridge-the-gap-between-investment-and-impact (Accessed: 7 April 2026).
ISO, 2022. ‘ISO/IEC 27001:2022 Information security, cybersecurity and privacy protection – Information security management systems – Requirements’. Available at: https://www.iso.org/standard/27001 (Accessed: 7 April 2026).
ISO, 2026a. ‘ISO 42001 explained’. Available at: https://www.iso.org/home/insights-news/resources/iso-42001-explained-what-it-is.html (Accessed: 7 April 2026).
ISO, 2026b. ‘Management system standards list’. Available at: https://www.iso.org/management-system-standards-list.html (Accessed: 7 April 2026).
ISO, 2026c. ‘AI and information security management package’. Available at: https://www.iso.org/publication/PUB200427.html (Accessed: 7 April 2026).
McKinsey & Company, 2026. ‘State of AI trust in 2026: Shifting to the agentic era’. Available at: https://www.mckinsey.com/capabilities/tech-and-ai/our-insights/tech-forward/state-of-ai-trust-in-2026-shifting-to-the-agentic-era (Accessed: 7 April 2026).