Shadow AI – a governance, risk, compliance and assurance perspective
The old governance problem in a new and much riskier suit
Most organisations have seen this pattern before. First it was shadow IT: unknown tools, services and workarounds adopted outside formal controls because they were quicker, easier or simply less irritating than the approved route. Then came BYOD, where convenience, flexibility and speed collided with data protection, monitoring, ownership and security questions.
The lesson from both was not that employees are naturally reckless. It was that when the official route feels too slow, too clunky or too restrictive, people will route around it. The NCSC makes exactly that point on shadow IT: it is usually not malicious, but a response to friction, and it creates risk because the organisation no longer has a full picture of what is being used, where data is going, or what needs protecting.
Shadow AI is that same story, only with much higher stakes.
Shadow AI is not a novelty problem. It is shadow IT, BYOD and rogue app behaviour combined with the speed, scale and opacity of generative AI.
Why shadow AI is different
An unauthorised file-sharing app might leak a document. An unauthorised AI tool can leak a document, summarise it, rework it, infer from it, embed it in a model interaction, and do all of that at speed and scale with very little user effort. That is before we even get to inaccuracies, fabricated outputs, jurisdictional issues, intellectual property exposure, or the reputational damage that follows when staff use AI in ways the organisation cannot explain or defend.
Cisco’s 2024 data privacy benchmark found that 27% of organisations had, at least temporarily, banned generative AI over privacy and security concerns, and that 48% of respondents admitted entering non-public company information into GenAI tools. Its 2025 study still found that nearly half of respondents reported inputting personal employee information or non-public information into GenAI tools.
That is why shadow AI should not be treated as an amusing side effect of innovation. It is a governance, risk, compliance and assurance (GRC+A) issue.
Why GRC+A matters
Shadow AI cuts across all four strands: governance decides what is permitted, risk weighs the exposure, compliance maps it to legal and regulatory duties, and assurance proves the controls actually work. The NCSC is clear that assurance is about confidence that controls are working as intended, and that this confidence should be sought continually rather than assumed once.
The data protection angle is especially important. The ICO’s position is not vague on this. Organisations using generative AI should consider data protection obligations from the outset, use a data protection by design and by default approach, identify a lawful basis where personal data is involved, and complete a DPIA before processing begins where required. The ICO also stresses that DPIAs should be kept up to date as the processing and its impacts evolve.
What the examples tell us
People reach for unapproved AI for the same reasons they reached for consumer cloud storage, WhatsApp groups, personal devices and rogue SaaS before it: speed, simplicity and the feeling that the approved route is designed by committee rather than for reality. Microsoft’s UK research in 2025 reported that 71% of UK employees had used unapproved consumer AI tools at work, and 51% were still doing so weekly.
Real-world examples show why this matters. Samsung banned the use of ChatGPT and similar AI tools on company devices after an employee uploaded sensitive code. Google confirmed it had warned staff not to enter confidential material into chatbots. Apple reportedly restricted internal use of ChatGPT and GitHub Copilot over concerns about confidential data leakage.
The reputational risk is not theoretical either. In 2023, lawyers in New York were sanctioned after submitting a legal brief containing fictitious cases generated by ChatGPT. That case became a cautionary tale not because AI made a mistake, but because professionals used it without appropriate review and assurance.
There is also the question of where data goes and under what governance. In early 2025, Italy’s data protection authority blocked DeepSeek over concerns about its privacy practices and the sufficiency of its responses about personal data processing. Later in 2025, the Czech government banned DeepSeek across public administration on data security grounds, with concern focused in part on data stored in China.
What good looks like
None of this means organisations should respond by trying to ban AI into submission. BYOD taught us that blunt prohibition often creates workarounds rather than discipline. Shadow IT taught us that visibility beats wishful thinking. Shadow AI requires the same maturity, only faster.
What works is a serious operating model. Start with purpose. Decide which use cases are acceptable, which are prohibited, and which require review. Separate experimentation from deployment. Then decide what good looks like in practice: approved tools, defined data classes, clear red lines for personal data and confidential information, procurement checks, legal review, records management expectations, and role-based guidance that normal staff can actually understand.
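To make that less abstract, here is a minimal, hypothetical sketch of how use cases, approved tools and data classes can be encoded as data rather than buried in a policy document. The tier names, tool names and data classes are illustrative assumptions, not a recommended taxonomy.

```python
# Hypothetical sketch of an AI acceptable-use policy expressed as data.
# Tool names, tiers and data classes are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class UseCase:
    name: str
    status: str                      # "approved" | "requires_review" | "prohibited"
    allowed_tools: list[str] = field(default_factory=list)
    max_data_class: str = "public"   # highest data class permitted as input

POLICY = [
    UseCase("drafting public marketing copy", "approved",
            allowed_tools=["enterprise-assistant"], max_data_class="public"),
    UseCase("summarising internal reports", "requires_review",
            allowed_tools=["enterprise-assistant"], max_data_class="internal"),
    UseCase("processing personal or client data", "prohibited"),
]

DATA_CLASS_RANK = {"public": 0, "internal": 1, "confidential": 2, "personal": 3}

def check(use_case: UseCase, tool: str, data_class: str) -> str:
    """Return a plain-language decision a member of staff can act on."""
    if use_case.status == "prohibited":
        return "Prohibited: do not use AI for this task."
    if tool not in use_case.allowed_tools:
        return f"'{tool}' is not an approved tool for this use case."
    if DATA_CLASS_RANK[data_class] > DATA_CLASS_RANK[use_case.max_data_class]:
        return f"Data classed '{data_class}' must not be entered here."
    if use_case.status == "requires_review":
        return "Permitted after review: raise a request before first use."
    return "Approved: follow the role-based guidance."

print(check(POLICY[1], "enterprise-assistant", "confidential"))
# -> Data classed 'confidential' must not be entered here.
```

The point of the structure is that a single source of truth can then drive intranet guidance, request workflows and technical enforcement, instead of each drifting apart.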
After that comes assurance, which is where many organisations still go soft. You need evidence that policy is working. That means logging, monitoring, DLP, access control, supplier diligence, exception handling, spot checks, internal audit, and management reporting that tells you whether AI use is visible, controlled and aligned to risk appetite.
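As one concrete illustration of the evidence point, the sketch below assumes an exported web proxy log and a watch-list of consumer GenAI domains, and turns them into a simple management-reporting metric. The log format, file location and domain lists are assumptions for illustration; in practice this sits alongside existing CASB, DLP and SIEM tooling rather than replacing it.

```python
# Hypothetical sketch: turning web proxy logs into assurance evidence of
# shadow AI use. Log format, file path and domain lists are illustrative.
import csv
from collections import Counter

APPROVED_AI_DOMAINS = {"assistant.example-corp.com"}    # sanctioned tools
WATCHED_AI_DOMAINS = {                                  # consumer GenAI endpoints
    "chat.openai.com", "chatgpt.com", "gemini.google.com",
    "claude.ai", "chat.deepseek.com",
}  # illustrative, not exhaustive

def shadow_ai_summary(proxy_log_csv: str) -> Counter:
    """Count requests per unapproved AI domain from a CSV proxy log
    assumed to contain at least a 'domain' column."""
    hits = Counter()
    with open(proxy_log_csv, newline="") as f:
        for row in csv.DictReader(f):
            domain = row["domain"].lower()
            if domain in WATCHED_AI_DOMAINS and domain not in APPROVED_AI_DOMAINS:
                hits[domain] += 1
    return hits

if __name__ == "__main__":
    for domain, count in shadow_ai_summary("proxy_log.csv").most_common():
        print(f"{domain}: {count} requests to an unapproved AI service")
```

Even a crude count like this gives leadership something the policy document alone cannot: a trend line showing whether shadow AI use is rising or falling against risk appetite.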
Frameworks such as the NIST AI Risk Management Framework and ISO/IEC 42001 are useful here not because they are fashionable, but because they force organisations to structure governance, accountability and continual improvement rather than improvising their way through a fast-moving problem.
The real danger with shadow AI is not simply that it breaks policy. It is that it exposes whether the organisation has moved beyond policy theatre at all.
If education is weak, people will improvise. If policy is vague, people will interpret. If enforcement is absent, people will ignore. And if assurance is missing, leadership will only discover the problem when a customer, regulator, journalist or court does it for them.
We have seen this story before with shadow IT and BYOD. The difference now is that AI can compress bad judgement into seconds and broadcast the consequences at scale. That is why shadow AI is not just another technology issue. It is a governance test, a risk test, a compliance test and, above all, an assurance test.
Sources:
| Source | Publisher | Link |
|---|---|---|
| Guidance on shadow IT | NCSC | Open source |
| Guidance on bring your own device | NCSC | Open source |
| How to gain and maintain assurance | NCSC | Open source |
| Generative AI: eight questions developers and users need to ask | ICO | Open source |
| AI and data protection: ensuring lawfulness | ICO | Open source |
| 2024 data privacy benchmark on generative AI | Cisco | Open source |
| UK research on the rise of shadow AI tools | Microsoft | Open source |
| Enterprise concerns over ChatGPT and confidential data | Reuters | Open source |
| Lawyers sanctioned for fake AI-generated legal cases | Reuters | Open source |
| Italy blocks DeepSeek over privacy concerns | Reuters | Open source |
| AI Risk Management Framework | NIST | Open source |