When ransomware becomes a safety incident: lessons from the Romanian water attack
Ransomware used to be filed under “IT problems”, alongside expired certificates and the printer that only works when threatened. That mental model is now actively dangerous.
On 20 December 2025, Romania’s national water management administration suffered a ransomware incident that reportedly compromised around 1,000 systems across most of its regional organisations. Systems affected included email, Windows workstations and servers, database and web servers, DNS, and GIS platforms. The organisation’s website went offline and updates were pushed through alternative channels while remediation continued.
The most important line in the reporting, though, is not the number of endpoints. It is the operational context. Water infrastructure is not a “data business”. It is a public safety system, run by people who cannot simply shrug and wait for an incident ticket to climb the queue. Even where hydrotechnical operations continued locally, the loss of central IT services is more than inconvenience. It constrains coordination, visibility, decision-making, incident communications, and the ability to manage risk when conditions change quickly.
That is how ransomware becomes a risk-to-life category problem without ever touching an industrial controller.
The uncomfortable twist: “ransomware” without ransomware tooling
Reports indicate the attackers used Windows BitLocker to encrypt files. That matters because BitLocker is legitimate, widely deployed, and normally associated with protecting data, not holding it hostage. It is a neat illustration of a broader pattern: adversaries increasingly win by abusing what you already trust.
Traditional ransomware conversations tend to orbit around malware families, payload signatures, and which gang’s leak site is currently fashionable. BitLocker-based encryption flips the script. If an attacker can gain sufficient privilege, they can weaponise built-in capabilities in ways that look, at first glance, like normal administrative activity. Detection becomes harder, not because defenders are careless, but because the activity blends into the daily noise of enterprise IT.
This is also why “we have endpoint protection” is not a strategy. It is a component. When the mechanism of harm is a legitimate feature, you need control over who can invoke it, where, and under what conditions.
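To make “control over who can invoke it, where, and under what conditions” a little more concrete, here is a minimal sketch of the kind of check a defender might run: it scans an exported CSV of Windows process-creation events (Security event ID 4688) for BitLocker-related commands such as manage-bde.exe or the Enable-BitLocker PowerShell cmdlet appearing on hosts where encryption activity is not expected. The file name, column names and allow-list are assumptions for illustration, not a reference to any particular product or to the tooling used in this incident.

```python
"""Flag unexpected BitLocker invocations from an exported process-creation log.

A minimal sketch: assumes a CSV export of Windows Security event ID 4688
(process creation) with columns 'host', 'user', 'command_line' and 'timestamp'.
The allow-list of hosts where encryption jobs are legitimate is hypothetical.
"""
import csv

# Hosts where BitLocker administration is expected (e.g. imaging/build servers).
# Hypothetical values for illustration only.
EXPECTED_HOSTS = {"imaging-01", "imaging-02"}

# Command fragments associated with starting or reconfiguring BitLocker.
BITLOCKER_INDICATORS = ("manage-bde", "enable-bitlocker", "bdehdcfg")


def suspicious_bitlocker_events(path: str):
    """Yield process-creation rows that invoke BitLocker tooling on unexpected hosts."""
    with open(path, newline="", encoding="utf-8") as handle:
        for row in csv.DictReader(handle):
            cmd = row.get("command_line", "").lower()
            if any(fragment in cmd for fragment in BITLOCKER_INDICATORS):
                if row.get("host", "").lower() not in EXPECTED_HOSTS:
                    yield row


if __name__ == "__main__":
    for event in suspicious_bitlocker_events("process_creation_4688.csv"):
        print(f"{event['timestamp']} {event['host']} {event['user']}: {event['command_line']}")
```

The specific script matters far less than the principle behind it: BitLocker administration should be rare, expected, and attributable, so anything outside that pattern deserves a human look rather than a place in the daily noise.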
Why this lands differently for CNI, Government, and Defence
For critical national infrastructure, the real damage is often second-order. Water, energy, healthcare, transport, local government, defence support functions, and the suppliers that keep them moving all rely on a mesh of digital services that were never designed to fail gracefully at the same time.
Take away email and identity services, and your incident response slows. Take away GIS and data platforms, and your situational awareness degrades. Take away web services, and your public communications become messy at exactly the moment misinformation thrives. Even when operational technology is still running, the surrounding digital scaffolding is what lets humans operate safely at scale.
This is why ransomware has become a national resilience issue rather than a corporate hygiene issue. The UK has been moving in that direction in policy terms too, including proposals aimed at preventing public sector bodies and CNI operators from paying ransoms, alongside stronger reporting expectations. The UK government has explicitly framed ransomware as capable of creating life-threatening outcomes, and referenced an NHS incident where the attack was cited as a contributing factor to a patient death.
That is not politics. That is a statement about harm.
Monitoring and early warning are governance controls, not just technical ones
One particularly telling detail in the reporting was that the Romanian water agency’s network was not yet integrated into Romania’s national protection monitoring capability for critical infrastructure, described as similar in concept to the UK NCSC’s Early Warning service.
Zoom out and you get a familiar GRC story. Organisations often assume “someone else” is watching: a managed service provider, a SOC, a regulator, a sector body, a parent department, an insurance requirement. In practice, visibility is fragmented. Monitoring is partial. Logs exist but aren’t used. Alerts arrive but don’t land with a clear owner. The first time the system is truly tested is when it is already failing.
The NCSC’s own framing of detection is plain: you should continuously monitor for user and system abnormalities that indicate adverse activity, and generate alerts from that monitoring. This is not a technical nice-to-have. It is a governance requirement, because the board is effectively choosing a response time when it chooses its monitoring maturity.
Early warning services help because they can spot external indicators and known-bad activity tied to your domains and IP space, then nudge you before the incident becomes theatrical. They do not replace your internal detection, but they do reduce the time between compromise and awareness, which is often where ransomware does its worst work.
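One way to make “the board is choosing a response time” tangible is to measure the gap between an indicator being raised and a named owner acknowledging it. The sketch below computes that gap from an alert log; the field names, timestamps and owner labels are hypothetical, and a real programme would pull them from whatever alerting platform is in use.

```python
"""Crude 'compromise to awareness' metric from an alert log.

A sketch only: assumes each alert record carries the time the indicator was
first raised and the time a named owner acknowledged it. Field names and the
sample data are hypothetical.
"""
from datetime import datetime, timedelta

alerts = [
    # (raised, acknowledged, owner) - illustrative values only
    (datetime(2025, 12, 20, 2, 14), datetime(2025, 12, 20, 9, 5), "soc-shift-b"),
    (datetime(2025, 12, 20, 2, 31), datetime(2025, 12, 21, 8, 0), "unassigned"),
]


def awareness_gaps(records):
    """Return the delay between an indicator being raised and someone owning it."""
    return [(ack - raised, owner) for raised, ack, owner in records]


for gap, owner in awareness_gaps(alerts):
    flag = "  <-- no clear owner" if owner == "unassigned" else ""
    print(f"{gap} until acknowledged by {owner}{flag}")

worst = max(gap for gap, _ in awareness_gaps(alerts))
print(f"Worst gap: {worst} (a board-level question if this routinely exceeds, say, {timedelta(hours=4)})")
```

If that worst-case number is measured in days rather than hours, the monitoring maturity conversation has already answered itself.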
The BitLocker problem is really a privilege problem
BitLocker misuse is not a “BitLocker issue”. It is an identity and privilege issue.
If attackers can reach the permissions needed to trigger encryption at scale, the conversation quickly moves to how administrative access is managed, where it is permitted, and what frictions exist when someone tries to do something destructive. That includes strong identity assurance and MFA, but it also includes the less glamorous controls: tiering admin accounts, restricting remote management pathways, reducing standing privilege, and building high-signal detection around actions that should be rare in normal operations.
In CNI environments, this intersects with segregation of duties and the safety case. The question is not merely whether a control exists, but whether it still functions at speed under pressure, at 2am, on a weekend, in the middle of an operational wobble.
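Reducing standing privilege starts with knowing who actually holds it. As a minimal sketch, assuming you can export the membership of a tier-0 or encryption-capable group from your directory and compare it against an approved register, the reconciliation is trivial to automate; every account name below is hypothetical.

```python
"""Reconcile standing privileged access against an approved register.

A sketch, not a reference implementation: 'exported_group_members' would come
from a directory export (e.g. a dump of a domain-admin-equivalent group) and
the register from your PAM or joiners/movers/leavers process. All names are
hypothetical.
"""

# Accounts currently holding tier-0 / encryption-capable privilege (directory export).
exported_group_members = {"adm.jsmith", "adm.kpopescu", "svc.backup", "svc.legacy-gis"}

# Accounts approved to hold that privilege; a real register would carry expiry dates.
approved_register = {"adm.jsmith", "adm.kpopescu", "svc.backup"}

unapproved = exported_group_members - approved_register
stale_approvals = approved_register - exported_group_members

for account in sorted(unapproved):
    print(f"UNAPPROVED standing privilege: {account} - investigate, then remove or register")
for account in sorted(stale_approvals):
    print(f"Approved but absent: {account} - tidy the register")
```

Run on a schedule, a check like this turns “who can trigger encryption at scale?” from an incident-day question into a routine governance answer.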
Resilience is not “backups”: it is recoverability under stress
A large-scale encryption event tests recoverability, not backup theory.
If your restore process depends on the same identity infrastructure that has just been knocked over, recovery slows. If your backups are reachable from the same privileged context the attacker has already captured, you may discover too late that they are also encrypted or deleted. If your team has never rehearsed rebuilding core services from scratch, you will burn precious hours rediscovering steps you thought you had documented.
None of this is solved by buying another tool. It is solved by designing for failure, rehearsing recovery, and being honest about which systems are genuinely mission-critical, which can be sacrificed, and which must be rebuilt first to restore safe operations.
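“Which must be rebuilt first” is a design decision you can write down and test before the incident. As a minimal sketch with hypothetical service names, the dependencies between services can be declared explicitly and sorted into a rebuild order; the same exercise also exposes circular dependencies, such as a backup console that cannot be restored without the identity service it is supposed to help rebuild.

```python
"""Derive a rebuild order from declared recovery dependencies.

A sketch with hypothetical service names: the point is that the rebuild order
is written down and testable, not rediscovered mid-incident. Raises an error
if the declared dependencies are circular.
"""
from graphlib import TopologicalSorter  # Python 3.9+

# service -> the services it needs before it can be restored
recovery_dependencies = {
    "identity": set(),                      # rebuilt from offline media, depends on nothing
    "backup_console": {"identity"},
    "dns": {"identity"},
    "email": {"identity", "dns"},
    "gis_platform": {"identity", "backup_console"},
    "public_website": {"dns"},
}

rebuild_order = list(TopologicalSorter(recovery_dependencies).static_order())
print("Rebuild order:", " -> ".join(rebuild_order))
```

A rehearsal is then simply working through that list against clean infrastructure and timing how long each step really takes.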
The real escalation: ransomware as a safety and trust event
The Romanian incident is disturbing not because it is unique, but because it is plausible everywhere.
Modern organisations have spent years digitising coordination while underinvesting in the boring parts of resilience. Attackers have noticed. Whether the goal is extortion, disruption, or simply proving they can, the effect converges on the same outcome: reduced operational capacity and increased risk.
For CNI, Government and Defence suppliers, the most practical shift is conceptual. Stop treating ransomware as a type of breach and start treating it as a type of emergency. The governance model needs to assume that core digital services can vanish quickly, that “legitimate tooling” can be used against you, and that the consequences may be measured in public harm, not just financial loss.
The strange part is that the same encryption feature you deploy to protect laptops can be turned into a weapon against you. The wise part is remembering that this is exactly why governance exists: to plan for the day your assumptions get mugged in a dark alley.
- Ellie Hurst, Commercial Director.