Cybercrime’s Hidden Cost: The Mental Health Impact on Employees and How a Just Culture Limits Harm
Cybercrime doesn’t just drain budgets and stall operations. It also quietly erodes confidence, sleep, and trust, especially for the person who clicked, uploaded, or approved something that later became the pivot point for an incident. In security programmes, we often track mean time to detect and mean time to recover. We rarely track the human half-life of anxiety. That’s a gap in governance.
For organisations in government, defence, and critical national infrastructure (CNI), the stakes are heightened. The fear of reputational damage, regulatory scrutiny, and national-level consequences compounds the pressure on individuals. If we want genuine resilience, we must design for humanity as deliberately as we design for controls.
When “I caused this” becomes the loudest alarm
Breaches frequently stem from ordinary work under ordinary constraints: time pressure, confusing prompts, look-alike domains, awkward processes that encourage workarounds. When an incident traces back to an individual action, many employees experience a sharp mix of shame, fear for their job, and dread about telling anyone. That internalised stress has knock-on effects:
In short: blame drives problems underground; psychological safety brings them to the surface where they can be fixed.
What “just culture” really means in cyber
“Just culture” comes from safety-critical industries (aviation, healthcare) and draws a hard line between human error (inevitable and instructive), at-risk behaviour (driven by system pressures, fixable with better design), and reckless misconduct (a rarity, handled through HR/disciplinary routes). In cyber and information governance, a just culture reframes incident response from courtroom drama to clinical learning.
This isn’t “soft” security. It’s a disciplined approach that increases reporting, accelerates learning, and reduces repeat events. For CNI and defence suppliers, where cascading operational impact is a real risk, this is not simply ethical; it’s operationally prudent.
Governance implications (and why boards should care)
Boards, SIROs (Senior Information Risk Owners), IAOs (Information Asset Owners), and CISOs already hold risk and assurance duties. Adding a mental-health-aware response isn’t a “nice to have”; it’s part of effective governance:
Anatomy of a humane, effective incident response
Think of this as a combined playbook for people care and control improvement. Both matter.
1) First 24 hours: stabilise the person and the system
2) The review: blameless, evidence-led, time-boxed
3) Communications: clarity over spin
4) Aftercare and reintegration
Near-miss reporting: turning whispers into telemetry
Most incidents are preceded by near misses: almost-clicked links, blocked uploads, suspicious requests. Treat near-miss reporting with the same seriousness as incidents:
For CNI operators and government bodies, align this with existing safety reporting practices. You already know how to cultivate reporting in physical safety; extend the muscle memory to information security.
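To make “whispers into telemetry” concrete, here is a minimal sketch of how a near-miss report might be captured with the same rigour as an incident record. The NearMissReport structure, its field names, and the categories are purely illustrative assumptions, not a prescribed schema; adapt them to your own reporting taxonomy and ISMS tooling.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum


class NearMissType(Enum):
    """Illustrative categories only; substitute your own taxonomy."""
    PHISHING_ALMOST_CLICKED = "phishing_almost_clicked"
    BLOCKED_UPLOAD = "blocked_upload"
    SUSPICIOUS_REQUEST = "suspicious_request"
    OTHER = "other"


@dataclass
class NearMissReport:
    """A single near-miss record, captured with the same care as an incident."""
    reporter_id: str          # pseudonymise if your policy requires it
    category: NearMissType
    description: str
    occurred_at: datetime     # use timezone-aware datetimes throughout
    reported_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

    @property
    def reporting_delay_hours(self) -> float:
        """Time between the event and the report: a useful trust signal."""
        return (self.reported_at - self.occurred_at).total_seconds() / 3600
```

Capturing the delay between occurrence and report is deliberate: in a just culture you would expect that gap to shrink over time as people become more willing to speak up early.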
A short case vignette (composite but very common)
A contracts officer in a defence supply chain receives a “supplier” request to re-send a redacted export-controlled document. The email and footer look right. Under deadline pressure, they comply. Hours later, an internal scan flags unusual exfiltration behaviour.
In a punitive culture, the officer hides, legal gets cagey, security chases shadows, and the story leaks anyway. The lesson is fear.
In a just culture, the officer immediately flags suspicion, security contains the risk, and the review finds the workflow that encouraged ad-hoc sharing: unclear SOPs, no easy secure channel, and a template that trained users to trust the wrong signals. Fixes roll out in a week: a secure transfer path in the contract system, a new verification step, and comms that explain why. The lesson is improvement.
Same trigger, radically different outcomes.
Practical policy language you can adopt tomorrow
Keep it short, repeat it often:
We operate a just culture for information security.
Good-faith reporting of incidents and near misses is encouraged and supported. Reviews focus on system improvements, not individual blame. Reckless or malicious behaviour is a separate disciplinary matter. We will provide appropriate wellbeing support to anyone involved in an incident.
Put that in your ISMS manual, induction packs, and incident runbooks. Then live up to it.
Measures that matter (beyond MTTR)
If you can’t measure it, you can’t manage it. Add a few human-centred metrics to your board pack:
These aren’t vanity metrics; they’re leading indicators of resilience.
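Continuing the hypothetical near-miss sketch above, a couple of illustrative calculations show how such records could feed human-centred metrics in a board pack. The metric definitions are examples under those assumptions, not a standard.

```python
from collections import Counter
from statistics import median

# Assumes the NearMissReport dataclass from the earlier sketch.


def median_reporting_delay_hours(reports: list[NearMissReport]) -> float:
    """Median time from event to report; falling values suggest growing trust."""
    return median(r.reporting_delay_hours for r in reports) if reports else 0.0


def reports_per_month(reports: list[NearMissReport]) -> dict[str, int]:
    """Reporting volume by month; in a just culture, rising counts are good news."""
    return dict(Counter(r.reported_at.strftime("%Y-%m") for r in reports))
```

Note the inversion of the usual instinct: more near-miss reports is a sign the culture is working, not failing, which is why volume and delay belong alongside MTTR rather than beneath it.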
Special note for Government, Defence and CNI
Strong governance isn’t measured by the absence of bad news; it’s measured by how quickly truth surfaces and how predictably systems improve. A just culture, together with the mental-health-aware practices that flow from it, turns employees from potential scapegoats into active sensors and first responders. That shortens dwell time, reduces impact, and protects the organisation’s mission as well as its people.
If your incident plans don’t explicitly state how you protect and support the humans at the centre, they’re incomplete. Make the change, communicate it clearly, measure it rigorously, and prove it through leadership behaviour. That’s how you turn “we care about our people” from a poster into a control.
Written by Ellie Hurst, Commercial Director