The Human Factor in 2025: What Social Engineering Means for GRC, Risk and Resilience

News and information from the Advent IM team.

Governance, risk and compliance (GRC) professionals have long argued that security is not just a technical discipline but a human one. Proofpoint’s Human Factor 2025 report confirms this view with unsettling clarity: the most dangerous attack vector in today’s threat landscape isn’t a piece of malware or a zero-day exploit. It’s us.

Social engineering as the modern frontline

Social engineering is the manipulation of human psychology—fear, urgency, trust, even curiosity—to achieve malicious ends. From phishing links to voice-based scams, from fake QR codes to benign-looking emails, attackers increasingly exploit how we behave rather than what systems allow.

Proofpoint’s research shows that 25% of advanced persistent threat (APT) campaigns rely solely on social engineering, dispensing with malware altogether. More striking still, over 90% of these campaigns present themselves as collaboration or engagement opportunities. Attackers don’t need to “break in” when they can simply be invited.

For GRC leaders, this has profound implications. A governance framework designed only to assess and control technical risks will miss an entire domain of human vulnerability. The attack isn’t against the network perimeter—it’s against the judgement of an employee, a supplier, or even a board member.

Fraud is shifting—GRC must keep pace

The data shows that fraud techniques are in flux. Advance-fee fraud rose nearly 50% last year, while extortion scams fell by almost 70%. Why? Extortion has been blunted by better filtering and increased user awareness, so attackers pivoted quickly, proving once again that adversaries are agile learners.

This should act as a warning for risk governance. Compliance regimes tend to assume a fixed threat environment: you identify, assess, and control a known set of risks. But the human factor moves faster. GRC needs mechanisms for continuous reassessment of social vectors, ensuring that policies, training and controls keep pace with adversary innovation.

Benign conversations: the slow burn of espionage

Perhaps the most striking trend is the rise of “benign conversations”. State-sponsored actors, particularly from North Korea and Iran, now engage in weeks or months of apparently harmless exchanges with targets. These are often invitations to collaborate, comment on a news story, or take part in an academic discussion.

Only after trust is established does the malicious payload appear—perhaps a compromised attachment, perhaps a request for sensitive commentary. Proofpoint notes that around 25% of observed state-sponsored campaigns now use this method.

For governments, defence contractors, and critical national infrastructure providers, this should set alarm bells ringing. Risk management frameworks must consider who in the organisation is most exposed to such approaches—policy advisors, SIROs (Senior Information Risk Owners), board members, technical specialists—and apply tailored safeguards. Oversight needs to extend beyond the IT department to the entire governance structure.

The governance challenge: structuring trust

Every organisation runs on trust. Trust between colleagues, between suppliers and buyers, between citizens and government. Social engineering weaponises that trust.

GRC’s role is not to dismantle it, but to structure it. That means:

  • Identifying where trust relationships are most vulnerable (e.g. supplier invoices, executive instructions, policy consultation).
  • Embedding assurance mechanisms so that requests for payment, collaboration or data sharing are validated.
  • Ensuring accountability at the right level—so that the risk of responding to the wrong email is treated not as “human error” but as a systemic governance concern.

Risk management beyond compliance checklists

The report highlights another uncomfortable truth: technical controls catch only part of the problem. Proofpoint blocked 117 million telephone-oriented attack delivery (TOAD) threats in a year, but criminals still use them because someone, somewhere, responds.

This is where risk management needs to evolve. It is not enough to say “we trained staff once” or “we comply with ISO standards”. GRC must establish living frameworks that measure effectiveness, revisit assumptions, and adapt to new attack patterns. That means:

  • Embedding social-vector risk assessments into enterprise risk registers.
  • Regularly revising risk appetite statements to reflect exposure to human-centric threats.
  • Ensuring audit and assurance processes cover not just whether training exists, but whether it works.

Culture as a control

A culture of verification is as important as a firewall. Attackers succeed when people feel pressured, embarrassed, or too polite to challenge authority. Governance should therefore promote a speak-up culture in security: an environment where questioning an unusual request is not only permitted but expected.

This is especially critical in defence, CNI and government sectors where attackers often impersonate senior figures. If staff feel unable to query unusual demands from a “CEO” or “minister”, governance has failed, regardless of the technical stack in place.

The supply chain dimension

One of the most worrying aspects of the report is how easily social engineering crosses organisational boundaries. Fake requests for quotes, supplier impersonation, and payment redirection scams all exploit weak points in procurement and supply chain processes.

For CNI and government supply chains—already a focus of UK regulatory initiatives—this underlines the importance of third-party risk management. GRC cannot stop at the enterprise boundary. It must include:

  • Clear supplier assurance frameworks.
  • Continuous monitoring of impersonation risks.
  • Contractual requirements for supplier training and incident reporting.

Lessons for boards and SIROs

For senior information risk owners (SIROs), the report should serve as a wake-up call. The threats described are not “IT issues”. They go to the heart of governance, strategy, and resilience.

Boards should be asking:

  • How are we measuring exposure to social engineering risks?
  • Are we investing in adaptive awareness training rather than generic e-learning?
  • Do our policies and controls reflect the speed with which adversaries adapt?
  • Have we structured escalation processes so that a junior staff member can query a “CEO instruction” without fear?

Risk appetite discussions need to include the possibility that adversaries will target decision-makers themselves. In a geopolitical climate where benign conversations can be used to probe government policy, this is not theoretical—it’s operational reality.

Compliance isn’t protection

A final point: compliance frameworks (ISO 27001, NIS, DORA, etc.) are invaluable, but they are not shields. The Human Factor 2025 report is a reminder that compliance alone does not equal resilience.

What organisations need is principles-based security—a governance approach that adapts to behaviour, context and culture. That means treating GRC not as a paperwork exercise, but as a dynamic system of assurance, accountability and continuous learning.

The human factor is the GRC factor

From fake invoices to state espionage, the common denominator is trust exploited through human interaction. The report shows attackers understand this better than many boards.

For GRC professionals, the message is stark: if governance doesn’t keep pace with human-centric threats, technical investments will be undermined. Risk management must evolve to cover the behavioural layer, compliance must be seen as the floor not the ceiling, and resilience must be as much about culture and accountability as about controls and technology.

The human factor is no longer a “soft” risk. It is the central battlefield for security in 2025.

Written by Ellie Hurst, Commercial Director
