Ensuring AI Security: Integrating the UK’s Code of Practice into GRC Frameworks


The UK’s “Code of Practice for the Cyber Security of AI” establishes foundational principles to safeguard AI systems and the organisations that develop and deploy them. Integrating these principles within a Governance, Risk, and Compliance (GRC) framework ensures a structured approach to managing AI-related risks, aligning with organisational objectives, and adhering to regulatory requirements.

Principle 1: Raise Awareness of AI Security Threats and Risks

GRC Alignment: Governance and Risk Management

Organisations should incorporate AI-specific security content into their cyber security training programmes, ensuring regular updates to reflect emerging threats. Training should be tailored to the roles and responsibilities of staff members, promoting a culture of security awareness. This aligns with governance by establishing clear policies and with risk management by proactively addressing potential threats through education.

Principle 2: Design Your AI System for Security as Well as Functionality and Performance

GRC Alignment: Governance and Risk Management

Security considerations must be integrated into the AI system’s design phase, balancing functionality and performance with robust security measures. This proactive approach ensures that security is embedded from the outset, aligning with governance by setting clear design policies and with risk management by mitigating potential vulnerabilities early in the development process.
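To make this concrete, below is a minimal sketch of one secure-by-design control: validating inputs at the inference boundary before they ever reach the model. The size limit, deny-list, and model call are illustrative assumptions, not requirements of the Code.

```python
# Hypothetical input-validation gate for an AI inference endpoint.
# The limits, deny-list, and model call are illustrative assumptions.

MAX_PROMPT_CHARS = 4_000                     # cap input size to limit abuse and cost
BLOCKED_PATTERNS = ("<script", "{{", "${")   # naive deny-list for injection markers


def run_model(prompt: str) -> str:
    """Placeholder for the real model call."""
    return f"model response to: {prompt}"


def validate_prompt(prompt: str) -> str:
    """Reject oversized or suspicious inputs before they reach the model."""
    if len(prompt) > MAX_PROMPT_CHARS:
        raise ValueError("prompt exceeds maximum permitted length")
    lowered = prompt.lower()
    for pattern in BLOCKED_PATTERNS:
        if pattern in lowered:
            raise ValueError(f"prompt contains blocked pattern: {pattern!r}")
    return prompt


def handle_request(prompt: str) -> str:
    safe_prompt = validate_prompt(prompt)    # security check runs first, by design
    return run_model(safe_prompt)


print(handle_request("Summarise this quarter's incident reports."))
```

The point of the sketch is placement: the check sits in front of the model from the outset, rather than being retrofitted after an incident.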

Principle 3: Evaluate the Threats and Manage the Risks to Your AI System

GRC Alignment: Risk Management and Compliance

Conduct thorough risk assessments to identify and evaluate threats specific to your AI system, such as data poisoning, model extraction, and prompt injection. Implement risk management strategies to mitigate the risks identified, ensuring compliance with relevant regulations and standards. This aligns with risk management by systematically addressing potential issues and with compliance by adhering to established security protocols.
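As an illustration, the sketch below shows a minimal AI risk register in Python, scoring each threat as likelihood multiplied by impact. The threats, scales, and treatment threshold are assumptions for illustration, not values prescribed by the Code.

```python
"""Minimal AI risk-register sketch: score = likelihood x impact.

Threats, scales, and the treatment threshold are illustrative assumptions.
"""
from dataclasses import dataclass


@dataclass
class Risk:
    threat: str
    likelihood: int  # 1 (rare) to 5 (almost certain)
    impact: int      # 1 (negligible) to 5 (severe)
    mitigation: str

    @property
    def score(self) -> int:
        return self.likelihood * self.impact


REGISTER = [
    Risk("Training-data poisoning", 3, 4, "Validate and provenance-check data"),
    Risk("Model extraction via API", 2, 3, "Rate-limit and monitor queries"),
    Risk("Prompt injection", 4, 3, "Filter inputs; constrain tool access"),
]

TREAT_THRESHOLD = 9  # risks scoring at or above this need a treatment plan

for risk in sorted(REGISTER, key=lambda r: r.score, reverse=True):
    flag = "TREAT" if risk.score >= TREAT_THRESHOLD else "accept/monitor"
    print(f"{risk.score:>2}  {flag:<14} {risk.threat} -> {risk.mitigation}")
```

Even a register this simple gives risk owners a repeatable, auditable basis for prioritising treatment, which is what compliance reviewers will look for.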

Principle 4: Enable Human Responsibility for AI Systems

GRC Alignment: Governance and Compliance

Establish clear accountability structures for AI systems, ensuring that human oversight is maintained. Define roles and responsibilities to manage and monitor AI operations, aligning with governance by delineating authority and with compliance by ensuring adherence to ethical and legal standards.
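One way to make human oversight tangible is an approval gate: high-impact actions cannot execute without a named approver, and every decision is written to an audit trail. The action names, roles, and log format in the sketch below are assumptions for illustration.

```python
"""Hypothetical human-approval gate for high-impact AI actions.

Action names, roles, and the audit-log format are illustrative assumptions.
"""
from datetime import datetime, timezone

HIGH_IMPACT_ACTIONS = {"deploy_model", "delete_training_data"}
AUDIT_LOG: list[dict] = []


def execute_action(action: str, requested_by: str,
                   approved_by: str | None = None) -> None:
    """Run an action, refusing high-impact ones without a named approver."""
    if action in HIGH_IMPACT_ACTIONS and approved_by is None:
        raise PermissionError(f"'{action}' requires sign-off from a named approver")
    AUDIT_LOG.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "action": action,
        "requested_by": requested_by,
        "approved_by": approved_by,  # preserves a clear line of accountability
    })
    print(f"Executed {action} (requested by {requested_by}, approved by {approved_by})")


execute_action("deploy_model", requested_by="ml-engineer",
               approved_by="ai-system-owner")
```

The design choice worth noting is that accountability is recorded at the moment of action, not reconstructed after the fact.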

Principle 5: Identify, Track, and Protect Your Assets

GRC Alignment: Risk Management and Compliance

Maintain an inventory of all assets related to your AI system, including data, models, and infrastructure. Implement protective measures to safeguard these assets against threats, aligning with risk management by protecting critical resources and with compliance by adhering to data protection regulations.
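As a sketch of what such an inventory might look like in practice, the Python below records each asset with an owner, a classification, and a SHA-256 fingerprint so that tampering or drift is detectable. The file paths, owners, and labels are illustrative assumptions.

```python
"""Sketch of an AI asset inventory with integrity fingerprints.

File paths, owners, and classifications are illustrative assumptions.
"""
import hashlib
from dataclasses import dataclass
from pathlib import Path


@dataclass
class Asset:
    name: str
    kind: str            # e.g. "model", "dataset", "config"
    path: Path
    owner: str
    classification: str  # e.g. "confidential", "internal"

    def fingerprint(self) -> str:
        """SHA-256 of the file, so tampering or drift can be detected."""
        return hashlib.sha256(self.path.read_bytes()).hexdigest()


INVENTORY = [
    Asset("fraud-model-v3", "model", Path("models/fraud_v3.onnx"),
          "data-science-team", "confidential"),
    Asset("training-set-2024", "dataset", Path("data/train_2024.parquet"),
          "data-governance", "confidential"),
]

for asset in INVENTORY:
    print(asset.name, asset.kind, asset.owner, asset.classification)
```

Recording an owner against every model and dataset also supports Principle 4: there is always a named person accountable for each asset.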

Principle 6: Secure Your Infrastructure

GRC Alignment: Risk Management and Compliance

Ensure that the infrastructure supporting your AI system is secure, implementing measures such as access controls, encryption, and regular vulnerability assessments. This aligns with risk management by protecting the operational environment and with compliance by meeting security standards.
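By way of illustration, the sketch below combines two such controls: encryption at rest, using the widely available cryptography package (pip install cryptography), and restrictive file permissions. Key handling is deliberately simplified; in practice the key would live in a key-management service, and the file names are illustrative.

```python
"""Sketch of two infrastructure controls: encryption at rest and
restrictive file permissions. File names are illustrative assumptions.
"""
import os
from pathlib import Path

from cryptography.fernet import Fernet

# Encrypt a model artefact before writing it to shared storage.
key = Fernet.generate_key()      # in practice, keep this in a key-management service
cipher = Fernet(key)

artefact = b"serialised model weights"   # placeholder payload
encrypted_path = Path("model.bin.enc")
encrypted_path.write_bytes(cipher.encrypt(artefact))

# Restrict access: owner read/write only (POSIX systems).
os.chmod(encrypted_path, 0o600)

# Later, an authorised service holding the key can recover the artefact.
restored = cipher.decrypt(encrypted_path.read_bytes())
assert restored == artefact
```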

Principle 7: Secure Your Supply Chain

GRC Alignment: Governance, Risk Management, and Compliance

Assess and manage risks associated with third-party components and services integrated into your AI system. Establish policies for supplier assessment, contract management, and ongoing monitoring, aligning with governance by setting supplier standards, with risk management by mitigating third-party risks, and with compliance by ensuring that suppliers adhere to relevant regulations.
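A simple, concrete supply-chain control is to verify every third-party artefact against a checksum published by the supplier through a trusted channel before loading it. The sketch below assumes a locally downloaded file and uses an example digest; both are illustrative.

```python
"""Sketch: verify a third-party model artefact against a supplier-published
SHA-256 checksum before use. The file name and digest are assumed values.
"""
import hashlib
from pathlib import Path

# Digest published by the supplier via a trusted channel (example value).
EXPECTED_SHA256 = "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"


def verify_artifact(path: Path, expected: str) -> None:
    """Refuse to load an artefact whose hash does not match the published value."""
    actual = hashlib.sha256(path.read_bytes()).hexdigest()
    if actual != expected:
        raise RuntimeError(f"checksum mismatch for {path}: got {actual}")
    print(f"{path} verified OK")


# Example usage, once the supplier's artefact has been downloaded:
# verify_artifact(Path("third_party_model.onnx"), EXPECTED_SHA256)
```

Hash verification is only one layer; it complements, rather than replaces, supplier assessment and contractual controls.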

By embedding these principles within a GRC framework, organisations can systematically manage AI-related risks, ensure compliance with regulatory requirements, and uphold robust governance practices, thereby enhancing the security and integrity of their AI systems.

by Ellie Hurst ASyI, Commercial Director
