New AI Security Guidelines Published by NCSC, CISA & More International Agencies

The U.K.’s National Cyber Security Centre, the U.S.’s Cybersecurity and Infrastructure Security Agency and international agencies from 16 other countries have released new guidelines on the security of artificial intelligence systems.

The Guidelines for Secure AI System Development are designed primarily to guide developers through the design, development, deployment and operation of AI systems, ensuring that security remains a core component throughout their life cycle. However, other stakeholders in AI projects should find this information helpful, too.

These guidelines come soon after world leaders committed to the safe and responsible development of artificial intelligence at the AI Safety Summit in early November.

Jump to:

  • At a glance: The Guidelines for Secure AI System Development
  • Securing the four key stages of the AI development life cycle
  • Guidance for all AI systems and related stakeholders
  • Building on the outcomes of the AI Safety Summit
  • Reactions to these AI guidelines from the cybersecurity industry

At a glance: The Guidelines for Secure AI System Development

The Guidelines for Secure AI System Development set out recommendations to ensure that AI models – whether built from scratch or based on existing models or APIs from other companies – “function as intended, are available when needed and work without revealing sensitive data to unauthorized parties.”


Key to this is the “secure by default” approach advocated by the NCSC, CISA, the National Institute of Standards and Technology and various other international cybersecurity agencies in existing frameworks. Principles of these frameworks include:

  • Taking ownership of security outcomes for customers.
  • Embracing radical transparency and accountability.
  • Building organizational structure and leadership so that “secure by design” is a top business priority.

A combined 21 agencies and ministries from a total of 18 countries have confirmed they will endorse and co-seal the new guidelines, according to the NCSC. This includes the National Security Agency and the Federal Bureau of Investigation in the U.S., as well as the Canadian Centre for Cyber Security, the French Cybersecurity Agency, Germany’s Federal Office for Information Security, the Cyber Security Agency of Singapore and Japan’s National Center of Incident Readiness and Strategy for Cybersecurity.

Lindy Cameron, chief executive officer of the NCSC, said in a press release: “We know that AI is developing at a phenomenal pace and there is a need for concerted international action, across governments and industry, to keep up. These guidelines mark a significant step in shaping a truly global, common understanding of the cyber risks and mitigation strategies around AI to ensure that security is not a postscript to development but a core requirement throughout.”

The Guidelines for Secure AI System Development are structured into four sections, each corresponding to different stages of the AI system development life cycle: secure design, secure development, secure deployment and secure operation and maintenance.

  • Secure design offers guidance specific to the design phase of the AI system development life cycle. It emphasizes the importance of recognizing risks and conducting threat modeling, along with considering various topics and trade-offs in system and model design.
  • Secure development covers the development phase of the AI system life cycle. Recommendations include ensuring supply chain security, maintaining thorough documentation and managing assets and technical debt effectively.
  • Secure deployment addresses the deployment phase of AI systems. Guidelines here involve safeguarding infrastructure and models against compromise, threat or loss, establishing processes for incident management and adopting principles of responsible release.
  • Secure operation and maintenance contains guidance around the operation and maintenance phase post-deployment of AI models. It covers aspects such as effective logging and monitoring, managing updates and sharing information responsibly.

Guidance for all AI systems and related stakeholders

The guidelines are applicable to all types of AI systems, and not just the “frontier” models that were heavily discussed during the AI Safety Summit hosted in the U.K. on Nov. 1-2, 2023. The guidelines are also applicable to all professionals working in and around artificial intelligence, including developers, data scientists, managers, decision-makers and other AI “risk owners.”

“We’ve aimed the guidelines primarily at providers of AI systems who are using models hosted by an organization (or are using external APIs), but we urge all stakeholders…to read these guidelines to help them make informed decisions about the design, development, deployment and operation of their AI systems,” the NCSC said.

The Guidelines for Secure AI System Development align with the G7 Hiroshima AI Process published at the end of October 2023, as well as the U.S.’s Voluntary AI Commitments and the Executive Order on Safe, Secure and Trustworthy Artificial Intelligence.

Together, these guidelines signify a growing recognition amongst world leaders of the importance of identifying and mitigating the risks posed by artificial intelligence, particularly following the explosive growth of generative AI.

Building on the outcomes of the AI Safety Summit

During the AI Safety Summit, held at the historic site of Bletchley Park in Buckinghamshire, England, representatives from 28 countries signed the Bletchley Declaration on AI safety, which underlines the importance of designing and deploying AI systems safely and responsibly, with an emphasis on collaboration and transparency.

The declaration acknowledges the need to address the risks associated with cutting-edge AI models, particularly in sectors like cybersecurity and biotechnology, and advocates for enhanced international collaboration to ensure the safe, ethical and beneficial use of AI.

Michelle Donelan, the U.K. science and technology secretary, said the newly published guidelines would “put cybersecurity at the heart of AI development” from inception to deployment.

“Just weeks after we brought world-leaders together at Bletchley Park to reach the first international agreement on safe and responsible AI, we are once again uniting nations and companies in this truly global effort,” Donelan said in the NCSC press release.

“In doing so, we are driving forward in our mission to harness this decade-defining technology and seize its potential to transform our NHS, revolutionize our public services and create the new, high-skilled, high-paid jobs of the future.”

Reactions to these AI guidelines from the cybersecurity industry

The publication of the AI guidelines has been welcomed by cybersecurity experts and analysts.

Toby Lewis, global head of threat analysis at Darktrace, called the guidance “a welcome blueprint” for safe and trustworthy artificial intelligence systems.

Commenting via email, Lewis said: “I’m glad to see the guidelines emphasize the need for AI providers to secure their data and models from attackers, and for AI users to apply the right AI for the right task. Those building AI should go further and build trust by taking users on the journey of how their AI reaches its answers. With security and trust, we’ll realize the benefits of AI faster and for more people.”

Meanwhile, Georges Anidjar, Southern Europe vice president at Informatica, said the publication of the guidelines marked “a significant step towards addressing the cybersecurity challenges inherent in this rapidly evolving field.”

Anidjar said in a statement received via email: “This international commitment acknowledges the critical intersection between AI and data security, reinforcing the need for a comprehensive and responsible approach to both technological innovation and safeguarding sensitive information. It is encouraging to see global recognition of the importance of instilling security measures at the core of AI development, fostering a safer digital landscape for businesses and individuals alike.”

He added: “Building security into AI systems from their inception resonates deeply with the principles of secure data management. As organizations increasingly harness the power of AI, it is imperative the data underpinning these systems is handled with the utmost security and integrity.”
