New UK cyber security codes of practice to form basis of global standard

The UK government has unveiled new measures expected to set a global standard for protecting AI models from hacking and sabotage. Two new codes of practice are being introduced to help developers improve the cyber security of AI models and software.

The codes set out requirements for developers to make their products resilient against tampering, hacking, and sabotage. They are intended to boost confidence in the use of AI models across industries, helping businesses improve efficiency, drive growth, and turbocharge innovation.

In the last 12 months, half of businesses (50%) and a third of charities (32%) reported cyber breaches or attacks, with phishing remaining the most common type of breach. These codes are intended to show developers how software can be built securely, with the aim of preventing attacks such as the one on the MOVEit software in 2023, which compromised sensitive data in thousands of organisations around the world.

Technology Minister Saqib Bhatti said: “We have always been clear that to harness the enormous potential of the digital economy, we need to foster a safe environment for it to grow and develop. This is precisely what we are doing with these new measures, which will help make AI models resilient from the design phase.”

The new measures come as a report published today finds that the cyber security sector has grown 13% on the previous year and is now worth almost £12 billion, on a par with sectors such as the automotive industry.

The findings come from the government’s annual Cyber Sectoral Analysis Report, which shows the number of cyber security firms based in the UK rose in 2023, strengthening the UK’s resilience to attacks and propelling sustainable economic growth.

The new codes of practice are expected to improve cyber security in AI and software, while new government action on cyber skills will help develop the cyber workforce and ensure the UK has the people it needs to protect the nation online.

NCSC CEO Felicity Oswald said: “To make the most of the technological advances which stand to transform the way we live, cyber security must be at the heart of how we develop digital systems. The new codes of practice will help support our growing cyber security industry to develop AI models and software in a way which ensures they are resilient to malicious attacks. Setting standards for our security will help improve our collective resilience and I commend organisations to follow these requirements to help keep the UK safe online.”

These measures are crucial for new businesses in the digital age, demonstrating a commitment to cyber security, safeguarding users’ personal data, and fostering global alignment for stronger cyber resilience. According to the UK government, the AI cyber security code is intended to form the basis of a future global standard.

Rosamund Powell, Research Associate at The Alan Turing Institute, said: “AI systems come with a wide range of cyber security risks which often go unaddressed as developers race to deploy new capabilities. The code of practice released today provides much-needed practical support to developers on how to implement a secure-by-design approach as part of their AI design and development process.

“Plans for it to form the basis of a global standard are crucial given the central role international standards already play in addressing AI safety challenges through global consensus. Research highlights the need for inclusive and diverse working groups, accompanied by incentives and upskilling for those who need them, to ensure the success of global standards like this.”
