The U.S. Cybersecurity and Infrastructure Security Agency (CISA) and the National Cyber Security Centre (NCSC) of the U.K., in collaboration with 21 global security agencies and ministries, have officially supported the release of guidelines aimed at fostering the creation of secure and reliable artificial intelligence (AI).
According to MSSP Alert, the theme of the 20-page Guidelines for Secure AI System Development, a joint effort of CISA and the NCSC, is to keep AI safe from rogue actors by pushing companies to create AI systems that are “secure by design.”
An additional 19 organizations, including tech giants Amazon, Google, IBM, and Microsoft, as well as all members of the Group of 7 major industrial economies, have also given their nod of approval to the document’s contents. The document provides essential recommendations for AI system development and emphasizes the importance of adhering to “Secure by Design” principles, the agencies said.
The participating agencies and countries agreed that companies designing and using AI need to develop and deploy it in a way that keeps customers and users safe from abuse, officials said.
CISA Director Jen Easterly told Reuters that it is significant that so many countries have put their names to the principle that AI systems should put safety first.
“This is the first time that we have seen an affirmation that these capabilities should not just be about cool features and how quickly we can get them to market or how we can compete to drive down costs,” she said, noting that the guidelines cement the need to ensure security at the design phase.