The first-ever international agreement on making AI systems secure by design, a 20-page non-binding document, has been announced and signed by 18 countries, including the United States and the United Kingdom, as reported by the Guardian.

LinkedIn News adds that, to safeguard the public from bad actors, the 20-page document urges businesses to design their AI systems with security first, offering some basic guidelines such as screening potential software vendors.

The approach also addresses concerns about hackers misusing AI technology. The pact, however, does not tackle thornier questions, such as when and how AI should be used or how the data behind these models is collected.

(Photo: Sean Gallup/Getty Images)
Members of the group Initiative Urheberrecht (authors' rights initiative) demonstrate to demand regulation of artificial intelligence on June 16, 2023, in Berlin, Germany.

Like many previous government initiatives on AI, such as the executive order issued by the Biden administration in October, the pact is non-binding and carries no penalties for violations. Germany, Italy, the Czech Republic, Estonia, Poland, Australia, Chile, Israel, Nigeria, and Singapore are reportedly among the signatories of the first-ever AI pact.

The US Cybersecurity and Infrastructure Security Agency (CISA) and the UK's National Cyber Security Centre (NCSC), part of Government Communications Headquarters (GCHQ), reportedly collaborated with industry experts and 21 other international agencies and ministries worldwide to create the recommendations and guidelines. Representatives from the Global South and participants from G7 member countries were also included in the collaboration.

The NCSC reportedly described the guidelines' approach as "secure by design," with the recommendations helping developers ensure that cybersecurity is both a necessary precondition for the safety of AI systems and an integral part of the development process from start to finish.

Read Also: EU Warned by Experts, AI Models Could be Over-Regulated "Out of Existence" 

A Win for AI Safety Regulation

While the pact is non-binding, the Guardian reports that Jen Easterly, director of the US Cybersecurity and Infrastructure Security Agency, said it was significant that so many nations were endorsing the idea that AI systems should put safety first.

"We've normalized a world where technology products come off the line full of vulnerabilities and then consumers are expected to patch those vulnerabilities. We can't live in that world with AI," she stated as per a Reuters report.

Easterly added that the guidelines represent "an agreement that the most important thing that needs to be done at the design phase is security."

The senior official further stated that this is the first time there has been a public acknowledgment that these capabilities should not just be about exciting features, getting products to market quickly, or competing to cut prices, and that security must instead be prioritized above all else at the design phase.

Europe's Prior Advancements in AI Regulation

Legislators in Europe have been steadily developing AI legislation through the bloc's "AI Act," putting European nations ahead of the US on regulation, most notably after the United Kingdom recently hosted and concluded the world's first-ever "AI Safety Summit."

The Guardian adds that France, Germany, and Italy recently reached an agreement on the governance of artificial intelligence that supports "mandatory self-regulation through codes of conduct" for so-called foundation models of AI, which are designed to produce a broad range of outputs.

The agreement treats AI technologies less stringently, aiming to regulate their use rather than the technology itself.

As for the world's leading AI developers, the firms agreed earlier this month to work with governments to test new frontier models before their release, in order to mitigate the risks of this rapidly evolving technology.

Related Article: Deepfake Crackdown: India Takes Action to Regulate Harmful AI Content

Written by Aldohn Domingo
