Safety testing of artificial intelligence models will now follow a common approach in the United States and the United Kingdom after the two countries announced a partnership to ensure safe AI development.

As per TIME, the agreement signed by Michelle Donelan, the U.K. Secretary of State for Science, Innovation, and Technology, and Gina Raimondo, the U.S. Secretary of Commerce, lays out the framework for the two countries' cooperation.

According to a press release, the two AI safety institutes will develop a standard methodology for AI safety testing that calls for the same techniques and shared supporting infrastructure.


(Photo: OLIVIER MORIN/AFP via Getty Images) This illustration picture shows the AI (Artificial Intelligence) smartphone app ChatGPT surrounded by other AI apps in Vaasa, Finland, on June 6, 2023.

The press statement also said that the institutes plan to conduct a joint testing exercise on an AI model that is accessible to the general public. The bodies would also seek to exchange staff and share information in accordance with national laws, regulations, and contracts.

The U.K. and U.S. AI Safety Institutes were founded on the first day of the U.K.-hosted AI Safety Summit at Bletchley Park in November 2023.

Although collaboration between the two groups was declared when they were founded, Donelan said the new agreement "formalizes" and "puts meat on the bones" of that collaboration. She added that it gives the U.S. government a chance to draw on the U.K.'s experience as it formalizes and builds out its own AI safety testing institute.


Laws on AI Development

Legislators and tech company executives will likely rely heavily on the AI Safety Institutes to help reduce the risks associated with rapidly advancing AI systems. OpenAI and Anthropic, the companies behind ChatGPT and Claude, respectively, have released comprehensive plans outlining how safety testing will guide their future product development.

The recently passed European Union AI Act and U.S. President Joe Biden's executive order both require businesses developing sophisticated AI models to disclose the results of their safety testing.

U.S. Follows California's AI Footsteps

The partnership comes just a week after the U.S. state of California said it is looking to learn from and work with Europe in creating the state's artificial intelligence regulations, according to David Harris, senior policy adviser at the California Initiative for Technology and Democracy.

State legislators in California, home to some of the largest AI companies, have introduced at least 30 distinct bills addressing various aspects of AI.

California lawmakers are looking to recent European legislation on AI, much as they did with EU regulations on private data in the past, especially given how unlikely it is that national legislation from Washington will resemble the European rules.

The proposed regulations in California cover a wide range of topics, from forbidding political advertisements that use computer-generated imagery to requiring AI developers to disclose the training data used to create their models.


Written by Aldohn Domingo

