AI researchers are urging the UK government to reconsider its current proposals for regulating artificial intelligence (AI) in the country. 

A report from the Ada Lovelace Institute has presented 18 recommendations, highlighting the limitations in legal protections for private citizens seeking redress in cases of discriminatory decisions made by AI systems.

Becoming an "AI Superpower"

The report provides insights for policymakers, regulators, AI practitioners, and civil society organizations, aiming to regulate AI in the UK for the benefit of people and society.

It emphasizes the need for effective domestic regulation to achieve the government's ambition of becoming an "AI superpower" and harnessing AI technologies to enhance society and the economy.

To regulate AI effectively, the report identifies key issues that could erode public trust, such as data-driven or algorithmic social scoring, biometric identification, and AI applications in law enforcement, education, and employment. 

Ensuring the trustworthiness of AI systems, mitigating AI risks, and holding developers and users accountable for AI technologies are essential goals of regulation, according to the research team.

While the EU is pursuing a rules-based approach to AI governance, the UK is proposing a contextual, sector-based regulatory framework. This approach centers on a set of AI principles to be implemented by existing regulators, supported by new central functions.

The report emphasizes the importance of robust domestic regulatory frameworks in shaping corporate incentives and developer behavior. 

It argues that international agreements alone may not suffice to ensure AI safety and prevent harm, making a strong domestic regime crucial for the UK's leadership aspirations in AI.

AI Safety

Based on three core tests for effective AI regulation (coverage, capability, and urgency), the report presents recommendations to address the existing fragmented regulatory landscape for AI in the UK.

Among the solutions proposed are investing in pilot projects to understand AI trends better, clarifying AI liability laws, establishing an AI ombudsman for dispute resolution, and involving civil society groups in regulatory processes. 

Expanding the definition of "AI safety" and enforcing existing GDPR and intellectual property laws are also vital components of the recommendations. 

The report further states that by considering and implementing these recommendations, the UK can strengthen its AI regulation to foster innovation, build public trust, and ensure AI's responsible and beneficial use in various sectors. 

The full report is available from the Ada Lovelace Institute.

In related news, the UK is set to host the world's first AI safety summit later this year, a move backed by the CEOs of OpenAI, DeepMind, and Anthropic.
