Meta's chief AI scientist, Yann LeCun, has joined 70 others in calling for greater transparency in the development of artificial intelligence (AI). 

In total, more than 70 prominent figures have put their names to the letter appealing for a more open approach to AI development. The signatories, who include scientists, policymakers, engineers, activists, entrepreneurs, educators, and journalists, underscore the need for openness and transparency as AI technology continues to advance.

The letter, published by Mozilla, emphasizes that the world is at a pivotal moment in the governance of AI. It stresses the importance of embracing openness, transparency, and widespread accessibility as the way to mitigate both current and potential future harms from AI systems. The signatories assert that this should be a global imperative.

(Photo: Brian Ach/Getty Images for Wired) WIRED senior staff writer Cade Metz and Director of Facebook AI Research Yann LeCun speak on stage during the 2016 Wired Business Conference on June 16, 2016, in New York City.

Meta's Yann LeCun: 'Regulatory Capture of the AI Industry'

Yann LeCun recently added his voice to the discourse, accusing certain entities, including OpenAI and Google's DeepMind, of attempting what he termed "regulatory capture of the AI industry."

"If your fear-mongering campaigns succeed, they will *inevitably* result in what you and I would identify as a catastrophe: a small number of companies will control AI," LeCun wrote in an X post.

The signatories contend that while open-source AI models do carry certain risks and vulnerabilities, the same holds true for proprietary technologies.

They argue that increasing public access and scrutiny ultimately makes the technology safer, and they reject the notion that strict proprietary control is the only means of safeguarding society from potential harm.

Moreover, the letter points out that hastily implementing regulations without a nuanced understanding of the AI landscape can lead to unintended consequences, potentially consolidating power in ways that hinder competition and innovation. 

It advocates for open models to inform a more inclusive debate and shape effective policy-making.


Critical Objectives

The letter calls for a diversified approach, ranging from open-source initiatives to open science, which can serve as the foundation for several critical objectives:

1. Accelerating Understanding: Enabling independent research, collaboration, and knowledge sharing to deepen comprehension of AI capabilities, risks, and potential harms.

2. Increasing Public Scrutiny and Accountability: Equipping regulators with the tools they need to monitor large-scale AI systems.

3. Lowering Barriers to Entry: Fostering an environment in which new players can engage in the responsible creation of AI, thus promoting innovation and competition.

"As signatories to this letter, we are a diverse group - scientists, policymakers, engineers, activists, entrepreneurs, educators, and journalists. We represent different, and sometimes divergent, perspectives, including different views on how open source AI should be managed and released," the open letter reads. 

"However, there is one thing we strongly agree on: open, responsible, and transparent approaches will be critical to keeping us safe and secure in the AI era," it added.

