Meta has reportedly moved its Responsible AI (RAI) team into other AI-related groups within the company, though its members will continue to work on preventing AI-related harms.

CNBC reports that, according to a Meta spokesperson, most members of the RAI team have been moved to the company's Generative AI product division, while others will now work on the AI Infrastructure team.

The spokesperson reportedly explained the changes by saying they will help the company "prioritize and invest in safe and responsible AI development and these changes will allow us to scale better to meet our future needs."

According to Mashable, Meta's RAI team was formed in 2019 to catch problems in Meta's AI training, such as ensuring that models were trained on sufficiently varied datasets and preventing moderation issues on platforms like Facebook.

However, earlier this year the Responsible AI team was reportedly already a "shell of a team," with limited autonomy and extensive bureaucratic oversight.

Most of Meta's RAI team will now be folded into the Generative AI team, which, per CNBC, was created in February and is dedicated to building products that generate human-like text and images.

The Generative AI team was created as corporations across the technology sector poured money into machine learning development to avoid falling behind in the AI race.


Meta's Generative AI Push

Reuters further reports that in October, Meta began rolling out generative artificial intelligence (AI) tools to all advertisers, which can produce material such as image backgrounds and variations of written ad copy.

Meta's AI portfolio also includes the large language model Llama 2 and Meta AI, a chatbot that can generate text responses and photorealistic images.

The restructuring also comes as Meta is nearing the conclusion of its "year of efficiency," as CEO Mark Zuckerberg referred to it during a February earnings call.

CNBC reports that this drive has resulted in a flurry of layoffs, team mergers, and reorganizations.

The Private Sector on AI Regulation

The company's move reportedly comes at a time when voluntary commitments to AI safety have been "all the rage," following the signing of such a commitment by a group of venture capital (VC) firms last week.

The pledge, unveiled last week, focuses on five basic points: a commitment to responsible AI, sufficient openness and documentation, risk and reward projection, auditing and testing, and a feedback cycle with continual improvements.

"The VC-signed voluntary agreement is meant to demonstrate leadership from the private sector around controlling for AI's risks, but it has sparked a debate among AI founders, with some in the AI field even pulling out of scheduled meetings with VCs," PYMNTS noted in its report.

As regulators and other authorities become more concerned about the potential downsides of AI, leading companies in the area have made ensuring its safety a stated goal. In July, Anthropic, Google, Microsoft, and OpenAI launched an industry partnership to define safety guidelines as AI improves.

Written by Aldohn Domingo
