Facebook owner Meta Platforms Inc announced on Tuesday that it will democratize access to its large language model to advance artificial intelligence research.

(Photo: KIRILL KUDRYAVTSEV/AFP via Getty Images) The Meta logo on a laptop screen in Moscow on October 28, 2021, as Facebook chief Mark Zuckerberg announced that the parent company's name was being changed to "Meta" to represent a future beyond its troubled social network.

According to Meta, this model will be the first 175-billion-parameter language model to be made accessible to the larger AI research community.

All About OPT-175B

Meta said that large language models are natural language processing (NLP) systems with more than 100 billion parameters. These models are trained on huge and varied volumes of text, and they can generate creative text and solve basic math problems.

Until now, the public has only been able to access these models through paid application programming interfaces (APIs), and full research access has been restricted. Meta believes this restriction is holding back the progress of AI research.

Hence, Meta is committing to open science by sharing the Open Pretrained Transformer (OPT-175B), a 175-billion-parameter language model trained on publicly available data sets.

The release includes the pretrained models and the code needed to train and use them. To ensure integrity and prevent misuse, Meta is launching OPT-175B under a noncommercial license that limits it to research purposes.

Specifically, the language model will be accessible to academic researchers, people affiliated with government and civil society organizations, and researchers at industry laboratories around the world.

"We believe the entire AI community - academic researchers, civil society, policymakers, and industry - must work together to develop clear guidelines around responsible AI in general and responsible large language models in particular, given their centrality in many downstream language applications," the company wrote in their blog.


"Increase Diversity of Voices"

Artificial intelligence technology is widely used around the world and has become a significant area of research and development for social media platforms. Researchers have also focused on how AI can perpetuate societal biases around issues such as gender and race.

They are also concerned that limited access to these models, especially without a more diverse group of people building and operating them, could let large language models reinforce those biases and cause harm.

Meta aims to address this, saying it "hoped to increase the diversity of voices defining the ethical considerations of such technologies" as language models.

The tech giant has run several open-science initiatives to address problems in AI, including the Deepfake Detection Challenge, the Image Similarity Challenge, and the Hateful Memes Challenge. Meta hopes that through this kind of collaboration, the AI community will usher in the "responsible development of AI technologies."

"For AI research to advance, the broader scientific community must be able to work together with cutting-edge models to effectively explore their potential while also probing for their vulnerabilities at the same time," the company said.

Meta has made the open-source code and small-scale pretrained models publicly available, along with a form to request access to the full model and the accompanying research paper.
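The article does not show what working with the release looks like in practice, but here is a minimal sketch of generating text from one of the small-scale pretrained models. It assumes the Hugging Face transformers library and the "facebook/opt-125m" checkpoint identifier, neither of which is named in the article, so treat both as illustrative rather than as Meta's official instructions.

# A rough sketch: load an assumed small OPT checkpoint and generate a short
# continuation of a prompt. The library and model name are assumptions, not
# details taken from the article.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "facebook/opt-125m"  # assumed identifier for a small OPT checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Tokenize a prompt and sample roughly 30 new tokens from the model.
inputs = tokenizer("Open science matters because", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Access to the full 175-billion-parameter model, by contrast, goes through the request form mentioned above rather than a public download.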


This article is owned by Tech Times

Written by Joaquin Victor Tacla
