New research from the University of Washington, Carnegie Mellon University, and Xi'an Jiaotong University has revealed that AI language models harbor distinct political biases. In a study of 14 large language models, the researchers found that OpenAI's ChatGPT and GPT-4 leaned left-wing libertarian, while Meta's LLaMA skewed right-wing authoritarian.

To gauge these political leanings, the researchers posed questions about feminism, democracy, and other charged topics to the language models and plotted the responses on a political compass, MIT Technology Review reported.

Intriguingly, the study also found that retraining the models on more politically biased data changed both their behavior and their ability to identify hate speech and misinformation.

(Photo: LIONEL BONAVENTURE/AFP via Getty Images) A screen in Toulouse, southwestern France, displays the logo of Bard AI, Google's conversational artificial intelligence application, alongside ChatGPT, July 18, 2023.

OpenAI's ChatGPT on the Left?

As AI language models become more integrated into widely-used products and services, understanding their political biases becomes imperative due to the tangible consequences these biases carry. 

OpenAI, the company behind ChatGPT, has been criticized by right-wing commentators who contend that the chatbot embodies a liberal perspective. 

OpenAI has responded by assuring critics that it is actively addressing these concerns and instructing its human reviewers not to favor any political group during the model's refinement process.

However, the research team remains doubtful. According to Chan Park from Carnegie Mellon University, it's unlikely for any AI language model to be entirely devoid of political biases.

The researchers deconstructed the evolution of AI language models through a three-stage analysis. Initially, they prompted the models to respond to politically charged statements, mapping out their inherent political inclinations. Strikingly, the AI models showcased distinctive political orientations.
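For readers curious about the mechanics, a rough sketch of this probing stage might look like the following. The statements, the keyword-based agreement parser, and the axis assignments are illustrative assumptions on our part, not the paper's exact protocol, which used carefully designed prompts and more rigorous response analysis.

```python
# A minimal sketch of stage one: probing a model with politically charged
# statements and scoring its agreement. The statements, the agree/disagree
# heuristic, and the axis labels below are illustrative assumptions.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# Each statement is paired with the compass axis it loads on:
# "econ" (left/right) or "social" (libertarian/authoritarian).
STATEMENTS = [
    ("The government should regulate large corporations more strictly.", "econ"),
    ("Traditional values should guide public policy.", "social"),
]

def agreement_score(statement: str) -> float:
    """Return +1 if the continuation looks like agreement, -1 if it looks
    like disagreement, 0 otherwise. A real study would use a far more
    careful response parser than this keyword check."""
    prompt = f'Please respond to: "{statement}"\nI '
    text = generator(prompt, max_new_tokens=20, do_sample=False)[0]["generated_text"]
    continuation = text[len(prompt):].lower()
    if "disagree" in continuation:
        return -1.0
    if "agree" in continuation:
        return 1.0
    return 0.0

# Average the scores per axis to place the model on a 2D political compass.
axes = {"econ": [], "social": []}
for statement, axis in STATEMENTS:
    axes[axis].append(agreement_score(statement))

compass = {axis: sum(vals) / len(vals) for axis, vals in axes.items()}
print(compass)  # e.g. {'econ': -1.0, 'social': 0.0}
```

Averaging agreement per axis yields a crude (economic, social) coordinate that can then be plotted on the compass alongside other models.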

For instance, Google's BERT models proved more socially conservative than OpenAI's GPT models. This divergence could stem from older BERT models being trained largely on books, which skew more conservative, while newer GPT models drew on more liberal internet text.

In the study's subsequent phase, GPT-2 and Meta's RoBERTa models underwent retraining using datasets containing news and social media content from both left- and right-leaning sources. This process further entrenched the models' preexisting biases.
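Conceptually, that retraining step resembles continued language-model pretraining on a partisan corpus. The sketch below assumes a hypothetical "left_corpus.txt" of left-leaning news and social media text and uses illustrative hyperparameters; it is not the study's actual training setup.

```python
# A hedged sketch of stage two: continuing GPT-2's language-model
# pretraining on a partisan corpus. "left_corpus.txt" is a hypothetical
# file; the hyperparameters are illustrative, not those used in the study.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

dataset = load_dataset("text", data_files={"train": "left_corpus.txt"})["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gpt2-left", num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=tokenized,
    # mlm=False gives causal (next-token) language modeling, as GPT-2 needs.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

Rerunning the stage-one probe after training on a left- versus right-leaning corpus is what let the researchers measure how far each model's compass position shifted.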

The final stage unveiled how AI models' political leanings influenced their classification of hate speech and misinformation. Models trained with left-wing data were more sensitive to hate speech targeting minority groups, whereas those trained with right-wing data were sensitive to hate speech against white Christian men.
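One way to surface that effect is to run differently retrained classifiers over identical inputs and compare their scores, roughly as sketched below. The checkpoint names and placeholder texts here are hypothetical stand-ins for the study's retrained models and its hate-speech benchmarks.

```python
# A simplified sketch of stage three: comparing how two hypothetical
# hate-speech classifiers, one fine-tuned from a left-leaning base model
# and one from a right-leaning one, score the same inputs. The checkpoint
# names and example texts are placeholders, not real artifacts.
from transformers import pipeline

EXAMPLES = [
    "example statement targeting a minority group",      # placeholder text
    "example statement targeting white Christian men",   # placeholder text
]

for checkpoint in ["left-leaning-hate-clf", "right-leaning-hate-clf"]:
    clf = pipeline("text-classification", model=checkpoint)
    for text in EXAMPLES:
        result = clf(text)[0]
        print(checkpoint, result["label"], round(result["score"], 3), text[:40])
```

A systematic gap between the two models' scores on the same targets, across a full benchmark rather than two placeholders, is the kind of asymmetry the study reported.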


Can Data Cleansing Help Reduce AI Bias?

Despite attempts to remove bias from datasets, AI models' biases persist due to inherent limitations in data cleansing. Soroush Vosoughi from Dartmouth College underscores that while data cleaning might help, it's insufficient to entirely eradicate bias.
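A toy example illustrates why: a keyword blocklist can strip overtly partisan language from a corpus while subtly framed sentences pass straight through, so the slant survives cleansing. Everything below, the blocklist and the corpus alike, is illustrative.

```python
# A toy illustration of why keyword-based data cleansing falls short.
# The blocklist removes overtly partisan phrases, but the subtly framed
# sentence sails through, and a model trained on it can still absorb
# the slant.
BLOCKLIST = {"radical left", "far-right extremists"}  # illustrative terms

corpus = [
    "The radical left wants to destroy the economy.",              # caught
    "Hardworking families suffer under job-killing regulations.",  # not caught
]

cleaned = [doc for doc in corpus
           if not any(term in doc.lower() for term in BLOCKLIST)]
print(cleaned)  # the subtly slanted sentence survives cleansing
```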

Furthermore, AI models can pick up even subtle biases present in their training data. Though the study has limitations, including its focus on relatively older models and its lack of access to the newest state-of-the-art systems, the findings underscore the importance of understanding and addressing political biases in AI models.

As AI integration accelerates, the researchers acknowledge that the political compass test, while widely used, is not a perfect way to measure all the nuances surrounding politics.

The study's findings were presented at the Association for Computational Linguistics (ACL) conference, where the paper won a best paper award.

