A new study by researchers in the United Kingdom indicates that OpenAI's ChatGPT exhibits a liberal bias, underscoring the ongoing challenge artificial intelligence (AI) companies face in managing chatbot behavior as their products are deployed to millions of users worldwide.

(Photo: JOEL SAGET/AFP via Getty Images) This illustration photograph, taken with a macro lens, shows the OpenAI company logo reflected in a human eye at a studio in Paris on June 6, 2023. ChatGPT is a conversational artificial intelligence application developed by OpenAI.

ChatGPT Shows Systematic Bias in Political Responses

In the study, led by researchers at the University of East Anglia, ChatGPT was asked to respond to a survey on political viewpoints.

The Washington Post reported that this part of the research aimed to capture how backers of liberal parties in the United States, the United Kingdom, and Brazil might answer those questions.

ChatGPT was then directed to answer the same questions without any specific guidance, allowing the two sets of responses to be compared.

The outcome revealed a "significant and systematic political bias toward the Democrats in the US, Lula in Brazil, and the Labour Party in the UK," according to the researchers, referring to Brazil's leftist President Luiz Inacio Lula da Silva.

While ChatGPT claims to lack political opinions or beliefs, the study reveals a different story, cautioned Fabio Motoki, a co-author of the research and a lecturer at the University of East Anglia in Norwich, England.

He noted that hidden biases exist in ChatGPT, raising concerns that the technology could erode public trust and even influence election outcomes.

Testing ChatGPT

To test ChatGPT's political neutrality, the researchers devised a distinctive method: they asked the AI chatbot to impersonate people with different political views and answer more than 60 questions about ideological beliefs.

Then, they compared these answers to ChatGPT's default responses to the same questions. This helped them see how much political leanings influenced the chatbot's answers.
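The study's actual prompts and code are not reproduced in the article, but the setup can be sketched. The snippet below is a minimal illustration using the OpenAI Python SDK; the model name, persona wording, and survey question are hypothetical placeholders, not the researchers' materials.

```python
# Minimal sketch of the impersonation-versus-default comparison described
# above. Model, persona, and question are hypothetical; this is not the
# study's code.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

QUESTION = "The government should do more to redistribute income. Agree or disagree?"

def ask(question: str, persona: str | None = None) -> str:
    """Pose a survey question, optionally while impersonating a persona."""
    messages = []
    if persona:
        messages.append({
            "role": "system",
            "content": f"Answer the next question as if you were {persona}.",
        })
    messages.append({"role": "user", "content": question})
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=messages,
    )
    return response.choices[0].message.content

default_answer = ask(QUESTION)                        # no political guidance
partisan_answer = ask(QUESTION, "a Democrat voter")   # impersonated stance
```

Comparing many such paired answers across questions is what lets the default responses be located relative to the impersonated ones.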

The researchers also had to contend with the inherent unpredictability of the "large language models" that power AI platforms such as ChatGPT, Phys.org reported, and they employed a strategic approach to do so.

They posed each question 100 times and collected the varying responses. These responses then underwent a 1,000-round "bootstrap" resampling procedure, strengthening the reliability of the conclusions drawn from the generated text.
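To make the repetition-plus-bootstrap step concrete, here is a minimal standard-library sketch. It assumes each of the 100 answers to a question has already been converted to a numeric agreement score, a hypothetical scoring step the article does not detail.

```python
# Minimal sketch of a 1,000-round bootstrap over 100 scored responses.
# The numeric scoring of answers is assumed, not taken from the study.
import random
import statistics

def bootstrap_interval(scores: list[float], rounds: int = 1_000) -> tuple[float, float]:
    """Resample scores with replacement; return a 95% interval for the mean."""
    means = []
    for _ in range(rounds):
        resample = random.choices(scores, k=len(scores))  # sample with replacement
        means.append(statistics.mean(resample))
    means.sort()
    return means[int(0.025 * rounds)], means[int(0.975 * rounds)]

# Placeholder data: 100 answers to one question, scored on a 1-5 agreement scale
scores = [min(5.0, max(1.0, random.gauss(3.2, 0.8))) for _ in range(100)]
low, high = bootstrap_interval(scores)
print(f"95% bootstrap interval for the mean score: [{low:.2f}, {high:.2f}]")
```

Repeating each question and resampling in this way separates a consistent lean in the model's answers from run-to-run noise.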


AI Provides Biased Responses

Sky News reported that ChatGPT is trained on an enormous body of text drawn from across the internet and beyond. The researchers said potential biases in this dataset might influence the chatbot's answers.

Another possible factor reportedly lies in the algorithm itself: the way it is programmed to respond. The researchers suggest this could amplify biases already present in the data it was trained on.

Similarly, research by the University of Washington, Carnegie Mellon University, and Xi'an Jiaotong University found that AI language models like ChatGPT and GPT-4 exhibit political biases.

They posed questions on topics such as feminism and democracy and found that ChatGPT and GPT-4 lean left-wing libertarian, while Meta's LLaMA leans right-wing authoritarian. Training models on biased data affects their behavior, including their ability to identify hate speech and misinformation.


Written by Inno Flores

ⓒ 2024 TECHTIMES.com All rights reserved. Do not reproduce without permission.