Human Rights Watch (HRW) has raised concerns over the potential use of generative AI in investigations, arguing that the technology poses a threat to human rights documentation and fact-finding.

Generative AI is designed to create new and original content such as text, images, and videos by using complex algorithms and neural networks to analyze and learn from large datasets.

"But there's also a real threat to human rights investigations with generative AI. Advances in this technology mean it's now possible to create believable content quickly and easily, which is undoubtedly going to create new challenges for human rights groups that collect evidence to document abuses and hold injustices to account," HRW writes.

(Photo: JOHN MACDOUGALL/AFP via Getty Images)
A man affixes the logo of US-based rights group Human Rights Watch to the wall while preparing the room before a press conference to release the organization's annual World Report on January 21, 2014, in Berlin.

The Generative AI Race

Generative AI is a rapidly developing field with many potential applications, from creative arts to marketing and advertising and from scientific research to social media and chatbots.

Major corporations like Google, Amazon, and Baidu have shown interest in this technology, which has also been integrated into Microsoft's search engine, Bing.

However, HRW has warned that generative AI poses a threat to such investigations, given that its models rely on training from vast amounts of data.

As per the organization, generative AI models undergo training on extensive datasets, often collected from diverse internet sources, without adequate screening or regulation.

Some models, such as OpenAI's GPT-4, do not publicly disclose their training data or procedures, raising concerns that the technology could reinforce existing problems: presenting subjective viewpoints as objective facts, producing misleading videos or images, and generating biased content.

"The work of Human Rights Watch's Digital Investigations Lab and their ability to fact-check and verify content is going to be increasingly important in a future of generative AI, as fake or misleading information, including very believable photos and videos generated by AI, circulate online," HRW notes.


Privacy, Data Security Issues

Human Rights Watch has pointed out several concerns regarding the use of generative AI technology. One of them is the issue of privacy and data security. The organization has highlighted that everything fed into generative AI models is presumed to be utilized for training and enhancing the system. 

As a result, companies across multiple industries are advising employees to refrain from submitting sensitive or personal information to generative AI systems.

HRW adds that there is currently insufficient information to determine the extent to which personal data is being used and linked to individual identities in generative AI systems.

Hence, there is a pressing need for tech companies to provide clear responses on how they intend to uphold privacy rights in relation to generative AI.

HRW warns that the generative AI race is on, with the commercial interests of powerful tech companies and individuals at stake, raising important questions about corporate power and accountability.

The organization urges caution in using generative AI technology and stresses the need for transparency, accountability, and respect for human rights. 



ⓒ 2024 TECHTIMES.com All rights reserved. Do not reproduce without permission.