OpenAI is adjusting its position on the military use of its technology. Without making a public announcement, the company has removed language from its terms of service that specifically prohibited "military and warfare" applications of its technology.

(Photo: KIRILL KUDRYAVTSEV/AFP via Getty Images) A photo taken on November 23, 2023, shows the logo of the ChatGPT application developed by US artificial intelligence research organization OpenAI on a laptop screen and the letters AI on a smartphone screen in Frankfurt am Main, western Germany.

Modifying Policy on Military Use of Technology

OpenAI quietly removed explicit language prohibiting the use of its technology for military purposes from its usage policy. The earlier bans on "weapons development" and "military and warfare" no longer appear in the revised document.

The Verge reported that the change is part of a broader rewrite intended to make the document clearer and more readable. The updated policy still forbids using the service to cause harm to oneself or others, and cites "develop or use weapons" as an example.

OpenAI spokesperson Niko Felix explained that the company aimed to establish universal principles easily applicable in various contexts. 

While the new policy uses the term "harm" without specifying whether it covers military use, any use of OpenAI's technology, including by the military, to develop or use weapons, injure others, or engage in unauthorized activity that violates the security of any service or system remains prohibited.


Heidy Khlaaf, an engineering director at cybersecurity firm Trail of Bits and a machine learning and autonomous systems safety expert, highlighted the shift in emphasis from a clear prohibition on weapons development and military use to a focus on flexibility and compliance with the law. 

Khlaaf raised concerns about the potential risks and harms associated with the use of OpenAI's technology in military applications, emphasizing the known issues of bias and inaccuracies within Large Language Models (LLMs). 

According to The Intercept, she suggested that deploying LLMs in military operations could lead to imprecise and biased outcomes, exacerbating harm and civilian casualties.

Timely, Relevant

The practical implications of the revised policy remain uncertain. In the past, OpenAI had been noncommittal about enforcing its explicit "military and warfare" ban, especially as the Pentagon and U.S. intelligence community expressed heightened interest, as reported by The Intercept last year.

Sarah Myers West, the managing director of the AI Now Institute and a former AI policy analyst at the Federal Trade Commission, pointed out the timing of the decision to remove the terms "military and warfare" from OpenAI's permissible use policy. 

She highlighted the notable use of AI systems in the targeting of civilians in Gaza and expressed concerns about the vague language in the revised policy, raising questions about OpenAI's approach to enforcement.

While OpenAI's current offerings cannot directly cause harm, in military operations or any other context, TechCrunch reported that it is crucial to recognize that military work inherently involves many activities adjacent to the potential use of force.

Although a language model like ChatGPT may not engage in direct combat, there are numerous non-combat tasks on the periphery of lethal actions that such a model could enhance, such as coding or handling procurement orders.


Written by Inno Flores
