The US Army is exploring artificial intelligence (AI) to enhance its battle planning capabilities, running experiments in war games built on the military science fiction video game StarCraft II.

Recent experiments have shown promising results, although experts remain cautious about applying the technology in real-world combat.

(Photo : JAAP ARRIENS/AFP via Getty Images)
A member of the US armed forces is seen with a laptop during the NATO Spring Storm exercises in Sakussaare, Estonia on May 20, 2023. The Spring Storm exercise, running from May 15 to 26, 2023, is the largest military exercise of the Estonian Defence Forces (EDF) and involves allied NATO forces. NATO forces in Northern and Central Europe are organized under the Enhanced Forward Presence (eFP), currently under UK leadership.

Testing Commercial AI Chatbots in War Games

Researchers at the US Army Research Laboratory are testing commercial AI chatbots as battlefield advisers within war game simulations. These experiments aim to determine whether AI, specifically OpenAI's technology, can improve battle planning processes. 

OpenAI's GPT-4 Turbo and GPT-4 Vision models, which can process both text and image information, have outperformed older AI agents in simulated scenarios.

In these experiments, the AI chatbots role-play as assistants to military commanders, swiftly proposing courses of action based on information about the battlefield terrain, friendly and enemy forces, and mission objectives.

Despite generating plans quickly, AI advisers built on GPT models have run into problems, including suffering more casualties than the older AI agents they were measured against.

Read Also: AI Favors Nuclear Warfare in War Simulations, Raising Concerns

AI for Military Uses

OpenAI's January 2024 policy update permits certain military applications, such as cybersecurity projects, while still prohibiting the use of its technology to develop weapons or to harm people or property.

However, ethical and legal concerns persist about employing AI advisers in complex real-world conflicts.

The US Department of Defense's AI task force has identified numerous potential military use cases for generative AI, highlighting the growing interest in integrating the technology into defense operations. The US military also recently disclosed that it has used artificial intelligence to identify targets for airstrikes in the Middle East.

Yet experts remain skeptical, citing concerns such as automation bias and questioning whether the technology is ready for high-stakes applications.

Speaking with New Scientist, Josh Wallin of the think tank Center for a New American Security said, "This idea that you're going to use [an AI] that's going to say 'here's your really big strategic plan,' technically that is not feasible right now. And it certainly is not feasible from an ethical or legal perspective."

In Other News

Researchers from the Georgia Institute of Technology, Stanford University, Northeastern University, and the Hoover Wargaming and Crisis Simulation Initiative recently conducted a study that revealed concerning trends in the use of AI for foreign policy decision-making.

The study found that many AI models, including those created by OpenAI, Anthropic, and Meta, showed a tendency to rapidly escalate conflicts, in some cases to the point of deploying nuclear weapons.

Stay posted here at Tech Times.

Related Article: US Military Deploys AI to Target Middle East in Precision Air Strikes, Defense Official Confirmed

Written by John Lopez

ⓒ 2024 TECHTIMES.com All rights reserved. Do not reproduce without permission.
Tags: US Army AI