ChatGPT's latest military use is deciphering conversations between hackers, according to OpenAI's head of security, Matthew Knight, speaking at the Pentagon's Advantage DoD 2024 event. Knight reportedly explained that the chatbot was able to decipher a cryptic conversation within a Russian hacking group, as first reported by the Washington Post.

As Knight explained, deciphering the conversation was a task that even OpenAI's Russian linguist struggled with, but he claims that GPT-4 succeeded. The hackers' conversations were reportedly written in "Russian shorthand internet slang." The demonstration was part of the Pentagon's AI symposium highlighting viable uses of AI in the military.

(Photo : MARCO BERTORELLO/AFP via Getty Images)
A photo taken on October 4, 2023 in Manta, near Turin, shows a smartphone and a laptop displaying the logos of the artificial intelligence OpenAI research laboratory and ChatGPT robot.

Panel discussions at the symposium featured representatives from well-known tech companies besides OpenAI's Knight, including Dr. Scott Papson, Principal Solutions Architect at Amazon Web Services, and Dr. Billie Rinaldi, Responsible AI Division Lead in Microsoft's Strategic Missions and Technologies Division.

The event offered a glimpse into the future uses of AI in the military. One was hinted at by Shyam Sankar, chief technology officer of Pentagon contractor Palantir Technologies. Sankar commented that using ChatGPT as a chatbot is a "dead end," further noting that the technology will likely be used by developers rather than end users.

Read Also: China, Russia Agree to Coordinate AI Use in Military Technology 

GPT-4 Uses in Military Intelligence

This is not the first time GPT-4's ability to decipher cryptic messages has come to light; a Microsoft study claimed that state-backed hackers have long employed similar practices.

The study found that two hacking groups with ties to China are using AI to translate communications with targeted individuals or organizations, as well as computer jargon and technical publications.

AI Military Use Concerns

Industry leaders at the event also warned about AI's dangers, noting that academics are still working to fully understand these systems. Lt. Col. Kangmin Kim of the South Korean Army reportedly issued a warning, citing the possibility of serious harm from hostile attacks targeting artificial intelligence, as well as potentially disastrous mishaps caused by AI malfunction.

He concluded that it is critical to assess AI weapon systems carefully from the outset, and told Pentagon officials that they must address the question of who is responsible when accidents occur.

Kim's warning echoes those of other experts, most notably Deputy Secretary of Defense Kathleen H. Hicks, who said last week that most commercially available AI systems are not yet advanced enough to adhere to government-mandated ethical norms.

This concern was recently underscored by an OpenAI study, which found that experts' access to a research-only version of GPT-4 improved their comprehension of biological hazards by as much as 88% in terms of task accuracy and thoroughness.

It was also discovered that several AI models, including those developed by Anthropic, OpenAI, and Meta, tended to escalate simulated disputes rapidly and occasionally resorted to deploying nuclear weapons.

Related Article: UK Army Recruiters Speed Up Candidate Checks with AI

ⓒ 2024 TECHTIMES.com All rights reserved. Do not reproduce without permission.