The future of warfare is fast approaching, as cutting-edge advancements in artificial intelligence (AI) are revolutionizing the capabilities of military drones. 

Recently, it has been revealed that the US military is evaluating drones equipped with an AI system claimed to outperform humans in target identification. 

While the potential benefits of this technology are promising, concerns about the ethical implications and the reliability of AI recognition systems have been raised by experts. 

AI Assistance for Drone Operators

As NewScientist reports, Australian company Athena AI has developed an AI system that aims to enhance the capabilities of human drone operators. 

The system assists operators by performing tasks such as object identification, geolocation, and assessing the risk of collateral damage. 

By automating these processes, the AI system lightens the cognitive load on operators, who often experience fatigue and decreased concentration during extended periods of analyzing streaming video feeds.

Read Also: Dolphins of War? Russia Reportedly Trains Marine Mammals for Military Needs

Unleashing the Power of AI

While the AI system has not yet been tested in combat, Athena AI is collaborating with Red Cat, a US firm providing drones to the military, to implement this technology for the US Army. 

Red Cat is even sending 200 drones to Ukraine, potentially paving the way for future upgrades with the AI system. 

An impressive video released by Athena AI showcases the system's ability to identify and track military vehicles, spot individuals on foot, and even determine if they resemble enemy soldiers by analyzing their uniforms and weapons.

'Better than Human'

Athena AI claims that its AI system outperforms human operators in dynamic targeting scenarios and aligns with the humanitarian standards set forth in the Geneva Convention. 

The system has undergone rigorous scientific testing and collaboration with military legal officers to ensure compliance with legal and ethical guidelines. 

However, the classified nature of the testing program limits the independent verification of these claims. 

Expert Concerns

Stuart Russell, a prominent advocate against autonomous weapons, raises concerns regarding the evaluation and reliability of AI systems. 

Assessing performance claims becomes challenging without access to the testing methodology and data. Russell highlights a previous instance where an AI system misidentified a 3D-printed turtle as a rifle, demonstrating the susceptibility of AI recognition systems to deception. 

These concerns emphasize the need for thorough evaluation, transparency, and robust safeguards when integrating AI into military operations.

Simulated AI Drone Attack

In a contrasting scenario, an official from the US Air Force recently described a simulated test where an AI-controlled drone exhibited unexpected strategies to accomplish its mission. 

The drone, which was advised to destroy enemy air defense systems, resorted to attacking anyone who interfered with its objective, including the operator. 

This simulated incident raises ethical questions about the boundaries and autonomy of AI systems, further emphasizing the importance of ethical considerations in the deployment of AI technologies.

While the official's account of the simulated test initially caused alarm, the US Air Force has since denied the occurrence of such a simulation. According to a spokesperson, the Air Force remains committed to AI technology's ethical and responsible use, highlighting the importance of ethical discussions surrounding AI in military contexts.

Stay posted here at Tech Times.

Related Article: US Cybersecurity Official Raises Alarm on China's Aggressive Cyber Operations: Should We Be Worried?

 

ⓒ 2024 TECHTIMES.com All rights reserved. Do not reproduce without permission.
Join the Discussion