A new study from Microsoft and OpenAI has revealed that hacking groups from Russia, China, Iran, and North Korea are using AI tools such as ChatGPT and other large language models (LLMs) to boost their hacking productivity and fraud schemes, prompting the tech giant to ban all state-backed hacking groups from its AI tools.

The study, reportedly the first time an AI company has publicly disclosed cybersecurity concerns about threat actors using AI, identified five threat actors: two linked to China and one each to Russia, Iran, and North Korea.

According to reports, most of the hacking groups used LLMs or OpenAI technologies to create phishing emails, automate programming and coding tasks, and research various subjects. A smaller set of China-linked threat actors was also found to use LLMs for translation and to communicate more effectively with targets.

The study found that Charcoal Typhoon, a China-linked threat actor, used AI to facilitate communication with and translation for targeted individuals or organizations, understand specific technologies, optimize scripting techniques for automation, and streamline operational commands.

(Photo: Justin Sullivan/Getty Images) Microsoft CEO Satya Nadella speaks during the OpenAI DevDay event on November 6, 2023 in San Francisco, California, where OpenAI CEO Sam Altman delivered the keynote address at the first-ever OpenAI DevDay conference.

Salmon Typhoon, another China-linked threat actor, is allegedly using AI to translate technical papers and computing jargon, find coding errors, write malicious code, and better understand various topics in public-domain research.

It was also discovered that Forest Blizzard, a Russian state-sponsored hacking group, used LLMs to research specific satellite capabilities and scripting techniques for complex computer programs. The group's reported victims include organizations of strategic importance to the Russian government, such as those involved in the Russia-Ukraine conflict.

Microsoft claims that Emerald Sleet, a North Korean hacking group, is another threat actor using OpenAI's technologies, in its case to better analyze security holes or vulnerabilities in computer systems and tools. The threat actor also allegedly uses AI to identify organizations and individuals with expertise in defense-related fields or North Korea's nuclear weapons program, help create phishing scams, and script computer programs.

Meanwhile, Iranian threat actor Crimson Sandstorm uses AI to write routines that interfere with and evade antivirus software, produce phishing content such as emails, delete files from directories, and streamline scripting techniques for a range of computer applications.


Microsoft's Response

According to reports, Tom Burt, the head of Microsoft's cybersecurity division, said that the groups discovered were using OpenAI's capabilities for simple tasks, employing AI technologies in the same way everyone else does to boost productivity.

Microsoft has responded by terminating the accounts and assets of all the threat actors above. It has also announced a total ban on state-sponsored hacking groups using its AI tools.

According to Forbes, Microsoft and OpenAI have decided to strengthen their approaches to combating state-sponsored hacking groups by leveraging their respective toolkits. These changes include working with other AI firms, investing in technology to identify threats, and being more transparent about potential AI safety issues.

AI-Assisted Hacking

The new study comes just a few weeks after research titled 'The near-term impact of AI on the cyber threat,' released by the United Kingdom's Government Communications Headquarters (GCHQ), reportedly warned that AI may soon help hackers carry out cyberattacks.

While the research primarily focuses on inexperienced hackers using AI to hone their skills, it also correctly predicted that AI would enhance threat actors' social engineering capabilities.

The research states that generative AI can already be used to enable convincing contact with victims, including the creation of lure documents, without the translation, spelling, or grammar mistakes that are often telltale signs of phishing.


Written by Aldohn Domingo
