Artificial intelligence (AI) will soon help cybercriminals carry out cyberattacks, the United Kingdom's Government Communications Headquarters (GCHQ) reportedly warned in its newly published report, 'The near-term impact of AI on the cyber threat.'

Reuters reports that, given the speed at which AI technologies are developing, the British agency warns the technology will likely increase the volume of cyberattacks worldwide, such as ransomware assaults and phishing scams, by making it easier for less experienced hackers to cause harm online in just the next two years.

(Photo : Mika Baumeister from Unsplash)

Specifically, the report states that AI will mainly improve threat actors' social engineering capabilities. Generative artificial intelligence (GenAI) can already be used to hold convincing exchanges with victims, including by generating lure documents, without the translation, spelling, or grammar mistakes that frequently give phishing away.

This capability will almost certainly grow over the next two years as models develop and usage rises. Thanks to AI's rapid data summarization, threat actors will also probably be able to pick out high-value assets for inspection and exfiltration, increasing the value and impact of cyberattacks over the same period.


AI-Assisted Cyberattacks

With cybercriminals changing their business models to increase efficiency and profits, ransomware remains the most serious cyber threat affecting UK organizations and enterprises.

AI is reportedly already being used in hostile cyber activity and will most likely increase the frequency and severity of cyberattacks and cyber operations involving phishing, coding, and reconnaissance. The report concludes that this trend will likely continue through 2025 and beyond.

AI's advances will consequently make it harder for everyone to tell scams from legitimate practices. The report states that by 2025, thanks to the use of GenAI and large language models (LLMs), phishing, spoofing, and social engineering attempts will be nearly impossible for anybody to recognize, much less to determine whether an email or a password reset request is legitimate.

As a result, it has become harder for network managers to fix known vulnerabilities before they can be exploited. AI will make this dilemma even more urgent by powering faster, more accurate detection tools that find devices with weak cybersecurity measures.

An Expected AI Hacking Development

GCHQ's warning mirrors Google's own forecast from last year, which similarly predicted that LLMs and GenAI would be used in phishing, SMS, and other social engineering attacks to make information, including voice and video, seem more authentic.

ZDNET adds that the report also makes predictions about the future development of LLMs and other generative AI tools offered as commercial services, which would help attackers deploy their assaults more effectively and with less effort.

But since using generative AI to produce content, such as an invoice reminder, is not inherently malicious, attackers can turn legitimate tools to their own purposes, meaning that purpose-built malicious AI or LLMs won't even be strictly necessary.


Written by Aldohn Domingo
