We often hear about the benefits of AI, from streamlining tasks to automating daily chores. However, a darker side has emerged with reports of hackers harnessing a similar tool called WormGPT. 

This AI variant, akin to ChatGPT, is being used for nefarious purposes, primarily in crafting phishing attacks.

Rise of AI-Enhanced Phishing Attacks

(Photo : NoName_13 from Pixabay)
Hackers are embracing a new hacking tool dubbed WormGPT, which works similarly to OpenAI's ChatGPT.

Phishing attacks use deceptive emails, text messages, and phone calls to lure recipients into downloading malware, divulging sensitive information such as Social Security numbers, login credentials, or credit card details, or taking other actions that expose them to cybercriminals.

What sets WormGPT apart is its specialization in creating Business Email Compromise (BEC) attacks, a form of phishing aimed at exploiting large businesses. These emails are meticulously personalized to deceive recipients into clicking on malicious links.

According to The Sydney Morning Herald, "Last," the Portuguese programmer behind the malicious version of ChatGPT, described it as a tool that allows someone to do "all sorts of illegal stuff and easily sell it online in the future."

Unmasking WormGPT

WormGPT is an AI model based on the 2021 GPT-J language model. Unlike ChatGPT, which operates under OpenAI's supervision and enforces anti-abuse restrictions, WormGPT is open source: it can be inspected, shared, and modified freely, with no safeguards against misuse.

WormGPT possesses various advanced features, including unlimited character support, chat memory retention, and code formatting capabilities. Notably, its ability to write and format code enables it to generate malware.

It's essential to understand that WormGPT's output isn't inherently more sophisticated than a human could produce. The real power lies in its ease of use and rapid generation. WormGPT's accessibility lowers the barriers to entry, allowing virtually anyone to download it and wreak havoc.

Related Article: Anonymous Hackers Disrupt MGM Resorts in Latest Cyberattack

Exploiting ChatGPT and Jailbreaking

In addition to WormGPT, hackers have discovered ways to exploit ChatGPT. According to Popular Mechanics, they employ a process known as "jailbreaking" to unlock new functionalities in existing large-language-model (LLM) platforms, like ChatGPT. These modified models can extract sensitive information, generate inappropriate content, disclose confidential data, and even execute malicious code.

These jailbreaks typically consist of prompts, which users paste into ChatGPT as regular text. Many of these prompts have been shared on GitHub, making them easily accessible to anyone. However, ChatGPT has implemented safeguards against these attacks, with responses like "I'm sorry but I can't assist with that request."

Cybersecurity's Best Defense

According to Maanak Gupta, an assistant professor of computer science at Tennessee Tech, the key to effective cybersecurity is training the workforce to wield generative AI and LLM tools for both defense and offense. Cybersecurity experts adopt an attacker's mindset to anticipate and counter threats proactively, leveraging AI to detect and prevent potential attacks.
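To illustrate the defensive side Gupta describes, the sketch below is a deliberately simple, hypothetical rule-based scorer for suspicious emails. It is not any real product's method, and the phrase list and checks are made-up examples; actual AI-assisted defenses rely on trained models and far richer signals.

```python
import re

# Hypothetical red-flag phrases for illustration only.
SUSPICIOUS_PHRASES = [
    "verify your account",
    "urgent wire transfer",
    "update your payment",
    "click the link below",
]

def phishing_score(email_text: str) -> int:
    """Count simple red flags in an email body (toy heuristic)."""
    text = email_text.lower()
    score = sum(phrase in text for phrase in SUSPICIOUS_PHRASES)
    # Bare IP-address URLs are a classic phishing tell.
    if re.search(r"https?://\d{1,3}(\.\d{1,3}){3}", text):
        score += 1
    return score

sample = "URGENT wire transfer needed - click the link below: http://192.168.0.5/pay"
print(phishing_score(sample))  # prints 3: two flagged phrases plus an IP-based URL
```

A real filter would feed many more signals (sender reputation, link destinations, writing style) into a trained classifier rather than a hand-written list, but the structure, scoring signals and acting on a threshold, is the same idea.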

Despite the rapid production of BEC attacks facilitated by AI, the primary defense against such threats remains user vigilance. The lack of awareness among employees and organizations creates vulnerabilities that adversaries can exploit. Cybersecurity training and awareness play a crucial role in preventing catastrophic data breaches.

Staying alert and exercising caution when interacting with digital content remains the most potent shield against evolving cyber threats.

Read Also: ChatGPT Used to Create Dangerous Data-Stealing Malware, Researcher Claims

Joseph Henry

ⓒ 2024 TECHTIMES.com All rights reserved. Do not reproduce without permission.