Fake AI chatbot apps are all over the internet as the popularity of platforms such as ChatGPT and Google's Bard surges. However, none is more sinister than the hackers' version of the tool, which they call FraudGPT.

According to the latest report, cybercriminals are currently flocking to the tool, which is trained on data focused on malware attacks and phishing scams.

Cybercriminals Train AI Chatbots For Phishing

(Photo: Gerd Altmann from Pixabay) With the popularity of ChatGPT and other AI chatbots, hackers have caught up with their own malware-driven FraudGPT.

If OpenAI has ChatGPT, fraudsters have FraudGPT, a tool specifically created to give scammers and hackers an edge in creating malware.

As Bleeping Computer writes in its report on Tuesday, Aug. 1, the tool was initially launched on July 25. An anonymous individual known as CanadianKingpin12 was seen promoting the AI tool on a hacker forum.

To learn more about the original poster's claims, cybersecurity firm SlashNext launched a probe into the tool's features and the type of language model it uses.

In a private conversation with CanadianKingpin12, SlashNext researchers learned that the hackers were also developing an evil counterpart to Google's AI chatbot, dubbed "DarkBART."

The team also found that the cybercriminals have access to DarkBERT, a South Korean-developed large language model (LLM) designed to combat cybercrime and related incidents.

SlashNext also notes that hackers can easily obtain information about DarkBERT by paying around $3 for academic access. This is alarming given that malware developers are becoming increasingly interested in learning how it was built.

Related Article: Researchers Discover New AI Attacks Can Make ChatGPT, Other AI Allow Harmful Prompts 

What Does DarkBART Do?

While DarkBERT is trained to stop hackers and fraudsters from launching their campaigns on the dark web, DarkBART has the opposite motive. Per Bleeping Computer, the malicious AI chatbot's features include the following:

  • Creating sophisticated phishing campaigns that target people's passwords and credit card details.
  • Executing advanced social engineering attacks to acquire sensitive information or gain unauthorized access to systems and networks.
  • Exploiting vulnerabilities in computer systems, software, and networks.
  • Creating and distributing malware.
  • Exploiting zero-day vulnerabilities for financial gain or systems disruption.


As AI adoption is expected to boom, more cybercriminals are also expected to build tools of their own. Threat actors might expand their operations to other parts of the globe as they develop more malicious versions of the useful AI chatbots we know.

That hackers can closely emulate how ChatGPT works, but for far more ruthless ends, is concerning for everyone.
SlashNext researchers believe that cybercriminals are getting smarter as they adapt to the cybercrime landscape.

Elsewhere, a researcher claims that measuring the carbon footprint of AI is possible, but it will be a difficult process. Since AI is not tangible like a computer, some people believe it does no harm to nature, when in fact a drastic change could affect the entire human race within the next decade.

Read Also:  BEWARE: Facebook Ads Pretending to be AI Apps Are Malware that Steals Sensitive Info

Joseph Henry


ⓒ 2024 TECHTIMES.com All rights reserved. Do not reproduce without permission.