WormGPT Helps Cybercriminals Launch Sophisticated Phishing Attacks

Hackers Employ WormGPT to Generate Phishing Emails
The development of LLMs has given a predictable boost to phishing emails

SlashNext has noticed that cybercriminals are increasingly using generative AI in their phishing attacks, most recently via a new tool called WormGPT. WormGPT is advertised on hacker forums and can be used to run phishing campaigns and conduct business email compromise (BEC) attacks.

WormGPT Is Massively Used for Phishing

[Image: WormGPT advertisement]
“This tool is a blackhat alternative to the well-known GPT models, designed specifically for malicious activities. Cybercriminals can use this technology to automatically create highly convincing fake emails that are personalized to the recipient, increasing the chances of an attack being successful,” the researchers write.

WormGPT is based on the GPT-J language model, released in 2021. It boasts a range of features, including unlimited character support, chat history saving, and code formatting. Its authors call it “the worst enemy of ChatGPT”, one that allows performing “all sorts of illegal activities.” The creators also claim that the tool was trained on a variety of datasets, with a focus on malware-related data; however, the specific datasets used in training are not disclosed.

[Image: Information about WormGPT training]

After gaining access to WormGPT, the experts ran their own tests. In one experiment, they had WormGPT generate a fraudulent email intended to pressure an unsuspecting account manager into paying a fake invoice.

Is WormGPT Really Effective at Phishing Emails?

SlashNext calls the results “alarming”: WormGPT produced an email that was not only compelling but also quite cunning, “demonstrating the potential for use in sophisticated phishing and BEC attacks”.

[Image: Phishing email created by the researchers]
“Generative AI can create emails with impeccable grammar, increasing their apparent legitimacy and making them less likely to be flagged as suspicious. The use of generative AI greatly simplifies the execution of complex BEC attacks. Even attackers with limited skills can use this technology, making it an accessible tool for a very wide range of cybercriminals,” the experts write.

The researchers also note a trend that their colleagues at Check Point warned about at the beginning of the year: “jailbreaks” for AI chatbots like ChatGPT are still being actively discussed on hacker forums.

Typically, these “jailbreaks” are carefully crafted prompts, composed in a special way. They are designed to manipulate AI chatbots into generating responses that may contain sensitive information, inappropriate content, or even malicious code.

By Vladimir Krasnogolovy

Vladimir is a technical specialist who loves giving qualified advice and tips on GridinSoft's products. He's available 24/7 to assist you with any question regarding internet security.
