With generative artificial intelligence (AI) becoming all the rage these days, it's perhaps not surprising that the technology has been repurposed by malicious actors for their own gain, opening avenues for accelerated cybercrime.
According to findings from SlashNext, a new generative AI cybercrime tool called WormGPT has been advertised on underground forums as a way for adversaries to launch sophisticated phishing and business email compromise (BEC) attacks.
“This tool presents itself as a blackhat alternative to GPT models, designed specifically for malicious activities,” security researcher Daniel Kelley said. “Cybercriminals can use such technology to automate the creation of highly convincing fake emails, personalized to the recipient, thus increasing the chances of success for the attack.”
The author of the software has described it as the “biggest enemy of the well-known ChatGPT” that “lets you do all sorts of illegal stuff.”
In the hands of a bad actor, tools like WormGPT could be a powerful weapon, especially as OpenAI's ChatGPT and Google's Bard are increasingly taking steps to combat the abuse of large language models (LLMs) to fabricate convincing phishing emails and generate malicious code.
“Bard’s anti-abuse restrictors in the realm of cybersecurity are significantly lower compared to those of ChatGPT,” Check Point said in a report this week. “Consequently, it is much easier to generate malicious content using Bard’s capabilities.”
Earlier this February, the Israeli cybersecurity firm disclosed how cybercriminals are working around ChatGPT's restrictions by taking advantage of its API, not to mention trading stolen premium accounts and selling brute-force software to hack into ChatGPT accounts by using huge lists of email addresses and passwords.
The fact that WormGPT operates without any ethical boundaries underscores the threat posed by generative AI, permitting even novice cybercriminals to launch attacks swiftly and at scale without the technical wherewithal otherwise required to do so.
Making matters worse, threat actors are promoting “jailbreaks” for ChatGPT, engineering specialized prompts and inputs that are designed to manipulate the tool into generating output that could involve disclosing sensitive information, producing inappropriate content, and executing harmful code.
“Generative AI can create emails with impeccable grammar, making them seem legitimate and reducing the likelihood of being flagged as suspicious,” Kelley said.
“The use of generative AI democratizes the execution of sophisticated BEC attacks. Even attackers with limited skills can use this technology, making it an accessible tool for a broader spectrum of cybercriminals.”
The disclosure comes as researchers from Mithril Security "surgically" modified an existing open-source AI model known as GPT-J-6B to make it spread disinformation and uploaded it to Hugging Face, a public repository, from where it could then be integrated into other applications, leading to what's called LLM supply chain poisoning.
The success of the technique, dubbed PoisonGPT, hinges on the prerequisite that the lobotomized model is uploaded under a name that impersonates a known company, in this case, a typosquatted version of EleutherAI, the company behind GPT-J.
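Because the attack depends on a near-miss of a trusted publisher's name, one defensive measure is to screen model identifiers for lookalike organization names before loading anything. The sketch below is a minimal illustration of that idea, not any vendor's actual tooling; the allowlist and the similarity cutoff are assumptions chosen for the example.

```python
import difflib

# Hypothetical allowlist of publishers an organization has vetted.
TRUSTED_ORGS = ["EleutherAI", "google", "openai"]

def check_model_source(model_id: str, cutoff: float = 0.85) -> str:
    """Classify a Hugging Face-style "org/model" identifier.

    Returns "trusted" for an exact org match, flags near-matches
    (possible typosquats) via difflib's similarity ratio, and
    labels everything else "unknown".
    """
    org = model_id.split("/", 1)[0]
    if org in TRUSTED_ORGS:
        return "trusted"
    close = difflib.get_close_matches(org, TRUSTED_ORGS, n=1, cutoff=cutoff)
    if close:
        return f"possible typosquat of {close[0]}"
    return "unknown"

print(check_model_source("EleutherAI/gpt-j-6b"))  # trusted
print(check_model_source("EleuterAI/gpt-j-6b"))   # possible typosquat of EleutherAI
print(check_model_source("some-random-org/model"))  # unknown
```

A fuzzy name check like this only narrows the window for impersonation; verifying a model's provenance (e.g., pinning a specific revision hash and confirming the publisher out of band) remains the stronger control.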