WormGPT And FraudGPT Emerge As Scammers Weaponize AI Chatbots To Steal Data

By Tyler Durden

Threat actors are using generative artificial intelligence tools “to automate the creation of highly convincing fake emails, personalized to the recipient, thus increasing the chances of success for the attack,” according to cloud security company SlashNext.

According to findings from SlashNext, bad actors have repurposed technology like OpenAI’s ChatGPT to accelerate cybercrime. The firm discovered a new AI cybercrime tool called WormGPT being advertised in dark web communities as a way to launch sophisticated phishing attacks.

“This tool presents itself as a blackhat alternative to GPT models, designed specifically for malicious activities,” said security researcher Daniel Kelley.

SlashNext’s report shows examples of cybercriminals quickly adopting AI chatbots:

Cybercriminals can use such technology to automate the creation of highly convincing fake emails, personalised to the recipient, thus increasing the chances of success for the attack.

Consider the first image above, where a recent discussion thread unfolded on a cybercrime forum. In this exchange, a cybercriminal showcased the potential of harnessing generative AI to refine an email that could be used in a phishing or BEC [business email compromise] attack. They recommended composing the email in one’s native language, translating it, and then feeding it into an interface like ChatGPT to enhance its sophistication and formality. This method introduces a stark implication: attackers, even those lacking fluency in a particular language, are now more capable than ever of fabricating persuasive emails for phishing or BEC attacks.


Moving on to the second image above, we’re now seeing an unsettling trend among cybercriminals on forums, evident in discussion threads offering “jailbreaks” for interfaces like ChatGPT. These “jailbreaks” are specialised prompts that are becoming increasingly common. They refer to carefully crafted inputs designed to manipulate interfaces like ChatGPT into generating output that might involve disclosing sensitive information, producing inappropriate content, or even executing harmful code. The proliferation of such practices underscores the rising challenges in maintaining AI security in the face of determined cybercriminals. 

Finally, in the third image above, we see that malicious actors are now creating their own custom modules similar to ChatGPT, but easier to use for nefarious purposes. Not only are they creating these custom modules, but they are also advertising them to fellow bad actors. This shows how cybersecurity is becoming more challenging due to the increasing complexity and adaptability of these activities in a world shaped by AI.

The SlashNext team gained access to WormGPT. Here’s what they did with it:

Our team recently gained access to a tool known as “WormGPT” through a prominent online forum that’s often associated with cybercrime. This tool presents itself as a blackhat alternative to GPT models, designed specifically for malicious activities.

WormGPT is an AI module based on the GPT-J language model, which was developed in 2021. It boasts a range of features, including unlimited character support, chat memory retention, and code formatting capabilities.

As depicted above, WormGPT was allegedly trained on a diverse array of data sources, particularly concentrating on malware-related data. However, the specific datasets utilised during the training process remain confidential, as decided by the tool’s author.

As you can see in the screenshot above, we conducted tests focusing on BEC attacks to comprehensively assess the potential dangers associated with WormGPT. In one experiment, we instructed WormGPT to generate an email intended to pressure an unsuspecting account manager into paying a fraudulent invoice. The results were unsettling. WormGPT produced an email that was not only remarkably persuasive but also strategically cunning, showcasing its potential for sophisticated phishing and BEC attacks.
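For context on the base model named in the excerpt: GPT-J is an open-source, six-billion-parameter language model released by EleutherAI in 2021, and its weights are freely downloadable. That openness helps explain how a tool like WormGPT could plausibly be assembled by fine-tuning a public checkpoint. Below is a minimal, purely illustrative sketch of loading the public GPT-J checkpoint with the Hugging Face transformers library; it assumes the standard “EleutherAI/gpt-j-6b” model ID and says nothing about WormGPT’s confidential training data or modifications.

# Illustrative sketch only: loads the public GPT-J 6B checkpoint via
# Hugging Face transformers to show the base model is openly available.
# This has no connection to WormGPT's private datasets or tooling.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6b")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6b")

prompt = "Large language models can"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=25)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Loading the full model takes on the order of 24 GB of memory at full precision, which underscores the report’s point: the barrier to repurposing such models is cost and effort, not access.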

In a separate report, the technology outlet PCMag identified another large language model trained for scamming, called “FraudGPT.”

“The bot can also be programmed to supply intel on the best websites to commit credit card fraud against. In addition, it can provide non-Verified by Visa bank identification numbers to help the user steal credit card access,” PCMag said.

These AI chatbots are similar to OpenAI’s ChatGPT but have had their ethical boundaries and limitations stripped out. Figures ranging from Elon Musk to prominent data scientists and government authorities have voiced concerns about the potential misuse of AI chatbots for malicious purposes.

Source: ZeroHedge

