There’s no reason to panic over WormGPT

As tools for building AI systems, particularly large language models (LLMs), get easier and cheaper, some are using them for unsavory purposes, like generating malicious code or phishing campaigns. But the threat of AI-accelerated hackers isn’t quite as dire as some headlines would suggest.

The dark web creators of LLMs like “WormGPT” and “FraudGPT” advertise their creations as being able to perpetrate phishing campaigns, generate messages aimed at pressuring victims into falling for business email compromise schemes and write malicious code. The LLMs can also be used to make custom hacking utilities, their creators say, as well as identify leaks and vulnerabilities in code and write scam webpages.

One might assume that this new breed of LLMs heralds a terrifying trend of AI-enabled mass hacking. But there’s more nuance than that.

Take WormGPT, for example. Released in early July, it’s reportedly based on GPT-J, an LLM made available by the open research group EleutherAI in 2021. This version of the model lacks the safeguards built into mainstream chatbots, meaning it won’t hesitate to answer questions those systems would normally refuse, specifically those related to hacking.
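Part of what makes projects like WormGPT possible is that GPT-J’s weights are openly downloadable. For context, here’s a minimal sketch (not WormGPT itself, whose exact modifications are undisclosed) of loading the base model for text generation with Hugging Face’s transformers library, assuming a machine with enough memory for the 6-billion-parameter model:

# A minimal sketch of running the open GPT-J model with Hugging Face transformers.
# This illustrates how freely available the base model is; it is NOT WormGPT,
# whose modifications and training data are undisclosed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "EleutherAI/gpt-j-6B"  # EleutherAI's open 2021 release
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("The quick brown fox", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))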

Two years might not sound like long in the grand scheme of things. But in the AI world, given how fast the research moves, GPT-J is practically ancient history, and it’s certainly nowhere near as capable as today’s most sophisticated LLMs, like OpenAI’s GPT-4.

Writing for Towards Data Science, an AI-focused blog, Alberto Romero reports that GPT-J performs “significantly worse” than GPT-3, GPT-4’s predecessor, at tasks other than coding, including writing plausible-sounding text. So one would expect WormGPT, which is built on it, not to be especially good at generating, say, phishing emails.

That appears to be the case.

This writer couldn’t get their hands on WormGPT, sadly. But fortunately, researchers at the cybersecurity firm SlashNext were able to run WormGPT through a range of tests, including one to generate a “convincing email” that could be used in a business email compromise attack, and publish the results.

In response to the “convincing email” test, WormGPT generated this:

[Screenshot of the email WormGPT generated. Image Credits: WormGPT]

As you can see, there’s nothing especially convincing about the email copy. Sure, it’s grammatically correct — a step above most phishing emails, granted — but it makes enough mistakes (e.g. referring to a nonexistent email attachment) that it couldn’t be copied and pasted without a few tweaks. Moreover, it’s generic in a way that I’d expect would sound alarm bells for any recipient who reads it closely.

The same goes for WormGPT’s code. It’s more or less correct, but fairly basic in its construction, and similar to the cobbled-together malware scripts that already exist on the web. Plus, WormGPT’s code doesn’t take what’s often the hardest part out of the hacking equation: obtaining the necessary credentials and permissions to compromise a system.

Not to mention, as one WormGPT user writes on a dark web forum, that the LLM is “broken most of the time” and unable to generate “simple stuff.”

That might be the result of its older model architecture. But the training data could have something to do with it, too. WormGPT’s creator claims they fine-tuned the model on a “diverse array” of data sources, concentrating on malware-related data. But they don’t say which specific data sets they used.
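For reference, fine-tuning an open model like GPT-J on a custom text corpus is a well-documented process. The sketch below shows its general shape using Hugging Face’s Trainer API; the corpus file and hyperparameters here are hypothetical placeholders, and none of this reflects whatever WormGPT’s creator actually did.

# A hedged sketch of causal-LM fine-tuning with Hugging Face's Trainer API.
# "corpus.txt" and all hyperparameters are placeholders, not WormGPT's real setup.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_id = "EleutherAI/gpt-j-6B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token  # GPT-J ships without a pad token
model = AutoModelForCausalLM.from_pretrained(model_id)

# Load a plain-text corpus and tokenize it into training examples.
dataset = load_dataset("text", data_files={"train": "corpus.txt"})["train"]
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gptj-finetuned",
                           per_device_train_batch_size=1, num_train_epochs=1),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()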

Barring further testing, there’s really no way of knowing to what extent WormGPT was fine-tuned, or whether it was fine-tuned at all.

The marketing around FraudGPT instills even less confidence that the LLM performs as advertised.

On dark web forums, FraudGPT’s creator describes it as “cutting-edge,” claiming the LLM can “create undetectable malware” and uncover websites vulnerable to credit card fraud. But the creator reveals little about the LLM’s architecture besides that it’s a variant of GPT-3; there’s not much to go on besides the hyperbolic language.

It’s the same sales move some legitimate companies are pulling: slapping “AI” on a product to stand out or get press attention, preying on customers’ ignorance.

In a demo video seen by Fast Company, FraudGPT is shown generating this text in response to the prompt “write me a short but professional sms spam text i can send to victims who bank with Bank of America convincing them to click on my malicious short link”:

“Dear Bank of America Member: Please check out this important link in order to ensure the security of your online bank account”

Not very original or convincing, I’d argue.

Setting aside the fact that these malicious LLMs aren’t very good, they’re not exactly widely available. The creators of WormGPT and FraudGPT are allegedly charging tens to hundreds of dollars for subscriptions to the tools and prohibiting access to the codebases, meaning that users can’t look under the hood to modify or distribute the models themselves.

FraudGPT became even harder to obtain recently after the creator’s threads were removed from a popular dark web forum for violating its policies. Now, users have to take the added step of contacting the creator through the messaging app Telegram.

The takeaway is that LLMs like WormGPT and FraudGPT might generate sensationalist headlines (and, yes, malware), but they definitely won’t cause the downfall of corporations or governments. Could they in theory enable scammers with limited English skills to generate targeted business emails, as SlashNext’s analysis suggests? Perhaps. But more realistically, they’ll at most make a quick buck for the folks (scammers?) who built them.