Rise of Malicious Black Hat AI Tools That Shift the Nature of Cyber Warfare

Business email compromise (BEC) assaults are being carried out by cybercriminals with the help of generative AI technology. One such tool that they employ is WormGPT, a black-hat substitute for GPT models that are specifically made for harmful purposes.

A SlashNext study claims that WormGPT was trained on a variety of data sources, with an emphasis on malware-related data. It can produce remarkably realistic phony emails and can produce human-like writing based on the input it receives.

Screenshots from a cybercrime form show conversations between malevolent actors about how to utilize ChatGPT to support effective BEC assaults. Even hackers who are not very proficient in the target language can employ gen AI to create an email that looks authentic.

The study team also assessed WormGPT possible hazards, concentrating on BEC assaults in particular. They gave the program instructions to produce an email intended to coerce an unwary account manager into paying a fake invoice.

According to the findings, WormGPT was not only able to use a convincing tone but was also "strategically cunning," which suggests that it was capable of launching complex phishing and BEC operations.

The paper stated, 'It is like ChatGPT but has no ethical boundaries or limitations," and that the creation of tools highlights the threat that generative AI technologies, like WormGPT, represent, even to unskilled hackers.

The study also showed that hackers are constructing "jailbreaks," which are specific prompts meant to trick generative AI interfaces into generating output that might include revealing private data, generating offensive material, or even running malicious code.

A development that can further complicate cyber defense is the creation of custom modules by some determined cybercriminals, which are similar to those used by ChatGPT but intended to aid in the execution of assaults.

"Malicious actors can now launch these attacks at scale at zero cost, and they can do it with much more targeted precision than they could before," Patrick Harr, CEO of SlashNext, adds. "If they aren't successful with the first BEC or phishing attempt, they can simply try again with retooled content."

What Harr refers to as the "polymorphic nature" of attacks—those that can be launched quickly and at no cost to the person or organization initiating them—will result from the application of generative AI. "It is that targeted nature, along with the frequency of attack, which is going to make companies rethink their security posture," according to him.

Fighting Fire with Fire

The emergence of generative AI tools is making cybersecurity operations more challenging, increasing the sophistication of attacks and emphasizing the need for stronger defenses against ever-evolving threats.

Harr believes that the most effective way to combat the threat of malware, phishing, and AI-assisted BEC is by utilizing AI-assisted protection capabilities.

"You're going to have to integrate AI to fight AI, otherwise, you're going to be on the outside looking in and you're going to see continued breaches," according to him. To do that, defense systems based on AI must be trained to find, identify, and eventually stop a complex and quickly changing array of threats created by AI.

"Theres only so many ways you can say the same thing if a threat actor creates an attack and then tells the gen AI tool to modify it," Harr says, citing invoice fraud as an example. "What you can do is tell your AI defenses to take that core and clone it to create 24 different ways to say that same thing." After that, security personnel can use those artificial data clones to train the defensive model inside the firm.

"You can almost anticipate what their next threat will be before they launch it, and if you incorporate that into your defense, you can detect it and block it before it infects," according to him. "This is an example of using AI to fight AI."

According to him, enterprises will eventually have to rely on AI to find these vulnerabilities and, eventually, remediate them because there is no way for humans to stay ahead of the curve without doing so.

Despite being instructed to reject malicious requests, an AI tool used by a Forcepoint researcher in April was persuaded to produce malware for locating and stealing particular documents.

Meanwhile, most businesses are largely unprepared to guard against the vulnerabilities that the emerging technology presents due to developers' enthusiasm for ChatGPT and other Large Language Model (LLM) tools.