
Sophos anticipates AI-based attack techniques and prepares detections

Sophos, a global leader in delivering cybersecurity as a service, today released two reports on the use of AI in cybercrime. The first report—“The Dark Side of AI: Large-Scale Scam Campaigns Made Possible by Generative AI”—demonstrates how scammers could, in the future, leverage technology like ChatGPT to conduct fraud on a massive scale with minimal technical skill.

The second report, titled “Cybercriminals Can’t Agree on GPTs,” found that, despite AI’s potential, some cybercriminals are skeptical, and even concerned, about using large language models (LLMs) like ChatGPT in their attacks rather than embracing them.

The Dark Side of AI

Using a simple e-commerce template and LLM tools like GPT-4, Sophos X-Ops was able to build a fully functioning website with AI-generated images, audio, and product descriptions, as well as a fake Facebook login and fake checkout page to steal users’ login credentials and credit card details. The website required minimal technical knowledge to create and operate, and, using the same tool, Sophos X-Ops was able to create hundreds of similar websites in minutes with one button.

“It’s natural—and expected—for criminals to turn to new technology for automation. The original creation of spam emails was a critical step in scamming technology because it changed the scale of the playing field. New AIs are poised to do the same; if an AI technology exists that can create complete, automated threats, people will eventually use it. We have already seen the integration of generative AI elements in classic scams, such as AI-generated text or photographs to lure victims.

“However, part of the reason we conducted this research was to get ahead of the criminals. By creating a system for large-scale fraudulent website generation that is more advanced than the tools criminals are currently using, we have a unique opportunity to analyze and prepare for the threat before it proliferates,” said Ben Gelman, senior data scientist, Sophos.

Cybercriminals Can’t Agree on GPTs

For its research into attacker attitudes toward AI, Sophos X-Ops examined four prominent dark web forums for LLM-related discussions. While cybercriminals’ AI use appears to be in its early stages, threat actors on the dark web are discussing its potential for social engineering. Sophos X-Ops has already witnessed the use of AI in romance-based crypto scams.

In addition, Sophos X-Ops found that the majority of posts were related to compromised ChatGPT accounts for sale and “jailbreaks”—ways to circumvent the protections built into LLMs so cybercriminals can abuse them for malicious purposes. Sophos X-Ops also found ten ChatGPT derivatives that their creators claimed could be used to launch cyberattacks and develop malware. However, threat actors had mixed reactions to these derivatives and other malicious applications of LLMs, with many criminals expressing concern that the creators of the ChatGPT imitators were trying to scam them.

“While there’s been significant concern about the abuse of AI and LLMs by cybercriminals since the release of ChatGPT, our research has found that, so far, threat actors are more skeptical than enthused. Across two of the four dark web forums we examined, we found only 100 posts on AI. Compare that to cryptocurrency, where we found 1,000 posts for the same period.

“We did see some cybercriminals attempting to create malware or attack tools using LLMs, but the results were rudimentary and often met with skepticism from other users. In one case, a threat actor, eager to showcase the potential of ChatGPT, inadvertently revealed significant information about his real identity. We even found numerous ‘thought pieces’ about the potential negative effects of AI on society and the ethical implications of its use. In other words, at least for now, it seems that cybercriminals are having the same debates about LLMs as the rest of us,” said Christopher Budd, director, X-Ops research, Sophos.

For more about AI-generated scam websites and threat actors’ attitudes toward LLMs, read “The Dark Side of AI: Large-Scale Scam Campaigns Made Possible by Generative AI” and “Cybercriminals Can’t Agree on GPTs” on Sophos.com.
