Security - August 27, 2025

Generative AI Fueling Evolution of Ransomware: Criminals Leverage Advanced Tools to Develop and Distribute Malware

In the face of a global surge in cybercrime, new research indicates that ransomware is evolving as advanced AI tools become widely available. This evolution manifests in various ways, including the use of AI to draft more menacing and coercive ransom notes, making extortion attacks more effective. And criminals' use of AI is becoming increasingly sophisticated.

Today, researchers from AI company Anthropic have disclosed that attackers are heavily relying on AI—in some cases exclusively—to create actual malware and offer ransomware services to other cybercriminals.

According to a recent threat intelligence report released by Anthropic, ransomware criminals have been identified using its large language model, Claude, and its coding-focused tool, Claude Code, to develop ransomware. This revelation complements separate research this week from security firm ESET, which showcases a proof of concept for a ransomware attack executed entirely by a local large language model (LLM) running on a malicious server.

These findings collectively underscore how AI is propelling cybercrime forward, making it easier for attackers—even those with minimal technical skills or ransomware experience—to execute such attacks. In their report, Anthropic’s threat intelligence team wrote, “Our investigation revealed not merely another ransomware variant, but a transformation enabled by artificial intelligence that removes traditional technical barriers to novel malware development.”

Over the past decade, ransomware has proven to be a persistent problem. Attackers have grown increasingly cunning and relentless, ensuring victims continue to pay up. According to some estimates, the number of ransomware attacks reached record highs at the beginning of 2025, and criminals continue to generate hundreds of millions of dollars annually. As Paul Nakasone, former US National Security Agency and Cyber Command chief, noted at the Defcon security conference earlier this month, “We are not making progress against ransomware.”

The integration of AI into the already treacherous realm of ransomware could further amplify what hackers are capable of. According to Anthropic’s research, a cybercriminal threat actor based in the United Kingdom—tracked as GTG-5004 and active since the start of this year—has been using Claude to “develop, market, and distribute ransomware with advanced evasion capabilities.”

On cybercrime forums, GTG-5004 has been offering ransomware services ranging from $400 to $1,200, providing different tools depending on the package level. Anthropic claims that while GTG-5004’s products encompass a range of encryption capabilities, software reliability tools, and methods designed to help the hackers avoid detection, the developer appears to lack technical proficiency. “This operator does not appear capable of implementing encryption algorithms, anti-analysis techniques, or Windows internals manipulation without Claude’s assistance,” the researchers note.

Anthropic has banned the account linked to the ransomware operation and implemented “new methods” for detecting and preventing malware generation on its platforms. These include using pattern detection known as YARA rules to identify potential malware and malware hashes that may be uploaded to their platforms.
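The hash-matching approach mentioned above is a standard detection technique: compute a cryptographic digest of an uploaded file and compare it against a list of digests from known malware samples. The following is a minimal sketch of that idea in Python, not Anthropic's actual implementation; the sample hash set and function names are illustrative.

```python
import hashlib

# Illustrative set of SHA-256 digests of known-bad samples.
# In practice this would be populated from threat intelligence feeds.
KNOWN_MALWARE_SHA256 = {
    hashlib.sha256(b"known-bad-sample").hexdigest(),  # placeholder entry
}

def is_known_malware(payload: bytes) -> bool:
    """Return True if the payload's SHA-256 digest matches a known-bad hash."""
    return hashlib.sha256(payload).hexdigest() in KNOWN_MALWARE_SHA256

print(is_known_malware(b"known-bad-sample"))  # True
print(is_known_malware(b"benign upload"))     # False
```

Hash matching only catches byte-for-byte copies of known samples, which is why it is typically paired with pattern-based tools like YARA rules that can flag variants sharing structural traits with known malware families.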