AI-Powered Ransomware Discovered Using OpenAI Model

A new kind of threat has emerged in the cybersecurity landscape: cybersecurity firm ESET has reported the first known AI-powered ransomware, dubbed PromptLock. The malware uses OpenAI’s gpt-oss:20b, an open-weight model released earlier this month that anyone can download, run, and adapt locally.
ESET discovered PromptLock through samples uploaded to VirusTotal, a service owned by Google that catalogs malware and checks files for potential threats. The ransomware runs the gpt-oss:20b model locally on an infected device to generate malicious code using hardcoded text prompts.
Once activated, PromptLock executes malicious Lua scripts, which are cross-platform and run on Windows, Linux, and macOS. According to ESET, these scripts may exfiltrate data, encrypt it, or potentially destroy it.
However, current findings suggest that PromptLock is a proof of concept or work in progress rather than an operational attack: the file-destruction functionality has not yet been implemented. A security researcher has also claimed ownership of PromptLock on Twitter.
At 13GB, the gpt-oss:20b model is large enough to strain a GPU’s video memory, but ESET says the attack remains viable. The attacker does not need to download the entire model onto the victim’s machine; instead, they can establish a proxy or tunnel from the compromised network to a remote server that runs the model and is reachable through the Ollama API.
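To illustrate the mechanism ESET describes, here is a minimal sketch of how a client would request text generation from a model served over the Ollama API. The `/api/generate` route and its `model`, `prompt`, and `stream` fields follow Ollama's documented REST interface; the server address and prompt are hypothetical placeholders, and the actual network call is only described in a comment so the sketch runs without a live server.

```python
import json

# Hypothetical address of a tunneled server running gpt-oss:20b under Ollama.
OLLAMA_URL = "http://192.0.2.10:11434/api/generate"

def build_generate_request(model: str, prompt: str) -> bytes:
    """Serialize a non-streaming generation request for the Ollama API."""
    payload = {
        "model": model,    # e.g. "gpt-oss:20b"
        "prompt": prompt,
        "stream": False,   # ask for a single JSON response instead of a stream
    }
    return json.dumps(payload).encode("utf-8")

body = build_generate_request("gpt-oss:20b", "Example prompt text.")
# In a real client, this body would be POSTed to OLLAMA_URL, for instance with
# urllib.request.Request(OLLAMA_URL, data=body,
#                        headers={"Content-Type": "application/json"}),
# and the model's reply read from the "response" field of the returned JSON.
```

The point of the design is that only small HTTP requests and responses cross the tunnel; the heavyweight model stays on the attacker's own hardware.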
ESET emphasizes the importance of sharing such developments within the cybersecurity community. John Scott-Railton, a spyware researcher at Citizen Lab, echoes this sentiment: “We are in the earliest days of regular threat actors leveraging local/private AI. And we are unprepared.”
OpenAI released a statement thanking researchers for their findings and reiterating their commitment to ensuring the safe development of their models. They take steps to reduce the risk of malicious use and continuously improve safeguards to make their models more robust against exploits. Further details on this approach can be found in OpenAI’s model card.
OpenAI previously tested its larger open-weight model, gpt-oss-120b, and concluded that even with fine-tuning it “did not reach High capability in Biological and Chemical Risk or Cyber risk.”
As a professional journalist with over 15 years of experience, I have covered various topics, starting as a schools and cities reporter in Kansas City and joining this publication in 2017.