Researchers at NYU’s Tandon School of Engineering developed a system that could allow hackers to carry out cyberattacks, override security software and steal personal data by weaponizing artificial intelligence.
The system, called “Ransomware 3.0,” was uploaded by Tandon researchers as a prototype to VirusTotal, an online service for security researchers to test malware. It was then discovered by cybersecurity firm ESET, which identified it as the first ransomware system to be entirely AI-powered and initially believed it was developed by hackers, rather than in a research lab.
“‘Ransomware 3.0’ is the first to showcase an end-to-end automated large language model agent that executes the ransomware attack chain,” Tandon researcher Meet Udeshi told WSN. “Our proof-of-concept shows that this threat is credible and viable, which is significant for the cybersecurity community.”
The Tandon team began research in late July and completed the evaluation by the end of August. Researchers first prompted large language models — AI systems trained on large amounts of content to process human language — to output code customized to each targeted computer system. They then ran the code, which flagged critical material and produced files, logs and information summaries. Depending on their environment, the models were able to identify 63% to 96% of a system’s sensitive or important content.
LLMs are typically trained to comply with ethical policies that prevent them from responding to requests they recognize as malicious. However, researchers used phrasing that framed their prompts as legitimate tasks, so the requests were not flagged as attacks. Using the system, researchers were able to map databases, identify files, steal and encrypt data and write threatening messages to victims.
In a press release, author Md Raz said that ESET’s initial belief that the system was created by hackers is indicative of its viability. Udeshi emphasized that “Ransomware 3.0” is designed to demonstrate the capabilities of AI-automated ransomware in a lab setting, and is not operational ransomware that can be used by criminals.
“The purpose of our research was to alert the community of this new class of threats,” Udeshi said. “So their response motivates us to further study AI-powered cyber-attacks and develop robust defenses.”
Researchers said that the study points to heightened challenges for cybersecurity engineers in the age of AI. By utilizing AI, hackers will be able to target more people with fewer technical resources.
NYU has recently emphasized its goal to bolster its science and technology programs. Starting in fall 2026, the Courant Institute of Mathematical Sciences will expand to encompass computer science programs at Tandon, as well as NYU’s Center for Data Science. The university also recently purchased a 1.1 million-square-foot office in Astor Place specifically to support work in science and technology, and it received federal funding last year to establish a cybersecurity center.
“LLMs are able to make human-like decisions and generate human-like communications, so what previously required a human bad actor can now be automated,” Udeshi said. “We must upgrade traditional defenses for those dynamic threats by incorporating similarly adaptive AI-powered malware.”
Contact Jake Christy at [email protected].