Security - August 16, 2025

Anthropic Chatbot Cracks Down on Weapons Synthesis While Loosening Lobbying Restrictions


In response to escalating concerns about the misuse of artificial intelligence, Anthropic, a leading AI startup, has tightened the usage policy of its chatbot Claude. The updated policy aims to prevent the chatbot from being used to synthesize or develop high-yield explosives and weapons of mass destruction, whether biological, chemical, radiological, or nuclear.

Previously, the terms and conditions prohibited the design of weapons, explosives, dangerous materials, or systems intended to cause harm. However, this is the first time the policy includes such granular detail, as highlighted by The Verge.

At the same time, Anthropic has eased restrictions in other areas. The company has revised its blanket ban on Claude generating any lobbying or campaign content. The new rules prohibit only use cases that are deceptive, disruptive to democratic processes, or involve voter and campaign targeting. The change is intended to foster "legitimate political discourse."

Additionally, Anthropic has introduced terms to prevent its tools from being used for cyberattacks or the creation of malware.

While there are no known instances of terrorists utilizing publicly released chatbots to construct biological, chemical, radiological, or nuclear weapons, research has underscored the potential for large language models (LLMs) to be used for such purposes.

In April 2025, security researchers at HiddenLayer suggested that it was possible to bypass safeguards in mainstream LLMs from OpenAI, Anthropic, Meta, and Google to produce guides on enriching uranium—a key component in nuclear weapon production. Although the chatbots did not provide information unavailable on the internet, they presented the data in a user-friendly format that could potentially be easier for individuals without a technical background to comprehend.

Meanwhile, an academic paper published in 2024 by researchers from Northwestern and Stanford, as reported by Time Magazine, suggested that although current AI models are unlikely to substantially contribute to biological risk, future systems might aid in engineering new pathogens capable of causing pandemics.

Foreign powers such as China have also allegedly employed chatbots for offensive purposes, even if indirectly, for example by using ChatGPT to write and translate political propaganda for international audiences.

As a reporter covering technology news, I am committed to exploring the intersection of technology and human lives. Prior to joining this publication, I earned bylines in outlets including BBC News, The Guardian, The Times of London, The Daily Beast, Vice, Slate, Fast Company, The Evening Standard, The i, TechRadar, and Decrypt Media.

My passion for technology was ignited during the era when games had to be installed from multiple CD-ROMs manually. As a reporter, I am dedicated to understanding and reporting on how technology shapes our lives, covering a broad spectrum of topics from cryptocurrency scandals to the art world, conspiracy theories, politics, and international affairs.