
Anthropic tightens AI use policy to ban weapons development and restrict cyberattack uses
Anthropic has updated the usage policy for its Claude AI assistant, explicitly banning the use of its models for developing biological, chemical, nuclear, radiological, or high-yield explosive weapons. The policy also bars using Claude to create malware, exploit computer vulnerabilities, or launch cyberattacks. These changes respond to growing concerns about misuse of agentic AI tools such as Claude Code and Computer Use, which can act directly on user systems and developer terminals. At the same time, Anthropic eased restrictions on political uses, permitting them as long as they are non-deceptive and do not undermine democratic processes.
As agentic AI tools gain more direct access to systems and terminals, explicit safeguards like these become increasingly important for preventing misuse and supporting responsible deployment.