The US artificial intelligence (AI) firm Anthropic is looking to hire a chemical weapons and high-yield explosives expert to help prevent catastrophic misuse of its software. In other words, it fears that its AI tools might tell someone how to make chemical or radioactive weapons, and it wants an expert to ensure its guardrails are sufficiently robust. In its LinkedIn recruitment post, the firm states that applicants should have at least five years of experience in chemical weapons and/or explosives defense, as well as knowledge of radiological dispersal devices, also known as dirty bombs.
Anthropic is not the only AI firm adopting this strategy. A similar position has been advertised by ChatGPT developer OpenAI, which is offering a salary of up to $455,000 (£335,000) for a researcher in biological and chemical risks.
However, some experts are alarmed by the risks of this approach, warning that it means supplying AI tools with detailed information about weapons, even if they are instructed not to share it. Dr. Stephanie Hare, a tech researcher, questioned whether it is safe for AI systems to handle such sensitive information at all.
The issue has gained urgency as the US government increasingly calls on AI firms to support military operations. Anthropic is also taking legal action against the US Department of Defense after it designated the firm a supply chain risk over its insistence that its systems not be used for autonomous weapons or mass surveillance.