OpenAI: ChatGPT company quietly softens ban on using AI for military
The company behind ChatGPT has removed language in its terms and conditions that prohibited the use of its artificial intelligence technology for military purposes.
The AI company’s usage policy initially included a ban on any activity that involves a high risk of physical harm, including “weapons development” and “military and warfare”.
This ruled out the use of the AI technology by the US Department of Defense, for instance, or by any other state military.
But a new update to the company’s AI usage policy appears to soften the language around this ban on military use.
While the policy update retains an injunction not to “use our service to harm yourself or others”, and cites the use of AI to “develop or use weapons” as an example, the initial blanket ban on “military and warfare” use has vanished, in a development first reported by The Intercept.
This unannounced alteration of the company’s AI usage policy is part of a major rewrite of its policy page, which the firm said was an attempt to make the document more readable.
“We’ve updated our usage policies to be more readable and added service-specific guidance,” OpenAI said in a blog post.
The updated version says the software should be used to help maximise “innovation and creativity”, with a high degree of flexibility as long as this is compliant with the law.
“We believe you should have the flexibility to use our services as you see fit, so long as you comply with the law and don’t harm yourself or others,” the updated policy notes.
It remains unclear what the real-world implications of the policy change could be.
OpenAI did not immediately respond to The Independent’s request for comment.
OpenAI appears to be aware of the risks that may arise from the use of its technology for military purposes.
A 2022 study co-authored by OpenAI researchers flags the risks and potential harms of using large language models such as the one behind ChatGPT for warfare.
Previous research has also warned that AI tools like ChatGPT can be tricked into producing malicious code that could be used to launch cyber attacks.
“Given the use of AI systems in the targeting of civilians in Gaza, it’s a notable moment to make the decision to remove the words ‘military and warfare’ from OpenAI’s permissible use policy,” Sarah Myers West, managing director of the AI Now Institute, posted on X.