OpenAI reviews policy on use for military purposes

By auroraoddi

OpenAI recently made changes to its usage policy, explicitly removing the prohibition against using its technology for “military and warfare” purposes. This change is being closely watched, especially considering the growing interest of military agencies around the world in using artificial intelligence (AI). However, the real implications of this change have yet to be assessed.

OpenAI usage policy

OpenAI’s usage policy previously prohibited the use of its language models for purposes that could cause harm, including weapons development. The specific mention of “military and warfare,” however, has now been removed from the list of prohibitions. This suggests that OpenAI may be open to collaborating with government agencies, such as the Department of Defense, which typically offer lucrative contracts to outside companies.

Military agencies’ interest in AI

Military agencies’ interest in artificial intelligence is growing, and the change in OpenAI’s usage policy appears to reflect this. It is important to note, however, that OpenAI does not currently offer a product that could directly cause physical harm or kill anyone. Its technology could instead be used for tasks such as writing code or processing procurement orders for items that could be put to military use.

Sarah Myers West, executive director of the AI Now Institute, noted that the removal of the words “military and warfare” from OpenAI’s usage policy is a significant moment, given the use of artificial intelligence systems in targeting civilians in Gaza. The change could raise concerns about OpenAI’s involvement in projects that cause harm or violate human rights.

OpenAI’s rationale

When questioned about the change in usage policy, OpenAI spokesperson Niko Felix said the company aims to create universal principles that are easy to remember and apply, especially given the wide use of its technologies by ordinary users, who can now also build their own GPTs.

Felix emphasized the general principle of “do no harm to others” as a broad but easily understood concept relevant in many contexts, and specified that OpenAI explicitly cites weapons and injury to others as clear examples of what is prohibited. However, the spokesperson did not clarify whether the ban on using the company’s technology to “harm” others covers all types of military use beyond weapons development.