Military turn by major AI labs threatens race to the bottom
2024 gave us a first insight into the impact of AI-enabled warfare. A series of important articles published by +972 Magazine and Local Call revealed how Israel's military was employing an AI-powered targeting system called Lavender in Gaza, to devastating effect.
Lavender, which was built on top of years of mass surveillance data, reportedly relied on infrastructure supplied by major US companies, particularly Amazon, Google and Microsoft (a major investor in OpenAI).
Despite sustained worker protests over these revelations, Google has now signalled that it intends to continue facilitating the use of AI for military purposes. As Bloomberg reports, the company has removed an online pledge not to develop AI for use in weaponry.
Google's move is only the latest in a series of reversals by the major US-based AI labs (including OpenAI, Anthropic and Meta), all of which previously disavowed participation in military projects. Several of these companies already hold contracts with the US Department of Defense, as well as with data-focused security contractors like Palantir, Faculty AI and Anduril.
Whether and how the use of AI for military purposes will be governed in future remains unclear. The emerging international alliance of AI safety institutes, which develops new techniques for evaluating AI models, excludes military AI from its remit.
Meanwhile, a long-standing initiative to forge an international treaty on autonomous weapons systems (AWS) is slowly progressing through the institutions of the UN General Assembly. This, however, would not cover the kind of surveillance-powered AI targeting that we have seen employed in Gaza.
Lavender and the related systems employed by the IDF appear to violate even the voluntary statement of principles on military AI promoted by the United States as an alternative to a formal treaty. The absence of international criticism of those systems does not bode well for establishing basic humanitarian standards for the future military use of AI.