On Monday, September 9, 2024, South Korea opened a global summit in Seoul on the responsible application of artificial intelligence in military contexts. The event, co-hosted by the Netherlands, Singapore, Kenya, and the United Kingdom, drew representatives from more than 90 nations, including the United States and China.
The summit's primary objective is to produce a blueprint for the ethical use of AI in military operations, though any resulting agreement is not expected to carry legally binding enforcement mechanisms.
This gathering is the second of its kind, following the inaugural summit held in The Hague in 2023. At that event, participating nations endorsed a modest "call to action" without legal commitments. The current summit aims to build on that foundation by establishing minimum safeguards and proposing principles for responsible AI use in military domains.
A senior South Korean government official, speaking anonymously, stated:
"The summit is expected to yield a blueprint for action, establishing a minimum level of guard-rails for AI in the military and suggesting principles of responsible use of AI in the domain. There are already principles laid out by NATO, by the U.S. or multiple other countries, so we tried to find the converging area and reflect that in this document."
The summit's organizers have emphasized the importance of multi-stakeholder discussions in a field where technological advances are driven primarily by the private sector while governments remain the key decision-makers.
Approximately 2,000 people worldwide have registered to participate in the summit, including representatives from international organizations, academia, and the private sector. Discussions will cover topics ranging from civilian protection to the potential use of AI in the control of nuclear weapons.
This summit is not the only international effort addressing AI in military contexts. The United Nations, through the 1983 Convention on Certain Conventional Weapons (CCW), is exploring potential restrictions on lethal autonomous weapons systems, with a focus on compliance with international humanitarian law.
Separately, in 2023 the U.S. government launched a political declaration on the responsible military use of AI, covering applications beyond weapons systems. As of August 2023, 55 countries had endorsed the declaration.
These parallel discussions and initiatives reflect the growing role of AI in military technology and the need for international cooperation to ensure its responsible development and use. As the technology advances, nations face mounting pressure to agree on ethical guidelines that balance innovation with safety and humanitarian concerns.