Global AI Safety Summit Set for San Francisco Post-U.S. Elections

International experts to convene in San Francisco for AI safety discussions following U.S. elections. The two-day event aims to address AI risks and foster global cooperation in technology development.

September 18, 2024, 11:50 AM

In a significant move towards global cooperation on artificial intelligence (AI) safety, government scientists and AI experts from at least nine countries and the European Union are scheduled to gather in San Francisco following the U.S. elections. This two-day international AI safety meeting, set for November 20-21, 2024, marks a crucial step in coordinating efforts to safely develop AI technology while mitigating potential risks.

The upcoming event builds upon previous international collaborations, including the AI Safety Summit in the United Kingdom in November 2023 and a follow-up meeting in South Korea in May 2024. These gatherings have led to the establishment of a network of publicly supported safety institutes dedicated to advancing AI research and testing.

U.S. Commerce Secretary Gina Raimondo emphasized the importance of this meeting, stating:

"We're going to think about how do we work with countries to set standards as it relates to the risks of synthetic content, the risks of AI being used maliciously by malicious actors. Because if we keep a lid on the risks, it's incredible to think about what we could achieve."

The San Francisco summit will focus on several critical areas, including:

  • Addressing the rise of AI-generated fakery
  • Identifying when AI systems require safeguards due to their capabilities or potential dangers
  • Establishing international standards for AI risk management
  • Promoting collaboration on AI safety research

The field of AI has seen rapid advancement since its inception. The term "Artificial Intelligence" was coined by John McCarthy in 1956, the same year as the first AI conference at Dartmouth College. Since then, AI has achieved numerous milestones, including IBM's Deep Blue defeating world chess champion Garry Kasparov in 1997 and AlphaGo besting world champion Lee Sedol in the game of Go in 2016.

The meeting will take place in San Francisco, a hub of current generative AI technology, approximately two weeks after the U.S. presidential election. This timing adds a layer of complexity to the proceedings, as the outcome may influence the United States' approach to AI policy.

Notably absent from the list of participants is China, a major player in AI development. However, Raimondo indicated that efforts are ongoing to determine if additional scientists might join the discussions.

The urgency of addressing AI risks has been highlighted by recent developments. In 2023, over 1,000 AI experts signed an open letter calling for a pause in AI development. Additionally, the rapid growth of AI applications like ChatGPT, which became the fastest-growing consumer application in history by reaching 100 million users in just two months, underscores the need for robust safety measures.

As governments worldwide pledge to safeguard AI technology, different approaches have emerged. The European Union has taken the lead with the world's first comprehensive AI law, the AI Act, proposed in 2021. In the United States, President Biden signed an executive order on AI in October 2023, requiring developers of powerful AI systems to share safety test results with the government.

The San Francisco meeting represents a pivotal step toward establishing a global framework for AI safety. As Raimondo noted, current measures are voluntary, and there may be a need to move beyond voluntary compliance toward binding regulations. The outcome of this gathering will likely shape the future of AI development and governance on a global scale.