On Wednesday, the UK hosted an AI Safety Summit where representatives of 28 countries, including the US and China, gathered to address potential risks posed by advanced AI systems, reports The New York Times. The event included the signing of "The Bletchley Declaration," which warns of potential harm from advanced AI and calls for international cooperation to ensure responsible AI deployment.
"There is potential for serious, even catastrophic, harm, either deliberate or unintentional, stemming from the most significant capabilities of these AI models," reads the declaration, named after Bletchley Park, the site of the summit and a historic World War II location linked to Alan Turing. Turing wrote influential early speculation about thinking machines.
Rapid advancements in machine learning, including the appearance of chatbots like ChatGPT, have prompted governments worldwide to consider regulating AI. Those concerns led to the summit, which has drawn criticism for its invitation list. Among the tech companies represented were Anthropic, Google DeepMind, IBM, Meta, Microsoft, Nvidia, OpenAI, and Tencent. Civil society groups, such as Britain's Ada Lovelace Institute and the Algorithmic Justice League in Massachusetts, also sent representatives.