We need a political Alan Turing to design AI safeguards

The author is the founder of Sifted, a website about European start-ups backed by the Financial Times

While working at Bletchley Park during World War II, Alan Turing helped solve a diabolical problem: cracking Nazi Germany’s “unbreakable” Enigma code. Next month, the British government will host an international conference at the same country house in Buckinghamshire to explore a similarly puzzling issue: minimizing the potentially catastrophic risks of artificial intelligence. Yet even a brilliant mathematician like Turing would be tested by this challenge.

The electromechanical devices Turing built could only perform one code-breaking function well, while today’s cutting-edge artificial intelligence models are approaching the “universal” computers he could only imagine, capable of performing many more functions. The dilemma is that technologies that enhance economic productivity and scientific research may also exacerbate cyberwarfare and bioterrorism.

It’s clear from the heated public debate that has erupted since OpenAI released its ChatGPT chatbot last November that the scope of concerns raised by artificial intelligence is rapidly expanding.

On the one hand, “safety” advocates extrapolate from recent advances in artificial intelligence and focus on extreme risks. In an open letter earlier this year, dozens of the world’s leading AI researchers, including the chief executives of OpenAI, Anthropic, and Google DeepMind, the companies developing the most powerful models, declared: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

On the other hand, “ethics” advocates focus on present-day concerns about algorithmic bias, discrimination, disinformation, copyright, workers’ rights, and the concentration of corporate power. Some researchers, such as University of Washington professor Emily Bender, argue that debates about the existential risks of artificial intelligence are science-fiction fantasies designed to distract from today’s harms.

Some civic groups and small tech companies, feeling left out of official proceedings at Bletchley Park, are organizing fringe events to discuss issues they feel are being ignored.

Matt Clifford, a British tech investor who is helping set the agenda for the AI Safety Summit, acknowledged that it will address only a narrow set of problems. But he believes other forums and institutions are already grappling with many of the rest. “We chose a narrow focus, not because we didn’t care about everything else, but because it was the part that felt urgent, important and overlooked,” he told me.

In particular, he said, the summit will explore the possibilities and dangers of the next generation of cutting-edge models that could be released within the next 18 months. Even the creators of these models struggle to predict their capabilities. But they are confident the models will be more powerful than today’s products and, by default, available to millions of people.

As Anthropic CEO Dario Amodei outlined in chilling testimony to the US Congress in July, the development of more powerful artificial intelligence models could revolutionize scientific discovery, but it could also “dramatically expand the range of people capable of wreaking havoc.” Without proper guardrails, he said, there could be a grave risk of a “large-scale biological attack.”

Despite industry resistance, it is hard to avoid the conclusion that the precautionary principle must now apply to cutting-edge AI models, given the unknowability of their capabilities and the speed of their development. That is the view of Yoshua Bengio, a pioneering artificial intelligence researcher and winner of the Turing Award in computer science, who will attend the Bletchley Park conference.

Bengio suggested that cutting-edge AI models could be regulated in the same way that the U.S. Food and Drug Administration regulates the release of drugs to stop the sale of junk therapies. This may slow the pace of innovation and cost tech companies more money, but, “that’s the price of safety and we should not hesitate to do it,” he said in an interview for the Financial Times’ upcoming Tech Tonic podcast series.

To its credit, the UK government is starting a global conversation about AI safety and is itself building expert national capacity to deal with cutting-edge models. But Bletchley Park means nothing unless it leads to meaningful coordinated action. In a world filled with so many dangers, it takes a political Turing rather than a technical Turing to crack the code.
