Elon Musk has repeatedly called artificial intelligence a “civilization risk.” Geoffrey Hinton, one of the founders of artificial intelligence research, recently changed his tune, calling AI an “existential threat.” Then there is Mustafa Suleyman, co-founder of DeepMind, a company once backed by Musk. Suleyman has worked in the field for more than a decade and is co-author of the newly released book “The Coming Wave: Technology, Power, and the Twenty-First Century’s Greatest Dilemma.” As one of the most prominent and longest-serving experts in the field, he believes such far-reaching fears are less pressing than others claim, and that, in fact, the challenge from here on is fairly straightforward.

The risks posed by artificial intelligence have been front and center in the public debate in 2023, ever since the technology entered public consciousness and became the subject of intense media attention. “I just think the existential-risk talk is a completely crazy distraction,” Suleyman told MIT Technology Review last week. “There are 101 more practical issues we should all be discussing, from privacy to bias, from facial recognition to online moderation.”

The most pressing issue of all, he said, is regulation. Suleyman is optimistic that governments around the world can regulate artificial intelligence effectively. “I think everyone is panicking that we can’t regulate this,” Suleyman said. “That’s just nonsense. We are fully capable of regulating it. We will apply the same frameworks that have succeeded before.”

His confidence stems in part from the successful regulation of past technologies that were once considered cutting-edge, such as aviation and the Internet. In his view, if commercial flights lacked proper safety protocols, passengers would never trust the airlines, which would hurt the business. Online, consumers can access countless websites, but activities such as selling drugs or promoting terrorism are prohibited, even if not eliminated entirely.

In the interview, however, Will Douglas Heaven pointed out to Suleyman that some observers believe current Internet regulation is flawed and fails to hold big tech companies fully accountable. Specifically, Section 230 of the Communications Decency Act, a cornerstone of current Internet law, provides platforms a safe harbor for content posted by third-party users. It is the foundation on which some of the largest social media companies are built, shielding them from liability for content shared on their sites. In February, the Supreme Court heard two cases that could change the Internet’s legal landscape.

To bring AI regulation to fruition, Suleyman hopes to combine broad international regulation, including the creation of new oversight bodies, with smaller, more granular policies at the “micro level.” The first step all aspiring AI regulators and developers can take is to limit “recursive self-improvement,” the ability of an AI to improve itself. Limiting this specific capability would be a critical first step in ensuring that AI’s future development does not proceed entirely without human oversight.

“You don’t want to let your little AI go off and update its own code without your oversight,” Suleyman said. “Maybe that should even be a licensed activity, you know, like handling anthrax or nuclear material.”

Without engaging with some of the technical details of artificial intelligence, at times down to the actual code involved, lawmakers will have a hard time making their laws enforceable. “It’s about setting boundaries, limits that an AI can’t cross,” Suleyman said.

To ensure this happens, governments should have “direct access” to AI developers so they can verify that whatever boundaries are ultimately established are not crossed. Some of these boundaries should be clearly marked, such as bans on chatbots answering certain questions, or privacy protections for personal data.

Governments around the world are developing artificial intelligence regulations

During a speech at the United Nations on Tuesday, U.S. President Joe Biden struck a similar note, calling on world leaders to work together to mitigate the “enormous dangers” of artificial intelligence while ensuring it is used “for good.”

At home, Senate Majority Leader Chuck Schumer (D-N.Y.) has urged lawmakers to move quickly to regulate artificial intelligence, given the rapid pace of technological change. Last week, Schumer invited senior executives from the largest technology companies, including Tesla CEO Musk, Microsoft CEO Satya Nadella, and Alphabet CEO Sundar Pichai, to a meeting in Washington to discuss the future of AI regulation. Some lawmakers were skeptical of the decision to invite Silicon Valley executives to discuss the policies that would govern their own companies.

The European Union was among the first governing bodies to move to regulate artificial intelligence. Its draft legislation would require developers to share the data used to train their models and would severely restrict the use of facial recognition software, something Suleyman also says should be restricted. A Time report found that OpenAI, the company behind ChatGPT, lobbied EU officials to weaken parts of the proposed legislation.

China is also among the first countries to pursue comprehensive artificial intelligence legislation. In July, the Cyberspace Administration of China issued interim measures governing artificial intelligence, including explicit requirements to comply with existing copyright law and rules determining which types of development require government approval.

Suleyman firmly believes that governments have a key role to play in the future of AI regulation. “I love nation-states,” he said. “I believe in the power of regulation. What I’m calling for is for nation-states to take action to solve this problem. Given how much is at stake, now is the time to act.”
