Google Cloud CEO Thomas Kurian speaks at the company’s 2019 cloud computing conference.
London – The head of Google’s cloud computing division told CNBC that the company is in productive early conversations with EU regulators about the bloc’s groundbreaking AI regulations and about how it and others can build AI safely and responsibly.
The internet search pioneer is developing tools to address a range of EU concerns about artificial intelligence, including the worry that it could become harder to distinguish between human-generated and AI-generated content.
“We are having a productive dialogue with EU governments, because we really want to find a way forward,” Thomas Kurian told CNBC in an interview at the company’s London offices.
“These technologies have risks, but they also have a tremendous ability to create real value for people.”
Kurian said Google is working on technology to ensure people can distinguish between human and AI-generated content. The company unveiled a “watermarking” solution at last month’s I/O event that adds tags to images generated by artificial intelligence.
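Google hasn’t publicly detailed how that watermark works under the hood, but the general pattern is to embed a machine-detectable signal in generated images and check for it later. The short Python sketch below is a deliberately simplified, hypothetical illustration of that embed-then-detect pattern using least-significant-bit steganography; the `embed` and `detect` functions and the `AI-GENERATED` tag are invented for this example, and a production watermark would be built to survive cropping, compression and editing, which this toy does not.

```python
# Hypothetical sketch of the idea behind image watermarking: hide a known
# bit pattern in pixel least-significant bits, then check for it later.
# Google has not published how its production watermark is implemented.
import numpy as np
from PIL import Image

TAG = "AI-GENERATED"  # hypothetical marker string, invented for this example

def _bits(s: str) -> list[int]:
    """Turn a string into a flat list of its bits."""
    return [int(b) for ch in s.encode() for b in f"{ch:08b}"]

def embed(img: Image.Image, tag: str = TAG) -> Image.Image:
    """Write the tag's bits into the red channel's least-significant bits."""
    arr = np.array(img.convert("RGB"))
    red = arr[..., 0].ravel()  # flattened copy of the red channel
    bits = _bits(tag)
    red[: len(bits)] = (red[: len(bits)] & 0xFE) | bits  # clear LSB, set tag bit
    arr[..., 0] = red.reshape(arr.shape[:2])
    return Image.fromarray(arr)

def detect(img: Image.Image, tag: str = TAG) -> bool:
    """Check whether the tag's bits sit where embed() put them."""
    arr = np.array(img.convert("RGB"))
    bits = _bits(tag)
    found = arr[..., 0].ravel()[: len(bits)] & 1
    return found.tolist() == bits

if __name__ == "__main__":
    plain = Image.new("RGB", (64, 64), "white")
    marked = embed(plain)
    print(detect(marked))  # True
    print(detect(plain))   # False: an all-white image carries only 1-bits
```

A scheme this naive breaks the moment the image is re-encoded with lossy compression, which is one reason real watermarking systems embed their signal in ways designed to survive editing.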
The move suggests that Google and other major tech companies are working on ways to bring private-sector-driven oversight to artificial intelligence before the technology is formally regulated.
AI systems are advancing at breakneck speed, and tools such as ChatGPT and Stable Diffusion can produce things that were beyond the reach of past iterations of the technology. ChatGPT and similar tools, for instance, are increasingly used by computer programmers as companions that help them generate code.
A major concern of EU policymakers and regulators, though, is that generative AI models lower the barrier to mass production of content based on copyright-infringing material, which could harm artists and other creative professionals who depend on royalties for their income. Generative AI models are trained on vast amounts of publicly available internet data, much of which is copyrighted.
Earlier this month, members of the European Parliament approved legislation aimed at overseeing the deployment of artificial intelligence in the EU. The law, known as the EU Artificial Intelligence Act, includes provisions to ensure that the data used to train AI tools does not violate copyright law.
“We have a lot of European customers using our platform to build generative AI applications,” Kurian said. “We will continue to work with EU governments to ensure we understand their concerns.”
“For example, we provide tools to identify whether content was generated by a model. This is as important as copyright, because if you can’t tell which content was generated by a human and which by a model, you can’t enforce it.”
Artificial intelligence has become a key battleground in the global tech industry, with companies vying for a lead in developing the technology, especially generative AI, which can produce new content in response to user prompts. From writing lyrics to generating code, the capabilities of generative AI have wowed academics and boardrooms alike.
But it has also raised concerns about job losses, misinformation and bias.
Several top researchers and employees within Google have expressed concern about the pace of development of artificial intelligence.
For example, Google employees posted on the internal forum Memegen that the company’s generative AI chatbot Bard, which competes with Microsoft-backed OpenAI’s ChatGPT, was “rushed,” “clumsy” and “not Google-like.”
Several former high-profile researchers at Google have also sounded alarms over the company’s approach to artificial intelligence, saying it lacks focus on the ethical development of such technologies.
These include Timnit Gebru, the former co-lead of Google’s ethical AI team, who raised the alarm about the company’s internal AI code of ethics, and Geoffrey Hinton, who recently left the company amid concerns that its aggressive push into artificial intelligence was spiraling out of control.
Against that backdrop, Kurian wants global regulators to know that Google is not afraid of regulation; it welcomes it.
“We’ve broadly welcomed regulation,” Kurian told CNBC. “We do think these technologies are powerful enough that they need to be regulated in a responsible way, and we’re working with the governments of the EU, the U.K. and many other countries to ensure they’re being used in the right way.”
Elsewhere in the global push to regulate AI, the U.K. has introduced a framework of AI principles for existing regulators to enforce, rather than writing formal regulations of its own into law. In the U.S., President Joe Biden’s administration and various government agencies have also proposed frameworks for regulating AI.
Yet a major complaint from tech industry insiders is that regulators have not been the fastest movers when it comes to tackling innovative new technologies. That’s why many companies are figuring out their own ways to introduce guardrails around AI, rather than waiting for proper laws to pass.
WATCH: AI isn’t in a hype cycle but a ‘transformative technology,’ says Wedbush Securities’ Dan Ives