
Listening to Mark Zuckerberg speak this week, it’s hard not to conclude that when it comes to artificial intelligence, many companies in the tech industry are throwing every new idea they can think of at the wall to see what sticks.
The Meta CEO showed off at the company’s annual developers conference how its 3 billion users will soon be able to do things like retouch photos on Instagram with new digital effects, generate text and images with artificial intelligence, or chat with celebrity avatars.
Zuckerberg spent much of last year downplaying the near-term prospects of the immersive 3D virtual worlds he has long promoted. Instead, he has been pushing the idea that new forms of artificial intelligence will enhance all of the company’s existing services.
As he said this week, he once thought people would buy Meta’s augmented reality glasses to view dramatic holograms overlaid on the real world. Now, he thinks they might be buying them for more prosaic reasons, such as being able to see a short text description of what they’re looking at.
Of these new uses of artificial intelligence, which, if any, will become popular? Can any of them ignite the same enthusiasm that followed the arrival of ChatGPT last year? This experiment feels a lot like the era of mobile computing before the iPhone was launched. Many in the technology industry firmly believe that the integration of computing and mobile communications will usher in a new era. They were right, but it wasn’t until Apple launched its first touch-screen phone in 2007 that the path forward became clear.
Zuckerberg isn’t the only one seeking a formula to bring artificial intelligence to the masses. OpenAI, the company behind ChatGPT, is also exploring ways to embed its technology into new services and products.
This week, the artificial intelligence start-up announced new voice and image features for ChatGPT. Take a photo of what’s in your fridge and the chatbot can help you decide what’s for dinner and walk you through the recipe, it says. Or you could use it to settle an argument at the dinner table without everyone tapping away on their smartphones. OpenAI is also reported to be exploring a partnership with iPhone designer Sir Jony Ive to launch new digital products designed specifically for its technology.
Meta and OpenAI’s latest efforts highlight two major frontiers opening up in the consumer artificial intelligence race. One is the emergence of so-called multimodal systems, which combine the understanding of text, images and speech. A year or two ago, technologies in this area were running on parallel but independent tracks: OpenAI’s Dall-E 2 image generator was causing a stir in artificial intelligence long before ChatGPT was launched. Integrating these into the same service creates even more possibilities. Google has been pursuing multimodal models for longer still.
This could shake up competition in consumer technology. OpenAI’s launch of a voice service, for example, could leapfrog Amazon, which last week promised to bring chatbot-style intelligence to its Alexa-powered smart speakers. While Amazon is still describing what it might do, OpenAI says it has already pulled it off.
Another new front in this consumer AI race involves hardware. Predictions that smartphones will be replaced by new products that are less intrusive and more suited to artificial intelligence are not new. Big tech companies and startups have been experimenting with smart glasses, wristbands and other “wearables” for years, aiming to create a more seamless tech experience than pulling out your phone.
OpenAI’s exploration of new artificial intelligence-driven hardware is still in its early stages. But its interest in working with Ive suggests it sees an opportunity for an “iPhone moment” that will have the same impact on mobile communications that Apple’s smartphones did. Exactly what form that will take—or what these new devices will be used for—is difficult to predict.
At Meta’s developer event this week, Zuckerberg showed off his company’s smart glasses streaming live video from the front seat of a race car. It was a strange throwback to the early 2010s and the early days of augmented reality, when Google introduced its then-revolutionary glasses to the world with similar demos.
At the time, it was easy to imagine that we would all be wearing smart glasses now. More than a decade later, it’s still difficult to understand exactly how the next generation of artificial intelligence services will permeate our daily lives.
richard.waters@ft.com