Microsoft Corp CEO Satya Nadella at the company’s Ignite Spotlight event in Seoul on November 15, 2022.

Sung Joon Cho | Bloomberg | Getty Images

Microsoft announced a major update to its AI chatbot on Tuesday: Visual Search. Users can now snap or upload a photo to Bing Chat and ask for more information about it, from the desktop or the Bing app.

“Bing can understand the context of an image, interpret it, and answer questions about it,” Microsoft wrote in a release. “Whether you’re vacationing in a new city and asking about the architectural style of a particular building, or trying to come up with lunch ideas based on what’s in your fridge, upload an image to Bing Chat and it will leverage web knowledge to give you the answer.”

The update comes as an AI arms race heats up among chatbot leaders like Microsoft, Google, OpenAI, and Anthropic. To develop state-of-the-art generative artificial intelligence, tech giants are rapidly rolling out new capabilities, aiming to keep up not only with text-based chatbot rivals but also image-heavy AI tools.

While image searches, and responses that include images, are becoming part of the chatbot user experience, none of the current leading text-based chatbots seem able to generate their own images, unlike tools such as Midjourney, Stable Diffusion, and DALL·E 2. However, Google says its Bard chatbot is working on this feature.

Microsoft’s decision to let Bing Chat use images follows Google’s recent rollout of an image search feature for its AI chatbot, Bard. Using Google Lens, users can ask Bard for information about images they’ve uploaded, ask it to generate captions, or simply add some zest to the chatbot’s responses, such as requesting restaurant recommendations that include photos of a restaurant’s interior. As of this writing, OpenAI’s ChatGPT does not allow photo uploads, as the chatbot is still entirely text-based, and Anthropic’s chatbot, Claude 2, operates in a similar fashion.
