Facebook parent Meta Platforms on Tuesday unveiled an artificial intelligence model capable of translating and transcribing speech in dozens of languages, a potential building block for cross-lingual real-time communication tools.

The company said in a blog post that its SeamlessM4T model can support translation between text and speech in nearly 100 languages, as well as full speech-to-speech translation for 35 languages, combining technology that was previously only available in separate models.

Chief Executive Mark Zuckerberg has said he hopes such tools will facilitate global user interaction in the Metaverse, an interconnected virtual world on which he is betting the company’s future.

The blog post says Meta is making the model available to the public for non-commercial use.

The world’s largest social media company has released a series of mostly free AI models this year, including a large language model called Llama that poses a serious challenge to proprietary models sold by Microsoft-backed OpenAI and Alphabet’s Google.

Zuckerberg has said an open AI ecosystem is good for Meta because the company gains more by effectively crowdsourcing the creation of consumer-facing tools for its social platforms than it would by charging for access to the models.

Still, Meta faces similar legal issues as the rest of the industry around the training data it acquires to create its models.

In July, comedian Sarah Silverman and two other authors filed a copyright infringement lawsuit against Meta and OpenAI, accusing the companies of using their books as training data without permission.

For the SeamlessM4T model, Meta researchers said in a research paper that they collected audio training data from 4 million hours of “raw audio originating from a publicly available repository of crawled web data,” without specifying which repository.

A Meta spokesman did not respond to questions about the source of the audio data.

The text data came from a dataset created last year that extracted content from Wikipedia and related websites, the research paper said.

© Thomson Reuters 2023

