Inflection AI raises $1.3B in funding led by Microsoft and Nvidia

On June 29, Palo Alto-based Inflection AI announced the close of a $1.3 billion funding round led by Microsoft, Reid Hoffman, Bill Gates, Eric Schmidt, and Nvidia. Part of the new funding will be used to build a 22,000-unit Nvidia H100 Tensor GPU cluster, which the company claims is the largest in the world. The GPUs will be used to develop large-scale AI models. The developers wrote:

“We estimate that if our cluster had been entered in the recent TOP500 list of supercomputers, it would come in second, close to the top entry, despite being optimized for AI applications rather than scientific ones.”

Inflection AI is also developing its own personal assistant AI system called “Pi.” Pi is a “teacher, coach, confidant, creative partner and advisor,” the company explained, and can be accessed directly via social media or WhatsApp. Since the company’s launch in early 2022, its total funding has reached $1.525 billion.

Despite growing investment in large AI models, experts warn that their practical training efficiency may be severely limited by current technical constraints. Researchers at Singapore-based venture fund Foresight wrote, citing the example of a large AI model with 175 billion parameters storing 700GB of data:

“Suppose we have 100 computing nodes, and each node needs to update all parameters at every step; then each step requires transferring about 70TB of data (700GB*100). If we optimistically assume that each step takes 1 second, then 70TB of data must be transferred every second. This demand for bandwidth far exceeds the capacity of most networks.”

Continuing with the example above, Foresight also warns that “due to communication delays and network congestion, data transfer times may exceed 1 second,” meaning that compute nodes may spend most of their time waiting for data transfers rather than performing actual computations. Foresight analysts concluded that, given current constraints, the solution lies in small AI models that are “easier to deploy and manage.”
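Foresight's arithmetic can be sanity-checked with a short back-of-the-envelope calculation. The figures below come from the article (700GB of parameters, 100 nodes, 1-second steps); the 100 Gbit/s link speed is an assumed, illustrative datacenter interconnect, not a figure from Foresight:

```python
# Back-of-the-envelope check of Foresight's bandwidth example.
# Article figures: 175B-parameter model ~ 700 GB of parameters, 100 nodes.
PARAM_SIZE_GB = 700   # full parameter copy per node, in GB
NUM_NODES = 100       # compute nodes syncing every step
STEP_TIME_S = 1.0     # optimistic time per training step, in seconds

# Total data moved per step, converted GB -> TB.
per_step_tb = PARAM_SIZE_GB * NUM_NODES / 1000
required_bandwidth_tbps = per_step_tb / STEP_TIME_S

# Assumed comparison point: a 100 Gbit/s link is roughly 12.5 GB/s.
link_gb_per_s = 12.5
transfer_time_s = PARAM_SIZE_GB * NUM_NODES / link_gb_per_s

print(f"Data moved per step: {per_step_tb:.0f} TB")
print(f"Required aggregate bandwidth: {required_bandwidth_tbps:.0f} TB/s")
print(f"Transfer time on a 100 Gbit/s link: {transfer_time_s:.0f} s")
```

The result matches the article's 70TB-per-step figure, and on the assumed link the transfer would take well over an hour per step, illustrating why Foresight argues nodes would spend most of their time waiting on the network.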

“In many application scenarios, users or enterprises do not need the more general reasoning capabilities of large language models, but only focus on very fine-grained prediction targets.”
