Meta, the company led by Mark Zuckerberg, recently launched the Llama 2 chatbot through its artificial intelligence division. Microsoft has been named Meta’s preferred partner on Llama 2, which will be available on the Windows operating system.
Meta’s Llama 2 approach stands in stark contrast to that of OpenAI, the company that created the AI chatbot ChatGPT. That’s because Meta has made its product open-source—meaning the original code is freely available, allowing it to be studied and modified.
This strategy has sparked a huge wave of discussion. Will it prompt greater public scrutiny and regulation of large language models (LLMs), the underlying technology for AI chatbots like Llama 2 and ChatGPT? Could it inadvertently allow criminals to exploit the technology to aid them in phishing attacks or malware development? Could the move help Meta gain an edge over OpenAI and Google in this fast-growing field? Whatever happens, this strategic move looks set to reshape the current landscape of generative AI. In February 2023, Meta released the first version of its LLM, called Llama, but only for academic use. Its updated version, Llama 2, has improved performance and is more suitable for business use.
Like other AI chatbots, Llama 2 must be trained using online data. Access to this vast source of information helps it improve its functionality—providing useful answers to users’ questions.
The initial version of Llama 2 was created through “supervised fine-tuning,” a technique that uses high-quality question-answering data to calibrate it for public use. It was further refined with reinforcement learning from human feedback, which, as the name suggests, incorporates human assessments of the AI’s output to align it with human preferences.
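To make the idea of supervised fine-tuning concrete, here is a deliberately tiny, hypothetical sketch — not Meta’s actual training code. The “model” is a single weight, the “curated high-quality data” are made-up input/target pairs, and fine-tuning is plain gradient descent that pulls the model’s answers toward the curated examples:

```python
# Toy illustration of supervised fine-tuning (hypothetical, not Meta's pipeline):
# adjust a model's parameters so its outputs match curated, high-quality examples.

def fine_tune(weight, data, lr=0.1, epochs=100):
    """Gradient descent on mean squared error over curated (input, target) pairs."""
    for _ in range(epochs):
        grad = sum(2 * (weight * x - y) * x for x, y in data) / len(data)
        weight -= lr * grad
    return weight

def mse(weight, data):
    """Average squared error of the model's predictions on the data."""
    return sum((weight * x - y) ** 2 for x, y in data) / len(data)

# Hypothetical "high-quality" supervision pairs; the true relationship is y ≈ 2x.
curated = [(1.0, 2.0), (2.0, 4.1), (3.0, 5.9)]
w0 = 0.0                       # untrained starting point
w1 = fine_tune(w0, curated)    # fine-tuned weight
print(mse(w0, curated), mse(w1, curated))  # loss drops sharply after fine-tuning
```

Real LLM fine-tuning replaces the single weight with billions of parameters and the squared error with a language-modeling loss, but the principle — nudging an existing model toward a curated dataset — is the same.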
Guaranteed Benefits

Meta embraces the open source ethos with Llama 2, allowing it to leverage what has worked for the company in the past. Meta’s engineers are known for building products that help developers, such as React and PyTorch. Both are open source and have become industry standards. Through them, Meta sets a precedent for innovation through collaboration.
The release of Llama 2 promises to lead to safer generative artificial intelligence. Through shared intelligence and collective exploration, users can identify misinformation and vulnerabilities that could be exploited by criminals. Unexpected applications have emerged, such as a user-created version of Llama 2 that can be installed on an iPhone, highlighting the creative potential of this community.
But there are limits on how far Meta allows Llama 2 users to commercialize its AI system. If a Llama 2-based product had more than 700 million active users in the previous calendar month, its developer must request a license from Meta. For Meta, this preserves the potential to share in the profits of an exceptionally successful Llama 2-based product.
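The licensing rule above reduces to a simple threshold check. The 700-million figure comes from Meta’s license terms as described here; the function and constant names below are our own illustration, not part of any Meta API:

```python
# Threshold from Meta's Llama 2 license terms (active users in the prior
# calendar month); the names below are illustrative, not an official API.
LLAMA2_MAU_THRESHOLD = 700_000_000

def needs_separate_license(monthly_active_users: int) -> bool:
    """True if a Llama 2-based product must request a license from Meta."""
    return monthly_active_users > LLAMA2_MAU_THRESHOLD

print(needs_separate_license(500_000_000))  # → False: below the threshold
print(needs_separate_license(800_000_000))  # → True: Meta's permission required
```

In practice the threshold is measured against a fixed reference date in the license, but the check itself is this straightforward.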
Meta’s strategy contrasts sharply with the more cautious approach of its main competitor, OpenAI. While some have questioned Meta’s ability to compete in this space and commercialize its product as OpenAI has done with ChatGPT, Meta’s decision to invite global developers to join is indicative of a broader vision. The move positions Zuckerberg’s company not just as a participant, but as a facilitator, leveraging global talent to contribute to the growing Llama 2 ecosystem.
The strategy could also be a shrewd hedge against potential competition from tech giants like Google. With a large number of users exploring the potential of Llama 2, any successful advancements can be quickly integrated into Meta’s other products. Only time will reveal the full impact of this decision, but it has already resonated widely across the industry.
User Strengths and Pitfalls

The public experimental aspect of open source technology allows for greater scrutiny, providing the user community with an opportunity to evaluate the strengths and weaknesses of Llama 2, including its vulnerability to attack. The intense public scrutiny could expose the LLM’s flaws, prompting the development of defenses against them.
On the downside, there are concerns that this is akin to “handing a criminal a knife,” as it could also allow malicious users to take advantage of the technology. For example, its functionality could help fraudsters build a dialog system that generates believable automated conversations for phone scams. This potential for abuse has led some to call for regulation of the technology.
But exactly what rules will be set, who will have the power to oversee the process, and what will require more or less scrutiny all demand careful planning to ensure that regulation does not simply entrench the monopoly of big tech companies.
In the ever-evolving saga of AI development, the debate over open source reminds us that technological progress is rarely simple or one-dimensional. The ramifications of Meta’s decision could ripple through the tech world for years to come. While Llama 2 may not yet match the capabilities of ChatGPT, it opens the door to a host of innovative products.
Google will also come under scrutiny as speculation grows about how it will respond. In an age of thriving open source culture, it would be no surprise to see Google follow suit and release its own version.
The phrase “tech for good” has become a common mantra to describe tech companies using some of their resources to positively impact the lives of all of us. But in the end, this goal remains a shared responsibility, not something just a few companies should be involved in.
This goal also requires cooperation and joint effort across academia, industry and other fields. As LLM technology continues to evolve, the stakes are high and the path forward is full of both opportunities and challenges.
(This story was not edited by NDTV staff and was automatically generated from syndicated feeds.)