Today’s cutting-edge artificial intelligence is based on neuroscience from the ‘50s and ‘60s. Imagine what A.I. could do if it incorporated the latest breakthroughs.

With all the hype surrounding ChatGPT, most people are excited about the promise of artificial intelligence but overlook its pitfalls. If we want truly intelligent machines that understand their environment, learn continuously, and help us on a daily basis, we need to apply neuroscience to deep learning AI models. Yet, with few exceptions, the two disciplines have been surprisingly isolated for decades.

This was not always the case. In the 1940s, Donald Hebb and others theorized about how neurons learn, inspiring the first deep learning models. Then, in the 1950s and 1960s, David Hubel and Torsten Wiesel won the Nobel Prize for deciphering how the brain’s perceptual systems work. Their discoveries inspired the convolutional neural networks that are a key part of deep learning in artificial intelligence today.

The superpowers of the brain

While neuroscience as a field has exploded over the past 20 to 30 years, few of its recent breakthroughs are evident in today’s AI systems. Ask the average AI professional today, and they will be unaware of these advances or of the impact recent neuroscience breakthroughs could have on AI. This must change if we want AI systems to push the boundaries of science and knowledge.

For example, we now know that there are common circuits in our brains that can serve as a template for artificial intelligence.

In an average adult, the human brain consumes about 20 watts, less than half the power of a typical light bulb. In January, ChatGPT consumed roughly as much electricity as 175,000 people. Given ChatGPT’s rapidly rising adoption, it now consumes the equivalent of 1,000,000 people per month. According to a paper by researchers at the University of Massachusetts Amherst, training a single artificial intelligence model can emit as much carbon as five cars over their lifetimes. And that analysis covers only one training run: as a model is improved through repeated training, its energy consumption grows dramatically.

In addition to energy consumption, the computing resources required to train these AI systems have been doubling every 3.4 months since 2012. Today, with the incredible growth in AI usage, the cost of inference (and its power usage) is estimated to be at least 10 times the cost of training. This is simply unsustainable.

Not only does the brain use a fraction of the energy of large AI models, it is also “truly” intelligent. Unlike AI systems, the brain understands the structure of its environment, allowing it to make complex predictions and execute intelligent actions. And unlike AI models, humans learn continuously and incrementally; today’s models haven’t really “learned” at all. If an AI model makes a mistake today, it will keep repeating that mistake until it is retrained with new data.

How neuroscience can improve AI performance

Despite the increasing need for interdisciplinary collaboration, cultural differences between neuroscientists and AI practitioners make communication difficult. In neuroscience, experiments demand a great deal of detail, and each discovery can take two to three years of painstaking documentation, measurement, and analysis. When the research papers are published, those details can be impenetrable to AI practitioners and computer scientists.

How can we bridge this gap? First, neuroscientists need to take a step back and explain their concepts from a big-picture perspective so their findings make sense to AI professionals. Second, we need more researchers in hybrid AI-neuroscience roles to help fill the gap between the two fields. By collaborating across disciplines, AI researchers can better understand how to translate neuroscience research into brain-inspired AI.

Recent breakthroughs have demonstrated that applying brain-based principles to large language models can improve efficiency and sustainability by orders of magnitude. In practice, this means mapping neuroscience-based logic onto the algorithms, data structures, and architectures that run AI models, so that models can learn as quickly as our brains do from very little training data.
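The article doesn’t spell out which brain-based principles it means, but one frequently cited example is sparsity: in the neocortex, only a small fraction of neurons fire at any moment, which keeps energy use low. A minimal, illustrative sketch of that idea as a “k-winners-take-all” activation follows; the function name and values are hypothetical, not any organization’s actual implementation:

```python
def k_winners(activations, k):
    """Keep only the k strongest activations; zero out the rest.

    A toy version of the sparse firing seen in cortical circuits:
    because most units are zero, downstream layers can skip most
    of their multiply-accumulate work.
    """
    # The k-th largest value becomes the firing threshold.
    threshold = sorted(activations, reverse=True)[k - 1]
    return [v if v >= threshold else 0.0 for v in activations]

sparse = k_winners([0.1, 0.9, 0.3, 0.7, 0.05, 0.6], k=2)
print(sparse)  # [0.0, 0.9, 0.0, 0.7, 0.0, 0.0]
```

In a real network, enforcing this kind of sparsity at every layer is one route to the orders-of-magnitude efficiency gains described above, since zeroed units cost essentially nothing to propagate.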

Several organizations have made progress in applying brain-based principles to artificial intelligence, including government agencies, academic researchers, Intel, Google DeepMind, and smaller companies like Cortical.io (Cortical.io uses some of Numenta’s technology as part of a licensing agreement with us). As today’s deep learning systems move toward larger and larger models, this work is critical if we are to scale up AI while protecting the climate.

From the smallpox vaccine to the light bulb, nearly all of humanity’s greatest breakthroughs have come from multiple contributions and interdisciplinary collaboration. The same must be true of artificial intelligence and neuroscience.

We need a future where AI systems can actually interact with scientists, helping them create and run experiments that push the boundaries of human knowledge. We need artificial intelligence systems that truly augment human capabilities, learn with all of us and assist us in all areas of life.

Whether we like it or not, AI is here to stay. We must make it sustainable and efficient by bridging the gap between neuroscience and artificial intelligence. Only then can we apply the right interdisciplinary research and commercialization, education, policy, and practice to AI for the betterment of the human condition.

Subutai Ahmad is the CEO of Numenta.

The opinions expressed in Fortune commentary pieces are solely those of their authors and do not necessarily reflect the opinions and beliefs of Fortune.
