AI is making us smarter, says AI pioneer Terry Sejnowski

Over a long career in the trenches of machine learning, Terry Sejnowski has been an enthusiastic advocate for the positive impact of artificial intelligence (AI). In 2018, he wrote in his book The Deep Learning Revolution that "AI will make you smarter."

Also: DeepSeek challenges OpenAI's o1 in chain of thought – but it's missing a few links

Things move fast in AI time. Since 2018, generative AI (Gen AI) has invaded our lives. In his latest book, ChatGPT and the Future of AI: The Deep Language Revolution, published last month by MIT Press, Sejnowski reviews the rise of large language models (LLMs) and concludes that "AI is indeed making us smarter."

But how do we measure smarter? What exactly does that mean?

"What is intelligence? Intelligence is really about problem-solving," Sejnowski told ZDNET in an interview. With ChatGPT, and programs like it, "I am able to get up to speed faster, but, also, it leads me to things that I might never have even thought of or explored; it's opening up doors."

He continued: "Think about what ChatGPT really is. Everybody thinks, 'Oh, it's talking like a human.' The only thing we know for sure is, it's not human. What is it? It's a tool like a shovel."

Also: AI pioneer Sejnowski says it's all about the gradient

Like a shovel, argued Sejnowski, the large language tool is helping us do things better than we could with our bare hands. He said writers are getting better with ChatGPT because "it helps them through mental blocks."

He used ChatGPT extensively to research the new book, he notes in its pages: "With the help of LLMs, this book took about half the time it took to write my previous book on The Deep Learning Revolution."

Written with the same engaging voice and authoritative knowledge of AI, the new book is very different from the previous one. In 2018, Sejnowski gave a history lesson. In the new Revolution, he is interested in where these tools are headed, how they're changing our notions of thought, and how we regard ourselves.

Also: OpenAI expands o1 model availability – here's who gets access and how much

"We're at the tool-using stage right now; we are learning how to use the tool, and the tools are getting better all the time," Sejnowski told ZDNET.

"ChatGPT could do a lot of things, but it can't do it as well as the best humans. But, I'll tell you, it does it a lot better than most humans."

One thing ChatGPT doesn't do is write anywhere near as well as Sejnowski. Throughout the book, he offers ChatGPT-generated summaries of chapters, hoping they may be "easier to follow than the text." In fact, the summaries are banal, much like a lot of GPT-generated prose, and seem mostly like a gimmick. It is the book's only weak spot and a small enough transgression to be forgiven in what is otherwise a masterly and thoroughly engrossing read.

Lest you think the book is a love letter to ChatGPT, the deeper element of the book, taking up most of its pages, is an analysis of how generative AI affects science, and vice versa.

AI is, for example, revealing aspects of the brain to neuroscientists, and neuroscience is in turn opening up new possibilities for AI, he argues, in a kind of virtuous cycle.

Also: ChatGPT writes my routine in 12 top programming languages. Here's what the results tell me

That observation is backed up by Sejnowski's extensive career in both fields. Sejnowski is the Francis Crick Chair at The Salk Institute for Biological Studies and Distinguished Professor at the University of California at San Diego. He made foundational contributions to today's AI but charted a different path from his AI colleagues.

Sejnowski earned his PhD in physics under John Hopfield at Princeton in the 1970s and later collaborated extensively with Geoffrey Hinton; Hopfield and Hinton are the two scientists who received this year's Nobel Prize in physics for their work on AI. Sejnowski's early focus turned away from building AI systems per se and toward neuroscience because, he told ZDNET, "I wanted to understand how the brain works."

Many AI practitioners feel the brain is far too complex, relative to artificial neural networks, to make headway, and they steer clear of brain science to improve their odds of publishing breakthroughs. Sejnowski, however, is exhilarated by what he learns and is convinced that, thanks to AI, he is on the threshold of great discoveries about the brain.

Also: AI-driven software testing gains more champions but worries persist

For example, the underlying mechanism of large language models — the way they predict the next word — is a fundamental one, with applicability to human memory.

Everything you type into ChatGPT and its kin is encoded as a long string of numbers; that string, known as the "context window," constitutes the working memory the model uses to make predictions. OpenAI and others compete to offer longer and longer context windows, which should translate into a greater capacity to predict the next word, phrase, or paragraph.
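
To make the idea concrete, here is a minimal, purely illustrative sketch of next-word prediction with a fixed context window. The tiny vocabulary, window size, and random weights are all invented for the example; a production model learns its weights from data and mixes the window with attention rather than a simple average.

```python
import numpy as np

VOCAB = {"the": 0, "brain": 1, "is": 2, "a": 3, "network": 4, "wave": 5}
CONTEXT_WINDOW = 4  # real context windows hold tens of thousands of tokens or more

rng = np.random.default_rng(0)
EMBED = rng.normal(size=(len(VOCAB), 8))  # token embeddings (random stand-ins for learned weights)
OUT = rng.normal(size=(8, len(VOCAB)))    # output projection (also a stand-in)

def tokenize(text):
    """Map words to integer IDs, a stand-in for a real subword tokenizer."""
    return [VOCAB[w] for w in text.lower().split() if w in VOCAB]

def predict_next(token_ids):
    """Score every word in the vocabulary using only the tokens that fit in the window."""
    window = token_ids[-CONTEXT_WINDOW:]           # the model's working memory
    hidden = EMBED[window].mean(axis=0)            # a real transformer mixes the window with attention
    logits = hidden @ OUT
    probs = np.exp(logits) / np.exp(logits).sum()  # softmax over the vocabulary
    return max(VOCAB, key=lambda w: probs[VOCAB[w]])

print(predict_next(tokenize("the brain is a network")))
```

The point of the toy is simply that only the tokens inside the window can influence the prediction, which is why a longer window means a longer effective memory.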

Sejnowski believes something similar is happening in the brain. He explained to ZDNET that the question for the neuroscientist might be, "How is the long input vector implemented in the brain? Not just across sentences, but across paragraphs. You're building up in your brain some kind of story, and how is that taking place?"

The answer, Sejnowski believes, lies in what are called "traveling waves," waves of neuronal activity that sweep across the cerebral cortex. The phenomenon has "generally been ignored" in neuroscience, he said, because "nobody had any clue as to what the function could possibly be."

Also: How well can OpenAI's o1-preview code? It aced my 4 tests – and showed its work in surprising detail

In the middle third of Revolution, Sejnowski hints at the possibility that Gen AI is finally elucidating the mystery of traveling waves. He offers an excellent history of LLMs, taking the reader from the early days of AI to the development of the transformer, the architecture underlying today's language models. Interested readers can find much more detail on traveling waves and transformers in a scholarly paper in the journal Trends in Neurosciences.

At the same time, things are "going in the opposite direction", with artificial intelligence continuing to evolve as it borrows from neuroscience, Sejnowski told ZDNET.

In the book, he posits that the various foibles of large language models — the "hallucinations" and sometimes nonsensical outputs — can be understood as developmental stages analogous to humans' own mental development. The technology, although promising, still has a long way to go.

"LLMs are Peter Pans, who have never grown up and live in a digital Neverland," writes Sejnowski. "LLMs also lack adolescence; in humans, this is before the prefrontal cortex matures and puts brakes on poor judgment."

Also: I tested 9 AI content detectors – and these 2 correctly identified AI text every time

The last third of the book focuses on where AI may go, given that paradigm of the technology being in a kind of childhood.

"A long-term direction for AI is to incorporate LLMs into larger systems," he writes, "much as language was embedded into brain systems that had evolved over millions of years for sensorimotor control, essential for survival."

Already, Gen AI has extended its capabilities by borrowing from other areas of science, Sejnowski told ZDNET.

One of the most significant recent innovations in LLMs is the insertion of something called a "state space model," borrowed from particle physics. Commercial companies such as AI21 have used state space models to dramatically cut the time required to respond to a prompt.
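
As a rough sketch of why that helps, a state space layer carries a compact hidden state forward one token at a time instead of re-reading the whole context window at every step, so the cost of each new token stays constant. The dimensions and matrices below are invented stand-ins for learned weights, and real systems (such as AI21's hybrid models) combine layers like this with attention.

```python
import numpy as np

rng = np.random.default_rng(0)
STATE, FEATURES = 16, 8
A = rng.normal(scale=0.1, size=(STATE, STATE))  # state transition (learned in a real model)
B = rng.normal(size=(STATE, FEATURES))          # how each input updates the state
C = rng.normal(size=(FEATURES, STATE))          # how the state produces an output

def ssm_layer(inputs):
    """Scan over a sequence of feature vectors: h_t = A h_{t-1} + B x_t, y_t = C h_t."""
    h = np.zeros(STATE)
    outputs = []
    for x in inputs:
        h = A @ h + B @ x      # update the fixed-size hidden state
        outputs.append(C @ h)  # emit an output for this step
    return np.array(outputs)

sequence = rng.normal(size=(5, FEATURES))  # five toy "token" vectors
print(ssm_layer(sequence).shape)           # (5, 8): one output per input step
```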

The state space model also ties into the theory of the brain's traveling waves, Sejnowski told ZDNET, bringing things full circle.

This cross-pollination of efforts between science and AI is the book's most fascinating aspect, highlighting how much is left to be understood in both camps.

Also: AI21 and Databricks show open source can radically slim down AI

LLMs have an underlying structure that AI researchers are only just beginning to understand. Sejnowski predicts that unfolding that mystery could lead to new forms of mathematics, which, in turn, could dramatically advance AI.

"Today's LLMs are the equivalent of the cathedrals built in the Middle Ages by trial and error," he writes in Revolution. "As LLMs inspire new mathematics, a new conceptual framework will reify concepts like understanding and intelligence; their progeny will be the equivalent of skyscrapers."

One of the remarkable things about the book is that it is extraordinarily grounded in the work of science and AI, informed by Sejnowski's decades of participation in both, and yet soars to new heights of scientific imagination.

Sejnowski posits that entirely new sciences and mathematics may emerge, just as breakthroughs by Newton and others changed our understanding of the universe.

"Physicists came up with equations that described mysterious properties of the universe, such as gravity, thermodynamics, electricity, magnetism, and elementary particles, which made accurate predictions with only a few parameters, called physical constants," writes Sejnowski.

"In the twenty-first century, a new area of mathematics is having more success based on algorithms from computer science. We are just beginning to explore the algorithmic universe, which may require a shift in our thinking about scientific understanding."

A revelation may await us about intelligence as it has always existed but as we have never grasped it.

Using the tools of Gen AI, people are coming to a better understanding of their own strengths and limitations, Sejnowski told ZDNET.

"The more that I use it, and the more that I see what other people are using it for, it's pretty clear that it's really mirroring them," said Sejnowski. As people get better at prompt engineering, the tool reflects more and more of the user's style: "They get better at seeing themselves in that mirror."

The mirror effect leads to a tantalizing prospect: we are not going to achieve "artificial general intelligence," the holy grail of AI, in the clichéd sci-fi form of a lifelike humanoid that walks and talks like us. Rather, we will shift our understanding of what we think we know about intelligence. Intelligence is something beyond mere tool use, but we don't yet have a name for what that something else might be.

"Could general intelligence originate in how humans interact socially, with language emerging as a latecomer in evolution to enhance sociality?" Sejnowski asks in the book. "The time has come for us to rethink the concept of 'general intelligence' in humans."
