Sundar Pichai Interview Stokes Debate on True Capabilities of AI Chatbots
April 19, 2023 by Jaime Hampton
Has artificial intelligence left the realm of fantasy to become a disruptor of reality? Are AI models now learning and thinking on their own, heralding the age of artificial general intelligence? If you caught last weekend’s "60 Minutes," you might think so.
In the April 16 episode, CBS host Scott Pelley interviewed Google CEO Sundar Pichai and other developers about new and emerging AI technology and the future of AI.
“In 2023 we learned that a machine taught itself how to speak to humans like a peer, which is to say with creativity, truth, error, and lies,” Pelley says of AI chatbots, comparing this moment to the discovery of fire or invention of agriculture.
Pichai strikes an optimistic yet concerned tone, warning that “profound technology” like Google’s chatbot, Bard, will “impact every product across every company.”
In the interview, Pichai acknowledges the rapid pace of AI development and the challenges it presents, admitting that these concerns sometimes keep him up at night. He also acknowledges the technology’s propensity for creating fake news and images, saying, “On a societal scale, you know, it can cause a lot of harm.”
Google introduced Bard somewhat hesitantly, following Microsoft’s launch of a version of its Bing search engine powered by OpenAI's large language models, the same technology behind the widely known ChatGPT. Google has so far released Bard with limited capabilities, and Pichai says the company is holding back a more powerful version pending further testing.
Pelley: Is Bard safe for society?
Pichai: The way we have launched it today, as an experiment in a limited way, I think so. But we all have to be responsible in each step along the way.
Narrator: Pichai told us he's being responsible by holding back, for more testing, advanced versions of Bard that, he says, can reason, plan, and connect to internet search.
Pelley: You are letting this out slowly so that society can get used to it?
Pichai: That's one part of it. One part is also so that we get user feedback. And we can develop more robust safety layers before we build, before we deploy more capable models.
CBS "60 Minutes" host Scott Pelley interviews Google CEO Sundar Pichai about the future of AI. (Source: CBS)
Chatbots like Bard and ChatGPT are prone to fabricating information while sounding completely plausible, something the 60 Minutes team witnessed firsthand when Google SVP of Technology and Society James Manyika asked Bard about inflation in a demo. The chatbot recommended five books that do not exist but sound as though they could, such as "The Inflation Wars: A Modern History" by Peter Temin. Temin is a real MIT economist who studies inflation and has written several books, just not that one.
Narrator: This very human trait, error with confidence, is called, in the industry, hallucination.
Pelley: Are you getting a lot of hallucinations?
Pichai: Yes, you know, which is expected. No one in the field has yet solved the hallucination problems. All models do have this as an issue.
Pelley: Is it a solvable problem?
Pichai: It's a matter of intense debate. I think we'll make progress.
Some AI ethicists are concerned not only about the hallucinations but also about the problems that come with humanizing AI. Emily M. Bender is a professor of computational linguistics at the University of Washington, and in a Twitter thread turned blog post, she accuses CBS and Google of “peddling AI hype” on the show, calling it “painful to watch.”
Specifically, Bender’s post highlights a segment where Bard’s “emergent properties” are discussed: “Of the AI issues we talked about, the most mysterious is called emergent properties. Some AI systems are teaching themselves skills that they weren't expected to have. How this happens is not well understood,” narrates Pelley.
During the show, Google gives the example of one of its AI models that seemed to learn Bengali, a language spoken in Bangladesh, all on its own after little prompting: “We discovered that with very few amounts of prompting in Bengali, it can now translate all of Bengali. So now, all of a sudden, we now have a research effort where we’re now trying to get to a thousand languages,” said Manyika in the segment.
Bender calls the idea of its translating “all of Bengali” disingenuous, asking how it would even be possible to test the claim. She points to another Twitter thread from AI researcher Margaret Mitchell who asserts that Google is choosing to remain ignorant of the full scope of the model’s training data, asking, “So how could it be that Google execs are making it seem like their system ‘magically’ learned Bengali, when it most likely was trained on Bengali?” Mitchell says she suspects the company literally does not understand how it works and is incentivized not to.
Pichai: There is an aspect of this which we call– all of us in the field call it a "black box." You know, you don't fully understand. And you can't quite tell why it said this, or why it got it wrong. We have some ideas, and our ability to understand this gets better over time. But that's where the state of the art is.
Pelley: You don’t fully understand how it works, and yet you’ve turned it loose on society?
Pichai: Let me put it this way: I don’t think we fully understand how a human mind works, either.
Bender calls Pichai’s reference to the human mind a “rhetorical sleight of hand,” saying, “Why would our (I assume, scientific) understanding of human psychology or neurobiology be relevant here? The reporter asked why a company would be releasing systems it doesn’t understand. Are humans something that companies ‘turn loose’ on society? (Of course not.)”
Bender asserts that by inviting the viewer to “imagine Bard as something like a person, whose behavior we have to live with or maybe patiently train to be better,” Google is either evading accountability or hyping its AI system as more autonomous and capable than it actually is.
Bender co-authored the research paper that preceded the firing of Timnit Gebru in late 2020 and of Mitchell in early 2021; the two were co-leads of Google’s Ethical AI team. The paper argues that bigger is not always better when it comes to large language models: the more parameters they have and the more data they are trained on, the more intelligent they can appear, when in reality they are “stochastic parrots” that do not actually understand language.
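The “parrot” critique turns on how these models generate text: an autoregressive language model assigns a probability to every possible next token given the text so far, and produces output one token at a time. The sketch below illustrates that single step using the small open-source GPT-2 model via the Hugging Face transformers library; it is an illustrative stand-in, since Bard’s internals are not public, and the prompt is chosen only to echo the demo described next.

```python
# Illustrative sketch of next-token prediction with GPT-2 (a stand-in model;
# Bard's internals are not public). Requires: pip install torch transformers
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The loss of a child is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

# Turn the scores at the position after the prompt into probabilities
# and inspect the five most likely next tokens.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r:>15}  p={prob.item():.3f}")
```

Generating a full paragraph or story is this step repeated in a loop, with each chosen token appended to the prompt, which is the mechanism Pelley’s next question pushes on.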
In the interview, Pelley asks Pichai how Bard could discuss painful human emotions so convincingly, as it did in a demo where it wrote a story about the loss of a child: “How did it do all of those things if it's just trying to figure out what the next right word is?”
Pichai says the debate is ongoing, noting, “I have had these experiences talking with Bard as well. There are two views of this. You know, there are a set of people who view this as, look, these are just algorithms. They’re just repeating what [they’ve] seen online. Then there is the view where these algorithms are showing emergent properties, to be creative, to reason, to plan, and so on, right? And personally, I think we need to approach this with humility.”
“Approaching this with humility would mean not putting out unscoped, untested systems and just expecting the world to deal. It would mean taking into consideration the needs and experiences of those your tech impacts,” wrote Bender.
We have reached out to Google and Emily Bender for comment and will update the story as we learn more.