Whether it’s watching DIY tutorials, whipping up delicious recipes, or just cute cat videos to lighten the mood – YouTube has become a go-to place for everything. Jensen Huang, the NVIDIA chief, said YouTube is his favourite medium of learning, besides asking ChatGPT things like how to dissolve plastic.
Just yesterday, OpenAI released its much-awaited multimodal ChatGPT. But the flamboyant chatbot still has one shortcoming: limited access to information, with a knowledge cut-off of January 2022.
This is where Google wins. After pioneering the Transformer with the ‘Attention Is All You Need’ paper, all it needs now is YouTube.
Imagine having a multimodal chatbot trained solely on YouTube data. According to leaks and reports, Google DeepMind is training Gemini, the next iteration of its generative AI model, which it touts as multimodal. Moreover, it is reportedly training the model on the video transcripts of YouTube, not just the audio. This would make the model rich and heavily informed. No one has done it before.
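To get a feel for what training on transcripts even looks like, here is a minimal sketch using the community-maintained youtube-transcript-api package, which, to be clear, has nothing to do with whatever internal pipeline Google uses; the video ID is a placeholder. It pulls a video’s captions and flattens them into the kind of plain text an LLM could be trained or prompted on.

```python
# pip install youtube-transcript-api
from youtube_transcript_api import YouTubeTranscriptApi

VIDEO_ID = "dQw4w9WgXcQ"  # placeholder: any public video with captions

# Each entry carries a chunk of spoken text plus its timing in the video.
segments = YouTubeTranscriptApi.get_transcript(VIDEO_ID)

# Flatten the timed segments into one plain-text document, the kind of
# text an LLM could be trained or prompted on.
transcript_text = " ".join(seg["text"] for seg in segments)
print(transcript_text[:500])
```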
All the Information
Think about it. YouTube isn’t just about watching videos; it’s also about reading descriptions, deciphering comments, and understanding their context through the surrounding text. If you need help with resetting your phone, there is some guy speaking a foreign language who has got you covered. From cooking tutorials to quantum physics lectures, from cute puppy compilations to historic speeches, YouTube has it all.
Quite clearly, the multimodal race is on. OpenAI’s GPT-4V(ision), where text meets images in ChatGPT, is just a hint of that. People say that OpenAI has beaten Google to multimodality with this release, but that isn’t entirely true. Arguably, the capabilities ChatGPT offers now have been available within Bard for months.
For Google DeepMind, the data from YouTube is the best thing it could have asked for: multimodal, multilingual, and multiregional.
While Elon Musk’s X is filled with textual data, YouTube is a gold mine of visual, audio, and textual data in almost every language on earth. Seeing this, Musk has also decided to invest heavily in video content, and the same is true of all the Meta platforms.
The recent addition of OpenAI’s GPT-4V(ision) to ChatGPT highlights the capabilities of multimodality. According to the demonstrations, if someone puts up a photo of a cycle and asks how to repair its seat, the chatbot will explain step by step how it can be done. It can even speak now.
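For a sense of how such an image-plus-text exchange looks in code, here is a hedged sketch using the content-parts message format of OpenAI’s Python SDK; the model name and image URL are placeholders, and the demonstrations themselves ran inside ChatGPT rather than through any public API.

```python
# pip install openai
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Ask a vision-capable model about a photo, mirroring the
# "how do I repair this cycle seat?" demonstration.
response = client.chat.completions.create(
    model="gpt-4-vision-preview",  # placeholder vision-capable model name
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "How do I repair the seat on this cycle?"},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/cycle.jpg"},  # placeholder
                },
            ],
        }
    ],
)
print(response.choices[0].message.content)
```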
Imagine asking such a chatbot a question like, “How does photosynthesis work?” With multimodal capabilities and YouTube at hand, it could seamlessly pull up relevant videos, analyse the visual content, and provide a detailed explanation. It could break down the complex process, point out the key components within the videos, and even answer follow-up questions about the topic.
But OpenAI does not have YouTube; Google does. The search giant already has much of the world’s information from crawling the web, and YouTube makes it even better.
For a multimodal LLM built on YouTube, it’s about recognising objects, people, and actions within videos using visual cues. It’s about comprehending the audio content, from spoken language to background music. In essence, YouTube encapsulates the essence of multimodality in a single platform. Even Musk’s Optimus, which can sort objects by colour and size, could possibly learn a lot from YouTube tutorials.
But it is easier said than done. That is why Google is taking its own sweet time to release it. When it does, the gap between information and creation will be almost nil.
Google’s Multimodal Masterpiece
YouTube isn’t just about information though; it’s a platform for human expression in its rawest form. It’s where people share their thoughts, emotions, and stories through videos. The comments section, for better or worse, is a goldmine of language diversity, slang, and sentiment. AI models need exposure to these nuances to engage effectively in human-like conversations.
Interestingly, the multimodal LLM that OpenAI has been working on, codenamed Gobi, is not here yet. The recently announced ability to upload photos and get replies back in audio format can be looked at as just a plugin for the chatbot. It is an amalgamation of GPT, the company’s speech-to-text model Whisper, and possibly Microsoft’s VALL-E, but not an all-round GPT-5.
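As a rough illustration of that amalgamation, here is a minimal sketch of a speech-in, speech-out pipeline under those assumptions: Whisper transcribes a spoken question, a GPT model answers it, and a local TTS engine (pyttsx3, standing in for VALL-E or whatever voice model OpenAI actually uses) reads the answer aloud. The audio file name is a placeholder.

```python
# pip install openai pyttsx3
from openai import OpenAI
import pyttsx3

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# 1. Speech to text: transcribe the user's spoken question with Whisper.
with open("question.wav", "rb") as audio_file:  # placeholder recording
    transcript = client.audio.transcriptions.create(
        model="whisper-1",
        file=audio_file,
    )

# 2. Text to text: answer the transcribed question with a GPT model.
reply = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": transcript.text}],
)
answer = reply.choices[0].message.content

# 3. Text to speech: read the answer aloud with a local TTS engine,
#    standing in here for a neural voice model like VALL-E.
engine = pyttsx3.init()
engine.say(answer)
engine.runAndWait()
```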
So, the next time you have a question or need assistance, remember that YouTube is not just a source of answers; it’s a source of insight, emotion, and demonstration. And if you need information from YouTube videos, soon you might not even have to log on to the website. Just type your prompt into the upcoming Google DeepMind bot, and it will drop the responses for you.
Wonder what it would do to the views on YouTube videos.