The author of the blog ‘Inverted Passion’ seems to have found a new purpose – one that goes beyond AGI, game theory, evolution, consciousness, and whatnot. His latest focus is on how machines can help turn Turing’s dream of thinking machines into a present-day reality, though whether it can be achieved at all is questionable and, perhaps, a discussion for another day.
In the heart of Namma Bengaluru, sporting a T-shirt that read ‘Attention is All You Need’, Wingify founder Paras Chopra revealed his newfound obsession: Turing’s Dream. The initiative envisions a place where human intellect and artificial intelligence collide. The three-tiered bookshelf in his research pod stands as a testament to his dedication.
AIM got a firsthand look when we visited the Turing’s Dream office, where we met Raghuvamsi Velagala, one of the program participants. We caught him playing with a 3D-printed robotic arm, which he is teaching manipulation skills using reinforcement learning (RL). Around him, other members were engrossed in various activities: documenting the future of LLMs, coding fervently, or sipping high-caffeine Monster energy drinks.
It’s alive! @0x00raghu 3d printed a robotic arm at Turing’s Dream! pic.twitter.com/FDLYU8YXhf
— Paras Chopra (@paraschopra) November 22, 2024
This was nothing like People+ai or other AI accelerators and incubators we’ve seen in the past. The Turing’s Dream website clearly states that one shouldn’t apply if they are “building wrappers on existing APIs or are looking for a tangible outcome like a co-founder”. It adds: “This is not a startup accelerator or a co-founder matching program.”
Chopra said that Turing’s Dream is a six-week residency program where coders and researchers get to build their AI projects and become permanent members of “an exclusive community comprising all previous and current residents of the hack house” – an AGI House of sorts, minus the grandeur and flashy interiors.
“There’s not enough communities for people who love AI for its own sake,” Chopra said.
Chopra also offers free cloud GPUs powered by E2E Networks, at a cost of $2,500 per resident, along with access to a network of AI researchers, his peers and investors. And, as Chopra revealed, people are indeed building innovative projects.
For example, Arun Balaji, a final-year engineering student, is working with the UPI team to detect money laundering using graph neural networks. Meanwhile, Surya Maddula, a 17-year-old high school student, is set to build an open-air device that cancels noise in the real world, for which he also holds a patent!
Earlier this year, AIM spoke to Adithya S Kolavi, an engineering student and AI researcher at CognitiveLab, who is building Omniparse, a library that converts unstructured data into structured data for LLMs. He is also part of Turing’s Dream’s second cohort and owes his success to open-source models.
Driven by curiosity, Chopra sees the world quite differently. After bootstrapping a business to recurring profits of over ₹50 crore, he is now investing in curiosity and obsession – things money can’t buy.
Delicious Treats, Delicious Findings
Turing’s Dream and Chopra’s personal research go hand in hand: the residency supports like-minded folks who are solving the very problems he has identified in his research.
He is so hungry to find out what goes on inside these LLMs that he has named his passion projects ‘Blueberry’ and ‘Tofu’, and is now cooking ‘Noodles’.
Perhaps it is a dig at OpenAI’s o1 (internally known as Project Strawberry) – the Blueberry is sweet, but the Strawberry is sour.
Echoing the sentiment of many AI leaders, Chopra suggested that probability forms the core of LLMs, and that such systems aren’t capable of true reasoning.
Further, he said that the brute-force approach of improving LLMs with more data and more compute isn’t going to solve the problem. He believes that connecting external tools and ‘symbolic agents’ may just help LLMs reason more genuinely.
Subbarao Kambhampati, a professor at Arizona State University, shared a similar opinion on a YouTube podcast episode. “When they [LLMs] do self-reflection, they actually worsen. However, if you have an external verifier…it is actually enough to improve the performance,” he said.
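To make the idea concrete, here is a minimal Python sketch of such a generate-then-verify loop. It is an illustration of the concept, not code from Chopra or Kambhampati; `llm_propose` and `verify` are hypothetical stand-ins for an LLM API call and an external checker (unit tests, a plan validator, a solver).

```python
# Minimal sketch of a generate-then-verify loop, assuming hypothetical
# `llm_propose` and `verify` callables. `verify` stands in for an external,
# sound checker, not the LLM critiquing its own output.

from typing import Callable, Optional

def solve_with_verifier(
    problem: str,
    llm_propose: Callable[[str, list[str]], str],
    verify: Callable[[str, str], bool],
    max_attempts: int = 8,
) -> Optional[str]:
    """Sample candidate solutions until the external verifier accepts one."""
    failures: list[str] = []  # rejected candidates, fed back as extra context
    for _ in range(max_attempts):
        candidate = llm_propose(problem, failures)
        if verify(problem, candidate):  # soundness lives outside the LLM
            return candidate
        failures.append(candidate)      # let the next attempt steer away
    return None                         # no verified solution within budget
```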
While benchmarks and scaling laws suggest that LLMs have hit a wall, Chopra is far from satisfied. He believes that LLMs are far from achieving human-level creativity, and their outputs aren’t diverse enough. “LLMs ultimately mimic the data you train them on, so unless you provide substantially new data during finetuning, you won’t impact LLM much.”
While synthetic data is one solution, Chopra believes in using it cautiously. “Natural data has higher diversity than LLM-produced synthetic data, so use synthetic data carefully as it can cause a loss in performance.”
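As a rough illustration of what cautious use could look like in practice, the sketch below caps the synthetic share of a fine-tuning mix so that natural data continues to dominate its diversity. The 20% cap is an arbitrary assumption for illustration, not a figure from Chopra.

```python
import random

def mix_datasets(
    natural: list[str],
    synthetic: list[str],
    max_synth_frac: float = 0.2,  # illustrative cap, not a recommendation
    seed: int = 0,
) -> list[str]:
    """Combine natural and synthetic examples while capping the synthetic
    fraction of the final mix, so natural data keeps setting the diversity."""
    # If synthetic examples may make up at most f of the total, then
    # n_synth <= n_natural * f / (1 - f).
    rng = random.Random(seed)
    budget = int(len(natural) * max_synth_frac / (1 - max_synth_frac))
    mix = natural + rng.sample(synthetic, min(budget, len(synthetic)))
    rng.shuffle(mix)
    return mix
```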
“If datasets drive intelligence in LLMs, why can’t we have LLMs themselves post requests-for-data at things they’re bad at?” he recently asked in a post on X.
Many researchers and experts believe that LLMs can’t reason. But a majority of tasks performed by humans don’t necessarily involve novel creative thinking or effective reasoning either. Moreover, it isn’t fair to underestimate a reasoning approach that is derived purely and probabilistically from training data.
“Memorizing reasoning is the side effect of attempting to learn to score different reasoning paths, just like memorising answers / approximate retrieval is the side effect of training on the answers,” said Boyang ‘Albert’ Li, a professor at Nanyang Technological University in Singapore, adding that solving for reasoning will get us a long way.
Creativity With Caution
Chopra’s project, Tofu, explores a new paradigm for unleashing creativity using LLMs.
His findings suggest that LLMs often lack diversity in their outputs and struggle to strike a balance between repetitive and gibberish responses. When he added examples of the desired output inside prompts, the LLMs did achieve a certain degree of creativity.
“If you think LLMs are creative, it’s probably your own creativity at work,” he said. “You steer LLM to be creative when you prompt it intelligently, change the prompt, give it feedback or nudge it towards some concept space”.
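A minimal sketch of that kind of steering, assuming a hypothetical `complete` function in place of a real chat-completion API: resampling the few-shot examples on each call keeps the human’s choices, rather than the model, as the source of variation.

```python
import random

def creative_prompt(task: str, example_pool: list[str], k: int = 3) -> str:
    """Build a prompt whose few-shot examples are resampled on each call,
    nudging the model toward a different region of concept space every time."""
    shots = random.sample(example_pool, min(k, len(example_pool)))
    shot_block = "\n".join(f"- {s}" for s in shots)
    return (
        f"Here are some examples, deliberately varied in style:\n{shot_block}\n\n"
        f"Now, {task} Avoid repeating the examples' ideas."
    )

# Usage with a hypothetical `complete` API; a higher sampling temperature
# adds further diversity on top of the resampled examples:
# response = complete(creative_prompt("write a startup tagline.", pool),
#                     temperature=1.1)
```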
That said, expecting a highly creative LLM may not be the best idea after all, even though that is the trend the industry is heading towards. A more creative model may have the potential to form deeper bonds with the user, and in some cases, that may not be the right thing.
“As I’ve consistently pointed out, distributing a ‘magic box’ or a complex, opaque system to a wide audience is fraught with risks. The unpredictability of human behaviour, combined with the vast potential of AI, makes it nearly impossible to foresee every possible misuse,” shared Giada Pistilli, principal ethicist at Hugging Face, an open-source model hosting platform, in an interview with AIM. Her statement came against the backdrop of the tragic incident involving Character.AI, a risk OpenAI had also warned about earlier.
“Human-like socialisation with an AI model may produce externalities impacting human-to-human interactions. For instance, users might form social relationships with the AI, reducing their need for human interaction, potentially benefiting lonely individuals but possibly affecting healthy relationships,” said OpenAI in its system card, released earlier this year.
Of course, Chopra emphasised the importance of ethics and safety. He believes that an AI system does what it is optimised for: Character.AI was optimised to build human relationships, while OpenAI’s models are optimised to respond with factually correct answers.
While asserting the need for engineers to think about safety and ethics, Chopra said that an idea can sometimes seem so exciting that the question of how it would impact others isn’t the most obvious one to surface, even though it is extremely important to think about.