In an interesting observation, a user on Reddit's LocalLLaMA subreddit posted that Perplexity summarises the content of the top 5-10 results from Google Search.
“Search for the exact same thing on google and perplexity and compare the sources, they match 1:1,” the user said.
So perplexity is a wrapper on top of google search?
Gotta wrap them all! https://t.co/a14L5EfHQY— yi
(@agihippo) March 18, 2024
This means that Perplexity runs a Google Search for every user query, extracts the content from the top results, summarises it using an LLM, and returns the response to its users.
This is quite similar to what Google does with Gemini. The only difference is that Perplexity hosts different models, such as Claude 3, GPT-4 Turbo, and Mistral Large, for users to choose from. The real question is whether Google gave Perplexity permission to do so.
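The pipeline described here — search, extract, summarise — can be sketched in a few lines. This is a minimal illustration, not Perplexity's actual code: every function below is a hypothetical stand-in, where a real system would call a search API, fetch and parse each page, and prompt an LLM for the final answer.

```python
# Minimal sketch of the search -> extract -> summarise pipeline described above.
# All three stage functions are hypothetical stand-ins, not real APIs.

def web_search(query: str, top_k: int = 5) -> list[str]:
    """Stand-in for a search call returning the top result URLs."""
    return [f"https://example.com/result/{i}" for i in range(top_k)]

def extract_text(url: str) -> str:
    """Stand-in for fetching a page and stripping it down to plain text."""
    return f"Extracted text of {url}"

def summarise(query: str, documents: list[str]) -> str:
    """Stand-in for an LLM call that answers the query from the documents,
    citing each source by its index."""
    citations = " ".join(f"[{i + 1}]" for i in range(len(documents)))
    return f"Answer to {query!r} based on {len(documents)} sources {citations}"

def answer(query: str, top_k: int = 5) -> str:
    """Run the full pipeline: search, extract each result, summarise with citations."""
    urls = web_search(query, top_k)
    docs = [extract_text(u) for u in urls]
    return summarise(query, docs)
```

The numbered citations are the detail that makes this a "search wrapper with references" rather than a plain chatbot: the summary stage is constrained to the retrieved documents and points back at them.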
Another user pointed out that this is an oversimplification of what Perplexity actually offers. “The Co-pilot tool helps to refine or expand the search. A simple search = google result because it’s the base case,” the user explained. But this only explains why Perplexity can be a faster choice.
Several users have been discussing this for a long time on Reddit and X, saying that it is just a front end for web search that summarises results and provides references. “why not just use OpenAI rather than its wrapper,” said one user, adding that there is no innovation in the product.
“Isn’t this always the case?” asked another user, pointing out that Bing Chat and You.com are also just LLMs summarising web search results. “The ultimate RAG application. For the search index, they all licence Bing under the hood,” he explained.
The post Perplexity is Most Likely a Google Search Wrapper appeared first on Analytics India Magazine.