Fri Feb 21 2025

Perplexity's new Deep Research tool is powered by DeepSeek R1.

Made possible by the open-source release of DeepSeek R1.

Perplexity has launched a research tool called Deep Research, built on a customized version of DeepSeek R1, an open-source model. The tool can conduct comprehensive research and analysis automatically, browsing the web and generating detailed reports on its findings.

The term "Deep Research" may sound familiar, as Google and OpenAI have developed their own versions of tools with the same name for Gemini and ChatGPT, respectively. Meanwhile, xAI has introduced the tool Deepsearch, though with a different approach. Unlike the proprietary solutions from Google and OpenAI, Perplexity's tool utilizes an open model, allowing developers to modify it according to their needs.

Perplexity's CEO, Aravind Srinivas, said the company's research model can offer these services at significantly lower prices, between 10 and 100 times less than its competitors. Shortly after confirming that the feature had been implemented, he also announced that Deep Research would be available for free, albeit with certain limitations: non-paying users get a limited number of daily responses, while subscribers to the $20 per month Pro plan enjoy unlimited access.

Although it builds on the DeepSeek R1 model, Perplexity has also released its own open-source version, called R1 1776, designed to provide objective and uncensored information. The move responds to accusations that the original model censored certain content, particularly content critical of the Chinese government.

However, the Deep Research tool is not perfect. It has been reported to make errors, such as incorrectly attributing the term "stochastic parrots" to researcher Gary Marcus when it was actually coined by Emily M. Bender. There are also concerns about the accuracy of the data the tool provides, which is particularly problematic given that it is marketed as a resource for investment and market analysis. Srinivas has expressed his commitment to improving the situation, emphasizing the importance of data accuracy in finance, a field where the impact of misinformation can be significant. All of this underscores how "hallucination" remains a persistent challenge for language models.