DeepSeek AI could be smarter than OpenAI's most advanced AI, and it's already available for you to try.
Notably, DeepSeek R1 is also open source.
A new competitor has emerged in the field of artificial intelligence, and it's worth paying attention to. The Chinese company DeepSeek recently introduced its open-source large language model, DeepSeek R1. According to the company, the new model outperforms other popular LLMs, including OpenAI's, on several important benchmarks, excelling at tasks involving mathematics, programming, and reasoning.
DeepSeek R1 is an evolution of DeepSeek R1 Zero, which was trained without the conventional step known as supervised fine-tuning. Although Zero proved highly capable at certain tasks, DeepSeek described it as suffering from "poor readability and language mixing." R1 addresses these limitations by incorporating "multi-stage training and cold-start data" before being refined through reinforcement learning.
Among DeepSeek R1's most notable features is its open-source nature, which allows for expert review and may ease concerns about privacy and security. It is also free to use as a web app, and API access is very affordable: $0.14 per million input tokens, compared with the $7.50 OpenAI charges for its most powerful model.
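For readers who want to try the API themselves, the sketch below shows roughly what a call to DeepSeek R1 looks like. It assumes DeepSeek's OpenAI-compatible endpoint and the openai Python SDK; the base URL, the "deepseek-reasoner" model name, and the prompt are illustrative assumptions rather than details taken from this article.

```python
# A minimal sketch of calling DeepSeek R1 over its OpenAI-compatible API.
# The base URL and "deepseek-reasoner" model name follow DeepSeek's public
# documentation; treat them, and the example prompt, as illustrative.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_DEEPSEEK_API_KEY",      # key issued from the DeepSeek platform
    base_url="https://api.deepseek.com",  # DeepSeek's OpenAI-compatible endpoint
)

response = client.chat.completions.create(
    model="deepseek-reasoner",  # the R1 reasoning model
    messages=[
        {
            "role": "user",
            "content": "Outline, step by step, how to build a small web page "
                       "that shows travel and weather information.",
        }
    ],
)

print(response.choices[0].message.content)
```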
DeepSeek R1's performance is remarkable. In one test, the author asked the model to build a fairly complex web application that had to analyze public data and generate a dynamic site with travel and weather information. DeepSeek produced entirely acceptable HTML on the first attempt and then improved the site in response to feedback, optimizing the code along the way.
Furthermore, when asked for tips on improving chess skills in five minutes, the model offered a series of useful suggestions, though the author admits they lacked the motivation to put them into practice. And when the author tried to probe DeepSeek's intelligence with three statements, they were surprised to find they lacked the knowledge to evaluate, or even fully understand, the responses.
Despite this impressive performance, DeepSeek's training costs are reportedly far lower than those of competing models. This suggests that sophisticated AI capable of reasoning does not have to be expensive to train or to use.