Remember when DeepSeek briefly shook up the entire artificial intelligence industry by launching its large language model, R1, that was trained for a fraction of the money that OpenAI and other big players were pouring into their models? Thanks to a new paper published by the DeepSeek AI team in the journal Nature, we finally know what it took to train DeepSeek R1: $294,000 and 512 Nvidia H800 chips. The reason it was able to spend less, it seems, is the team’s use of trial-and-error-based reinforcement learning techniques.
Most AI models built for reasoning tasks need to be trained on human-annotated data and demonstrations to “learn” how to solve certain problems, which is both expensive and time-consuming to scale as models are given more challenging tasks. DeepSeek found that it could improve its model’s reasoning and outputs simply by incentivizing it to run a trial-and-error process until it reaches the right answer.
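At its simplest, that incentive can be an automatic check: the model’s final answer is compared against a known-correct one and rewarded only when it matches. The short Python sketch below illustrates the idea under our own assumptions; the function name and the 1.0/0.0 reward values are illustrative, not DeepSeek’s actual implementation.

```python
# Illustrative sketch of a verifiable, rule-based reward: the model's attempt
# is scored automatically, with no human-written demonstration required.
# The names and reward values are assumptions, not DeepSeek's code.

def reward_for_answer(model_answer: str, reference_answer: str) -> float:
    """Return 1.0 if the model's final answer matches the known-correct one, else 0.0."""
    return 1.0 if model_answer.strip() == reference_answer.strip() else 0.0

# During training, the model tries an answer and the reward alone tells it
# whether that attempt was right or wrong.
print(reward_for_answer("42", "42"))  # 1.0 -- a correct attempt is reinforced
print(reward_for_answer("41", "42"))  # 0.0 -- a wrong attempt is not
```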
In an article accompanying the paper, Carnegie Mellon University assistant professor Daphne Ippolito and PhD student Yiming Zhang explain the reinforcement method by comparing it to a child playing a video game: “As the child navigates their avatar through the game world, they learn through trial and error that some actions (such as collecting gold coins) earn points, whereas others (such as running into enemies) set their score back to zero. In a similar vein, DeepSeek-R1 was awarded a high score when it answered questions correctly and a low score when it gave wrong answers.”
Previous research showed that using a prompting approach—asking an LLM to provide a step-by-step explanation of how it comes to its output—provides more accurate answers. But the DeepSeek team figured out a way to get better answers through reinforcement by assigning a scoring system to the outputs that R1 produced. That works particularly well with math and programming questions, which usually have a verifiably correct answer. By using this method instead of human-guided reasoning, the LLM was able to come to a correct conclusion on its own as it sought the higher scores.
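Programming questions show why this scoring approach fits so well: a candidate solution can be graded by simply running it against test cases, with no human judge in the loop. The sketch below is our own illustration of that kind of automatic scoring, not code from the paper; the `solve` function name and the pass-fraction scoring rule are assumptions.

```python
# Hedged illustration of scoring a generated program against known test cases.
# The reward is the fraction of tests passed; broken code scores zero.
# This is a toy under stated assumptions, not DeepSeek's reward pipeline.

def reward_for_program(program_source: str, test_cases: list[tuple[int, int]]) -> float:
    """Run a candidate solution against input/output pairs and return its score."""
    namespace: dict = {}
    try:
        exec(program_source, namespace)   # define the candidate's solve() function
        solve = namespace["solve"]
        passed = sum(1 for x, expected in test_cases if solve(x) == expected)
        return passed / len(test_cases)   # fraction of tests passed
    except Exception:
        return 0.0                        # code that errors out earns no reward

candidate = "def solve(x):\n    return x * 2"
print(reward_for_program(candidate, [(1, 2), (3, 6)]))  # 1.0 -- every test passes
```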
While this method appears to produce more accurate outputs, it also obfuscates the machine’s “thought” process a bit more for humans trying to follow along. Asked to produce a reasoning trail for its answer, the model would sometimes switch back and forth between English and Chinese, and it sometimes produced explanations of 10,000 words or more. The method also worked well only for questions with clear right or wrong answers, not for more nuanced or subjective prompts.
Regardless, it’s an interesting window into how DeepSeek has managed to be competitive on a smaller budget. Still, the company itself has plenty of skepticism surrounding it because of its perceived closeness to the Chinese government. Just recently, researchers showed The Washington Post that the company’s model would refuse to produce code, or would produce code with major security flaws, when the prompter indicates that they are working with groups considered sensitive by the Chinese government. The researchers also found that the model spat out less secure code when asked to produce work for Tibet, Taiwan, the Falun Gong religious movement, or the Islamic State.