DeepSeek rattled the US-led AI ecosystem with its latest models, wiping billions off Nvidia’s market capitalization. While the sector’s leaders grapple with the fallout, smaller AI companies see an opportunity to scale with the Chinese startup.
Several AI-related companies told CNBC that the emergence of DeepSeek is not a threat but a “big” opportunity for them.
“Developers want to replace OpenAI’s expensive and closed models with open-source models like DeepSeek R1…” said Andrew Feldman, CEO of artificial intelligence chip startup Cerebras Systems.
The company competes with Nvidia’s graphics processing units and offers cloud-based services through its own compute clusters. Feldman said the release of the R1 model produced one of the biggest spikes in demand for its services the company has ever seen.
“R1 shows that (AI market) growth will not be dominated by a single company. Hardware and software moats do not exist for open-source models,” Feldman added.
Open source refers to software whose source code is made freely available on the web. Unlike those of competitors such as OpenAI, DeepSeek’s models are open source.
DeepSeek also claims its R1 reasoning model rivals the best of American technology despite running at lower cost and being trained without cutting-edge graphics processing units, though industry watchers and competitors have questioned these claims.
“Lower prices can help drive global adoption, as they did in the PC and internet markets. The AI market is on a similar secular growth path,” Feldman said.
Inference chips
DeepSeek could boost the adoption of new chip technologies by accelerating the AI cycle from the training phase to the “inference” phase, chip startups and industry experts said.
Inference refers to the act of using and applying AI to make predictions or decisions based on new information, rather than building or training a model.
“Simply put, AI training is about building a tool or algorithm, while inference is about actually deploying this tool for use in real applications,” said Phelix Lee, an equity analyst at Morningstar who focuses on semiconductors.
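To make that distinction concrete, here is a minimal Python sketch. It is purely illustrative: the toy model and data are hypothetical stand-ins for the far larger models the article discusses. Training repeatedly adjusts a model’s parameters to fit known examples, while inference is a single, much cheaper pass with those parameters frozen.

```python
# A minimal sketch (not DeepSeek's code) of the training/inference split:
# "training" fits a model's parameters over many passes, while
# "inference" applies the frozen model to new inputs. The model and
# data here are hypothetical toy examples.

def train(data, epochs=1000, lr=0.01):
    """Training: repeatedly adjust parameters to fit known examples."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in data:
            pred = w * x + b
            err = pred - y
            # Gradient descent step: this loop is the compute-heavy phase.
            w -= lr * err * x
            b -= lr * err
    return w, b

def infer(params, x):
    """Inference: one cheap forward pass with frozen parameters."""
    w, b = params
    return w * x + b

# Train once on known examples (here, y = 2x + 1), then deploy.
model = train([(1, 3), (2, 5), (3, 7)])
print(infer(model, 10))  # roughly 21
```

The asymmetry is the point: the training loop runs thousands of parameter updates, while each inference call is a single multiply-add, which is why inference can be served on cheaper, narrower-purpose hardware.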
Nvidia holds a dominant position in the GPUs used for AI training, but many competitors see room to expand in the “inference” segment, where they promise higher efficiency at lower cost.
While AI training is extremely compute-intensive, inference can run on less powerful chips that are programmed to perform a narrower range of tasks, Lee added.
Many AI chip startups told CNBC that demand for inference chips and computing is growing as clients adopt and build on DeepSeek’s open source model.
“(DeepSeek) has demonstrated that smaller open models can be trained to be as capable or more capable than larger proprietary models, and this can be done at a fraction of the cost,” said Sid Sheth, CEO of AI chip startup d-Matrix.
“With the broad availability of small, highly capable models, the era of inference has been catalyzed,” he told CNBC, adding that the company has recently seen a surge of interest from global customers looking to speed up their inference plans.
Robert Wachen, co-founder and COO of AI chipmaker Etched, said dozens of companies have reached out to the startup since DeepSeek released its reasoning models.
“Companies are (now) shifting their spend from training clusters to inference clusters,” he said.
“DeepSeek-R1 proved that inference-time compute is now the (state-of-the-art) approach for every major model vendor, and thinking isn’t cheap. We’ll only need more and more compute capacity to scale these models for millions of users.”
Jevons paradox
Analysts and industry experts broadly agree that DeepSeek’s rise is a boon for AI inference and the wider AI chip industry.
“DeepSeek’s performance appears to be based on a series of engineering innovations that significantly reduce inference costs while also improving training cost,” according to a report from Bain & Company.
“In a bullish scenario, ongoing efficiency improvements would lead to cheaper inference, spurring greater AI adoption,” the report continued.

This pattern reflects Jevons paradox, the theory that as a new technology becomes cheaper, demand for it increases rather than falls.
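As a back-of-the-envelope sketch of why, consider the arithmetic below. The figures are invented purely for illustration, not market data: if an efficiency gain cuts unit costs 10x but cheaper access draws 20x more usage, total spending on compute rises rather than falls.

```python
# Hypothetical figures, for illustration only (not market data):
# an efficiency gain cuts the cost per million AI tokens 10x, and the
# lower price draws 20x more usage. Total spend rises, which is the
# outcome Jevons paradox predicts.

old_cost = 10.00   # assumed dollars per million tokens, before
new_cost = 1.00    # after a 10x efficiency gain (assumed)

old_usage = 100    # assumed baseline demand, millions of tokens
new_usage = 2000   # demand at the lower price, 20x higher (assumed)

old_spend = old_cost * old_usage   # $1,000
new_spend = new_cost * new_usage   # $2,000

print(f"Spend before: ${old_spend:,.0f}; after: ${new_spend:,.0f}")
# Spend before: $1,000; after: $2,000
```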
Financial services and investment firm Wedbush said in a research note last week that it continues to expect AI use across businesses and retail consumers around the world to drive demand.
Speaking on CNBC’s “Fast Money” last week, Sunny Madra, COO of Groq, which develops chips for AI inference, suggested there is room for smaller players to grow as overall demand for AI increases.
“As the world is going to need more tokens (units of data that an AI model processes) and Nvidia can’t supply enough chips to everyone, it gives us opportunities to sell into the market even more aggressively,” Madra said.