This is the first blog in a series examining the historic advances of China's open source community over the past year and their reverberations in shaping the entire ecosystem. Much of 2025's progress can be traced back to January's "DeepSeek Moment," when Hangzhou-based AI company DeepSeek released its R1 model.
This first blog addresses strategic changes and the explosion of new open models and open source players. The second will cover the architectural and hardware choices made, largely by Chinese companies, in the wake of a growing open ecosystem. The third will analyze prominent organizations' trajectories and the future of the global open source ecosystem.
For AI researchers and developers contributing to and relying on the open source ecosystem, and for policymakers tracking this rapidly changing environment, there has never been a better time to build and release open models and artifacts, as demonstrated by the past year's immense growth catalyzed by DeepSeek. Notably, geopolitics has driven adoption: while models developed in China dominated across metrics throughout 2025, with new players leapfrogging one another, Western AI communities have been searching for commercially deployable alternatives.
The Seeds of China’s Organic Open Source AI Ecosystem
Before R1, China's AI industry was still largely centered on closed models. Open models had existed for years, but they were mostly confined to research communities or used only in niche scenarios such as privacy-sensitive applications. For many companies, they were not the default choice. Compute resources were tight, and whether to "open or close" was a subject of debate.
DeepSeek's R1 model lowered the barrier to advanced AI capabilities and offered a clear pattern to follow, unlocking a second layer of development. Furthermore, the release gave Chinese AI development something extremely valuable: time. It showed that even with limited resources, rapid progress was still possible through open source and fast iteration. This approach aligned naturally with the goals set out in China's 2017 "AI+" strategy: combining AI with industry as early as possible, while continuing to accumulate compute capability over the long run.
One year after the release of R1, what we see emerging is not only a set of new models, but also a growing, organic open source AI ecosystem.
DeepSeek R1: A Turning Point
For the first time, an open model from China entered the global mainstream rankings and, over the following year, was repeatedly used as a reference point when new models were released. DeepSeek's R1 quickly became the most liked model on Hugging Face of all time, and the top liked models are no longer majority U.S.-developed.
But R1's real significance was not whether it was the strongest model at the time; its importance lay in how it lowered three barriers.
The first was the technical barrier. By openly sharing its reasoning paths and post-training methods, R1 turned advanced reasoning, previously locked behind closed APIs, into an engineering asset that could be downloaded, distilled, and fine-tuned. Many teams no longer needed to train massive models from scratch to gain strong reasoning capabilities. Reasoning began to behave like a reusable module, applied again and again across different systems. This also pushed the industry to rethink the relationship between model capability and compute cost, a shift that was especially meaningful in a compute-constrained environment like China.
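To make "reasoning as a reusable module" concrete, here is a minimal sketch of one common distillation step: packaging a teacher model's reasoning trace into a supervised fine-tuning record for a smaller student. The record layout (a chat-style `messages` list with the trace wrapped in `<think>` tags, the format R1-style models emit) is an illustrative assumption, not DeepSeek's exact pipeline.

```python
def make_distillation_record(question: str, reasoning: str, answer: str) -> dict:
    """Build one hypothetical fine-tuning example from a teacher's trace.

    The target wraps the reasoning in <think> tags so the student learns to
    produce the trace before the final answer, mimicking R1-style outputs.
    """
    target = f"<think>\n{reasoning}\n</think>\n{answer}"
    return {
        "messages": [
            {"role": "user", "content": question},
            {"role": "assistant", "content": target},
        ]
    }

# Example: one record built from a (hypothetical) teacher trace.
record = make_distillation_record(
    question="What is 17 * 24?",
    reasoning="17 * 24 = 17 * 20 + 17 * 4 = 340 + 68 = 408.",
    answer="408",
)
print(record["messages"][1]["content"].startswith("<think>"))  # True
```

A corpus of such records can then be fed to any standard fine-tuning toolchain, which is precisely what made reasoning feel like a downloadable, reusable asset rather than a frontier research result.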
The second was the adoption barrier. R1 was released under the MIT license, making its use, modification, and redistribution straightforward. Companies that had relied on closed models began bringing R1 directly into production. Distillation, secondary training, and domain-specific adaptation became routine engineering work rather than special projects. As distribution constraints fell away, the model quickly spread into cloud platforms and toolchains, and community discussions shifted from "which model scores higher" to "how do we deploy it, reduce cost, and integrate it into real systems." Over time, R1 moved beyond being a research artifact and became a reusable engineering foundation.
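The "how do we deploy it" shift is visible in how routine serving became. As one hedged sketch of such a deployment, an open-weight distilled R1 variant can be put behind an OpenAI-compatible API with vLLM; the model ID and flag values below are illustrative, and the full R1 model needs far more hardware than a single-GPU setup like this.

```shell
# Illustrative deployment sketch, not an official recipe.
pip install vllm

# Distilled R1 variants are small enough for a single GPU;
# this exposes an OpenAI-compatible endpoint on port 8000.
vllm serve deepseek-ai/DeepSeek-R1-Distill-Qwen-7B \
    --max-model-len 8192 \
    --port 8000
```

Because the weights are MIT-licensed, this kind of self-hosted serving, plus fine-tuning on top of it, requires no commercial negotiation, which is exactly what moved the discussion from benchmarks to integration.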
The third change was psychological. When the question shifted from "can we do this?" to "how do we do this well?", decision-making across many companies changed. For the Chinese AI community, this was also a rare moment of sustained global attention, one that mattered deeply to an ecosystem long seen mainly as a follower.
Together, the lowering of these three barriers meant that the ecosystem began to gain the ability to replicate itself.
From DeepSeek to AI+: Strategic Realignment
Once open source moved into the mainstream, a natural question followed: how would Chinese companies' strategies change? Over the past year, the answer became clear: competition began to shift from model-to-model comparisons toward system-level capabilities.
Compared with 2024, the period after the release of R1 saw China's AI landscape settle into a new pattern. Large technology companies took the lead, startups followed quickly, and firms from vertical industries increasingly entered the field. While their paths differed, a shared understanding gradually emerged, especially among leading players: open source was not a short-term tactic, but part of a long-term competitive strategy.
The number of competitive Chinese organizations releasing cutting-edge models and repositories skyrocketed. As reflected in the Hugging Face repository growth of Chinese companies, the number of open releases from existing giants substantially increased, with Baidu going from zero releases on Hugging Face in 2024 to over 100 in 2025, and others such as ByteDance and Tencent increasing releases eight- to ninefold. An influx of newly open organizations released highly performant models, with Moonshot's open release, Kimi K2, hailed as "another DeepSeek moment."
Releases became stronger and more frequent, with performant models landing on a weekly basis; newly created Chinese models consistently ranked among the most liked and most downloaded new models on Hugging Face each week. The Top Newly Created Models by Week chart on Hugging Face shows new repositories labeled by organization location, or by the base model's organization location for popular derivatives.
As seen in Hugging Face's heatmap data, between February and July 2025, open releases from Chinese companies became noticeably more active. Baidu and Moonshot moved from primarily closed approaches toward open release. Zhipu AI's GLM and Alibaba's Qwen went a step further, expanding from simply publishing model weights to building engineering systems and ecosystem interfaces. At this stage, comparing raw model performance alone was no longer enough to win. Competition increasingly centered on ecosystems, application scenarios, and infrastructure.
This strategy has been effectively successful: among newly created models (less than one year old), downloads of Chinese models have surpassed those of any other country, including the U.S.
Chinese AI players are not coordinating by agreement, but by constraint. What looks like collaboration is better understood as alignment under shared technical, economic, and regulatory pressures. This does not mean companies formed cooperative alliances. Rather, under similar constraints around compute, cost, and compliance, they began competing along similar technical foundations and engineering paths. When competition takes place on comparable system structures, the ecosystem starts to show the ability to spread and grow on its own. Tech leaders from Zhipu AI (Z.ai), Moonshot AI, Alibaba's Qwen, and Tencent converging on shared questions is rarely seen in other countries.
Global Reception and Response
Positive sentiment toward open source adoption and development has increased worldwide, particularly in the U.S., with broader recognition that open source leadership is critical to global competitiveness.
DeepSeek has been heavily adopted in global markets, especially in Southeast Asia and Africa. In these markets, factors such as multilingual support, open-weight availability, and cost considerations have supported enterprise use.
Western organizations often seek non-Chinese models for business deployment. Major releases from U.S. organizations such as OpenAI's gpt-oss, AI2's Olmo, and Meta's Llama 4 received strong community engagement. Reflection AI announced its efforts to build frontier American open-weight models. In France, Mistral released its Mistral Large 3 family, continuing to develop its open source roots.
At the same time, major releases in the West build on Chinese models: in November 2025, Deep Cogito released Cogito v2.1 as the leading U.S. open-weight model. The model was a fine-tuned version of DeepSeek-V3. Startups and researchers globally who use open-weight models often default to, if not rely on, models developed in China.
The American Truly Open Model (ATOM) project cites DeepSeek and China's model momentum as a motivator for concerted efforts toward leading in open-weight model development. The project emphasizes the need for multiple efforts, and its research also highlights the heavy early adoption of OpenAI's gpt-oss.
The world is still responding, with a new open source fervor. 2026 is shaping up for major releases, especially from China and the U.S. Of high relevance are the architectural trends, hardware choices, and organizational directions that will be covered next in this series.
All data represented is sourced from Hugging Face. For more related data and analyses of 2025 in open source, we encourage you to read the Data Provenance Initiative and Hugging Face's Economies of Open Intelligence: Tracing Power & Participation in the Model Ecosystem, aiWorld's Open Source AI Year in Review 2025, and Interconnects's 8 plots that explain the state of open models.