Architectural Decisions in China’s Open-Source AI Ecosystem: Building Beyond DeepSeek



By Adina Yakefu and Irene Solaiman

This is the second blog in a three-part series on the Chinese open source community’s advancements since January 2025’s “DeepSeek Moment.” The first blog is available here.

In this second piece, we turn our focus from models to the architectural and hardware choices Chinese companies have made as openness becomes the norm.

For AI researchers and developers contributing to and relying on the open source ecosystem, and for policymakers tracking the rapidly changing environment, architectural preferences, modality diversification, license permissiveness, small model popularity, and growing adoption of Chinese hardware point to leadership strategies across a multitude of paths. DeepSeek R1’s own characteristics inspired overlap and competition, and contributed to a heavier focus on domestic hardware in China.



Mixture of Experts (MoE) as the Default Choice

Over the past year, leading models from the Chinese community almost unanimously moved toward Mixture-of-Experts (MoE) architectures, including Kimi K2, MiniMax M2, and Qwen3. R1 itself was an MoE model, and it proved a crucial point: strong reasoning can be open, reproducible, and engineered in practice. Under China’s real-world constraints of maintaining high capability while controlling cost, and ensuring models can be trained, deployed, and widely adopted, MoE emerged as a natural solution.

MoE acts like a controllable compute-distribution system: under a single capability framework, compute resources are allocated across requests and deployment environments by dynamically activating different numbers of experts based on task complexity and cost. More importantly, it doesn’t require every inference to consume the full set of resources, nor does it assume that all deployment environments share identical hardware conditions.
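To make the routing idea concrete, here is a minimal top-k expert-routing layer in PyTorch. This is a toy sketch of the general technique, not the router of any particular model; the class name, dimensions, and expert count are invented for illustration.

```python
import torch
import torch.nn.functional as F

class TopKMoE(torch.nn.Module):
    """Toy mixture-of-experts layer: each token activates only k of n experts,
    so compute per token scales with k rather than with total parameters."""
    def __init__(self, d_model: int, n_experts: int, k: int = 2):
        super().__init__()
        self.gate = torch.nn.Linear(d_model, n_experts, bias=False)
        self.experts = torch.nn.ModuleList(
            [torch.nn.Linear(d_model, d_model) for _ in range(n_experts)]
        )
        self.k = k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (tokens, d_model). Score every expert, keep the top k per token.
        weights, idx = self.gate(x).topk(self.k, dim=-1)
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            for slot in range(self.k):
                mask = idx[:, slot] == e          # tokens routed to expert e
                if mask.any():
                    out[mask] += weights[mask, slot, None] * expert(x[mask])
        return out

# Usage: 16 tokens routed across 8 experts, only 2 active per token.
layer = TopKMoE(d_model=64, n_experts=8, k=2)
print(layer(torch.randn(16, 64)).shape)  # torch.Size([16, 64])
```

The same mechanism is what lets a deployment dial capability against cost: raising or lowering k changes per-token compute without touching the stored weights.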

The general direction of Chinese open-source models in 2025 was clear: not necessarily the strongest possible performance, but the ability to operate sustainably, deploy flexibly, and evolve continuously, achieving the best cost-performance balance.



The Rush for Supremacy by Modality

Starting in February 2025, open-source activity was no longer focused only on text models. It quickly expanded into multimodal and agent-based directions: Any-to-Any models, text-to-image, image-to-video, text-to-video, TTS, 3D, and agents all progressed in parallel. What the community pushed forward was not only model weights, but a full set of engineering assets, including inference deployment, datasets and evaluation, toolchains, workflows, and edge-to-cloud coordination. The parallel emergence of video generation tools, 3D components, distillation datasets, and agent frameworks pointed to something larger than isolated breakthroughs: reusable system-level capabilities.

The competition to lead in a non-text modality, much as DeepSeek did in text, heated up. StepFun released high-performance multimodal models, excelling in audio, video, and image generation, processing, and editing. Its latest speech-to-speech model Step-Audio-R1.1 boasts state-of-the-art performance, beating proprietary models. Tencent also reflected this shift through open-source work in video and 3D. Its Hunyuan Video models and projects such as Hunyuan 3D reflect growing competition beyond text-centric models.



Big Preferences for Small Models

Models in the 0.5B–30B range were easier to run locally, fine-tune, and integrate into business systems and agent workflows. For instance, among the Qwen series, Qwen 1.5-0.5B has the most derivative models. In environments with limited compute or strict compliance requirements, these models were much better suited to long-term operation. At the same time, leading players often used large MoE models in the 100B–700B range as capability ceilings or “teacher models,” then distilled those capabilities down into many smaller models (see the distillation sketch below). This created a clear structure: a few very large models at the top, and many practical models underneath. The growing share of small models in monthly summaries reflected real usage needs in the community.





Source: https://huggingface.co/spaces/cfahlgren1/hub-model-tree-stats
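The teacher-student pattern behind that structure can be illustrated with the classic soft-label distillation loss. A minimal sketch, assuming teacher and student logits over a shared vocabulary; the function name and temperature value are illustrative, not any lab’s actual recipe.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      temperature: float = 2.0) -> torch.Tensor:
    """Soft-label knowledge distillation: the student is trained to match
    the teacher's temperature-smoothed token distribution."""
    t = temperature
    soft_teacher = F.softmax(teacher_logits / t, dim=-1)
    log_student = F.log_softmax(student_logits / t, dim=-1)
    # Scale by t^2 so gradient magnitudes stay comparable across temperatures.
    return F.kl_div(log_student, soft_teacher, reduction="batchmean") * (t * t)

# Usage: a batch of 4 positions over a 32k-token vocabulary.
loss = distillation_loss(torch.randn(4, 32000), torch.randn(4, 32000))
print(loss.item())
```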



More Permissive Open Source Licenses

After R1, Apache 2.0 became nearly the default choice for open models from the Chinese community. More permissive licenses lowered the friction around using, modifying, and deploying models in production, making it much easier for companies to move open models into real systems. Familiarity with standard licenses, such as Apache 2.0 and MIT, similarly eased usage; prescriptive and tailored licenses add friction through unfamiliarity and new legal barriers, contributing to the decline seen in the graph below.

Based on the releases of all organizations shown in the Chinese Open Source Heatmap
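For developers, a model’s declared license is machine-readable Hub metadata, so it can be checked programmatically before adoption. A small sketch using huggingface_hub; the model ID is just an example.

```python
from huggingface_hub import model_info

# Read the license declared in the model card metadata on the Hub.
info = model_info("deepseek-ai/DeepSeek-R1")
license_tag = info.card_data.get("license") if info.card_data else None
print(license_tag)  # e.g. "mit" or "apache-2.0"
```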



From Model-First to Hardware-First

In 2025, model releases increasingly aligned with inference frameworks, quantization formats, serving engines, and edge runtimes. A prominent goal was no longer simply to make weights downloadable, but to ensure that models could run directly on target domestic hardware, and run reliably and efficiently. This change was most visible on the inference side. For instance, with DeepSeek-V3.2-Exp, both Huawei Ascend and Cambricon chips achieved day-zero support, not as cloud demos, but as reproducible inference pipelines released alongside the weights, enabling developers to validate real-world performance directly.
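To give a sense of what such a pipeline looks like to a developer, here is a minimal serving sketch with vLLM. The model ID is real, but the hardware backend (e.g. a vendor plugin such as vllm-ascend for Huawei Ascend) and the parallelism setting are assumptions for illustration, not the vendors’ published configuration.

```python
from vllm import LLM, SamplingParams

# Assumes a vLLM build with a backend for the target accelerator
# (e.g. the vllm-ascend plugin); tensor_parallel_size is illustrative.
llm = LLM(model="deepseek-ai/DeepSeek-V3.2-Exp", tensor_parallel_size=8)
params = SamplingParams(temperature=0.6, max_tokens=256)

outputs = llm.generate(["Summarize mixture-of-experts routing."], params)
print(outputs[0].outputs[0].text)
```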

At the same time, training-side signals began appearing. Ant Group’s Ling open models use optimized training on domestic AI chips to achieve near NVIDIA H800 performance, cutting the cost of training 1 trillion tokens by about 20%. Baidu’s open Qianfan-VL models clearly documented that the model was trained on a cluster of more than 5,000 Baidu Kunlun P800 accelerators, their flagship AI chip, with details on parallelization and efficiency. At the beginning of 2026, Zhipu’s GLM-Image and China Telecom’s new open model, TeleChat3, were each announced as being trained entirely on domestic chips. These disclosures showed that domestic compute was no longer limited to inference, but had begun to enter key stages of the training pipeline.

On the serving and infrastructure side, engineering capabilities are being systematically open-sourced. Moonshot AI released its serving system, Mooncake, which explicitly supports features such as prefill/decode separation. By open-sourcing production-grade experience, these efforts significantly raised the baseline for deployment and operations across the community, making it easier to run models reliably at scale. This direction was echoed across the ecosystem. Baidu’s FastDeploy 2.0 emphasized extreme quantization and cluster-level optimization to reduce inference costs under tight compute budgets. Alibaba’s Qwen ecosystem pursued full-stack integration, tightly aligning models, inference frameworks, quantization strategies, and cloud deployment workflows to minimize friction from development to production. Still, reports of compute constraints in China threaten expansion; Zhipu AI is reportedly restricting usage amid a computing crunch.
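To show the idea behind prefill/decode separation (a toy illustration of the concept, not Mooncake’s actual API), here is a sketch with transformers: the prefill step processes the whole prompt once and emits the KV cache, and the decode loop then generates token by token against that cache. In a disaggregated system the two phases run on different workers, and the cache is the artifact that gets transferred between them. The model ID is an arbitrary small example.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-0.5B"  # arbitrary small model for illustration
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

@torch.no_grad()
def prefill(prompt: str):
    """Compute-bound phase: process the full prompt once, emit the KV cache."""
    ids = tok(prompt, return_tensors="pt").input_ids
    out = model(ids, use_cache=True)
    next_id = out.logits[:, -1:].argmax(-1)    # first generated token (greedy)
    return ids, next_id, out.past_key_values   # the cache is the handoff artifact

@torch.no_grad()
def decode(ids, next_id, kv_cache, max_new_tokens=32):
    """Bandwidth-bound phase: generate token by token against the cache."""
    for _ in range(max_new_tokens):
        ids = torch.cat([ids, next_id], dim=-1)
        out = model(next_id, past_key_values=kv_cache, use_cache=True)
        kv_cache = out.past_key_values
        next_id = out.logits[:, -1:].argmax(-1)
    return tok.decode(ids[0], skip_special_tokens=True)

print(decode(*prefill("Prefill/decode separation means ")))
```

Because prefill is compute-bound and decode is memory-bandwidth-bound, scheduling them on separate pools of hardware lets each be provisioned for its actual bottleneck.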

When models, tools, and engineering are delivered together, the ecosystem no longer grows by adding projects, but by structurally differentiating on a shared foundation, and starting to evolve on its own. How China will respond to U.S. hardware sales and export controls as NVIDIA sells H200s is still an open question. Read more about the shifting global compute landscape here.



Reconstruction In Progress

The “DeepSeek Moment” of January 2025 did more than trigger a wave of new open models. It forced a deeper reconsideration of how AI systems should be built when open source is no longer optional but foundational, and why those underlying choices now carry strategic weight.

Chinese companies are no longer optimizing isolated models. Instead, they are pursuing distinct architectural paths aimed at building full ecosystems suited to an open-source world. In an increasingly commoditized model landscape, these decisions signal a clear shift in competition from model performance to system design.

Our next blog will go deeper into organizational wins and share some of what we expect to see in 2026.


