
Breaking Nvidia's Monopoly: How GLM-Image and Huawei's Ascend Chip Topped the Global AI Charts

Kling AI

On January 14, a seismic shift occurred in the global artificial intelligence landscape, catching the attention of industry players and capital markets worldwide. GLM-Image, a multimodal image generation model jointly developed by Zhipu AI and Huawei, ascended to the number one spot on the Hugging Face Trending list.

For the uninitiated, Hugging Face is essentially the "World Expo" of open-source models—a central hub where international giants and developers alike showcase their best AI tools. Topping its Trending list is akin to taking center stage at the world's premier tech conference, signifying international recognition of GLM-Image's technical prowess and application value.

[Image: CNBC reporting on Chinese AI adapting without Nvidia]

U.S. media outlet CNBC noted that this advanced model, trained by Zhipu and Huawei, effectively "breaks the myth" of reliance on U.S. chips. This achievement is not accidental; it is the inevitable result of deep software-hardware synergy and a breakthrough across China's entire domestic AI industrial chain.

The "Full-Stack" Foundation: Huawei Ascend & MindSpore

The critical support behind this achievement is the domestic computing power foundation built by Huawei.

Unlike most previous AI models, which relied heavily on foreign GPUs (primarily Nvidia) for training, GLM-Image ran its entire lifecycle, from data preprocessing to massive-scale training, on Huawei's Atlas 800T A2 training servers (built on Ascend chips) and the MindSpore AI framework.

This fully autonomous "hardware + framework" combination is the real story here. It addresses the core "chokepoint" problem in AI development, proving that training state-of-the-art (SOTA) models is possible without relying on the CUDA ecosystem. The Ascend 910B series (which powers the Atlas 800T A2) has demonstrated formidable performance in large cluster environments, offering a viable alternative for the global open-source community.
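To make the "hardware + framework" pairing concrete, here is a minimal MindSpore sketch (not from the GLM-Image codebase; the toy network is ours) showing how computation is routed to Ascend silicon instead of CUDA. The only Ascend-specific line is the device_target setting:

```python
import numpy as np
import mindspore as ms
import mindspore.nn as nn

# Route all computation to Ascend NPUs rather than CUDA GPUs.
# On a machine without Ascend hardware, use device_target="CPU".
ms.set_context(mode=ms.GRAPH_MODE, device_target="Ascend")

class TinyNet(nn.Cell):
    """A toy two-layer network; stands in for a real model."""
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Dense(16, 32)
        self.relu = nn.ReLU()
        self.fc2 = nn.Dense(32, 4)

    def construct(self, x):
        return self.fc2(self.relu(self.fc1(x)))

net = TinyNet()
x = ms.Tensor(np.random.randn(8, 16).astype(np.float32))
print(net(x).shape)  # (8, 4), computed on the Ascend device
```

Swapping device_target between "Ascend", "GPU", and "CPU" is the extent of the porting work at this level, which is what makes the framework a drop-in alternative for model code.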

Deconstructing the Architecture: Why AR + Diffusion Matters

Zhipu AI also achieved significant innovation in the model's architecture. GLM-Image departed from the standard technical routes used by many Western open-source models.

Instead, it utilizes a hybrid "Autoregressive (AR) + Diffusion Decoder" architecture (a toy sketch of the control flow follows the list below):

  • The "Brain" (Autoregressive): A 9B parameter AR model handles understanding complex instructions, layout planning, and text generation within images.
  • The "Painter" (Diffusion): A 7B parameter diffusion model acts as the decoder, filling in high-fidelity details based on the AR model's blueprint.
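That two-stage control flow can be sketched in code. The names and mechanics below are hypothetical stand-ins (GLM-Image's actual internals are far more involved); the sketch only shows the division of labor:

```python
import numpy as np

rng = np.random.default_rng(0)

def ar_plan(prompt: str, n_tokens: int = 16) -> np.ndarray:
    """Stage 1 (the 'brain'): autoregressively emit blueprint tokens.

    A real 9B AR model would attend to the prompt and all previously
    emitted tokens; here each step is a stand-in random draw.
    """
    tokens = []
    for _ in range(n_tokens):
        # next_token = model(prompt, tokens)  <- the real AR step
        tokens.append(rng.integers(0, 1024))
    return np.array(tokens)

def diffusion_decode(blueprint: np.ndarray, steps: int = 10) -> np.ndarray:
    """Stage 2 (the 'painter'): denoise an image conditioned on the plan.

    A real 7B diffusion decoder predicts and removes noise at each step;
    here the 'denoiser' just nudges noise toward a blueprint-derived target.
    """
    img = rng.standard_normal((64, 64, 3))                  # start from pure noise
    target = np.tile(blueprint[:3] / 1024.0, (64, 64, 1))   # toy conditioning
    for _ in range(steps):
        img = img + 0.3 * (target - img)                    # one toy denoising step
    return img

blueprint = ar_plan("a shop sign reading 'OPEN'")
image = diffusion_decode(blueprint)
print(blueprint[:5], image.shape)  # plan tokens, then a (64, 64, 3) image
```

Even in the toy, the key property is visible: the blueprint is fixed before any pixels exist, so layout and text decisions belong to the AR stage rather than the denoiser.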

This approach solves a notorious pain point in AI image generation: rendering accurate text. Previously, AI-generated images often featured garbled, unreadable text. Thanks to the AR component's strong cognitive capabilities, GLM-Image achieved the highest accuracy in Chinese character generation among open-source models.

This technical path—prioritizing cognitive understanding before generation—mirrors the approach seen in advanced cognitive reasoning models like Nano Banana Pro, which centers on "knowledge + reasoning" to handle complex tasks with greater precision than standard generative models.

Market Reaction: The Rise of Knowledge Atlas (2513.HK)

The "gold standard" credibility of topping the global chart was immediately reflected in capital markets. When news of GLM-Image's open-sourcing first broke, the stock price of Zhipu AI's listed parent, Knowledge Atlas (2513.HK), surged over 16% in a single day. Investors clearly recognized the long-term value of the "domestic chip + autonomous model" combination.

[Image: Zhipu AI stock trend and GLM-Image trending on Hugging Face]

In fact, since listing on the Hong Kong Stock Exchange on January 8 as the "first global large model stock," Knowledge Atlas has seen its share price increase by over 100%.

Democratizing AI Design: Open Source for All

From a long-term perspective, GLM-Image's success is driven by the synergy of an entire industrial chain. This full-chain capability doesn't just serve tech giants; it significantly lowers barriers for small and medium-sized enterprises (SMEs).

With inference costs as low as RMB 0.1 (approx. $0.01 USD) per image, GLM-Image allows businesses to utilize top-tier AI design tools at a fraction of traditional costs.
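At that price the unit economics are easy to check. A back-of-the-envelope script (the exchange rate and monthly volume below are our assumptions, not figures from Zhipu):

```python
# Back-of-the-envelope inference cost at RMB 0.1 per image.
COST_PER_IMAGE_RMB = 0.1
RMB_PER_USD = 7.2            # assumed rate; adjust to the current one
images_per_month = 50_000    # illustrative SME workload

monthly_rmb = COST_PER_IMAGE_RMB * images_per_month
monthly_usd = monthly_rmb / RMB_PER_USD
print(f"{images_per_month:,} images/month = RMB {monthly_rmb:,.0f}"
      f" (about ${monthly_usd:,.0f} USD)")
# 50,000 images/month = RMB 5,000 (about $694 USD)
```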

Today, the open-source code and weights for GLM-Image are available simultaneously on GitHub and Hugging Face. Developers worldwide can now freely use this "fully autonomous solution," breaking the traditional narrative that cutting-edge model training depends solely on U.S. silicon.
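For developers who want to try it, pulling open weights from Hugging Face usually takes a few lines with the official huggingface_hub client. The repo id below is a placeholder; check the actual GLM-Image model card for the real one:

```python
from huggingface_hub import snapshot_download

# Placeholder repo id -- substitute the real one from the GLM-Image model card.
local_dir = snapshot_download(repo_id="zhipu-ai/GLM-Image")
print("Model weights downloaded to:", local_dir)
```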

