Zhipu AI breaks US chip reliance with first major model trained on Huawei stack
Zhipu claims GLM-Image achieved industry-leading scores among open-source models for text rendering and Chinese character generation

Chinese artificial intelligence firm Zhipu AI said its new image generation model was trained on chips from Huawei Technologies, making it the first major open-source model to be developed on an entirely domestic training stack.
According to Zhipu, the entire training pipeline for GLM-Image, from data preparation to the final training run, was conducted on Huawei’s Ascend Atlas 800T A2 servers, which are built around the company’s in-house Ascend AI processors, using MindSpore, Huawei’s machine learning framework.
“We hope this can provide valuable reference for the community to explore the potential of domestic computing power,” Zhipu said.
Powerful multimodal AI models that can natively process text, voice, image and video are widely seen by industry experts as the next frontier of AI models.

Zhipu’s model has a hybrid architecture combining autoregressive and diffusion components, a design that enables the kind of multimodal capabilities pioneered by Google DeepMind’s Nano Banana Pro, which can accurately generate both images and the text within them.