SOFTWARE

Alibaba releases Wan2.2 to uplift cinematic video production

The Wan2.2 series features the text-to-video model Wan2.2-T2V-A14B, the image-to-video model Wan2.2-I2V-A14B, and Wan2.2-TI2V-5B, a hybrid model that supports both text-to-video and image-to-video generation within a single unified framework.

Alibaba has released Wan2.2, the industry's first open-source large video generation models built on the Mixture-of-Experts (MoE) architecture, designed to significantly elevate the ability of creators and developers to produce cinematic-style videos with a single click.


Built on the MoE architecture and trained on meticulously curated aesthetic data, Wan2.2-T2V-A14B and Wan2.2-I2V-A14B generate videos with cinematic-grade quality and aesthetics, offering creators precise control over key dimensions such as lighting, time of day, color tone, camera angle, frame size, composition, and focal length.

The two MoE models also demonstrate significant enhancements in producing complex motions, including vivid facial expressions, dynamic hand gestures, and intricate sports movements. Additionally, the models deliver realistic representations with enhanced instruction following and adherence to physical laws.

To address the high computational cost of video generation caused by long token sequences, Wan2.2-T2V-A14B and Wan2.2-I2V-A14B implement a two-expert design in the denoising process of their diffusion models: a high-noise expert that focuses on overall scene layout and a low-noise expert that refines details and textures. Though both models comprise 27 billion parameters in total, only 14 billion parameters are activated per step, reducing computational consumption by up to 50%.
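The routing idea described above can be sketched as follows. This is a minimal illustration, not Wan2.2's actual implementation: the switching threshold, step count, and expert names are all assumptions. The point is that each denoising step activates only one expert, selected by the current noise level.

```python
SNR_SWITCH = 0.5  # assumed noise-level threshold separating the two regimes

def select_expert(timestep: int, num_steps: int) -> str:
    """Route one denoising step to the high- or low-noise expert.

    Early steps (high noise) shape the overall scene layout; late steps
    (low noise) refine details and textures. Only the selected expert's
    parameters would be activated for this step.
    """
    # Noise level decays from 1.0 at the first step to 0.0 at the last.
    noise_level = 1.0 - timestep / (num_steps - 1)
    return "high_noise_expert" if noise_level > SNR_SWITCH else "low_noise_expert"

# Over a 50-step schedule, the high-noise expert handles the first half
# and the low-noise expert handles the second half.
schedule = [select_expert(t, 50) for t in range(50)]
```

In a real MoE diffusion model the boundary is chosen from the noise schedule rather than a fixed constant, but the single-expert-per-step activation pattern is what keeps the per-step compute at roughly half the total parameter count.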


Wan2.2 incorporates fine-grained aesthetic tuning through a cinematic-inspired prompt system that categorizes key dimensions such as lighting, illumination, composition, and color tone. This approach enables Wan2.2 to accurately interpret and convey users’ aesthetic intentions during the generation process.
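A prompt system of this kind can be pictured as a small vocabulary grouped by aesthetic dimension and appended to the user's text prompt. The dimension names and keywords below are illustrative assumptions, not Wan2.2's actual prompt schema.

```python
# Hypothetical cinematic prompt vocabulary, grouped by aesthetic dimension.
AESTHETIC_DIMENSIONS = {
    "lighting": ["soft light", "hard light", "rim light"],
    "time_of_day": ["golden hour", "blue hour", "night"],
    "color_tone": ["warm tone", "cool tone", "high saturation"],
    "composition": ["rule of thirds", "symmetrical", "close-up"],
}

def build_prompt(subject: str, **choices: str) -> str:
    """Append one validated keyword per chosen aesthetic dimension."""
    parts = [subject]
    for dim, keyword in choices.items():
        if keyword not in AESTHETIC_DIMENSIONS.get(dim, []):
            raise ValueError(f"unknown keyword {keyword!r} for dimension {dim!r}")
        parts.append(keyword)
    return ", ".join(parts)

prompt = build_prompt("a lighthouse on a cliff",
                      lighting="rim light", time_of_day="golden hour")
```

Categorizing keywords this way lets the model learn a consistent mapping from each dimension's vocabulary to a visual attribute, which is what makes the user's aesthetic intent controllable rather than incidental.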

To enhance generalization capabilities and creative diversity, Wan2.2 was trained on a substantially larger dataset, featuring a 65.6% increase in image data and an 83.2% increase in video data compared to Wan2.1. Wan2.2 demonstrates enhanced performance in producing complex scenes and motions, as well as an enhanced capacity for artistic expression.

A Compact Model to Enhance Efficiency and Scalability

Wan2.2 also introduces the hybrid model Wan2.2-TI2V-5B, a dense model that uses a high-compression 3D VAE architecture to achieve a temporal and spatial compression ratio of 4x16x16, raising the overall information compression rate to 64. TI2V-5B can generate a 5-second 720P video in several minutes on a single consumer-grade GPU, offering efficiency and scalability to developers and content creators.
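The per-axis compression figures translate directly into the latent shape the model actually denoises. The sketch below applies the quoted 4x16x16 ratios to an assumed 5-second, 24 fps clip at 1280x720; the frame rate is an assumption, not stated in the release.

```python
# Temporal and spatial compression ratios quoted for the Wan2.2-TI2V-5B 3D VAE.
T_RATIO, H_RATIO, W_RATIO = 4, 16, 16

def latent_shape(frames: int, height: int, width: int) -> tuple:
    """Map a video's pixel-space shape to its compressed latent-space shape."""
    return (frames // T_RATIO, height // H_RATIO, width // W_RATIO)

# A 5-second clip at an assumed 24 fps, at 720P (1280x720):
shape = latent_shape(frames=120, height=720, width=1280)
```

Each latent position thus stands in for a 4x16x16 block of pixels across time and space, which is why a consumer-grade GPU can denoise a 720P clip at all.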

Wan2.2 models are available to download on Hugging Face and GitHub, as well as Alibaba Cloud’s open-source community, ModelScope. A major contributor to the global open source community, Alibaba open-sourced four Wan2.1 models in February 2025 and Wan 2.1-VACE (Video All-in-one Creation and Editing) in May 2025. To date, the models have attracted over 5.4 million downloads on Hugging Face and ModelScope.
