SOFTWARE

Alibaba introduces open-source model for digital human video generation

Part of Alibaba’s Wan2.2 video generation series, the new model can generate high-quality animated videos from a single image and an audio clip.

Alibaba has unveiled Wan2.2-S2V (Speech-to-Video), its latest open-source model designed for digital human video creation. This innovative tool converts portrait photos into film-quality avatars capable of speaking, singing, and performing.

Wan2.2-S2V offers versatile character animation capabilities, enabling the creation of videos across multiple framing options, including portrait, bust, and full-body perspectives. It can generate character actions and environmental factors dynamically based on prompt instructions, allowing professional content creators to capture precise visual representations tailored to specific storytelling and design requirements.

Powered by advanced audio-driven animation technology, the model delivers lifelike character performances, ranging from natural dialogue to musical numbers, and seamlessly handles multiple characters within a scene. Creators can now transform voice recordings into lifelike animated movements, supporting a diverse range of avatars, from cartoons and animals to stylized characters.

To meet the diverse needs of professional content creators, the technology provides flexible output resolutions of 480P and 720P. This ensures high-quality visual output that meets professional and creative standards, making it suitable for both social media content and professional presentations.

Innovative Technologies

Wan2.2-S2V transcends traditional talking-head animations by combining text-guided global motion control with audio-driven fine-grained local movements. This enables natural and expressive character performances across complex and challenging scenarios.

Another key breakthrough lies in the model’s innovative frame processing technique. By compressing historical frames of arbitrary length into a single, compact latent representation, the technology significantly reduces computational overhead. This approach allows for remarkably stable long-video generation, addressing a critical challenge in extended animated content production.
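The benefit of folding a frame history into a single latent can be illustrated with a toy sketch (a deliberately simplified, hypothetical stand-in, not Wan2.2-S2V's actual encoder): however many past frames exist, they are pooled into one fixed-size vector, so the memory and compute cost of conditioning on history stays constant as the video grows.

```python
import numpy as np

def compress_history(frames: np.ndarray, latent_dim: int = 16) -> np.ndarray:
    """Toy stand-in for history compression: pool an arbitrary number of
    past frames, shaped (T, H, W), into one fixed-size latent vector.

    Illustrative only -- the real model uses a learned encoder."""
    t, h, w = frames.shape
    # Temporal mean removes the dependence on history length T.
    pooled = frames.mean(axis=0)                      # (H, W)
    # A fixed random projection maps the pooled frame to latent_dim values.
    rng = np.random.default_rng(0)
    proj = rng.standard_normal((h * w, latent_dim))
    return pooled.reshape(-1) @ proj                  # (latent_dim,)

# The latent is the same size whether 8 or 800 frames of history exist,
# which is what keeps long-video generation cheap and stable.
short = compress_history(np.ones((8, 4, 4)))
long_ = compress_history(np.ones((800, 4, 4)))
assert short.shape == long_.shape == (16,)
```

The key property, constant-size conditioning regardless of history length, is what the paragraph above attributes to the model; everything else in the sketch (mean pooling, the random projection) is a placeholder.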

The model’s advanced capabilities are further amplified by a comprehensive training methodology. Alibaba’s research team constructed a large-scale audio-visual dataset specifically tailored to film and television production scenarios. Using a multi-resolution training approach, Wan2.2-S2V supports flexible video generation across diverse formats – from vertical short-form content to traditional horizontal film and television productions.

The Wan2.2-S2V model is available for download on Hugging Face and GitHub, as well as on Alibaba Cloud’s open-source community, ModelScope. A major contributor to the global open-source community, Alibaba open-sourced the Wan2.1 models in February 2025 and the Wan2.2 models in July. To date, the Wan series has generated over 6.9 million downloads on Hugging Face and ModelScope.

