Alibaba's video AI model has been upgraded for the second time in a month, as the company rapidly expands its influence in this field.
Alibaba unveiled 'Wan2.1-VACE', an open-source AI model for video creation and editing, on the 14th (local time).
The model can generate a video from various kinds of input, such as text, images, and video, and also supports editing of the generated content. It is available in two versions, with 14 billion and 1.3 billion parameters.
It can perform ▲ reference-based video generation (R2V), ▲ video-to-video editing (V2V), and ▲ masked video-to-video editing (MV2V), and users can freely combine these tasks to carry out complex video production.
Alibaba previously upgraded its video model, also based on the 2.1 series, in mid-April.
In particular, it has released more than 10 AI models this year at such a rapid pace that the launches have been described as a 'frenzy'.
Since OpenAI launched its video model 'Sora' in February last year, China has aimed to catch up with the US not only in language models but also in video models, and this effort is producing visible results. Not only has a wide variety of models appeared, but Kuaishou's 'Kling' has gained popularity rivaling the US model Runway.
Meanwhile, the model can be downloaded for free from Alibaba's model hub, GitHub, ModelScope, and other platforms.
By Park Chan, reporter cpark@aitimes.com