Luma AI, which gained popularity last year with its video generation artificial intelligence (AI) model ‘Dream Machine’, has released its latest model. Since the launch of Sora, ‘consistency’ and ‘adherence to the laws of physics’ have become the standard benchmarks for video models.
Luma announced on the 16th (local time) via X (Twitter) that it had launched a model called Ray2. It is available on the Luma website and through mobile paid plans.
“This model creates consistent motion and physically accurate visuals that are fast and natural,” said Luma co-founder Amit Jain. “This dramatically increases production success rates and makes video storytelling more accessible.”
It was also revealed that the model was trained using 10 times more compute than the existing model, Ray1. Currently, only text-to-video generation is available, producing videos 5 to 10 seconds long. The newly introduced model is claimed to be able to generate a video in just a few seconds.
Jain added that image-to-video and video-to-video functions will also be added in the future.
On X, videos created by Luma and by creators appeared one after another. Although it is not possible to fully evaluate Ray2’s performance from samples alone, opinions have emerged that it shows faster and more natural movement compared to existing video generation models.
In addition, some users who tried it themselves commented that the realism, camera angles, and lighting had improved.
To commemorate the release of Ray2, Luma is also holding an event awarding a $7,000 prize (roughly 10 million won) to the video creator with the most views on a single platform. The deadline is the 22nd, and the winners will be announced that day.
Reporter Lim Da-jun ydj@aitimes.com