
Runway introduced the Gen-3 Alpha AI model

Runway has unveiled an alpha version of its Gen-3 artificial intelligence model for generating videos from text prompts and still images.

According to the announcement, the model excels at creating expressive human characters with a wide range of actions, gestures, and emotions. Gen-3 Alpha has also been trained to accurately interpret key frames in a video and generate transitions between them.

“Gen-3 Alpha is the first of an upcoming series of models trained on a new infrastructure designed for large-scale multimodal learning. It is a significant improvement in fidelity, consistency, and motion over Gen-2, and a step towards building general world models,” Runway said in a statement.

Gen-3 Alpha can produce five- and ten-second videos at high resolution, with generation times of 45 and 90 seconds, respectively, company co-founder and CTO Anastasis Germanidis told TechCrunch.

There is no exact time frame yet for Gen-3's public release. The alpha version "will soon be available in the Runway product line with support for all existing modes (text-to-video, image-to-video and video-to-video) and some new ones," Germanidis noted.

Recall that in February, OpenAI introduced Sora, a generative AI model for turning text into video. In May, writer and director Paul Trillo used it to create a music video.

Meanwhile, Google DeepMind is developing AI technology for generating soundtracks for videos.
