Runway can now mimic everything from 35mm disposable cameras to ’80s sci-fi

AI startup Runway, maker of the popular Gen-3 Alpha video generation model, debuted a new foundational image model that “excels at maintaining stylistic consistency while allowing for broad creative exploration,” per the company.

The new model is called Frames, and it lets users generate images across a wide variety of subjects while strictly adhering to a consistent visual style and aesthetic. Whether it’s mimicking ’80s camp films like Flash Gordon and Xanadu, aping the look of ’90s-era 35mm disposable cameras or retro anime, or generating sweeping landscapes and carefully composed still-life shots, Frames sticks to the artistic style the user dictates.

“With Frames, you can begin to architect worlds that represent very specific points of view and aesthetic characteristics,” the company wrote in Monday’s announcement post. “The model allows you to design with precision the look, feel and atmosphere of the world you want to create.”

Frames is not meant to replace the current Gen-3 Alpha model but rather to augment it. “We’re gradually rolling out access inside Gen-3 Alpha to allow you to build more of your worlds within a larger, more seamless creative flow,” the company wrote.

Gen-3 Alpha is a relatively new model, having been introduced in June 2024. It is built for large-scale multimodal training and marks a “major improvement in fidelity, consistency, and motion over Gen-2, and a step towards building General World Models,” the company announced at the time. The model has recently been updated with more precise camera controls and the capacity for video-to-video generation.

Gen-3 Alpha was trained on thousands of YouTube videos, a practice that has drawn accusations of copyright infringement from YouTube content creators. Tests conducted by 404 Media found that naming a specific creator (say, MrBeast) in a prompt led the system to generate content in that creator’s aesthetic.