New York startup Runway has announced Act-One, a system that takes a video recording of a person's performance and transfers their facial expressions to any other character, preserving every nuance of the performance.
The company began rolling out Act-One today. The system is available at no additional cost to registered users who have sufficient credits in their account to work with the Gen-3 Alpha video generator that Runway introduced this year.
Facial animation is one of the most difficult tasks in filmmaking, traditionally requiring sophisticated motion-capture equipment that tracks reference points on an actor's face. AI-based systems aim to make the process far more accessible: Runway Act-One can produce facial animation in a variety of styles and genres without motion-capture rigs or markers drawn on the actor's face.
An important strength of Act-One is its ability to deliver cinematic, realistic results across a variety of camera angles and focal lengths. A single actor, using only a consumer-grade camera, can play multiple characters, and the model can generate output in any style, whether photorealistic or animated, regardless of the complexity of the scene. This should help independent films and digital media projects that traditionally have not had access to high-end production resources.
Act-One ships with a comprehensive set of safeguards: attempts to create content involving public figures are detected and blocked, technical tools are used to verify voice rights, and continuous monitoring is intended to ensure responsible use of the platform.