Runway, an NYC-based AI video startup, has announced Act-One, a new state-of-the-art tool inside Gen-3 Alpha for generating expressive character performances. Access to Act-One is currently limited. Act-One can generate compelling animations using only video and voice performances as inputs. The tool reduces reliance on traditional motion capture systems, making it simpler to bring characters to life in production workflows. On its blog, Runway shared several videos and styles showcasing the different ways the tool can be used.
Simplifying Animation for Creators
Act-One simplifies animation by using a single-camera setup to capture actor performances, eliminating the need for motion capture or complex rigging. The tool preserves realistic facial expressions and adapts performances to characters of different proportions. The model delivers high-fidelity animations across various camera angles and supports both live-action and animated content. It expands creative possibilities for professionals, who need only consumer-grade equipment to produce expressive, multi-turn dialogue scenes with a single actor.
“Traditional pipelines for facial animation often involve complex, multi-step workflows. These can include motion capture equipment, multiple footage references, manual face rigging, among other techniques. Our approach uses a completely different pipeline, driven directly and only by the performance of an actor and requiring no extra equipment,” Runway said in a statement on its blog.
Runway Continues Its Reign in the GenAI Video Space
Last month, Runway partnered with Lionsgate to bring AI into filmmaking. Runway aims to put these tools in the hands of artists and, by extension, bring their stories to life. The deal could eventually open the door for many of these stories to appear on the big screen. Runway’s tools have also been used in Hollywood before.
“I don’t think text prompts are here to stay for a long time. So a lot of our innovation has been on creating control tools,” said Runway CEO Cristóbal Valenzuela in an interview about how AI is coming to Hollywood and the need to give creators more access to and control over video generation.
Runway also runs an AI Film Festival dedicated to celebrating artists who incorporate emerging AI techniques into their short films. Launched two years ago, the festival aims to spark conversation about the growing influence of AI tools in the film industry and to engage with creators from diverse backgrounds, exploring their insights and perspectives.
Others in the Race
OpenAI’s flagship video model, Sora, is not publicly available yet, and the company has shared no update on its release, though it may launch after the US elections. Genmo also unveiled a research preview of Mochi 1, an open-source model designed to generate high-quality videos from text prompts.
Earlier this month, Meta also entered the GenAI video space with Movie Gen. Adobe brought generative AI to video with Firefly, and Luma made its Dream Machine freely available for experimentation on its website. As for competition from China, MiniMax officially launched its image-to-video feature, and Kling added new capabilities to its model, including a lip-sync feature.