Those of you who have used motion capture platforms in the past know how much work it takes to set them up properly. Now AI can do it for you. With Runway's Act-One, you can create expressive animations using video and voice as inputs: just provide a video of a performance, and Act-One transfers it to your character.
Introducing, Act-One. A new way to generate expressive character performances inside Gen-3 Alpha using a single driving video and character image. No motion capture or rigging required.
Learn more about Act-One below.
(1/7) pic.twitter.com/p1Q8lR8K7G
— Runway (@runwayml) October 22, 2024
As Runway explains, Act-One replaces the traditional multi-step motion capture workflow with a single step: you supply a driving video and a character image, and the model handles the animation.
The model preserves realistic facial expressions and accurately translates performances into characters with proportions different from the original source video.
You can capture the driving video with a consumer-grade camera, and the model holds up well across a variety of camera angles. Act-One also supports generating multi-turn dialogue scenes.