Sora, Hailuo, Dream Machine, and Kling are all great for AI video generation, but they all have certain limitations. For example, some of them don’t follow prompt instructions or can’t create AI videos with a specific sequence of events. MinT aims to address that. It is a multi-event video generation model that lets you bind each event to a specific time span in your video. As they explain:
To enable time-aware interactions between event captions and video tokens, we design a time-based positional encoding method, dubbed ReRoPE. This encoding helps to guide the cross-attention operation. By fine-tuning a pre-trained video diffusion transformer on temporally grounded data, our approach produces coherent videos with smoothly connected events.
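To get a feel for what a time-based positional encoding for cross-attention could look like, here is a minimal PyTorch sketch built on standard rotary embeddings: video tokens are positioned by their timestamp and each event caption by the midpoint of its time span, so attention scores depend on the relative time between a frame and an event. The function names, the midpoint positioning, and the single-head layout are illustrative assumptions, not MinT’s actual ReRoPE implementation:

```python
import torch

def rope_angles(positions: torch.Tensor, dim: int, base: float = 10000.0) -> torch.Tensor:
    """Standard RoPE angles: positions [N] (here, seconds) -> [N, dim/2]."""
    freqs = base ** (-torch.arange(0, dim, 2, dtype=torch.float32) / dim)
    return positions[:, None] * freqs[None, :]

def apply_rope(x: torch.Tensor, angles: torch.Tensor) -> torch.Tensor:
    """Rotate consecutive feature pairs of x [N, dim] by angles [N, dim/2]."""
    x1, x2 = x[..., 0::2], x[..., 1::2]
    cos, sin = angles.cos(), angles.sin()
    out = torch.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin
    out[..., 1::2] = x1 * sin + x2 * cos
    return out

def time_aware_cross_attention(video_q, caption_k, caption_v,
                               frame_times, event_spans):
    """Cross-attention where video tokens attend to event captions, with
    scores biased by relative *time* rather than token index.

    video_q:     [T, d]  one query per video token
    caption_k/v: [E, d]  one key/value per event caption
    frame_times: [T]     timestamp (seconds) of each video token
    event_spans: [E, 2]  (start, end) seconds of each event
    """
    d = video_q.shape[-1]
    # Position captions at their event midpoint so a caption's keys align
    # with the frames inside its time span (an illustrative assumption).
    event_mid = event_spans.mean(dim=-1)
    q = apply_rope(video_q, rope_angles(frame_times, d))
    k = apply_rope(caption_k, rope_angles(event_mid, d))
    attn = torch.softmax(q @ k.T / d**0.5, dim=-1)  # [T, E]
    return attn @ caption_v
```

The point of the rotary trick is that the attention logit between a frame at time t and an event at time s depends only on t − s, which is what lets each caption steer exactly the stretch of video it is bound to.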
Thanks to this approach, you can produce videos whose subjects perform a sequence of actions at specific points in the video, a kind of multi-event control that other top models struggle with. More info is available here.