There are many ways to teach robots new skills, and LLMs now make it easier to teach them complex tasks. SDS ("See It, Do It, Sorted") is a new approach for teaching quadruped robots locomotion skills from a single video. Here is how it works:
Leveraging the visual capabilities of GPT-4o, SDS processes the input video through a novel chain-of-thought prompting technique (SUS) and generates executable reward functions (RFs) that drive imitation of the demonstrated locomotion skill. These RFs are used to train a Proximal Policy Optimization (PPO)-based Reinforcement Learning (RL) policy, using environment information from the NVIDIA IsaacGym simulator. SDS then autonomously evaluates the RFs by monitoring the individual reward components and feeding training footage and fitness metrics back into GPT-4o, which is prompted to evolve the RFs toward higher task fitness at each iteration.
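To make the iterative structure concrete, here is a minimal sketch of that outer loop: an LLM proposes a reward function, a policy is trained with it, and fitness feedback drives the next proposal. All names here (`propose_reward`, `train_and_evaluate`, `sds_loop`) are hypothetical stand-ins, not the authors' actual API; the GPT-4o call and the IsaacGym/PPO training are mocked out.

```python
# Hedged sketch of the SDS-style evolve-and-evaluate loop.
# The LLM call and the simulator training are replaced with dummies.
from dataclasses import dataclass


@dataclass
class RewardFunction:
    source: str      # executable reward code proposed by the LLM
    fitness: float   # task fitness measured after PPO training


def propose_reward(feedback: str) -> str:
    """Stand-in for prompting GPT-4o with the video, the SUS
    chain-of-thought prompt, and feedback from the last iteration."""
    return f"reward = tracking_term - energy_penalty  # given: {feedback}"


def train_and_evaluate(reward_src: str) -> float:
    """Stand-in for PPO training in IsaacGym plus fitness evaluation
    (in SDS this step also produces training footage for the LLM)."""
    return (0.1 * len(reward_src)) % 1.0  # dummy fitness in [0, 1)


def sds_loop(iterations: int = 3) -> RewardFunction:
    """Run a few propose -> train -> evaluate rounds, keeping the best RF."""
    best = RewardFunction(source="", fitness=float("-inf"))
    feedback = "initial video analysis"
    for _ in range(iterations):
        src = propose_reward(feedback)
        fitness = train_and_evaluate(src)
        if fitness > best.fitness:
            best = RewardFunction(src, fitness)
        # In SDS, per-component reward stats and footage would be
        # summarized into the feedback for the next LLM prompt.
        feedback = f"previous fitness was {fitness:.2f}"
    return best
```

The key design point this loop illustrates is that the LLM never touches the policy weights directly: it only rewrites the reward function, and the fitness signal closes the loop.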
In short, the system lets a quadruped robot learn a skill by watching a single video: GPT-4o analyzes the footage and writes reward functions that guide the robot in imitating movements like walking, hopping, or running. The approach was validated on a Unitree Go1 robot. What's neat is that you don't need large datasets to train robots with this system.
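For a sense of what a generated reward function might look like, here is an illustrative shape for a forward-walking skill. The specific terms (exponential velocity tracking plus a torque penalty) are common in quadruped RL and are an assumption for illustration, not code taken from the paper.

```python
# Illustrative reward shape for a forward-locomotion skill.
# Term choices and coefficients are assumptions, not the authors' code.
import math


def locomotion_reward(base_vel_x: float, target_vel_x: float,
                      torques: list[float]) -> float:
    # Reward tracking the demonstrated forward velocity...
    tracking = math.exp(-4.0 * (base_vel_x - target_vel_x) ** 2)
    # ...while lightly penalizing actuation effort.
    energy_penalty = 1e-3 * sum(t * t for t in torques)
    return tracking - energy_penalty
```

Monitoring such components separately (tracking vs. penalty) is what lets the pipeline tell GPT-4o *which* part of the reward is underperforming when it asks for a revision.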
[HT] [Authors: Jeffrey Li, Maria Stamatopoulou, and Dimitrios Kanoulas]