Learning via imitation comes naturally to humans. In the age of the internet, almost any skill can be learnt by watching videos online. Teaching machines to learn by imitation, however, has been a long-standing goal of machine learning researchers.
To investigate how well this ability can be replicated in machines, Google Brain, Intel AI Labs and the University of California, Berkeley collaborated on Motion2Vec, an algorithm that learns motion-centric representations of manipulation skills from video demonstrations for imitation learning. The learned representation was then applied to surgical suturing segmentation and pose imitation, both in simulation and on a real da Vinci robot.
Overview Of Motion2Vec
To demonstrate the Motion2Vec model, the researchers picked the da Vinci robot and its surgical tasks. For instance, the model can segment a video of suturing along its timeline into action segments such as needle insertion, needle extraction and needle hand-off. This kind of decomposition can …
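The segmentation idea above can be illustrated with a simplified sketch. Motion2Vec itself learns per-frame embeddings with a siamese network and labels them with a recurrent network; the toy code below replaces that pipeline with a nearest-centroid assignment followed by merging consecutive identical labels into segments. The function name `segment_frames`, the centroids and the 2-D "embeddings" are all hypothetical and only serve to show the merge-into-segments step.

```python
import numpy as np

def segment_frames(embeddings, centroids, labels):
    """Toy stand-in for Motion2Vec's segmentation stage (not the real model):
    assign each frame embedding to the nearest action centroid, then merge
    consecutive identical labels into (label, start_frame, end_frame) segments."""
    # distance from every frame embedding to every action centroid
    dists = np.linalg.norm(embeddings[:, None, :] - centroids[None, :, :], axis=-1)
    frame_labels = [labels[i] for i in dists.argmin(axis=1)]

    # merge runs of identical per-frame labels into contiguous segments
    segments, start = [], 0
    for t in range(1, len(frame_labels) + 1):
        if t == len(frame_labels) or frame_labels[t] != frame_labels[t - 1]:
            segments.append((frame_labels[start], start, t - 1))
            start = t
    return segments

# toy example: four "frames" clustered around two hypothetical action centroids
emb = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [1.1, 0.9]])
cents = np.array([[0.0, 0.0], [1.0, 1.0]])
segs = segment_frames(emb, cents, ["needle_insertion", "needle_extraction"])
```

Here `segs` groups the first two frames into a needle-insertion segment and the last two into a needle-extraction segment, mirroring how a suturing video decomposes into a timeline of sub-actions.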