Want to Make Your Robot Succeed? Show It Some Tough Love
Wed, April 21, 2021


The researchers used reinforcement learning to train the robot / Photo Credit: Andrey_Popov (via Shutterstock)


University of Southern California (USC) Ph.D. students and lead authors Jiali Duan and Qian Wang, along with adviser Professor C.-C. Jay Kuo and co-author Lerrel Pinto of Carnegie Mellon University, found that "training a robot with a human adversary" drastically improved its ability to grasp objects, according to research news site Science Daily. Co-author and USC assistant professor of computer science Stefanos Nikolaidis said, "If we want them to learn a manipulation task, such as grasping, so they can help people, we need to challenge them."

Titled "Robot Learning via Human Adversarial Games," the study was presented on Nov. 4 at the International Conference on Intelligent Robots and Systems. Nikolaidis and his team employed reinforcement learning, a technique in which AI programs "learn" from repeated experimentation.
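
The basic idea behind reinforcement learning can be illustrated with a minimal, generic sketch: an agent repeatedly tries actions, observes rewards, and nudges its value estimates toward what it actually experienced. The toy "grip choice" problem, the reward numbers, and every name below are invented purely for illustration and are not the study's actual setup.

```python
import random

# Hypothetical toy example: the "agent" chooses where to grip a bottle
# (top vs. middle) and receives a noisy reward reflecting grasp stability.
ACTIONS = ["top", "middle"]
REWARDS = {"top": 0.2, "middle": 1.0}   # assumed stability rewards, for illustration

q_values = {a: 0.0 for a in ACTIONS}    # the agent's value estimate for each grip
alpha, epsilon = 0.1, 0.2               # learning rate, exploration rate

for episode in range(500):
    # Epsilon-greedy choice: mostly exploit the best-known grip, sometimes explore.
    if random.random() < epsilon:
        action = random.choice(ACTIONS)
    else:
        action = max(q_values, key=q_values.get)

    reward = REWARDS[action] + random.gauss(0, 0.05)   # noisy feedback
    # Incremental update toward the observed reward -- learning by repetition.
    q_values[action] += alpha * (reward - q_values[action])

print(q_values)   # the middle grip ends up with the higher estimated value
```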

In the researchers' experiment, the robot attempts to grasp an object in a computer simulation while a human observes it at the computer. If the grasp succeeds, the human tries to snatch the object out of the robot's hand, using the keyboard to indicate the direction of the pull. This helps the robot distinguish a weak grasp, such as holding a bottle by its top, from a firm grasp, such as holding it around the middle. A firm grasp makes it harder for the human to snatch the object away.
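
Conceptually, the human adversary changes what counts as success: a grasp is rewarded only if it survives the attempted snatch. The following self-contained sketch illustrates such a loop under that assumption; the survival probabilities, the random stand-in for the human at the keyboard, and all names are hypothetical and do not come from the researchers' code.

```python
import random

# Hypothetical sketch of the human-adversary loop described above.
GRASPS = ["top", "middle"]
SURVIVAL_PROB = {"top": 0.3, "middle": 0.9}    # assumed chance of resisting a pull
DIRECTIONS = ["left", "right", "up", "down"]   # directions signalled via keyboard

values = {g: 0.0 for g in GRASPS}              # the robot's estimate of each grasp
alpha = 0.1

def human_adversary():
    # Stand-in for the person at the keyboard choosing a pull direction.
    return random.choice(DIRECTIONS)

for episode in range(1000):
    # Pick a grasp: mostly the best-valued one, occasionally an exploratory one.
    grasp = random.choice(GRASPS) if random.random() < 0.2 else max(values, key=values.get)
    held = random.random() < 0.95              # assume the pickup itself usually works
    if held:
        direction = human_adversary()          # adversary tries to snatch the object;
        held = random.random() < SURVIVAL_PROB[grasp]   # in this toy, only firmness matters
    reward = 1.0 if held else 0.0              # success only if the grasp survives the pull
    values[grasp] += alpha * (reward - values[grasp])

print(values)   # the firm middle grasp ends up valued far above the weak top grasp
```

The design point this sketch tries to capture is that the adversary effectively rewrites the reward signal: grasps that merely look successful, but are easy to defeat, stop being reinforced.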

As the human adversary disrupted unstable grasps, the robotic system quickly learned to hold objects firmly. In the experiment, the model trained with a human adversary achieved a 52% grasping success rate, compared with 26.5% for a model trained with a human collaborator. The researchers also found that the human-adversary model outperformed one trained with a simulated adversary (28% grasping success rate), suggesting that human adversaries provide a stronger training signal than simulated ones.

Nikolaidis said, "That's because humans can understand stability and robustness better than learned adversaries." When the robot picks up an object and a human tries to snatch it away, the robot learns to grip more firmly. Having learned a firm grasp, it succeeds more often even when the object is in a different position. But if the human simply keeps breaking the robot's grasp, it never learns and never succeeds. "The robot needs to be challenged but still be allowed to succeed in order to learn," Nikolaidis added.