For the first time – A robot learned to imagine itself


Artist’s concept of a robot learning to imagine itself.

A robot created by Columbia Engineering researchers learns to understand itself rather than the environment around it.

Our perception of our own body isn’t always correct or realistic, as any athlete or fashion-conscious person knows, but it’s a crucial factor in how we function in the world. Whether you’re playing ball or getting dressed, your brain is constantly planning your movements so that you can move your body without bumping, tripping, or falling.

Humans develop our body models as infants, and robots are beginning to do the same. A team from Columbia Engineering revealed today that it has created a robot that, for the first time, can learn a model of its entire body from scratch, without any human assistance. In a new paper published in Science Robotics, the researchers explain how their robot built a kinematic model of itself, and how it used that model to plan motions, reach goals, and avoid obstacles in a variety of scenarios. It even automatically detected and compensated for damage to its body.

Columbia Self-Modeling Robot

A robot can learn whole-body morphology via visual self-modeling to adapt to multiple motion planning and control tasks. Credit: Jane Nisselson and Yinuo Qin/Columbia Engineering

The robot looks like a baby exploring itself in a hall of mirrors

The researchers placed a robotic arm in a circle of five streaming video cameras. The robot watched itself through the cameras as it swayed freely. Like a baby exploring itself for the first time in a hall of mirrors, the robot wiggled and contorted to learn exactly how its body moved in response to various motor commands. After about three hours, the robot stopped. Its internal deep neural network had finished learning the relationship between the robot’s motor actions and the volume it occupied in its environment.
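In implementation terms, a self-model of this kind can be framed as an implicit occupancy function: a network that takes the current motor angles together with a 3D query point and predicts whether that point lies inside the robot’s body. The PyTorch sketch below is a simplified, hypothetical illustration of this idea, not the architecture reported in the paper; the four-joint assumption, the layer sizes, and the training interface are ours, and the 0/1 occupancy labels are assumed to come from the five camera views.

```python
import torch
import torch.nn as nn

class SelfModel(nn.Module):
    """Implicit self-model: (joint angles, 3D query point) -> occupancy probability.

    A minimal sketch of the idea described in the article; the actual
    architecture, losses, and data pipeline in the paper differ.
    """

    def __init__(self, n_joints: int = 4, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_joints + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),  # logit: is this point inside the body?
        )

    def forward(self, joints: torch.Tensor, points: torch.Tensor) -> torch.Tensor:
        # joints: (B, n_joints) motor angles; points: (B, 3) workspace coordinates
        return torch.sigmoid(self.net(torch.cat([joints, points], dim=-1)))

model = SelfModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

def train_step(joints, points, occupied):
    """One gradient step; `occupied` holds 0/1 labels derived from the cameras."""
    optimizer.zero_grad()
    loss = loss_fn(model(joints, points), occupied)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Trained this way on hours of self-observation, such a network ends up encoding exactly the relationship described above: which volume the body occupies for any given motor command.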

“We were really curious to see how the robot imagined itself,” said Hod Lipson, professor of mechanical engineering and director of Columbia’s Creative Machines Lab, where the work was done. “But you can’t just peek into a neural network; it’s a black box.” After the researchers struggled with various visualization techniques, the self-image gradually emerged. “It was kind of a gently shimmering cloud that seemed to engulf the robot’s three-dimensional body,” Lipson said. “As the robot moved, the flickering cloud gently followed it.” The robot’s self-model was accurate to about 1% of its workspace.
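The “shimmering cloud” Lipson describes suggests one way such a model can be visualized: query it at every point of a dense 3D grid for a fixed pose and keep the points the network believes it occupies. A minimal sketch, reusing the hypothetical `SelfModel` above; the grid extent and the 0.5 threshold are arbitrary choices for illustration.

```python
import torch

@torch.no_grad()
def self_image(model, joints, resolution=64, extent=1.0, threshold=0.5):
    """Query the self-model on a 3D grid; return the points it claims as 'body'."""
    axis = torch.linspace(-extent, extent, resolution)
    xs, ys, zs = torch.meshgrid(axis, axis, axis, indexing="ij")
    points = torch.stack([xs, ys, zs], dim=-1).reshape(-1, 3)
    pose = joints.expand(points.shape[0], -1)  # same pose for every query point
    occupancy = model(pose, points).squeeze(-1)
    return points[occupancy > threshold]       # the "cloud" that tracks the body

cloud = self_image(model, joints=torch.zeros(1, 4))
```

Re-rendering this cloud as the joint angles change is what makes it appear to “follow” the moving robot.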


A technical summary of the study. Credit: Columbia Engineering

Self-modeling robots will lead to more self-sufficient autonomous systems

The ability of robots to model themselves without assistance from engineers is important for many reasons: not only does it save labor, it also allows the robot to track its own wear and tear, and even to detect and compensate for damage. The authors argue that this capability is important because we need autonomous systems to be more self-sufficient. A factory robot, for example, could detect that something is wrong and compensate or call for help.
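To make the damage-detection idea concrete: once a self-model exists, a persistent mismatch between the occupancy it predicts and the occupancy the cameras actually observe is a signal that the body has changed. The sketch below, under the same hypothetical interface as above, is one plausible way to score that mismatch; the threshold and the response (re-learning, calling for help) are application choices, not part of the published method.

```python
import torch

@torch.no_grad()
def self_consistency(model, joints, points, observed):
    """Mean disagreement between predicted and camera-observed occupancy.

    `observed` holds 0/1 labels from the cameras for the same pose and
    query points. A sustained rise in this score suggests the body no
    longer matches the learned self-model (wear or damage).
    """
    predicted = model(joints, points).squeeze(-1)
    return (predicted - observed).abs().mean().item()

# e.g., if self_consistency(...) stays above a calibrated threshold,
# trigger re-learning of the self-model or call for maintenance.
```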

“We humans clearly have a sense of self,” explained the study’s first author, Boyuan Chen, who led the work and is now an assistant professor at Duke University. “Close your eyes and try to imagine how your own body would move if you took some action, such as stretching your arms forward or taking a step back. Somewhere in our brain we have a sense of self, a self-model that informs us what volume of our immediate surroundings we occupy and how that volume changes as we move.”

Self-awareness in robots

The work is part of Lipson’s decades-long quest to find ways to grant robots some form of self-awareness. “Self-modeling is a primitive form of self-awareness,” he explained. “If a robot, animal, or human has an accurate model of self, it can function better in the world, it can make better decisions, and it has an evolutionary advantage.”

Researchers are aware of the limitations, risks, and controversies surrounding granting machines greater autonomy through self-awareness. Lipson is quick to admit that the kind of self-awareness demonstrated in this study is, as he noted, “insignificant compared to that of humans, but you have to start somewhere. We must go slowly and carefully, so that we can reap the benefits while minimizing the risks.”

Reference: “Full-Body Visual Self-Modeling of Robot Morphologies” by Boyuan Chen, Robert Kwiatkowski, Carl Vondrick and Hod Lipson, July 13, 2022, Science Robotics.
DOI: 10.1126/scirobotics.abn1944

The study was funded by the Defense Advanced Research Projects Agency, the National Science Foundation, Facebook and Northrop Grumman.

The authors declare no financial or other conflict of interest.
