
How This Robot Taught Itself To Walk In Just a Few Hours

  • GaneshMartin (Hosur)
  • Jun 25, 2020
  • 2 min read

Researchers affiliated with Google Robotics have gotten a robot to teach itself to walk without relying on simulation trials.

  • Using deep reinforcement learning, a type of machine learning that borrows concepts from behavioral psychology, the scientists avoided both hard-coding every walking-related command and running simulation tests.

  • While the field of self-taught robotic locomotion is still nascent, this work provides solid evidence that the approach works. The team's results were published on arXiv.

Walking is hard, and what's hard for humans is equally confounding for robots. But with the help of machine learning, a robot learned to walk in just a few hours—a good 12 months faster than the average human. Not bad.

Usually, a roboticist must either hardcode every single robotic step or build a simulated world in which the robot can complete its trial-and-error training. Both of those methods take a lot of time, so researchers affiliated with Google used reinforcement learning to let the robot teach itself how to walk in the real world. In this branch of machine learning, software learns about its surroundings by continually repeating trials and receiving rewards for successful attempts.
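In rough pseudocode, that reward loop looks something like the sketch below. Everything in it, the toy environment, the random fall check, and the random stand-in for a learned policy, is a hypothetical illustration of the trial-and-reward cycle, not the researchers' actual system:

    import random

    class ToyWalkingEnv:
        """Hypothetical stand-in for the robot's physical training area."""

        def reset(self):
            self.steps = 0
            return 0.0  # placeholder observation

        def step(self, action):
            # Forward effort earns reward; a fall (random here, but sensed
            # on a real robot) ends the episode.
            self.steps += 1
            reward = max(action, 0.0)       # reward forward progress
            fell = random.random() < 0.05   # stand-in for a real fall detector
            done = fell or self.steps >= 200
            return 0.0, reward, done        # (observation, reward, done)

    def train(episodes=5):
        env = ToyWalkingEnv()
        for ep in range(episodes):
            env.reset()
            done, total = False, 0.0
            while not done:
                action = random.uniform(-1.0, 1.0)  # placeholder for a learned policy
                _, reward, done = env.step(action)
                total += reward
            print(f"episode {ep}: total reward {total:.1f}")

    train()

A real system would replace the random action with a policy network and update it after each trial so that high-reward behavior becomes more likely.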

Simulation is still an important ingredient of reinforcement learning, but the researchers' approach was meant to take that kind of testing to the next level: they let their Minitaur robot roam around a physical environment, ambling across the trial's differing terrains, like flat ground, a soft mattress, and a doormat with geometrical crevices.

Sehoon Ha, an assistant professor at the Georgia Institute of Technology, formerly of Google Robotics, and lead author of the study, says that it's difficult to build quick and accurate simulations for a robot to explore. You can model every individual crack in the asphalt, but that doesn't help much when the robot walks down an unfamiliar road in the real world.

"For this reason, we aim to develop a deep [reinforcement learning] system that can learn to walk autonomously in the real world," he wrote in the paper.

But there's a challenging engineering problem in teaching a robot to walk: the thing is going to fall...a lot. One way Ha and the other researchers ensured both automated learning in the real world and the safety of the robot was to have it learn multiple tasks at once. A robot learning to walk forward eventually reaches the perimeter of the training space, so they had it practice forward and backward movement simultaneously, letting it effectively reset itself.
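A minimal sketch of that auto-reset idea, with made-up names and distances (this illustrates the trick, it is not the team's code): whenever a new trial starts, pick whichever task steers the robot back toward the middle of the training area.

    def select_task(position_m, center_m=0.0):
        """Pick the locomotion task that walks the robot back toward the
        center of the training area, so practice doubles as a reset."""
        return "walk_backward" if position_m > center_m else "walk_forward"

    # The robot has drifted 1.5 m past center, so it practices walking
    # backward next, which carries it back into the workspace.
    print(select_task(1.5))   # walk_backward
    print(select_task(-0.8))  # walk_forward

The appeal of this design is that the reset is itself a learning task: the robot never wastes time being carried back to its starting point.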

Their methodology was so successful that the robot required no manual resets during its hours of training. For comparison, Ha's prior robot in December 2018 required 100 manual resets.

The other challenge was making sure that the robot really learned to walk by itself, meaning no human intervention whatsoever. The only hard-coding the team used was a command telling the robot to stand up after a fall, but they hope to eventually automate this part of the learning process as well.
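Purely as an illustration, with an assumed tilt threshold and hypothetical function names, that one scripted behavior might hang off a check like this:

    def has_fallen(roll_deg, pitch_deg, tilt_limit_deg=60.0):
        """Treat a large body tilt as a fall (the threshold is an assumption)."""
        return abs(roll_deg) > tilt_limit_deg or abs(pitch_deg) > tilt_limit_deg

    def next_controller(roll_deg, pitch_deg):
        # The scripted stand-up is the single hand-coded behavior; everything
        # else stays under the learned policy's control.
        if has_fallen(roll_deg, pitch_deg):
            return "scripted_stand_up"
        return "learned_walking_policy"

    print(next_controller(75.0, 5.0))  # scripted_stand_up
    print(next_controller(3.0, -2.0))  # learned_walking_policy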

Removing time-intensive coding and simulation trials lets roboticists spend more time seeing how the real thing interacts with its surroundings. That should help propel practical applications for walking robots, such as search-and-rescue and military operations, where unfamiliar and often hostile environments are commonplace.

