
One big leap for the little cheetah | MIT News


A loping cheetah dashes across a rolling field, leaping over sudden gaps in the rugged terrain. The movement may look effortless, but getting a robot to move this way is an altogether different prospect.

In recent years, four-legged robots inspired by the movement of cheetahs and other animals have made great strides, but they still lag behind their mammalian counterparts when it comes to traveling across a landscape with rapid elevation changes.

“In those settings, you need to use vision in order to avoid failure. For example, stepping in a gap is difficult to avoid if you can’t see it. Although there are some existing methods for incorporating vision into legged locomotion, most of them aren’t really suitable for use with emerging agile robotic systems,” says Gabriel Margolis, a doctoral student in the lab of Pulkit Agrawal, a professor in the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT.

Now, Margolis and his collaborators have developed a system that improves the speed and agility of legged robots as they leap across gaps in the terrain. The novel control system is split into two parts: one that processes real-time input from a video camera mounted on the front of the robot, and another that translates that information into instructions for how the robot should move its body. The researchers tested their system on the MIT mini cheetah, a powerful, agile robot built in the laboratory of Sangbae Kim, professor of mechanical engineering.
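At a high level, the data flow between those two parts can be pictured in a few lines of Python. Everything below is an illustrative placeholder: the class and method names (VisionModule, MotionModule, read_depth, and so on) are assumptions, not the researchers’ actual code.

```python
import numpy as np

class VisionModule:
    """Part 1: processes real-time depth images from the front-mounted camera."""
    def target_motion(self, depth_image, body_state):
        # In the real system this is a learned neural network; here it just
        # returns a placeholder target for the robot's 12 joints.
        return np.zeros(12)

class MotionModule:
    """Part 2: translates that target into instructions for moving the body."""
    def joint_commands(self, target, body_state):
        return np.zeros(12)  # placeholder joint torques

def control_step(camera, robot, vision, motion):
    depth = camera.read_depth()   # image of the oncoming terrain
    state = robot.read_state()    # joint angles, body orientation, etc.
    target = vision.target_motion(depth, state)
    robot.apply_torques(motion.joint_commands(target, state))
```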

Unlike other methods for controlling a four-legged robot, this two-part system does not require the terrain to be mapped in advance, so the robot can go anywhere. In the future, this could enable robots to charge into the woods on an emergency response mission or climb a flight of stairs to deliver medication to an elderly person.

Margolis wrote the paper with senior author Pulkit Agrawal, who heads the Improbable AI Lab at MIT and is an assistant professor in the Department of Electrical Engineering and Computer Science; Professor Sangbae Kim of the Department of Mechanical Engineering at MIT; and fellow MIT graduate students Tao Chen and Xiang Fu. Other co-authors include Kartik Paigwar, a graduate student at Arizona State University, and Donghyun Kim, an assistant professor at the University of Massachusetts at Amherst. The work will be presented next month at the Conference on Robot Learning.

It’s all under control

The use of two separate controllers working together makes this system especially innovative.

A controller is an algorithm that converts the robot’s state into a set of actions for it to follow. Many blind controllers (those that do not incorporate vision) are robust and effective, but they only enable robots to walk over continuous terrain.

Vision is such a complex sensory input to process that these algorithms are unable to handle it efficiently. Systems that do incorporate vision usually rely on a “heightmap” of the terrain, which must either be constructed in advance or generated on the fly, a process that is typically slow and prone to failure if the heightmap is incorrect.

To develop their system, the researchers took the best elements from these robust, blind controllers and combined them with a separate module that handles vision in real time.

The robot’s camera captures depth images of the oncoming terrain, which are fed to a high-level controller along with information about the state of the robot’s body (joint angles, body orientation, etc.). The high-level controller is a neural network that “learns” from experience.

That neural network outputs a target trajectory, which the second controller uses to come up with torques for each of the robot’s 12 joints. This low-level controller is not a neural network; instead, it relies on a set of concise, physical equations that describe the robot’s motion.
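The article does not spell out the low-level controller’s equations, but a common model-based scheme of this kind is joint-space PD tracking of the commanded trajectory with explicit torque limits. The sketch below is an assumption for illustration; the gains, limits, and function name are hypothetical, not the authors’ controller.

```python
import numpy as np

KP = 40.0            # proportional gain (N·m/rad); illustrative value
KD = 1.0             # derivative gain (N·m·s/rad); illustrative value
TORQUE_LIMIT = 18.0  # per-joint torque bound (N·m); illustrative value

def low_level_torques(q, dq, q_target, dq_target):
    """Track the high-level target trajectory with PD control at each of 12 joints.

    q, dq: measured joint angles and velocities (arrays of length 12)
    q_target, dq_target: targets produced by the high-level neural network
    """
    tau = KP * (np.asarray(q_target) - q) + KD * (np.asarray(dq_target) - dq)
    # Because this stage uses well-specified physical models rather than a
    # learned network, hard limits like these are easy to impose.
    return np.clip(tau, -TORQUE_LIMIT, TORQUE_LIMIT)
```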

“The hierarchy, including the use of this low-level controller, enables us to constrain the robot’s behavior so it is more well-behaved. With this low-level controller, we are using well-specified models that we can impose constraints on, which isn’t usually possible in a learning-based network,” says Margolis.

Teaching the network

The researchers used the trial-and-error method known as reinforcement learning to train the high-level controller. They conducted simulations of the robot running across hundreds of different discontinuous terrains and rewarded it for successful crossings.

Over time, the algorithm learned which actions maximized the reward.
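As a rough picture of that trial-and-error process, the sketch below rewards a simulated robot for making progress and for clearing gaps, and penalizes falls. The environment interface and reward values are assumptions for illustration, not the actual training setup.

```python
def gap_crossing_reward(forward_progress, crossed_gap, fell):
    """Toy reward: progress across the terrain, a bonus for clearing a gap,
    and a penalty for falling."""
    reward = forward_progress
    if crossed_gap:
        reward += 5.0
    if fell:
        reward -= 10.0
    return reward

def train(env, policy, episodes=10_000):
    """Trial-and-error loop: the high-level policy is updated to maximize reward."""
    for _ in range(episodes):
        obs = env.reset()                  # a freshly generated discontinuous terrain
        done = False
        while not done:
            action = policy.act(obs)       # command from the neural network
            obs, info, done = env.step(action)
            r = gap_crossing_reward(info["progress"], info["crossed_gap"], info["fell"])
            policy.record(obs, action, r)  # store experience for the update
        policy.update()                    # e.g., one policy-gradient step
```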

They then built a physical, gapped terrain out of a set of wooden planks and put their control scheme to the test on the mini cheetah.

“It was definitely fun to work with a robot that was designed in-house at MIT. The mini cheetah is a great platform because it is modular and made mostly of parts you can order online, so if we needed a new battery or camera, it was just a matter of ordering it from a regular supplier and, with a little help from Sangbae’s lab, installing it,” Margolis says.

Estimating the robot’s state proved to be a challenge in some cases. Unlike in simulation, real-world sensors encounter noise that can accumulate and affect the outcome. So, for some experiments that involved high-precision foot placement, the researchers used a motion capture system to measure the robot’s true position.

Their system outperformed others that use only a single controller, and the mini cheetah successfully crossed 90 percent of the terrains.

“One novelty of our system is that it does adjust the robot’s gait. If a human were trying to leap across a really wide gap, they might start by running really fast to build up speed, and then they might put both feet together to make a really powerful leap across the gap. In the same way, our robot can adjust the timing and duration of its foot contacts to better traverse the terrain,” says Margolis.
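One simple way to picture that kind of gait adjustment is as a parameterized contact schedule whose stance durations and per-foot phase offsets the controller can stretch or shift. The snippet below is purely illustrative and is not the gait representation used by the researchers.

```python
import numpy as np

def contact_schedule(t, period, stance_fraction, phase_offsets):
    """Return which of the four feet should be on the ground at time t.

    period:          duration of one gait cycle (s)
    stance_fraction: fraction of the cycle each foot spends in contact
    phase_offsets:   per-foot phase shifts, as fractions of the cycle
    """
    phases = (t / period + np.asarray(phase_offsets)) % 1.0
    return phases < stance_fraction  # True = foot in contact

# A trot keeps diagonal feet in phase. To leap a wide gap, a controller could
# shrink stance_fraction (longer flight phases) or pull the offsets together
# so both front feet push off at the same moment.
print(contact_schedule(t=0.3, period=0.5, stance_fraction=0.6,
                       phase_offsets=[0.0, 0.5, 0.5, 0.0]))
```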

Leaping out of the lab

While the researchers were able to demonstrate that their control scheme works in a laboratory, Margolis says they still have a long way to go before the system can be deployed in the real world.

In the future, they hope to mount a more powerful computer on the robot so it can do all of its computation on board. They also want to improve the robot’s state estimator to eliminate the need for the motion capture system. In addition, they would like to improve the low-level controller so that it can exploit the robot’s full range of motion, and enhance the high-level controller so that it works well in different lighting conditions.

“It is remarkable to witness the flexibility of machine learning techniques that are able to bypass carefully designed intermediate processes (such as state estimation and trajectory planning) that centuries-old model-based techniques have relied on,” says Kim. “I am excited about the future of mobile robots with more robust vision processing trained specifically for locomotion.”

This research is supported, in part, by MIT’s Improbable AI Lab, the Biomimetic Robotics Laboratory, NAVER LABS, and the DARPA Machine Common Sense Program.
