(Representative image) (a) DreamWaQ++ walking on stairs. (b) Terrain predicted by DreamWaQ++ compared with the ground truth (gray). Credit: The Korea Advanced Institute of Science and Technology (KAIST)

This robot sees danger, decides its route and powers over obstacles while carrying loads

by Tech Xplore

A KAIST research team has developed quadrupedal robot technology that not only enables walking by estimating terrain without visual information, but also lets the robot perceive its surroundings through cameras and LiDAR sensors and make its own decisions while walking, much like animals that visually examine terrain and adjust their steps. The technology is also expected to extend to other robotic platforms, such as wheeled-legged robots and humanoids.

A research team led by Professor Hyun Myung from the School of Electrical Engineering, in collaboration with the lab's startup EuRoboTics Co., Ltd., has developed "DreamWaQ++," a quadrupedal robot control technology that recognizes terrain based on visual information and adjusts locomotion strategies in real time.

The work is published in the journal IEEE Transactions on Robotics.

From blind walking to visual control

DreamWaQ, previously developed by the same team, is a blind locomotion technology that estimates terrain using only proprioceptive sensing, such as joint encoders and inertial sensors, enabling robust movement without any visual input. It allows stable walking even where visual information is hard to obtain, such as in disaster situations, but has the limitation that the robot can only adjust its movement after its legs directly contact obstacles.

Locomotion controller trained with DreamWaQ++. Credit: The Korea Advanced Institute of Science and Technology (KAIST)

The newly developed DreamWaQ++ overcomes this limitation by combining proprioceptive sensing with exteroceptive sensing based on cameras and LiDAR. The key is that it enables perception-based locomotion, in which the robot recognizes obstacles in advance and proactively adjusts its walking strategy, going beyond simple reactive control to understanding and making decisions about the environment.

To achieve this, the research team designed a multimodal reinforcement learning architecture and implemented it for real-time control with lightweight computation. The design also provides stability, by automatically switching to locomotion based on the remaining sensory modalities when a sensor fails, and scalability, allowing application to various robotic platforms.
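The fallback behavior described above can be sketched in a few lines. This is an illustrative sketch only, not the paper's actual architecture: the function names, observation sizes, and zero-masking strategy are assumptions chosen to show the idea of degrading from multimodal to proprioception-only (blind) locomotion when the exteroceptive stream drops out.

```python
import numpy as np

def fuse_observations(proprio, extero, extero_valid):
    """Build the policy observation vector (illustrative, not the paper's code).

    proprio      -- joint-encoder and IMU readings (always available)
    extero       -- terrain features from camera/LiDAR (may drop out)
    extero_valid -- False when the exteroceptive stream reports an error
    """
    if not extero_valid:
        # Degrade gracefully: mask the visual terrain features so the
        # policy falls back to proprioceptive terrain estimation,
        # i.e., blind-locomotion mode.
        extero = np.zeros_like(extero)
    return np.concatenate([proprio, extero])

# Dimensions below are common in legged-RL setups but assumed here.
proprio = np.random.randn(48)    # e.g., joint angles, velocities, IMU
extero = np.random.randn(187)    # e.g., a local height-map scan
obs_ok = fuse_observations(proprio, extero, extero_valid=True)
obs_blind = fuse_observations(proprio, extero, extero_valid=False)
```

The key design point is that the observation vector keeps the same shape in both modes, so a single trained policy can run whether or not vision is available.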

Real-world tests across tough terrain

The team demonstrated the system's performance through real-world experiments: a robot equipped with DreamWaQ++ surpassed existing technologies across a range of challenging environments.

In stair locomotion experiments, it completed a course of 50 steps (30.03 m horizontally, 7.38 m vertically) in just 35 seconds, outperforming both blind locomotion controllers and commercial perception-based controllers.
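The reported figures imply a brisk effective climbing speed. The short check below, a sketch using only the numbers quoted above, computes the straight-line slope distance and the average speed along it.

```python
import math

# Figures quoted for the 50-step stair course.
horizontal_m = 30.03
vertical_m = 7.38
time_s = 35.0

# Straight-line distance along the stair slope.
path_m = math.hypot(horizontal_m, vertical_m)
avg_speed = path_m / time_s

print(f"slope distance = {path_m:.2f} m")   # about 30.92 m
print(f"average speed = {avg_speed:.2f} m/s")  # about 0.88 m/s
```

That works out to roughly 0.88 m/s sustained over 50 steps, fast for a stair climb by a quadruped.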

In steep slope environments, it stably climbed a 35° incline, 3.5 times steeper than the training condition (10°), and actively adjusted its posture to cut the rear-leg motor torque to roughly two-thirds of that required by existing methods (a reduction by a factor of about 1.5).

In addition, across various obstacle scenarios it demonstrated learning-based perception by autonomously selecting more efficient paths without a separate path planner, and on uncertain drop-off terrain it exhibited exploratory behavior, voluntarily stopping to inspect the ground before moving on.

It also demonstrated high agility by clearing 41 cm obstacles, taller than the robot itself, even while carrying a 2.5 kg payload. In simulation, it handled obstacles up to 1.0 m with ANYmal-C (a representative quadrupedal robot developed at ETH Zurich) and up to 1.5 m with KAIST HOUND (a quadrupedal robot developed by Professor Hae-Won Park's group at KAIST).

Notably, even though it was trained only on relatively low obstacles (27 cm), it achieved a success rate of about 80% on taller 42 cm stairs. This indicates the robot is not simply replaying learned situations but can adapt to new environments on its own.

The research team expects that this technology can be applied in environments where conventional wheeled robots have difficulty accessing, such as disaster response, industrial facility inspection, forestry, and agriculture.

Professor Hyun Myung said, "This research shows that robots have advanced beyond simply moving to a level where they understand the environment and make decisions on their own. We will further expand this into intelligent mobility technologies applicable in various real-world environments."

Publication details
I Made Aswin Nahrendra et al, DreamWaQ++: Obstacle-Aware Quadrupedal Locomotion With Resilient Multimodal Reinforcement Learning, IEEE Transactions on Robotics (2026). DOI: 10.1109/tro.2026.3653774

Provided by The Korea Advanced Institute of Science and Technology (KAIST)