Researchers at Universidad Carlos III de Madrid (UC3M) have developed a new methodology that lets a robot learn to move its arms autonomously by combining a form of observational learning with intercommunication between its limbs. The work, recently presented at IROS 2025, one of the world's leading robotics conferences, represents a further step towards more natural, easily teachable service robots capable of performing assistive tasks in domestic environments, such as setting and clearing the table, ironing, or tidying up the kitchen.
This research addresses one of the most complex problems in current robotics: coordinating two arms working together. The UC3M team is tackling it with the ADAM robot (Autonomous Domestic Ambidextrous Manipulator), which is already capable of performing assistive tasks in home environments. “It can, for example, set the table and clear it afterwards, tidy the kitchen, or bring a user a glass of water or medication at the indicated time. It can also help them when they are going out by bringing a coat or an article of clothing,” explains Alicia Mora, one of the researchers from the Mobile Robots Group at the UC3M Robotics Lab working on this line of research.
ADAM has been built to help elderly people with their daily tasks inside their homes or in care facilities, explains the director of the Mobile Robots Group, Ramón Barber, a professor in the UC3M Department of Systems Engineering and Automation: “We all know people for whom simple gestures, such as someone bringing them a glass of water with a pill or setting the table for them, represent a very significant help. That is the main objective of our robot.”
In the paper presented at IROS 2025 a few weeks ago in China, researchers Adrián Prados and Gonzalo Espinoza of the Mobile Robots Group propose a novel approach to coordinating the robot's arms: teach each arm to perform its task independently (via "imitation learning") and then let the two "communicate" through a probabilistic message-passing technique called Gaussian Belief Propagation. This method functions as an invisible, constant dialogue between the arms, allowing them to coordinate in real time to avoid collisions with each other or with obstacles, without needing to stop and recalculate. The result is fluid, efficient, and natural movement, successfully tested both in simulation and on real robots intended for domestic assistance.
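The article does not spell out the paper's factor graph, but the flavor of Gaussian Belief Propagation can be conveyed with a toy sketch. In the hypothetical example below (the variables, priors, and soft separation factor are illustrative assumptions, not the authors' formulation), each arm's desired position comes from its independently learned trajectory as a Gaussian belief, and a pairwise factor asks the arms to keep roughly 20 cm apart; exchanging messages in information form pulls both beliefs into a coordinated compromise without any central replanner.

```python
import numpy as np

# Toy Gaussian Belief Propagation (GBP) sketch: two 1-D "arm position"
# variables, each with a prior from its own learned trajectory, plus one
# soft factor asking the arms to stay d apart. All numbers are invented
# for illustration; this is not the paper's actual factor graph.

mu1, var1 = 0.40, 0.01        # arm 1 wants to be near 0.40 m
mu2, var2 = 0.45, 0.01        # arm 2 wants to be near 0.45 m (too close)
eta_prior = np.array([mu1 / var1, mu2 / var2])   # information vectors
lam_prior = np.array([1.0 / var1, 1.0 / var2])   # precisions

# Pairwise factor: (x2 - x1) ~ N(d, var_f), i.e. J @ x ~ N(d, var_f).
d, var_f = 0.20, 0.005
J = np.array([-1.0, 1.0])
lam_f = np.outer(J, J) / var_f    # factor precision block (2x2)
eta_f = J * d / var_f             # factor information vector

# Factor-to-variable messages in information form (start uninformative).
msg_eta, msg_lam = np.zeros(2), np.zeros(2)
for _ in range(10):               # message passing (converges immediately here)
    for i in range(2):
        j = 1 - i
        # The variable-to-factor message from x_j is just its prior here.
        s = lam_f[j, j] + lam_prior[j]
        # Marginalise x_j out of the factor (Schur complement).
        msg_lam[i] = lam_f[i, i] - lam_f[i, j] ** 2 / s
        msg_eta[i] = eta_f[i] - lam_f[i, j] * (eta_f[j] + eta_prior[j]) / s

# Posterior beliefs: prior plus incoming message, converted back to means.
mu_post = (eta_prior + msg_eta) / (lam_prior + msg_lam)
print("coordinated positions:", mu_post)  # ~[0.34, 0.51]: arms pushed apart
```

In the real system the graph would presumably contain whole trajectories and obstacle factors for both arms, but the mechanics are the same: each arm keeps its learned plan as a prior, and the messages deform it just enough to stay collision-free.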
Teaching a robot to perform daily tasks remains one of the great challenges in robotics. Traditionally, programming a robot meant writing thousands of lines of code to define every movement. Imitation learning proposes a more intuitive alternative: the robot learns by observing and replicating human actions. In this paradigm, the human demonstrates the task (by directly moving the robot's arm or by recording themselves performing the action) to teach it, for example, to serve water or organize a shelf. However, simply copying a movement is not enough. If the robot learns to pick up a bottle in one exact position and the bottle is shifted slightly, a system that only imitates will repeat the original gesture and fail. The true goal of robotic manipulation is therefore not mechanical repetition, but adaptation and an understanding of the movement.
The techniques developed by these researchers address this problem by making the learned movements behave like a "rubber band": if the target changes position, the trajectory deforms smoothly to reach it while maintaining the essence of the action. The robot can thus adapt to new situations without losing key properties of the movement, such as keeping a bottle vertical so as not to spill its contents. “The ultimate goal is for robots to stop being simple movement recorders and become genuine coworkers, capable of perceiving their environment, anticipating actions, and collaborating safely in human spaces,” points out Adrián Prados.
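The article does not describe the group's learning models in detail, but the "rubber band" behavior can be illustrated with a minimal, hypothetical sketch: blend a goal offset smoothly along a demonstrated path, so the endpoint follows the displaced target while the start point and the shape of the motion, including properties like the bottle's height, are preserved.

```python
import numpy as np

# Minimal "rubber band" sketch (illustrative only, not the group's actual
# model): deform a demonstrated end-effector path so it reaches a shifted
# goal while keeping the overall shape of the motion.

def adapt_trajectory(demo, new_goal):
    """Deform a demonstrated path (T x 3 array of x, y, z) to a new goal.

    The correction blends in smoothly, zero at the start and full at the
    end, so the starting pose and the character of the motion survive.
    """
    offset = np.asarray(new_goal) - demo[-1]           # how far the goal moved
    blend = np.linspace(0.0, 1.0, len(demo))[:, None]  # 0 -> 1 along the path
    return demo + blend * offset

# Demonstration: carry a bottle 30 cm forward at a constant 0.9 m height.
t = np.linspace(0.0, 1.0, 50)
demo = np.stack([0.3 * t, np.zeros_like(t), 0.9 * np.ones_like(t)], axis=1)

# The bottle was moved 5 cm to the side: the path stretches to reach it,
# but the height (the "keep the bottle vertical" property) is untouched.
adapted = adapt_trajectory(demo, new_goal=[0.30, 0.05, 0.90])
print(adapted[-1])  # -> [0.3  0.05 0.9 ]
```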
Perception, reasoning, and action
In practice, the robot's operation is organized into three phases. First comes perception: collecting data about the environment through sensors. Then reasoning: processing that information to extract what is relevant. Finally, action: the robot decides how to act, whether by moving its base, coordinating its arms, or executing a specific task. To do this, ADAM uses 2D and 3D laser sensors, which allow it to measure distances, detect obstacles, and locate objects, as well as RGB cameras with depth information, which generate three-dimensional models of the environment.
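Schematically, one cycle of that loop might look like the toy sketch below; the robot class, sensor readings, and thresholds are placeholders for illustration, not ADAM's actual software stack.

```python
import random

# Toy perceive-reason-act cycle mirroring the three phases described above.
# Everything here is an illustrative stand-in, not ADAM's real interfaces.

class ToyRobot:
    def read_lasers(self):
        # Perception: 2D/3D laser scans give distances to nearby obstacles.
        return [random.uniform(0.5, 3.0) for _ in range(8)]

    def read_rgbd(self):
        # Perception: an RGB-D camera yields object labels with 3D positions.
        return {"glass": (1.2, 0.1, 0.8)}

    def execute(self, command):
        print("executing:", command)

def step(robot):
    # 1. Perception: collect raw data from the environment.
    scan = robot.read_lasers()
    objects = robot.read_rgbd()

    # 2. Reasoning: extract what matters from the raw data.
    nearest_obstacle = min(scan)
    target = objects.get("glass")

    # 3. Action: move the base towards the target if the way is clear.
    if target is not None and nearest_obstacle > 0.6:
        robot.execute(("approach", target))
    else:
        robot.execute(("stop_and_replan",))

step(ToyRobot())
```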
One of the most significant challenges is moving from “seeing” objects to understanding their use and the user's context. Traditionally, this understanding was based on common-sense databases. Currently, Alberto Méndez, also a researcher in the Mobile Robots Group, is working on incorporating generative models and artificial intelligence that allow the robot to adapt its behavior to the specific situation at any given moment.
Although ADAM is currently an experimental platform, with an approximate cost of 80,000 to 100,000 euros, the technology is considered mature enough to suggest that, within 10 to 15 years, robots of this type could live with us in our homes at a much more affordable cost.
Beyond the technical advances, this work highlights the role of robotics as part of the solution to population aging, a growing challenge for society. “Every day there are more elderly people in our society and fewer people who can care for them, so these types of technological solutions are going to become increasingly necessary,” concludes Ramón Barber. In this context, “assistive robots are emerging as a key tool for improving people's quality of life and autonomy.”
VIDEO:
https://youtu.be/Ew86EO3wWio