Planning and Execution of Daily Cleaning Tasks with the Humanoid Service Robot Rollin' Justin (DLR RM, 2016-03-08)
Universal robotic agents are envisaged to perform a wide range of manipulation tasks in everyday environments. A common action observed in many household chores is wiping, such as the absorption of spilled water with a sponge, skimming breadcrumbs off the dining table, or collecting shards of a broken mug using a broom. To cope with this versatility, the agents have to represent the tasks on a high level of abstraction. In this work, we propose to represent the medium in wiping tasks (e.g. water, breadcrumbs, or shards) as a generic particle distribution. This representation allows wiping tasks to be described as the desired state change of the particles, which lets the agent reason about the effects of wiping motions in a qualitative manner. Based on this, we develop three prototypical wiping actions for the generic tasks of absorbing, collecting, and skimming. The Cartesian wiping motions are resolved to joint motions exploiting the free degree of freedom of the involved tool. Furthermore, the workspace of the robotic manipulators is used to reason about the reachability of wiping motions. We evaluate our methods in simulated scenarios, as well as in a real experiment with the robotic agent Rollin' Justin.
Daniel Leidner, Wissam Bejjani, Alin Albu-Schäffer, and Michael Beetz "Robotic Agents Representing, Reasoning, and Executing Wiping Tasks for Daily Household Chores", in Proc. of the International Conference on Autonomous Agents and Multiagent Systems (AAMAS), Singapore, May 2016.
A draft version of the research paper is located at: http://elib.dlr.de/103272/
More information on Rollin' Justin: http://rmc.dlr.de/justin
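The particle abstraction can be illustrated with a minimal sketch (a toy example of our own, not the authors' implementation; names such as `apply_wipe` are hypothetical): the medium is a set of 2D points on the table, a stroke sweeps a band across the surface, and its effect is judged qualitatively by how many particles reach the desired region.

```python
import numpy as np

rng = np.random.default_rng(0)

# The medium (e.g. breadcrumbs) as a generic particle distribution
# on a unit-square table surface.
particles = rng.uniform(0.0, 1.0, size=(200, 2))

def apply_wipe(particles, y_min, y_max, dx):
    """Qualitative effect of one 'collecting' stroke: every particle inside
    the swept band [y_min, y_max] is pushed by dx along the wiping direction."""
    swept = (particles[:, 1] >= y_min) & (particles[:, 1] <= y_max)
    moved = particles.copy()
    moved[swept, 0] += dx
    return moved, int(swept.sum())

# Desired state change: all particles collected near the table edge (x >= 0.9).
after, n_swept = apply_wipe(particles, 0.0, 1.0, dx=1.0)
collected = float(np.mean(after[:, 0] >= 0.9))
```

A planner can compare `collected` before and after candidate strokes to reason qualitatively about which wiping motion best achieves the desired particle state.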
ARCHES crater exploration (DLR RM, 2024-01-23)
We present the collaborative crater exploration of two planetary rover prototypes in an experiment during the ARCHES Moon-analogue campaign on the volcano Mt. Etna, Italy, in 2022. On the volcano, the rovers successfully accessed the Cisternazza crater, approximately 150 m wide and 30 m deep, featuring steep flanks of partially compacted and partially loose volcanic soil. The experiment shows collaborative manipulation to tether the two rovers together and the abseiling of one rover into the crater while supported by a winch, enabling safe crater exploration. Corresponding project: https://www.arches-projekt.de/projekt-arches/

Robotic surface finishing using automatic trajectory generation (DLR RM, 2024-01-22)
The video shows a novel approach to robotic surface finishing as developed in the project LEROSH (https://lerosh.de/). The surface is segmented automatically according to the part geometry, and feasible execution strategies are selected. The planning result can be inspected intuitively by the user through augmented reality. The automatically generated trajectories are executed by the DLR SARA robot (https://www.dlr.de/rm/sara) using compliant control.
This research and development project is funded by the German Federal Ministry of Education and Research (BMBF) within the “The Future of Value Creation – Research on Production, Services and Work” program (funding number 02K20D032) and managed by the Project Management Agency Karlsruhe (PTKA). Further funding has been received from the DLR project Factory of the Future Extended (FoF-X).

Accuracy meets Safety: PID and ESP Control in Elastic Robots (DLR RM, 2023-09-29)
This work addresses the problem of global set-point control of elastic joint robots by combining elastic structure preserving (ESP) control with non-collocated integral action. Despite the popularity and extensive research on PID control for rigid joint robots, such schemes have largely evaded adoption for elastic joint robots. This is mainly due to the underactuation inherent to these systems, which impedes the direct implementation of PID schemes with non-collocated (link position) feedback. We remedy this issue by using the recently developed concept of “quasi-full actuation” to achieve a link-side PID control structure with “delayed” integral action. The design follows the structure-preserving design philosophy of ESP control and ensures global asymptotic stability and local passivity of the closed loop. A key feature of the proposed controller is the switching logic for the integral action, which combines excellent positioning accuracy in free motion with compliant manipulation in contact with the environment. Its performance is evaluated on an elastic joint testbed and a compliant robot arm.
The results demonstrate that elastic robots may achieve positioning accuracy comparable to rigid joint robots.
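The role of the switching logic can be sketched in a few lines (a simplified toy controller, not the ESP-PID scheme from the paper; the gains and function names are illustrative): the integral state accumulates position error only in free motion and is frozen once contact is detected, so positioning accuracy does not fight compliance in contact.

```python
def pid_with_switched_integral(err, dt, state, in_contact,
                               kp=100.0, ki=20.0, kd=5.0):
    """One step of a link-side PID whose integral action is gated:
    active in free motion, frozen while in contact (simplified sketch)."""
    e_int = state["e_int"]
    if not in_contact:                  # accumulate only in free motion
        e_int += err * dt
    de = (err - state["e_prev"]) / dt
    state.update(e_int=e_int, e_prev=err)
    return kp * err + ki * e_int + kd * de

state = {"e_int": 0.0, "e_prev": 0.0}
tau_free = pid_with_switched_integral(0.1, 0.01, state, in_contact=False)
tau_contact = pid_with_switched_integral(0.1, 0.01, state, in_contact=True)
```

In the second call the integral term stays frozen, so the commanded torque does not wind up against the environment.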
This video shows the results of: Keppler, M., Raschel, C., Wandinger, D., Stemmer, A. and Ott, C., 2022. Robust stabilization of elastic joint robots by ESP and PID control: theory and experiments. IEEE RA-L, 7(3), pp. 8283-8290.

Shape Completion and Grasp Prediction for Fast and Versatile Grasping with a Multi-Fingered Hand (DLR RM, 2023-07-26)
Grasping objects with limited or no prior knowledge about them is a highly relevant skill in assistive robotics. Still, in this general setting, it has remained an open problem. Strong challenges arise from object shape diversity with only partial visibility. To address these challenges, we present a deep learning pipeline consisting of a shape completion module based on a single depth image, followed by a grasp predictor based on the predicted object shape. The shape completion network is based on VQDIF and predicts spatial occupancy values at arbitrary query points. As grasp predictor, we use our two-stage architecture that first generates hand poses using an autoregressive model, and then regresses finger joint configurations per pose. To take this approach to the real world, we introduce adapted procedures for training data generation and for the training itself. Critical factors turn out to be sufficient data realism and augmentation, as well as special attention to difficult cases during training. We further show how to make the grasp predictions more robust against uncertainties in the relative pose between hand and object, and propose a new way to handle ambiguities in the grasp training dataset by adapting the network architecture.
Experiments on a physical robot platform demonstrate successful grasping of a wide range of household objects based on a depth image from a single viewpoint.

Estimator-Coupled Reinforcement Learning for Robust Purely Tactile In-Hand Manipulation (DLR RM, 2023-07-26)
This paper identifies the culprits of naively combining learning-based controllers and state estimators for robotic in-hand manipulation. Specifically, we tackle the challenging task of purely tactile, goal-conditioned dexterous in-hand reorientation with the hand pointing downwards. Here, we observe that due to the limited sensing available, many control strategies that are feasible in simulation do not allow for accurate state estimation. Hence, separately training the controller and the estimator, and combining the two at test time, leads to poor performance. Our proposed solution to this problem involves training a control policy by reinforcement learning coupled with the state estimator in simulation. We show that this approach leads to more robust state estimation and overall higher performance on the task, while maintaining an interpretability advantage over fully end-to-end learning approaches. Due to our unified learning scheme and an end-to-end GPU-accelerated implementation, learning only takes 5 to 8 hours on a single GPU. In simulation experiments with the DLR-Hand II and four significantly different object shapes, we provide an in-depth analysis of the performance of our approach. Finally, we show successful sim2real transfer by rotating the objects to all 24 possible π/2 orientations.

EmPReSs - Empowerment in Tomorrow's Production (DLR RM, 2023-07-18)
This video provides an overview of our recently concluded interdisciplinary research project EmPReSs, which aimed at integrating human labor and AI-assisted robots in industrial assembly lines.
It features the development of our Mixed-Skill concept, detailing its application in creating more adaptable and people-focused production environments. The video further outlines our efforts in assessing and implementing measures to enhance worker empowerment, which have been demonstrated via a real-world prototype. Further details can be found in the associated journal paper (in German): “Soziotechnisches Assistenzsystem zur lernförderlichen Arbeitsgestaltung in der robotergestützten Montage”, Alin Albu-Schäffer, Norbert Huchler, Ingmar Kessler, Florian Lay, Alexander Perzylo, Michael Seidler, Franz Steinmetz, Roman Weitschat. Gruppe. Interaktion. Organisation. Zeitschrift für Angewandte Organisationspsychologie (GIO) 54, 79–93 (2023).
Project website: • bidt: https://en.bidt.digital/research-project/empowerment-in-tomorrows-production-rethinking-mixed-skill-factories-and-collaborative-robot-systems/ Institutions: • DLR: https://www.dlr.de/rm/en/desktopdefault.aspx/tabid-17830 • fortiss: fortiss.org/en/research/projects/detail/empress • ISF: https://www.isf-muenchen.de/projekt/empress-empowerment-in-der-produktion-von-morgen-mixed-skill-factories-und-kollaborative-robotersysteme-neu-denken/

Hybrid Force-Impedance Control for Fast End-Effector Motions (DLR RM, 2023-05-08)
This video presents a unified hybrid force-impedance framework for highly dynamic end-effector motions. The framework features compliant behavior in the free (motion) task directions and explicit force tracking in the constrained directions. Advantageously, the force subspace in the contact direction is fully dynamically decoupled from the dynamics in the motion subspace. Further details can be found in the following paper: “Hybrid Force-Impedance Control for Fast End-Effector Motions”, Maged Iskandar, Christian Ott, Alin Albu-Schäffer, Bruno Siciliano, and Alexander Dietrich, IEEE Robotics and Automation Letters (RA-L), 2023. doi: 10.1109/LRA.2023.3270036.
Publication: IEEE RA-L: ieeexplore.ieee.org/document/10107744 Open access: https://elib.dlr.de/194975/1/Iskandar_RAL_2023a.pdf

Learning Fluid Flow Visualizations from In-Flight Images with Tufts (DLR RM, 2023-04-21)
To better understand fluid flows around aerial systems, strips of wire or rope, widely known as tufts, are often used to visualize the local flow direction. This paper presents a computer vision system that automatically extracts the shape of tufts from images collected during real flights of a helicopter and an unmanned aerial vehicle (UAV). As images from these aerial systems present challenges to both model-based computer vision and end-to-end supervised deep learning techniques, we propose a semantic segmentation pipeline that consists of three uncertainty-based modules, namely (a) active learning for object detection, (b) label propagation for object classification, and (c) weakly supervised instance segmentation. Overall, these probabilistic approaches facilitate the learning process without requiring any manual annotations of semantic segmentation masks. Empirically, we motivate our design choices through comparative assessments and provide real-world demonstrations of the proposed concept, for the first time to our knowledge. The project website can be accessed via the link: sites.google.com/view/tuftrecognition/home

Mattias and EDAN winning at CYBATHLON Challenges March 2023 (DLR RM, 2023-03-31)
CYBATHLON, a non-profit project of ETH Zurich, acts as a platform that challenges teams from all over the world to develop assistive technologies suitable for everyday use with and for people with disabilities. At the 2023 CYBATHLON Challenges, DLR's EDAN team won the assistance robot race with pilot Mattias Atzenhofer.
Full live stream here: youtube.com/watch?v=1mD6l3VkqGE CYBATHLON website: https://cybathlon.ethz.ch/en Read more on EDAN here: https://www.dlr.de/rm/en/desktopdefault.aspx/tabid-17921
SHERP vehicles, which the WFP already uses successfully in crisis areas, are off-roaders that can move in any terrain, even in water or swamps, and can overcome climbing obstacles of up to one metre. The vehicle in Oberpfaffenhofen was equipped with several sensors for real-time monitoring of its surroundings and automated for remote control. Should they lose radio contact with the control system, the SHERPs must be able to make safety and emergency stops at any time. To do this, they capture their surroundings with perception sensors: depth cameras, stereo cameras, and LIDAR systems.

A VR-based telepresence robot with aerial manipulation capabilities (DLR RM, 2022-10-20)
The DLR SAM demonstrates advanced aerial manipulation capabilities over an extended duration. The key innovation is a fully onboard perception system that creates a virtual-reality model of the robot's workspace in real time. With this, a human operator can not only feel the sense of touch through force-feedback control, but also obtain 3D visual feedback about the remote environment.
To be presented at the IROS 2022 late-breaking results session. For more details, please check the paper: arxiv.org/abs/2210.09678

RECALL: Rehearsal-free Continual Learning for Object Classification (IROS 2022) (DLR RM, 2022-10-20)
RECALL is a new approach for lifelong robot learning, or continual learning. The paper for this work was published at the IEEE International Conference on Intelligent Robots and Systems (IROS 2022) in Japan.
You can find more information and code in our repository: github.com/DLR-RM/RECALL And you can download our dataset at Zenodo: zenodo.org/record/7054171

A Multi-body Tracking Framework - From Rigid Objects to Kinematic Structures (DLR RM, 2022-08-03)
For more details, please check the paper and our source code:
github.com/DLR-RM/3DObjectTracking/tree/master/ICG

Live Stream on 29/06/22: ARCHES experiment. Analogue planetary exploration experiment on Mount Etna. (DLR RM, 2022-06-29)

ARCHES preparations (DLR RM, 2022-06-28)
A science team from DLR, KIT, and ESA, led by the Institute of Robotics and Mechatronics, has been preparing for years for an analogue space mission in Sicily: the ARCHES mission, which takes place in the week of 28 June to 2 July on the volcano Mt. Etna in Sicily. During the preparations, three young researchers from DLR and ESA talk about what they do and what they are looking forward to during the mission. Live event: on Wednesday, 29 June at 13:30, the scientists stream the experiments live from Mt. Etna.

Humanoid robot David shows in-hand manipulation skills (DLR RM, 2022-06-13)
David demonstrates advanced manipulation skills with the 7-DoF arm and fully articulated 5-finger hand using a pipette. To localize the object, we combine multi-object tracking with proprioceptive measurements. Together with path planning, this allows for controlled in-hand manipulation.

The Time Domain Passivity Approach for High Delays (TDPA-HD): Experiments with 3s Roundtrip-Delay (DLR RM, 2022-04-13)

Iterative Corresponding Geometry (ICG) - Highly Efficient 3D Object Tracking - CVPR 2022 (DLR RM, 2022-03-08)
For more details, please check the paper and our source code: github.com/DLR-RM/3DObjectTracking

A Model for Multi-View Residual Covariances based on Perspective Deformation (DLR RM, 2022-02-08)
In this work, we derive a model for the covariance of the visual residuals in multi-view SfM, odometry, and SLAM setups. The core of our approach is the formulation of the residual covariances as a combination of geometric and photometric noise sources. Our key novel contribution is the derivation of a term modelling how local 2D patches suffer from perspective deformation when imaging 3D surfaces around a point.
Together, these add up to an efficient and general formulation which not only improves the accuracy of both feature-based and direct methods, but can also be used to estimate more accurate measures of the state entropy and hence better founded point visibility thresholds. We validate our model with synthetic and real data and integrate it into photometric and feature-based Bundle Adjustment, improving their accuracy with a negligible overhead.
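The practical effect of per-residual covariances can be illustrated with a generic weighted least-squares toy example (our own illustration, not the paper's formulation): residuals with large covariance are down-weighted, which is how such a model enters feature-based or photometric Bundle Adjustment.

```python
import numpy as np

# Toy 1D setup: three views measure the same quantity; the third residual
# has a much larger covariance (e.g. strong perspective deformation).
z = np.array([1.0, 1.1, 5.0])            # measurements
sigma2 = np.array([0.01, 0.01, 4.0])     # per-residual covariances

# Covariance-weighted least squares: x* = argmin_x sum_i (z_i - x)^2 / sigma2_i
w = 1.0 / sigma2
x_weighted = float(np.sum(w * z) / np.sum(w))

# An unweighted baseline treats all residuals equally and is pulled
# off by the unreliable measurement.
x_unweighted = float(z.mean())
```

The weighted estimate stays close to the two reliable measurements, while the unweighted mean is dragged toward the outlier.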
Alejandro Fontán, Laura Oliva, Javier Civera, and Rudolph Triebel, "A Model for Multi-View Residual Covariances based on Perspective Deformation", IEEE Robotics and Automation Letters, 2022.
DLR (CC BY-NC-ND 3.0)

IROS 2021: Analyzing the Performance Limits of Articulated Soft Robots based on the ESPi Framework (DLR RM, 2022-01-05)
The video presents the content of the paper at IROS 2021: "Analyzing the Performance Limits of Articulated Soft Robots based on the ESPi Framework: Applications to Damping and Impedance Control" by Manuel Keppler, Florian Loeffl, David Wandinger, Clara Raschel, and Christian Ott.
In situations of harsh impacts, damping injection directly on the link of an articulated soft robot is challenging and usually requires high actuator torques at the moment of impact.
In this work, we discuss the underlying reasons and analyze the performance limitations arising in the implementation of basic impedance elements, such as springs and dampers, through the elastic structure preserving impedance (ESPi) control framework. Using the insights obtained, we present a way to design impedance controllers with a damping design based on dynamic extensions. Inspired by the design of shock absorbers and the muscle-tendon model, the presented damping layout requires substantially smaller actuator torques in situations where the robot is subject to harsh impacts.
The implementation is facilitated through the ESPi control framework, resulting in a physically intuitive impedance design. The resulting closed-loop system can be interpreted as an interconnection of passive Euler-Lagrange systems, which again yields a passive system. The design’s passive nature ensures stability in the free-motion case and enables the robot to interact robustly and safely with its environment. The work focuses on robotic systems with no inertial coupling between the motor and link dynamics. Experimental results, obtained with the presented design on a dedicated series elastic actuator (SEA) test bed, are reported and discussed.
The paper was published in the IEEE Robotics and Automation Letters 2021.
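The shock-absorber intuition can be illustrated with a generic toy computation (not the ESPi controller itself; all constants are made up): a damper acting directly on the link velocity demands its peak force at the instant of impact, while a damper attached through a first-order dynamic extension builds its force up gradually.

```python
dt, v_impact, d = 1e-3, 2.0, 50.0     # step size [s], impact velocity, damping

# Direct damping injection: the demanded force jumps with the link velocity.
f_direct_peak = d * v_impact

# Damping through a dynamic extension: the damper acts on an auxiliary
# first-order state xi that only gradually tracks the link velocity,
# like the piston of a shock absorber.
tau_f = 0.05                           # time constant of the extension [s]
xi, f_ext = 0.0, []
for _ in range(200):                   # simulate 0.2 s after the impact
    xi += dt * (v_impact - xi) / tau_f
    f_ext.append(d * xi)
f_ext_peak = max(f_ext)
```

Right after the impact the extension-based damper demands only a small fraction of the direct damper's force, which mirrors the reduced actuator torques reported for harsh impacts.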
DLR (CC BY-NC-ND 3.0)

Christmas Special: The Common Justin (Mockumentary) (DLR RM, 2021-12-16)
For this Christmas special, the team is observing the rarest of creatures in its natural habitat - the 'Common Justin'.

The Institute of Robotics and Mechatronics at DLR (DLR RM, 2021-12-16)
The institute develops robots that enable humans to interact with their environment more effectively, efficiently, and safely. The robots are meant to operate in environments that are inaccessible or dangerous for humans, but also to support and relieve people at work and in everyday life. On a functional level, our robots reproduce and extend human manipulation and locomotion capabilities. More generally, our robots carry out all kinds of locomotion and environment-interaction tasks in a manner that is as autonomous as possible. Central to this is human-robot interaction, which takes place on both the physical and the cognitive level.

Real-time probabilistic object detection of household objects (DLR RM, 2021-12-01)
The video demonstrates real-time probabilistic object detection, which returns uncertainty estimates for the predictions of deep neural networks. The key technology is sparse Gaussian processes with the so-called Neural Tangent Kernel, which can provide uncertainty estimates of neural network predictions quickly and reliably. Concretely, we show that an object detector can not only "know the known" objects, but also "know the unknown" objects by providing confidence measures for both object classes and their 2D location in an image. The live demonstration took place at the 5th Annual Conference on Robot Learning in November 2021.
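The "know the unknown" behavior rests on the predictive variance of a Gaussian process, sketched here generically with a placeholder RBF kernel standing in for the Neural Tangent Kernel used in the work:

```python
import numpy as np

def rbf(a, b, ls=1.0):
    """Placeholder RBF kernel; the actual work uses the Neural Tangent Kernel."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

x_train = np.array([-0.5, 0.0, 0.5])           # inputs the model has seen
K = rbf(x_train, x_train) + 1e-2 * np.eye(3)   # kernel matrix + observation noise

def predictive_var(x_query):
    """GP posterior variance: small near training data, large far away."""
    q = np.array([x_query])
    k_star = rbf(q, x_train)                   # shape (1, 3)
    return (rbf(q, q) - k_star @ np.linalg.solve(K, k_star.T)).item()

var_near = predictive_var(0.1)   # a "known" input -> low uncertainty
var_far = predictive_var(5.0)    # an "unknown" input -> high uncertainty
```

Thresholding this variance is one way a detector can flag inputs it has never seen, i.e. "know the unknown".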
License: CC-BY 3.0

Smooth Exploration for Robotic Reinforcement Learning (DLR RM, 2021-10-28)
The video presents real robot experiments from our paper at CoRL 2021: "Smooth Exploration for Robotic Reinforcement Learning" by Antonin Raffin, Jens Kober and Freek Stulp.
Reinforcement learning (RL) enables robots to learn skills from interactions with the real world. In practice, the unstructured step-based exploration used in Deep RL -- often very successful in simulation -- leads to jerky motion patterns on real robots. Consequences of the resulting shaky behavior are poor exploration, or even damage to the robot. We address these issues by adapting state-dependent exploration (SDE) to current Deep RL algorithms. To enable this adaptation, we propose two extensions to the original SDE, using more general features and re-sampling the noise periodically, which leads to a new exploration method generalized state-dependent exploration (gSDE). We evaluate gSDE both in simulation, on PyBullet continuous control tasks, and directly on three different real robots: a tendon-driven elastic robot, a quadruped and an RC car. The noise sampling interval of gSDE enables a compromise between performance and smoothness, which allows training directly on the real robots without loss of performance.
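The difference to step-based exploration can be sketched as follows (a simplified illustration with constant stand-in features, not the exact gSDE implementation): for a fixed noise matrix, the exploration noise is a deterministic function of the state, and the matrix is only re-sampled periodically.

```python
import numpy as np

rng = np.random.default_rng(42)

def gsde_action(policy_mean, features, theta_eps):
    """gSDE-style action: for a fixed noise matrix theta_eps, the exploration
    noise depends only on the state features, so consecutive actions vary
    smoothly instead of jittering at every step."""
    return policy_mean + theta_eps @ features

n_features, n_actions, resample_every = 3, 2, 16
theta_eps = None
actions = []
for step in range(64):
    if step % resample_every == 0:        # periodic noise re-sampling
        theta_eps = rng.normal(0.0, 0.1, size=(n_actions, n_features))
    features = np.ones(n_features)        # stand-in for state-dependent features
    actions.append(gsde_action(np.zeros(n_actions), features, theta_eps))
actions = np.array(actions)
```

Within one resampling interval (and, here, an unchanged state) the exploration offset is constant, illustrating how the sampling interval trades smoothness against exploration.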
DLR (CC-BY 3.0)

Predictive Uncertainty Estimation of Neural Networks with Sparse Gaussian Processes (DLR RM, 2021-10-27)
This video presents an uncertainty estimation algorithm for the planetary exploration aerial robot ARDEA. The robot semantically detects rovers and landers. The key technology behind it is sparse Gaussian processes with the Neural Tangent Kernel, which provide fast and reliable uncertainty estimates for neural networks. An in-depth discussion of the presented results is provided in the paper "Trust Your Robots! Predictive Uncertainty Estimation of Neural Networks with Sparse Gaussian Processes" at the 5th Conference on Robot Learning (CoRL) 2021.
DLR (CC-BY 3.0)

Towards Robust Monocular Visual Odometry for Flying Robots on Planetary Missions (DLR RM, 2021-10-06)
The video presents our paper at IROS 2021: "Towards Robust Monocular Visual Odometry for Flying Robots on Planetary Missions" by M. Wudenka, M. G. Müller, N. Demmel, A. Wedler, R. Triebel, D. Cremers, W. Stürzl.
Abstract: In the future, extraterrestrial expeditions will not only be conducted by rovers but also by flying robots. The technical demonstration drone Ingenuity, which recently landed on Mars, will mark the beginning of a new era of exploration unhindered by terrain traversability. Robust self-localization is crucial for that. Cameras, which are lightweight, cheap, and information-rich sensors, are already used to estimate the ego-motion of vehicles. However, methods proven to work in man-made environments cannot simply be deployed on other planets. The highly repetitive textures present in the wastelands of Mars pose a huge challenge to approaches based on descriptor matching. In this paper, we present an advanced robust monocular odometry algorithm that uses efficient optical-flow tracking to obtain feature correspondences between images, and a refined keyframe selection criterion. In contrast to most other approaches, our framework can also handle rotation-only motions, which are particularly challenging for monocular odometry systems. Furthermore, we present a novel approach to estimate the current risk of scale drift based on a principal component analysis of the relative translation information matrix. This way we obtain an implicit measure of uncertainty. We evaluate the validity of our approach on all sequences of a challenging real-world dataset captured in a Mars-like environment and show that it outperforms state-of-the-art approaches. The source code is publicly available at: github.com/DLR-RM/granite An updated version of the paper is available at arxiv.org/abs/2109.05509

Online Centroidal Angular Momentum Reference Generation and Motion Optimization (DLR RM, 2021-10-01)
This video presents a push recovery algorithm for humanoid robots in balancing scenarios by exploiting the system's rotational dynamics.
The robot actively generates centroidal angular momentum (CAM) references based on the force magnitude and direction of the push to counteract the disturbance and maintain its balance. An in-depth discussion of the presented results is provided in the paper “Online Centroidal Angular Momentum Reference Generation and Motion Optimization for Humanoid Push Recovery”.

INSTR: Unknown Object Segmentation from Stereo Images (DLR RM, 2021-09-30)
This is the IROS 2021 presentation of our recent work "Unknown Object Segmentation from Stereo Images" by M. Durner, W. Boerdijk, M. Sundermeyer, W. Friedl, Z.-C. Marton, and R. Triebel.
Abstract: Although instance-aware perception is a key prerequisite for many autonomous robotic applications, most methods only partially solve the problem by focusing solely on known object categories. However, for robots interacting in dynamic and cluttered environments, this is not realistic and severely limits the range of potential applications. Therefore, we propose a novel object instance segmentation approach that does not require any semantic or geometric information about the objects beforehand. In contrast to existing works, we do not explicitly use depth data as input, but rely on the insight that slight viewpoint changes, which for example are provided by stereo image pairs, are often sufficient to determine object boundaries and thus to segment objects. Focusing on the versatility of stereo sensors, we employ a transformer-based architecture that maps directly from the pair of input images to the object instances. This has the major advantage that, instead of a noisy and potentially incomplete depth map on which the segmentation is computed, we use the original image pair to infer the object instances and a dense depth map. In experiments in several different application domains, we show that our Instance Stereo Transformer (INSTR) algorithm outperforms current state-of-the-art methods that are based on depth maps. Training code and pretrained models are available at github.com/DLR-RM/instr.

OAISYS: A Photorealistic Terrain Simulation Pipeline for Unstructured Outdoor Environments (DLR RM, 2021-09-29)
This is the IROS 2021 presentation of our recent work "A Photorealistic Terrain Simulation Pipeline for Unstructured Outdoor Environments" by M. G. Müller, M. Durner, A. Gawel, W. Stürzl, R. Triebel, R. Siegwart.
Abstract: Suitable datasets are an integral part of robotics research, especially for training neural networks in robot perception. However, in many domains, suitable real-world data are scarce and cannot be easily obtained. This problem is especially prevalent for unstructured outdoor environments, in particular planetary ones. Recent advances in photorealistic simulation help researchers to simulate close-to-real data in many domains. Yet, there exists no high-quality synthetic data for planetary exploration tasks. Also, existing simulators lack the fidelity required for generating planetary data, which is inherently less structured than human environments. Synthetic planetary data requires careful modeling and annotation of many different terrain aspects and details, such as textures and distributions of rocks, to become a valuable test-bed for robotics. To fill this gap, we present a novel simulator specifically designed for the needs of planetary robotics vision tasks, but also applicable to other outdoor environments. Our simulator is capable of generating large varieties of (planetary) outdoor scenes with rich generation of metadata, such as multi-level semantic and instance annotations. To demonstrate the wide applicability of this new simulator, we evaluate its performance on typical robotics applications, i.e., semantic segmentation, instance segmentation, and visual SLAM. Our simulator is accessible under github.com/DLR-RM/oaisys.

Virtual Lab Tour with the DLR Humanoid Robot TORO – Humanoids 2020 (DLR RM, 2021-08-05)
This live demonstration was recorded during the virtual lab tour as part of the Humanoids 2020 conference in Munich. The video shows our humanoid robot TORO walking dynamically over compliant, elevated, and rough terrain, as well as maintaining its balance in the presence of strong external pushes.
To find out more: https://www.dlr.de/rm/en/desktopdefault.aspx/tabid-11678/#gallery/28603

Intuitive Task-Level Programming by Demonstration through Semantic Skill Recognition (DLR RM, 2021-07-28)
Intuitive robot programming for non-experts will be essential to increasing automation in small and medium-sized enterprises (SMEs). Programming by Demonstration (PbD) is a fast and intuitive approach, whereas programs created with Task-Level Programming (TLP) are easy to understand and flexible in their execution. In this paper, we propose an approach that combines these complementary advantages of PbD and TLP. Users define complete task-level programs, including all parameters, through PbD alone. We therefore call this approach Task-Level Programming by Demonstration (TLPbD). TLPbD extends skill-based approaches by enabling experts to semantically annotate robot skills with their conditions and effects, which facilitates online skill recognition from pure demonstrations by a non-expert. In a user study with 21 participants, the approach is compared with an existing intuitive TLP approach. The results show that the new approach drastically reduces the programming time while at the same time being more intuitive, reducing mental load, and achieving the same or even better skill sequences.
The video first demonstrates the three different tasks used in the user study (Section IV). Both approaches, Task-Level Programming by Demonstration (TLPbD) and Task-Level Programming (TLP), are shown in comparison. While all three tasks are programmed with TLPbD, the first task cannot be finished using TLP in the same time.
In the second half of the video, the first task is programmed again using TLPbD. Afterwards, a user skill is added manually. Then, the created program is executed and its flexibility is shown.
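The skill-recognition idea behind TLPbD can be sketched generically (skill names and state predicates here are hypothetical, not from the paper): each skill carries expert-annotated conditions and effects, and an observed state change from a demonstration is matched against them.

```python
# Each skill is annotated (by an expert) with preconditions and effects,
# expressed as sets of symbolic state predicates.
SKILLS = {
    "pick":  {"pre": {"gripper_empty"}, "eff": {"holding"}},
    "place": {"pre": {"holding"}, "eff": {"gripper_empty", "object_placed"}},
}

def recognize_skill(state_before, state_after):
    """Match an observed state change against the annotated skills:
    preconditions must hold before, effects must hold afterwards."""
    for name, skill in SKILLS.items():
        if skill["pre"] <= state_before and skill["eff"] <= state_after:
            return name
    return None

demo = recognize_skill({"gripper_empty"}, {"holding"})
```

A sequence of such recognized skills, with their observed parameters, would then form the task-level program.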
Paper: https://elib.dlr.de/128339/

PULSAR (Prototype of an Ultra Large Structure Assembly Robot) (DLR RM, 2021-07-20)
Recently, the European Union funded the project PULSAR (Prototype of an Ultra Large Structure Assembly Robot) through the Space Robotic Technologies program within Horizon 2020. PULSAR aims to develop and demonstrate the technology that will allow precise on-orbit assembly of a segmented mirror using an autonomous robotic system. Within PULSAR, DLR, together with CSEM (Switzerland), Space Application Services (Belgium), and Magellium (France), developed the demonstrator of Precise Assembly of Mirror Tiles (dPAMT), focused on assembling a functional section of a primary mirror using an autonomous mobile robot. The demonstrator used a KUKA KMR robot, endowed with a KUKA iiwa arm, for manipulating the segmented mirror tiles through the HOTDOCK standard interface and performing the assembly using compliant control. The active mirror tiles have the capacity to perform adaptive motions in order to guarantee proper performance of the resulting mirror. The demonstrator successfully showed the autonomous planning and execution of the assembly, and paves the way toward realizing the ambition of a large telescope assembled directly in space.
Further information: https://www.h2020-pulsar.eu/
Cooperative Heterogeneous Robot Team
DLR RM 2021-06-22 | Cooperative map building in a heterogeneous robot team and autonomous sample collection at the International Astronautical Congress (IAC) 2018, as well as localization and map building with three agents in the Moon-analogue environment of the Etna volcano. The tests shown were carried out within the Helmholtz future project ARCHES. Information on the ARCHES project: https://www.arches-projekt.de/projekt-arches/ https://elib.dlr.de/136354/ Information on the ARDEA system: https://www.dlr.de/rm/desktopdefault.aspx/tabid-11715/#gallery/29283 Information on the LRU system: https://www.dlr.de/rm/desktopdefault.aspx/tabid-11431/20129_read-47344/
The SMiLE Control Center for Robotic Tele-Healthcare Assistance
DLR RM 2021-04-21 | The SMiLE ecosystem for robotic healthcare assistance involves different robotic systems such as the humanoid robot Justin and the wheelchair robot EDAN. The robots can work autonomously, but can also be controlled by humans remotely or locally, for example via a tablet interface. To supervise the robots and extend their capabilities, a control center was developed from which communication with the SMiLE robots can be established. From this centralized control center, a tele-healthcare assistant can precisely control the robots, distributed across Bavaria, with the haptic interaction device HUG.
On Time-Optimal Control of Elastic Joints under Input Constraints
DLR RM 2021-02-17 | M. Keppler and A. De Luca, "On Time-Optimal Control of Elastic Joints under Input Constraints," published in: 2020 59th IEEE Conference on Decision and Control (CDC), DOI: 10.1109/CDC42340.2020.9304224
We highlight the equivalence between the motion of an elastic joint and the two-body problem in classical mechanics. Based on this observation, a change of coordinates is introduced that reduces the two-body problem to a pair of decoupled one-body problems. This allows us to treat the rest-to-rest motion problem with bounded actuator torque in an elegant geometric fashion. Instead of dealing directly with the fourth-order dynamics, we consider two equivalent masses whose motions have to be synchronized in separate phase spaces. Based on this idea, we derive a complete synthesis method for time-optimal rest-to-rest motions of this elastic system. The solution is a bang-bang control policy with one or three switches. We also introduce the concept of natural motions, for which the minimum-time solution of the elastic and the rigid system is the same. The closed-form solutions obtained with our purely geometric approach verify the standard optimality conditions.
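The two-body analogy can be made concrete with the standard linear elastic-joint model (the symbols B, M, K, τ below are generic textbook notation, not necessarily the paper's):

```latex
% Motor (inertia B) and link (inertia M) coupled by the joint stiffness K:
B\ddot{\theta} + K(\theta - q) = \tau, \qquad
M\ddot{q} - K(\theta - q) = 0.
% "Two-body" change of coordinates: center of mass and relative coordinate
x_c = \frac{B\theta + Mq}{B + M}, \qquad x_r = \theta - q.
% The fourth-order system reduces to two one-body problems driven by the
% same input \tau, with the familiar reduced mass \mu:
(B + M)\,\ddot{x}_c = \tau, \qquad
\mu\,\ddot{x}_r + K x_r = \frac{\mu}{B}\,\tau, \qquad
\mu = \frac{BM}{B + M}.
```

Synchronizing the motions of these two equivalent masses in their separate phase spaces is what yields the bang-bang structure described above.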
DLR (CC-BY 3.0)
VITA - Virtual Therapy Arm
DLR RM 2021-01-22 | The VITA system supports the rehabilitation and therapy of people with neurological and motor impairments with the help of Virtual Reality (VR). The VITA system is used, for example, in the treatment of phantom limb pain after an amputation or in mobilization after a stroke. In the virtual VITA environment, impaired users can control a fully functional representation of their impaired limb(s) in virtual reality. Control and recognition of the intended motion are based on the user's own muscle signals and a modern machine learning system.
A Sparse Gaussian Approach to Region-Based 6DoF Object Tracking - ACCV 2020
DLR RM 2020-12-18 | For more details, please check the paper and our source code: openaccess.thecvf.com/content/ACCV2020/html/Stoiber_A_Sparse_Gaussian_Approach_to_Region-Based_6DoF_Object_Tracking_ACCV_2020_paper.html github.com/DLR-RM/RBGT
Multi-Task Teleoperation of the Suspended Aerial Manipulator
DLR RM 2020-12-08 | The proposed framework allows the human operator not only to command the end-effector of the SAM, but also to move the flying base in order to achieve a desired camera view. Relying on the kinematic redundancy of the SAM, the proposed framework ensures that base motion can be performed without disturbing the end-effector. This is desirable when the task area is occluded by the arm, or to align the camera view with the joystick motion. Although the operator and the robot are side by side, the former relies only on camera images and haptic feedback, which shows the applicability of the approach to a real teleoperation scenario.
Hierarchical Tracking Control With Arbitrary Task Dimensions: Application to Trajectory Tracking
DLR RM 2020-11-18 | Hierarchical Tracking Control With Arbitrary Task Dimensions: Application to Trajectory Tracking on Submanifolds
Hierarchical impedance control has recently been shown to effectively allow trajectory tracking while guaranteeing the order of priorities during execution. Nevertheless, the tasks must be chosen such that, after being properly decoupled, they are all feasible and lead to an invertible Jacobian matrix. In this work, a modification is proposed that removes both these restrictions. The user is free to specify as many tasks as desired, without necessarily guaranteeing in advance that none of the tasks will become singular during execution. Whenever tasks with higher priority use up all the degrees of freedom, all the other tasks are naturally ignored. As soon as some of the tasks with higher priority become singular, the freed-up controllability is used to execute the next task in the stack. This is realized automatically, without any rearrangement of the tasks in the priority stack. As an application, the case of trajectory tracking on a submanifold of the workspace is considered, in which multiple charts of the atlas are used for the tasks. Simulations are used to validate the stability analysis.
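The strict-priority idea underlying this work can be sketched at the kinematic level with the classical null-space projection scheme. Note that this is only an illustrative sketch, not the paper's torque-level impedance controller; the function names and the damping constant are invented here:

```python
import numpy as np

def damped_pinv(J, lam=1e-2):
    # Damped least-squares pseudoinverse: stays bounded near singularities.
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    return Vt.T @ np.diag(s / (s**2 + lam**2)) @ U.T

def prioritized_velocities(jacobians, task_vels, n_joints, lam=1e-2):
    # Strict-priority resolution: each task acts only in the null space of
    # all higher-priority tasks. If higher tasks use up all degrees of
    # freedom, the projected Jacobian J @ N vanishes and lower tasks are
    # ignored; if a higher task loses rank, N regains directions and the
    # next task in the stack is executed automatically.
    dq = np.zeros(n_joints)
    N = np.eye(n_joints)                       # null-space projector so far
    for J, xdot in zip(jacobians, task_vels):
        JN = J @ N
        dq = dq + N @ damped_pinv(JN, lam) @ (xdot - J @ dq)
        N = N @ (np.eye(n_joints) - np.linalg.pinv(JN) @ JN)
    return dq
```

With two compatible tasks both are tracked; if a second task conflicts with a higher-priority one, its projected Jacobian is zero and it is silently dropped, mirroring the behavior described in the abstract.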
Journal: IEEE Robotics and Automation Letters ( Volume: 5, Issue: 4, Oct. 2020) Conference: 2020 International Conference on Intelligent Robots and Systems (IROS)
Elib: https://elib.dlr.de/135645/ DOI: 10.1109/LRA.2020.3010449
A Smooth Uniting Controller for Robotic Manipulators
DLR RM 2020-11-02 | A Smooth Uniting Controller for Robotic Manipulators: An Extension of the Adaptive Variance Algorithm (AVA)
The compliant behavior that a robotic manipulator realizes in the proximity of the desired goal is typically undesirable when the robot starts far away from the goal itself. In the latter case, high gains can produce motor torques that are infeasible or too dangerous for interactions with humans and the environment. In this paper, a control algorithm is proposed that guarantees smooth high-gain/low-gain transitions to accommodate both the local and global requirements. The building block for this method is the recently proposed Adaptive Variance Algorithm (AVA). The theoretical proof of the result is validated with experiments on a humanoid robot.
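The high-gain/low-gain transition requirement can be illustrated with a generic smooth gain schedule. To be clear, this is not the AVA algorithm or the paper's uniting controller; it is a minimal sigmoid blend with invented parameter names, showing only the qualitative behavior (high stiffness near the goal for local accuracy, low stiffness far away to keep motor torques feasible):

```python
import math

def blended_stiffness(err, k_near, k_far, d0=0.1, steepness=50.0):
    # Smoothly blend between a high stiffness near the goal and a low
    # stiffness far away; d0 is the blend midpoint (distance to goal) and
    # steepness controls the transition width. Illustrative sketch only.
    z = max(min(steepness * (err - d0), 60.0), -60.0)  # avoid exp overflow
    w = 1.0 / (1.0 + math.exp(z))                      # ~1 near goal, ~0 far
    return k_far + w * (k_near - k_far)
```

Because the blend is smooth in the tracking error, the commanded torque K(err)·err has no jumps as the robot approaches the goal.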
Conference: 2020 American Control Conference (ACC) Dates: 1-3 July 2020 Website: http://acc2020.a2c2.org
Elib: https://elib.dlr.de/135643/ Extra: DOI: 10.23919/ACC45564.2020.9147828
DOT: Dynamic Object Tracking for Visual SLAM
DLR RM 2020-10-26 | In this video we present DOT (Dynamic Object Tracking), a front-end that, added to existing visual localization systems, can significantly improve their robustness and accuracy in highly dynamic environments. DOT combines instance segmentation and multi-view geometry to generate masks for dynamic objects, allowing visual localization systems based on rigid scene models to avoid such image areas in their optimizations.
AHEAD (Autonomous Humanitarian Emergency Aid Devices): Remote-controlled access to crisis regions
DLR RM 2020-10-23 | As part of a new collaborative project, researchers from the Institute of Robotics and Mechatronics of the German Aerospace Center (Deutsches Zentrum für Luft- und Raumfahrt; DLR) and a consortium of additional DLR institutes and technology partners are investigating how aid supplies can be safely brought to their destinations using remote-controlled trucks. Robot-controlled vehicles are to be used on routes that pose a great risk to human drivers, such as the impassable and flood-prone areas of South Sudan. They will be controlled by telepresence from a safe location. The launch of the joint project with the United Nations World Food Programme (WFP) took place in Oberpfaffenhofen on 21 October.
A Sparse Gaussian Approach to Region-Based 6DoF Object Tracking - Real-World Experiments
DLR RM 2020-10-02 | For more details, please check the paper and our source code: github.com/DLR-RM/RBGT
A Sparse Gaussian Approach to Region-Based 6DoF Object Tracking - Approach and Evaluation
DLR RM 2020-10-02 | For more details, please check the paper and our source code: github.com/DLR-RM/RBGT
3D Scene Reconstruction from a Single Viewport (ECCV 2020, long)
DLR RM 2020-08-21 | Code: github.com/DLR-RM/SingleViewReconstruction
Robot task programming often leads to inefficient plans, as opportunities for parallelization and precomputation are usually not exploited by the programmer. This inefficiency is often especially obvious in mobile manipulation, where path planning and pose estimation algorithms are time-consuming operations. In this paper, we introduce the concept of Resource-Aware Task Nodes (RATNs), a powerful descriptive action model for robots.
Next, we propose an algorithm that executes so-called Concurrent Dataflow Task Networks (CDTNs), robot plans consisting of RATNs. It optimizes programmed plans based on two sources of information: 1. The control flow represented in the original task plan, whose constraints are relaxed to generate opportunities for parallelization and precomputation. 2. Dependencies between actions pertaining to resources, data flows and world model changes, the latter being equivalent to preconditions and effects.
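The dependency-driven parallelization described above can be illustrated with a toy scheduler. This is not RAFCON's actual API, and real CDTN execution uses finer-grained dataflow than the wave-based simplification here; action names and the function are invented for illustration:

```python
from concurrent.futures import ThreadPoolExecutor

def execute_parallel(actions, deps):
    # Toy wave-based scheduler: run each action as soon as all of its
    # declared dependencies (resources, data flows, world-model changes)
    # have finished; independent actions run concurrently in one wave.
    done, order = set(), []
    remaining = {name: set(d) for name, d in deps.items()}
    with ThreadPoolExecutor() as pool:
        while remaining:
            ready = [a for a, d in remaining.items() if d <= done]
            if not ready:
                raise ValueError("cyclic dependency among actions")
            futures = [(a, pool.submit(actions[a])) for a in ready]
            for a, fut in futures:
                fut.result()            # wait for the whole wave to finish
                done.add(a)
                order.append(a)
                del remaining[a]
    return order
```

For example, a perception action and a base-driving action with no mutual dependencies run in the same wave, while a grasp that depends on both is deferred to a later wave.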
CDTNs have been integrated in our open-source task programming framework RAFCON (dlr-rm.github.io/RAFCON), and we show that they lead to an 11% - 29% improvement in execution time in two simulated mobile manipulation scenarios.
SwarmRail: A Novel Overhead Robot System for Indoor Transport and Mobile Manipulation
DLR RM 2020-06-10 | SwarmRail represents a novel solution to overhead manipulation from a mobile unit that drives in an above-ground rail structure. The concept is based on the combination of an omnidirectional mobile platform and L-shaped rail profiles that form a through-going central gap. This gap makes it possible to mount a robotic manipulator arm overhead at the underside of the mobile platform. Compared to existing solutions, SwarmRail enables continuous overhead manipulation while traversing rail crossings. It can also be operated in a robot swarm, as it allows for the concurrent operation of a group of mobile SwarmRail units inside a single rail network. Experiments on a first functional demonstrator confirm the capability of the concept. Potential fields of application range from industry and logistics to vertical farming.