CMU Robotics Institute
NavLab 1984 - 1994
Saurabh Gupta
Assistant Professor
Electrical and Computer Engineering, University of Illinois Urbana-Champaign
May 3, 2024
Robot Learning by Understanding Egocentric Videos
Abstract:
The true gains of machine learning in AI subfields such as computer vision and natural language processing have come from the use of large-scale, diverse datasets for learning. In this talk, I will discuss how we can leverage large-scale, diverse data in the form of egocentric videos (first-person videos of humans conducting different tasks) to similarly scale up policy learning for robots. A central challenge is the gap in embodiment and intentions between humans and robots. I will describe how we can leverage video data in spite of this gap by learning at different levels of abstraction. I will demonstrate applications of this principle for a) acquiring low-level visuomotor subroutines and high-level value functions for navigation, and b) building an interactive understanding of objects, through observation of human hands, for manipulation.
Bio:
Saurabh Gupta is an Assistant Professor in the ECE Department at UIUC. Before starting at UIUC in 2019, he received his Ph.D. from UC Berkeley in 2018 and spent the following year as a Research Scientist at Facebook AI Research in Pittsburgh. His research interests span computer vision, robotics, and machine learning, with a focus on building agents that can intelligently interact with the physical world around them. He received the President’s Gold Medal at IIT Delhi in 2011, the Google Fellowship in Computer Vision in 2015, an Amazon Research Award in 2020, and an NSF CAREER Award in 2022.
Dieter Fox
Professor, University of Washington
Senior Director of Robotics Research, NVIDIA
Where is RobotGPT?
Abstract:
Recent years have seen astonishing progress in the capabilities of generative AI techniques, particularly in the areas of language and visual understanding and generation. Key to the success of these models is the use of image and text datasets of unprecedented scale, along with models that are able to digest such large datasets. We are now seeing the first examples of leveraging such models to equip robots with open-world visual understanding and reasoning capabilities. Unfortunately, however, we have not yet reached the RobotGPT moment; these models still struggle with reasoning about geometry and physical interactions in the real world, resulting in brittle performance on seemingly simple tasks such as manipulating objects in the open world. A crucial reason for this problem is the lack of data suitable for training powerful, general models for robot decision making and control.
In this talk, I will discuss approaches to generating large datasets for training robot manipulation capabilities, with a focus on the role simulation can play in this context. I will show some of our prior work, where we demonstrated robust sim-to-real transfer of manipulation skills trained in simulation, and then present a path toward generating large scale demonstration sets that could help train robust, open-world robot manipulation models.
Bio:
Dieter Fox is Senior Director of Robotics Research at NVIDIA and Professor in the Allen School of Computer Science & Engineering at the University of Washington, where he heads the UW Robotics and State Estimation Lab. Dieter’s research is in robotics and artificial intelligence, with a focus on state estimation and perception applied to problems such as robot manipulation, mapping, and object detection and tracking. He has published more than 200 technical papers and is the co-author of the textbook “Probabilistic Robotics”. He is a Fellow of the IEEE, AAAI, and ACM, and recipient of the 2020 IEEE Pioneer in Robotics and Automation Award and the 2023 John McCarthy Award. He was an editor of the IEEE Transactions on Robotics, program co-chair of the 2008 AAAI Conference on Artificial Intelligence, and program chair of the 2013 Robotics: Science and Systems conference.
https://www.cs.cmu.edu/news/2024/wildfire-drones
Learn more from the researchers here:
theairlab.org/wildfire
https://www.cs.cmu.edu/~softagents/
https://imaging.cs.cmu.edu
We are conducting research to develop Unmanned Aerial Systems to aid in wildfire monitoring. The hazardous, dynamic, and visually degraded environment of wildfires gives rise to many unsolved fundamental research challenges.
Planning: how should the system decide when and where to observe in a constantly evolving and uncertain environment?
Perception: how do we overcome severe visual degradation to detect crew members and obstacles?
Forecasting: how can we use our observations to predict how the environment will evolve in the short and long term?
Integration: how do we incorporate all these challenges into a cohesive closed-loop system?
We aim to conduct integrative research that enables autonomous systems to operate robustly under high uncertainty and risk.
Krzysztof Skonieczny
Associate Professor
Electrical and Computer Engineering, Concordia University
April 19, 2024
https://www.ri.cmu.edu/event/reduced-gravity-flights-and-field-testing-for-lunar-and-planetary-rovers/
Abstract:
As humanity returns to the Moon and develops outposts and related infrastructure, we need to understand how robots and work machines will behave in this harsh environment. It is challenging to find representative testing environments on Earth for Lunar and planetary rovers. To investigate the effects of reduced gravity on interactions with granular terrains, parabolic flights subject not just the robot but also the soil grains to effectively reduced-g; this technique has enabled us to study the reductions in traction and the increases in sinkage and associated soil mobilization experienced in Lunar-g. Field testing rovers is another essential method for planning future Lunar operations. Our theoretical work in optimal control, and associated experiments in a planetary analogue terrain, have led to skid-steer rover trajectories that can be 10%-20% more energy-efficient than point-turn / straight-line paths.
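The intuition behind the energy savings is that turning a skid-steer vehicle in place is pure track slip. A toy calculation under an assumed power model (the cost model and all constants below are illustrative assumptions of mine, not the speaker's model) shows why a gentle arc can beat a point turn followed by a straight drive:

```python
import math

# Hypothetical constants: drive cost per unit track speed, extra cost for the
# track slip that skid-steer turning requires, track separation, cruise speed.
C_DRIVE, C_SLIP, WIDTH, SPEED = 1.0, 2.0, 0.5, 0.3

def energy(v_left, v_right, duration):
    # Toy power model: driving effort plus a penalty on differential track speed.
    power = C_DRIVE * (abs(v_left) + abs(v_right)) + C_SLIP * abs(v_left - v_right)
    return power * duration

def point_turn_then_straight(dist, dheading):
    # Turn in place (tracks opposed, pure slip), then drive straight.
    turn_time = dheading / (2 * SPEED / WIDTH)
    return energy(SPEED, -SPEED, turn_time) + energy(SPEED, SPEED, dist / SPEED)

def constant_arc(dist, dheading):
    # Spread the same heading change over the whole path as a gentle arc.
    duration = dist / SPEED
    dv = (dheading / duration) * WIDTH / 2   # differential track speed for the arc
    return energy(SPEED - dv, SPEED + dv, duration)

d, dh = 5.0, math.pi / 2
print(point_turn_then_straight(d, dh), constant_arc(d, dh))  # the arc is cheaper
```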
Bio:
Dr. Krzysztof (Chris) Skonieczny is an Associate Professor in Electrical and Computer Engineering at Concordia University and a Tier 2 Canada Research Chair in Aerospace Robotics. He completed Bachelor’s and Master’s degrees in aerospace engineering at the University of Toronto, and a PhD in robotics at CMU. He collaborates with the Canadian Space Agency, as well as NASA and the European Space Agency. His research interests include experimentation and modeling of reduced-gravity robot-terrain interactions, advanced rover and wheel design, computer vision for terrain-sensing applications, and utilizing Lunar/planetary regolith for 3D printing and construction.
Jonathan Hurst
Co-Founder and Chief Robot Officer
Agility Robotics
April 11, 2024
Human-Centric Robots and How Learning Enables Generality
Abstract:
Humans have dreamt of robot helpers forever. What’s new is that this dream is becoming real. New developments in AI, building on foundations of hardware and passive dynamics, enable vastly improved generality. Robots can step out of highly structured environments and become more human-centric: operating in human spaces, interacting with people, and doing some basic human workflows. At Agility Robotics, our bipedal human-centric robot, Digit, is learning skills inside a digital twin of real-world customer environments, and beginning to achieve performance that exceeds any prior control approach, with less engineering time invested to learn new skills. By connecting a Large Language Model, Digit can convert natural language high-level requests into complex robot instructions, composing the library of skills together, using human context to achieve real work in the human world. All of this is new – and it is never going back: AI will drive a fast-following robot revolution that is going to change the way we live.
Bio:
Jonathan W. Hurst is Chief Robot Officer and co-founder of Agility Robotics, and Professor and co-founder of the Oregon State University Robotics Institute. He holds a B.S. in mechanical engineering and an M.S. and Ph.D. in robotics, all from Carnegie Mellon University. Throughout his career, his research has focused on understanding the fundamental science and engineering best practices for robotic legged locomotion and physical interaction. At OSU, he led the team that developed ATRIAS, the first robot to reproduce human walking gait dynamics, and Cassie, which holds the world record for the fastest 100 meter dash by a bipedal robot. At Agility Robotics, Hurst is building upon this R&D foundation to develop human-centric, multi-purpose robots such as Digit, the first commercially available bipedal robot made for real-world logistics work. Hurst spends every day working to realize his lifelong vision of robots going where people go, generating greater productivity across the economy, and improving quality of life for all.
Jia Deng
Associate Professor
Department of Computer Science, Princeton University
April 5, 2024
Toward an ImageNet Moment for Synthetic Data
Abstract:
Data, especially large-scale labeled data, has been a critical driver of progress in computer vision. However, many important tasks remain starved of high-quality data. Synthetic data from computer graphics is a promising solution to this challenge, but it remains in limited use. This talk will present our work on Infinigen, a procedural synthetic data generator designed to create unlimited high-quality labeled data for computer vision. Infinigen is entirely procedural: every asset, from shape to texture, is generated from scratch via randomized mathematical rules. I will present our initial system focused on natural objects, and our ongoing work, which expands coverage to indoor environments.
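To make "generated from scratch via randomized mathematical rules" concrete, here is a minimal sketch of the procedural idea (my own NumPy illustration, not Infinigen's actual code or API): a random seed plus a few mathematical rules yields an unlimited supply of distinct assets whose ground-truth geometry is known by construction.

```python
import numpy as np

def random_rock(seed, n=32):
    """One procedural asset: a unit sphere displaced by random harmonics."""
    rng = np.random.default_rng(seed)
    theta = np.linspace(0, np.pi, n)        # latitude
    phi = np.linspace(0, 2 * np.pi, n)      # longitude
    T, P = np.meshgrid(theta, phi)
    r = 1.0                                 # start from a sphere...
    for k in range(1, 5):                   # ...and add random low-frequency bumps
        amp = rng.uniform(0, 0.15 / k)
        r = r + amp * np.sin(k * T + rng.uniform(0, 2 * np.pi)) \
                    * np.cos(k * P + rng.uniform(0, 2 * np.pi))
    x = r * np.sin(T) * np.cos(P)
    y = r * np.sin(T) * np.sin(P)
    z = r * np.cos(T)
    return np.stack([x, y, z], axis=-1)     # (n, n, 3) vertex grid

# Unlimited, perfectly labeled data: every vertex and mask is known because
# we generated it ourselves.
assets = [random_rock(seed) for seed in range(100)]
```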
Bio:
Jia Deng is an Associate Professor of Computer Science at Princeton University. His research focuses on computer vision and machine learning. He received his Ph.D. from Princeton University and his B.Eng. from Tsinghua University, both in computer science. He is a recipient of the Sloan Research Fellowship, the NSF CAREER award, and the ONR Young Investigator award.
Simon Lucey
Director, Australian Institute for Machine Learning (AIML)
Professor, University of Adelaide
March 19, 2024
Learning with Less
Abstract:
The performance of an AI is nearly always associated with the amount of data you have at your disposal. Self-supervised machine learning can help – mitigating tedious human supervision – but the need for massive training datasets in modern AI seems unquenchable. Sometimes it is not the amount of data, but the mismatch of statistics between the train and test sets – commonly referred to as bias – that limits the utility of an AI. In this talk I will explore a new direction based on the concept of a “neural prior” that relies on no training dataset whatsoever. A neural prior speaks to the remarkable ability of neural networks to both memorise training examples and generalise to unseen test examples. Though never explicitly enforced, the chosen architecture of a neural network applies an implicit neural prior to regularise its predictions. It is this property we will leverage for problems that historically suffer from a paucity of training data or out-of-distribution bias. We will demonstrate the practical application of neural priors to augmented reality, autonomous driving and noisy signal recovery – with many of these outputs already being taken up in industry.
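A minimal sketch of the neural-prior idea, in the spirit of deep-image-prior methods (my own PyTorch toy, not the speaker's code; the signal and architecture are placeholders): fit an untrained network to a single noisy signal and stop early. No training dataset is involved; the architecture's implicit bias does the regularising.

```python
import torch
import torch.nn as nn

noisy = torch.randn(1, 1, 256)                  # stand-in for one noisy 1-D signal
z = torch.randn(1, 8, 256)                      # fixed random input code

net = nn.Sequential(                            # the architecture *is* the prior
    nn.Conv1d(8, 32, 5, padding=2), nn.ReLU(),
    nn.Conv1d(32, 32, 5, padding=2), nn.ReLU(),
    nn.Conv1d(32, 1, 5, padding=2))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(300):                         # early stopping is the regulariser
    opt.zero_grad()
    loss = ((net(z) - noisy) ** 2).mean()       # fit the single noisy observation
    loss.backward()
    opt.step()

denoised = net(z).detach()                      # no training dataset was used
```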
Bio:
Simon Lucey, Ph.D., is the Director of the Australian Institute for Machine Learning (AIML) and a professor in the School of Computer and Mathematical Sciences at the University of Adelaide. Prior to this he was an associate research professor at Carnegie Mellon University’s Robotics Institute (RI) in Pittsburgh, USA, where he spent over 10 years as an academic. He was also Principal Research Scientist at the autonomous vehicle company Argo AI from 2017-2022. He has received various career awards, including an Australian Research Council Future Fellowship (2009-2013). He is also currently a member of the Australian Government’s AI Expert Group and its National Robotics Strategy committee. Simon’s research interests span computer vision, machine learning, and robotics. He enjoys drawing inspiration from AI researchers of the past to attempt to unlock the computational and mathematical models that underlie the processes of visual perception.
C. Karen Liu
Professor
Computer Science Department, Stanford University
Abstract:
Large generative models for human motion, analogous to ChatGPT for text, will enable human motion synthesis and prediction for a wide range of applications such as character animation, humanoid robots, AR/VR motion tracking, and healthcare. Such a model would generate diverse, realistic human motions and behaviors, including kinematics and dynamics, and could be conditioned on various inputs, such as audio, video, text, or medical data. However, building such a model requires a massive and diverse training dataset of high-quality 3D human motion, which is currently limited by labor-intensive and confined data collection processes. In this talk, I will delve into several projects that innovate human motion capture technologies, aiming to amass a large-scale repository of human motion data encompassing a broad spectrum of activities performed in diverse real-world environments. Additionally, I will highlight our recent effort in building sophisticated generative models for human motion. These models are characterized by their ability to produce high-fidelity outputs, adapt to various conditioning inputs, and offer precise controllability.
Bio:
C. Karen Liu is a professor in the Computer Science Department at Stanford University. Liu’s research interests are in computer graphics and robotics, including physics-based animation, character animation, optimal control, reinforcement learning, and computational biomechanics. She developed computational approaches to modeling realistic and natural human movements, learning complex control policies for humanoids and assistive robots, and advancing fundamental numerical simulation and optimal control algorithms. The algorithms and software developed in her lab have fostered interdisciplinary collaboration with researchers in robotics, computer graphics, mechanical engineering, biomechanics, neuroscience, and biology. Liu received a National Science Foundation CAREER Award, an Alfred P. Sloan Fellowship, and was named one of the Young Innovators Under 35 by Technology Review. Liu also received the ACM SIGGRAPH Significant New Researcher Award for her contributions to the field of computer graphics. In 2021, Liu was inducted into the ACM SIGGRAPH Academy.
https://tml.stanford.edu
Dr. Michael Yip
Associate Professor
Dept. of Electrical and Computer Engineering,
The University of California San Diego
February 16, 2024
Teaching a Robot to Perform Surgery: From 3D Image Understanding to Deformable Manipulation
Abstract:
Robot manipulation of rigid household objects and environments has made massive strides in the past few years due to achievements in the computer vision and reinforcement learning communities. One area that has progressed at a slower pace is manipulating deformable objects. For example, surgical robots are used today via teleoperation with a human in the loop, but replacing the human’s visual understanding and task performance with an AI remains a lofty and puzzling challenge. How do you build intuition and control of how to deform, stretch, or cut anatomical tissue, find hemorrhages and suction blood and bodily fluids from view, or simply localize your robot within a dynamically changing and deformable world in real-time?
In this talk, I will discuss our work to automate robotic surgery and how we build new modeling and learning schemes for deformable robot manipulation and visual servoing. I will discuss how we analyze a multimodal spectrum of sensory information to solve real-to-sim and sim-to-real problems, while toeing a fine line between physics-based models and less-explainable yet highly successful latent-space embeddings. I will show how this translates beyond the operating room and into general robot manipulation.
Bio:
Michael Yip, Ph.D., is an Associate Professor at the University of California San Diego and the Director of the Advanced Robotics and Controls Laboratory. His research expertise is at the intersection of robotics, machine learning, and computer vision, enabling robots to work with deformable objects and environments with image guidance and tactile perception. The work has been applied to automating robotic surgery, enabling snake robot locomotion, coordinating multiple robot arms, autonomous driving, and search and rescue. Dr. Yip and his research group have won numerous best paper awards at major robotics conferences and journals. Dr. Yip was previously a Research Associate with Disney Research, a Visiting Professor at Stanford University, and a Visiting Professor with Amazon Robotics. He received a B.Sc. from the University of Waterloo, an M.S. from the University of British Columbia, and his Ph.D. from Stanford University.
Robert Ambrose
J. Mike Walker '66 Chair Professor
Mechanical Engineering, Texas A&M University
November 17, 2023
https://www.ri.cmu.edu/event/robots-at-the-johnson-space-center-and-future-plans/
Robots at the Johnson Space Center and Future Plans
Abstract:
The seminar will review a series of robotic systems built at the Johnson Space Center over the last 20 years. These will include wearable robots (exoskeletons, powered gloves and jetpacks), manipulation systems (ISS cranes down to human scale) and lunar mobility systems (human surface mobility and robotic rovers). As all robotics presentations should, this will include some fun videos.
Bio:
Having recently retired from NASA, Dr. Robert Ambrose is now the J. Mike Walker Chair in Mechanical Engineering at Texas A&M University and Associate Director of the Texas A&M Space Institute. He will outline his plans to extend the work of his NASA team, with projects in surface mobility, robotic manipulation and human augmentation. Dr. Ambrose is the Texas A&M Director for Space and Robotic Initiatives, and the Director of Space and Robotics at the Bush Defense Complex. He was elected to the National Academy of Engineering, serves as the VP of the IEEE Robotics and Automation Society for Industrial Activities, and retired from NASA as a member of the Senior Executive Service.
Robert Ambrose received his Ph.D. in Mechanical Engineering from the University of Texas at Austin and his M.S. and B.S. degrees from Washington University in St. Louis. He has previously worked as a researcher in academia (UT Austin), as an engineer at an FFRDC (MITRE), and as a project leader at a small startup company (Metrica, Inc.).
At NASA’s Johnson Space Center from 2000 to 2021, he served as a Project Manager, Branch Chief, and later as the Division Chief for the Software, Robotics and Simulation Division. Dr. Ambrose’s division supported the International Space Station (ISS); software and simulation for the SpaceX, Boeing, and Orion spacecraft; and the development of exercise equipment, wearable robotics, and jetpacks used by astronauts in space. He led the design of futuristic machines like Robonaut, the Chariot rovers, Centaur, Valkyrie, MRV, the Resource Prospector / VIPER rovers, and the LTV rover that are paving the way for space exploration. Dr. Ambrose also served for 7 years at NASA Headquarters as the Principal Technologist for Robotics and Autonomous Systems. He is married to Dr. Catherine G. Ambrose, with homes in Colorado and Texas. He may be reached at rambrose@tamu.edu.
Marc Deisenroth
DeepMind Chair of Machine Learning and Artificial Intelligence
University College London
October 27, 2023
Data-Efficient Learning for Robotics and Reinforcement Learning
Abstract:
Data efficiency, i.e., learning from small datasets, is of practical importance in many real-world applications and decision-making systems. Data efficiency can be achieved in multiple ways, such as probabilistic modeling, where models and predictions are equipped with meaningful uncertainty estimates, transfer learning, or the incorporation of valuable prior knowledge.
In this talk, I will focus on how robot learning can benefit from data-efficient learning algorithms. We will discuss three different ways to use data efficiently in reinforcement learning and robotics settings: model-based reinforcement learning, transfer learning, and offline reinforcement learning.
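As a concrete illustration of the probabilistic-modeling route to data efficiency (a toy sketch in the spirit of GP-based model learning such as PILCO; the task, dataset size, and code below are my own assumptions, not the speaker's): fit a Gaussian process to a handful of real transitions, then query it, with calibrated uncertainty, in place of the real system.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)
true_dynamics = lambda x, u: x + 0.1 * u - 0.05 * np.sin(x)  # unknown to the learner

# The entire real-world budget: 25 (state, action) -> next-state transitions.
X = rng.uniform(-2, 2, (25, 2))
y = np.array([true_dynamics(x, u) for x, u in X])
model = GaussianProcessRegressor().fit(X, y)

# Downstream policy search queries the model, not the robot, and gets an
# uncertainty estimate it can be penalized for, discouraging plans that pass
# through regions the model has never seen.
mean, std = model.predict(np.array([[0.5, 1.0]]), return_std=True)
```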
Bio:
Professor Marc Deisenroth is the DeepMind Chair of Machine Learning and Artificial Intelligence at University College London, Deputy Director of the UCL Centre for Artificial Intelligence, and part of the UNESCO Chair on Artificial Intelligence at UCL. He also holds a visiting faculty position at the University of Johannesburg. Marc co-leads the Sustainability and Machine Learning Group at UCL. His research interests center around data-efficient machine learning, probabilistic modeling and autonomous decision making with applications in weather, nuclear fusion, and robotics.
Marc was Program Chair of EWRL 2012, Workshops Chair of RSS 2013, EXPO Chair at ICML 2020, Tutorials Chair at NeurIPS 2021, and Program Chair at ICLR 2022. He is an elected member of the ICML Board. He received Paper Awards at ICRA 2014, ICCAS 2016, ICML 2020, AISTATS 2021, and FAccT 2023. Marc is co-author of the book Mathematics for Machine Learning, published by Cambridge University Press.
Fei Miao
Associate Professor
Department of Computer Science & Engineering, University of Connecticut
October 13, 2023
https://www.ri.cmu.edu/event/learning-and-control-for-safety-efficiency-and-resiliency-of-embodied-ai/
Learning and Control for Safety, Efficiency, and Resiliency of Embodied AI
Abstract:
The rapid evolution of ubiquitous sensing, communication, and computation technologies has revolutionized cyber-physical systems (CPS) across various domains such as robotics, smart grids, aerospace, and smart cities. Integrating learning into dynamic systems control presents significant opportunities for Embodied AI. However, current decision-making frameworks lack a comprehensive understanding of the tridirectional relationship among communication, learning, and control, posing challenges for multi-agent systems in complex environments.
In the first part of the talk, we focus on learning and control with information sharing that leverages communication capabilities. We design an uncertainty quantification method for collaborative perception in connected autonomous vehicles (CAVs). Our findings demonstrate that communication among multiple agents can enhance object detection accuracy and reduce uncertainty. Building upon this, we develop a safe and scalable deep multi-agent reinforcement learning (MARL) framework that leverages shared information among agents to improve system safety and efficiency. We validate the benefits of communication in MARL, particularly in the context of CAVs in challenging mixed-traffic scenarios. We incentivize agents to communicate and coordinate with a novel reward reallocation scheme based on the Shapley value for MARL, as sketched below. Additionally, we present our theoretical analysis of robust MARL methods under state uncertainties, such as uncertainty quantification in the perception modules or worst-case adversarial state perturbations.
In the second part of the talk, we briefly outline our research contributions on robust MARL and data-driven robust optimization for autonomous mobility-on-demand (AMoD) systems and sustainable mobility. We also highlight our research results concerning CPS security. Through our findings, we aim to advance Embodied AI and CPS for safety, efficiency, and resiliency in dynamic environments.
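The sketch below shows generic Shapley-value credit assignment on a toy coalition game (the textbook computation; the authors' actual MARL estimator is necessarily more sophisticated): each agent's share of the team reward is its average marginal contribution over all orderings of the team.

```python
from itertools import permutations

def shapley_shares(agents, team_value):
    """team_value(frozenset_of_agents) -> scalar reward of that coalition."""
    shares = {a: 0.0 for a in agents}
    orders = list(permutations(agents))
    for order in orders:
        coalition = set()
        for a in order:
            before = team_value(frozenset(coalition))
            coalition.add(a)
            shares[a] += team_value(frozenset(coalition)) - before  # marginal gain
    return {a: v / len(orders) for a, v in shares.items()}

# Toy usage: two vehicles that only earn the full reward by coordinating.
value = {frozenset(): 0.0, frozenset({'cav1'}): 1.0,
         frozenset({'cav2'}): 1.0, frozenset({'cav1', 'cav2'}): 4.0}
print(shapley_shares(['cav1', 'cav2'], lambda c: value[c]))  # {'cav1': 2.0, 'cav2': 2.0}
```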
Bio:
Fei Miao is the Pratt & Whitney Associate Professor in the School of Computing and a courtesy faculty member of the Department of Electrical & Computer Engineering at the University of Connecticut, which she joined in 2017. She is affiliated with the Institute of Advanced Systems Engineering and the Eversource Energy Center. She was a postdoctoral researcher at the GRASP Lab and the PRECISE Lab at UPenn from 2016 to 2017. She received her Ph.D. and the Best Doctoral Dissertation Award in Electrical and Systems Engineering, with a dual M.S. degree in Statistics, from the University of Pennsylvania in 2016. She received her B.S. degree in Automation from Shanghai Jiao Tong University in 2010. Her research focuses on multi-agent reinforcement learning, robust optimization, uncertainty quantification, and game theory to address the safety, efficiency, robustness, and security challenges of Embodied AI and CPS, in systems such as connected autonomous vehicles, sustainable and intelligent transportation systems, and smart cities. Dr. Miao is a recipient of the NSF CAREER Award and several other NSF awards. She received the Best Paper Award at the 12th ACM/IEEE International Conference on Cyber-Physical Systems (ICCPS) in 2021, was a Best Paper Award finalist at the 6th ICCPS in 2015, and received the Best Paper Award at the 2023 AAAI DACC workshop.
Shuran Song
Assistant Professor
Robotics and Embodied AI Lab, Stanford University
October 6, 2023
https://www.ri.cmu.edu/event/learning-meets-gravity-robots-that-learn-to-embrace-dynamics-from-data/
Carnegie Mellon University Robotics Institute Seminar Series 2023
Learning Meets Gravity: Robots that Learn to Embrace Dynamics from Data
Abstract:
Despite the incredible capabilities (speed and repeatability) of our hardware today, many robot manipulators are deliberately programmed to avoid dynamics – moving slowly enough that they can adhere to quasi-static assumptions about the world. In contrast, people frequently (and subconsciously) make use of dynamic phenomena to manipulate everyday objects – from unfurling blankets to tossing trash – to improve efficiency and physical reach range. These abilities are made possible by an intuition of physics, a cornerstone of intelligence. How do we impart the same on robots?
Modeling the complex dynamics of the unstructured world is challenging. However, by enabling robots to directly learn perception-action feedback loops from raw sensory data, we show that it is possible to relax the need for accurate physics models, thereby allowing robots to (i) acquire dynamic skills for complex objects, (ii) adapt to new scenarios using visual feedback, and (iii) use their dynamic interactions to improve their understanding of the world. Learning from data allows us to change the way we think about dynamics – from avoiding it to embracing it – simplifying a number of classically challenging problems and leading to new robot capabilities.
Bio:
Shuran Song is an Assistant Professor at Stanford University, leading the Robotics and Embodied AI Lab (Real@Stanford). Before joining Stanford, she was faculty at Columbia University. Shuran received her Ph.D. in Computer Science from Princeton University and her B.Eng. from HKUST. Her research interests lie at the intersection of computer vision and robotics. Song’s research has been recognized through several awards, including Best Paper Awards at RSS’22 and T-RO’20, Best System Paper Awards at CoRL’21 and RSS’19, and Best Paper finalist recognition at RSS, ICRA, CVPR, and IROS. She is also a recipient of the NSF CAREER Award and a Sloan Research Fellowship, as well as research awards from Microsoft, Toyota Research, Google, Amazon, and JP Morgan. To learn more about Shuran’s work please visit: shurans.github.io
More result videos: extreme-parkour.github.io
Group website: https://www.cs.cmu.edu/~dpathak/
TLDR: A low-cost robot does extreme parkour, including high jumps onto obstacles 2x its height, long jumps across gaps 2x its length, handstands on stairs, and running across tilted ramps.
Authors: Xuxin Cheng*, Kexin Shi*, Ananye Agarwal, Deepak Pathak
Abstract: Humans can perform parkour by traversing obstacles in a highly dynamic fashion requiring precise eye-muscle coordination and movement. Getting robots to do the same task requires overcoming similar challenges. Classically, this is done by independently engineering perception, actuation, and control systems to very low tolerances. This restricts them to tightly controlled settings such as a predetermined obstacle course in labs. In contrast, humans are able to learn parkour through practice without significantly changing their underlying biology. In this paper, we take a similar approach to developing robot parkour on a small low-cost robot with imprecise actuation and a single front-facing depth camera for perception which is low-frequency, jittery, and prone to artifacts. We show how a single neural net policy operating directly from a camera image, trained in simulation with large-scale RL, can overcome imprecise sensing and actuation to output highly precise control behavior end-to-end. We show our robot can perform a high jump on obstacles 2x its height, long jump across gaps 2x its length, do a handstand and run across tilted ramps, and generalize to novel obstacle courses with different physical properties.
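A shape-level sketch of the kind of end-to-end policy the paper describes (layer sizes and input dimensions below are placeholders of mine, not the authors' architecture): a single network maps a low-resolution depth image plus proprioception directly to joint targets.

```python
import torch
import torch.nn as nn

class ParkourPolicy(nn.Module):
    """Depth image + proprioception -> joint targets, end-to-end."""
    def __init__(self, proprio_dim=30, n_joints=12):
        super().__init__()
        self.encoder = nn.Sequential(             # depth image -> feature vector
            nn.Conv2d(1, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2), nn.ReLU(),
            nn.Flatten(), nn.LazyLinear(128), nn.ReLU())
        self.head = nn.Sequential(                # fuse with body state
            nn.Linear(128 + proprio_dim, 128), nn.ReLU(),
            nn.Linear(128, n_joints))             # target joint angles

    def forward(self, depth, proprio):
        return self.head(torch.cat([self.encoder(depth), proprio], dim=-1))

policy = ParkourPolicy()
act = policy(torch.rand(1, 1, 64, 64), torch.rand(1, 30))  # (1, 12) joint targets
```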
Background Music from ashamaluevmusic.com
RoboAgent can complete 12 manipulation skills across differing scenes. This research points toward a robotic learning platform adaptable to changing environments. Unlike past research, the team demonstrated their work in real environments — not simulation — and did so with far less data than previous projects.
Read the full story
https://www.ri.cmu.edu/parenting-a-3-year-old-robot/
The research team includes Kumar, Tulsiani, Gupta, Bharadwaj, Sharma and Jay Vakil from Meta AI. More information about RoboAgent and RoboSet is available on the project’s website.
Robotics' Growing Role in Cognitive Science
Inventing the Future: Al and CS in the 21st Century
June 4, 1998
In honor of Raj Reddy's 60th birthday in 1998, a symposium and celebration was held at which many leaders in the fields of AI and computer science spoke.
Herbert Simon gave this forward-looking talk, which is not only historically relevant but also speaks to the issues surrounding artificial intelligence that we are discussing today.
From Wikipedia:
Herbert Alexander Simon (June 15, 1916 – February 9, 2001) was an American political scientist, with a Ph.D. in political science, whose work also influenced the fields of computer science, economics, and cognitive psychology. His primary research interest was decision-making within organizations and he is best known for the theories of "bounded rationality" and "satisficing".
He received the Nobel Memorial Prize in Economic Sciences in 1978 and the Turing Award in computer science in 1975. His research was noted for its interdisciplinary nature and spanned the fields of cognitive science, computer science, public administration, management, and political science. He was at Carnegie Mellon University for most of his career, from 1949 to 2001, where he helped found the Carnegie Mellon School of Computer Science, one of the first such departments in the world.
Notably, Simon was among the pioneers of several modern-day scientific domains such as artificial intelligence, information processing, decision-making, problem-solving, organization theory, and complex systems. He was among the earliest to analyze the architecture of complexity and to propose a preferential attachment mechanism to explain power law distributions.
https://mobility21.cmu.edu
The Intelligent Coordination and Logistics Lab at Carnegie Mellon University's Robotics Institute has been developing and field testing a smartphone app called PedPal in conjunction with pathVu, a University of Pittsburgh-based start-up.
PedPal has been developed to assist pedestrians with disabilities and/or mobility challenges to safely cross signalized intersections. The PedPal app allows its user to communicate directly with the intersection to indicate how much time is needed for crossing.
Learn more at the CMU Robotics project page:
https://www.ri.cmu.edu/project/pedpal/
Or the Intelligent Coordination and Logistics Laboratory site:
http://www.ozone.ri.cmu.edu
CMU Researchers Create Fabric and Sensing System To Measure Contact and Pressure
Read more about it here:
https://www.ri.cmu.edu/sweater-wrapped-robots-can-feel-and-react-to-human-touch/
farm-ng.com/pages/farm-ng-uc-anr-farmbot-ai-challenge-details
To learn more about Robotics Institute Education Programs - including the MRSD Program visit:
https://www.ri.cmu.edu/ri-education/
To learn more about recent MRSD Team projects visit:
https://mrsd.ri.cmu.edu/project-examples/student-project-websites/spring-2022-fall-2022/
Vandi Verma
Deputy Manager
Mobility and Robotics Systems, NASA Jet Propulsion Laboratory
Friday, April 14, 2023
https://www.ri.cmu.edu/event/mars-robots-and-robotics-at-nasa-jpl/
Abstract:
In this seminar I’ll discuss Mars robots, the unprecedented results we’re seeing with the latest Mars mission, and how we got here. Perseverance’s manipulation and sampling systems have collected samples from unique locations at twice the rate of any prior mission, and 88% of all driving has been autonomous. This has enabled the mission to achieve its prime objective: to select, core, and deploy a high-value sample collection on the surface of Mars within one Mars year of landing. The Ingenuity helicopter has completed 49 flights on Mars. I’ll provide an overview of robotics at JPL and discuss some open problems that, if addressed, could further enhance future space robotics.
Bio:
Vandi Verma is the Deputy Manager for Mobility and Robotics Systems at NASA Jet Propulsion Laboratory, and the Chief Engineer of Robotic Operations for the Mars 2020 mission with the Perseverance rover and Ingenuity helicopter. As Deputy Manager for Mobility and Robotics, she leads about 200 JPL roboticists developing new technology for future missions and working on a variety of JPL robotic missions. Robotics capabilities she has worked on are in regular use on the Perseverance and Curiosity rovers and in human spaceflight projects. She has been engaged in robotic operations on Mars since 2008, with the Mars Exploration Rovers Spirit and Opportunity, the Curiosity rover, the Perseverance rover, and the Ingenuity helicopter. She graduated from CMU RI with a Ph.D. in Robotics in 2005.
Remember! All potential participants or those just considering entering the races are welcome. Individuals and team efforts are particularly encouraged. If you can't build one yourself, we hope you lean on a colleague, friend, or associate in another division. There is strength (and sometimes better Mobots!) through collaboration and sharing of skills/knowledge.
You know you want to try it, so don't shy away! And if you don't build one, come cheer on a friend...
Many thanks to our generous sponsors!
Remember, the Mobot Races are open to the entire campus (Alums as well)!
We race whatever the weather!
Quick summary: Mobot participants will race autonomous vehicles (MObile roBOTs) they have built along a slalom-type course on the paved walk in front of Wean Hall.
The purpose of the competition is simple: to generate technological excitement, provide hands-on experience for our undergraduates, and showcase the cleverness and technical competence of Carnegie Mellon undergraduates and all community members (including alumni). We hope to stimulate interdisciplinary activity toward producing something that is technically noteworthy. The problem and competition never get old: the solutions are many!
Our Historic Website!
https://www.cs.cmu.edu/mobot/
From event post:
https://www.cs.cmu.edu/calendar/160780157
https://www.ri.cmu.edu/event/teruko-yata-memorial-lecture-2023/
Brenna Argall
Associate Professor of Computer Science
McCormick School of Engineering, Northwestern University
Thursday, April 13, 2023
Mobility and Manipulation Independence
with Interface-Aware Robotics Intelligence
Dr. Brenna Argall is an associate professor of Mechanical Engineering, Electrical Engineering & Computer Science and Physical Medicine & Rehabilitation at Northwestern University. Her research lies at the intersection of robotics autonomy, machine learning and human rehabilitation. She is director of the assistive & rehabilitation robotics laboratory (argallab) at the Rehabilitation Institute of Chicago (RIC, now the Shirley Ryan AbilityLab), the nation’s premier rehabilitation hospital. The mission of the argallab is to advance human ability by leveraging robotics autonomy.
Argall is a 2016 recipient of the NSF CAREER award, and was named one of the 40 under 40 by Crain’s Chicago Business. Her Ph.D. in Robotics (2009) was received from the Robotics Institute at Carnegie Mellon University, where she was a member of the CORAL Research Group. Her B.S. in Mathematics (2002) also was received from Carnegie Mellon, where she minored in Music and Biological Sciences. Prior to joining Northwestern and RIC, she was a postdoctoral fellow (2009-2011) in the Learning Algorithms and Systems Laboratory at the École Polytechnique Fédérale de Lausanne (EPFL). Prior to graduate school she held a Computational Biology position in the Laboratory of Brain & Cognition at the National Institutes of Health (NIH).
Phillip Isola
Associate Professor
Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology
April 7, 2023
Structures and Environments for Generalist Agents
https://www.ri.cmu.edu/event/structures-and-environments-for-generalist-agents/
Abstract:
We are entering an era of highly general AI, enabled by supervised models of the Internet. However, it remains an open question how intelligence emerged in the first place, before there was an Internet to imitate. Understanding the emergence of skillful behavior, without expert data to imitate, has been a longstanding goal of reinforcement learning (RL), but the majority of previous work has been strikingly narrow in scope (e.g., controlling a single robot for a specific manipulation task). In this talk I will share some of our group’s recent work toward generalist RL agents. I will cover 1) quasimetric RL, which employs geometric structures inherent in decision-making problems to expedite multi-task learning, 2) incorporating latent variables into agent policies to generate a greater diversity of behaviors, and 3) developing scalable environments that support more open-ended tasks.
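On point 1): a quasimetric satisfies d(x, x) = 0 and the triangle inequality but drops symmetry, matching one-way dynamics (stepping off a ledge is easier than climbing back up), and the optimal cost-to-go in goal-reaching RL is exactly such an object. A minimal parametrization sketch (my own construction, not the paper's architecture):

```python
import torch
import torch.nn as nn

class AsymmetricDistance(nn.Module):
    """d(x, y) = symmetric L1 part + a one-way part paid only in one direction.
    Both parts satisfy the triangle inequality, so their sum is a quasimetric."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.embed = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, hidden))

    def forward(self, x, y):
        ex, ey = self.embed(x), self.embed(y)
        sym = (ex - ey).abs().sum(-1)          # symmetric component (L1 metric)
        one_way = torch.relu(ey - ex).sum(-1)  # asymmetric component
        return sym + one_way

d = AsymmetricDistance(dim=4)
x, y = torch.rand(10, 4), torch.rand(10, 4)
print(d(x, y).shape, torch.allclose(d(x, x), torch.zeros(10)))  # d(x,y) != d(y,x) in general
```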
Bio:
Phillip Isola is the Class of 1948 Career Development Associate Professor in EECS at MIT. He studies computer vision, machine learning, and AI. He completed his Ph.D. in Brain & Cognitive Sciences at MIT, and has since spent time at UC Berkeley, OpenAI, and Google Research. His research has been recognized by the PAMI Young Researcher Award, a Packard Fellowship, and a Sloan Fellowship, among other awards. His current research focuses on trying to scientifically understand human-like intelligence.
https://www.ri.cmu.edu/event/next-generation-robot-perception-hierarchical-representations-certifiable-algorithms-and-self-supervised-learning/
Luca Carlone
Leonardo Career Development Associate Professor
Department of Aeronautics and Astronautics, Massachusetts Institute of Technology
Friday, March 31, 2023
Abstract:
Spatial perception —the robot’s ability to sense and understand the surrounding environment— is a key enabler for robot navigation, manipulation, and human-robot interaction. Recent advances in perception algorithms and systems have enabled robots to create large-scale geometric maps of unknown environments and detect objects of interest. Despite these advances, a large gap still separates robot and human perception: Humans are able to quickly form a holistic representation of the scene that encompasses both geometric and semantic aspects, are robust to a broad range of perceptual conditions, and are able to learn without low-level supervision. This talk discusses recent efforts to bridge these gaps. First, we show that scalable metric-semantic scene understanding requires hierarchical representations; these hierarchical representations, or 3D scene graphs, are key to efficient storage and inference, and enable real-time perception algorithms. Second, we discuss progress in the design of certifiable algorithms for robust estimation; in particular we discuss the notion of “estimation contracts”, which provide first-of-a-kind performance guarantees for estimation problems arising in robot perception. Finally, we observe that certification and self-supervision are twin challenges, and the design of certifiable perception algorithms enables a natural self-supervised learning scheme; we apply this insight to 3D object pose estimation and present self-supervised algorithms that perform on par with state-of-the-art, fully supervised methods, while not requiring manual 3D annotations.
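A minimal 3D scene graph data structure (the layer names follow the hierarchy such systems use, e.g. objects, places, rooms, buildings; the code is my simplification, not the speaker's actual systems):

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    layer: str                    # e.g. "building", "room", "place", "object"
    node_id: int
    attributes: dict = field(default_factory=dict)  # pose, bounding box, label...
    parent: "Node | None" = None  # edge up the hierarchy
    children: list = field(default_factory=list)

def add_child(parent: Node, child: Node) -> None:
    child.parent = parent
    parent.children.append(child)

# Toy hierarchy: a building containing one room, one place, one object.
building = Node("building", 0, {"name": "office"})
room = Node("room", 1, {"label": "kitchen"})
place = Node("place", 2, {"position": (2.0, 1.5, 0.0)})
mug = Node("object", 3, {"class": "mug"})
for p, c in [(building, room), (room, place), (place, mug)]:
    add_child(p, c)
# Queries walk a few hierarchy edges instead of scanning a flat metric map,
# which is what makes hierarchical representations efficient to store and use.
```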
Bio:
Luca Carlone is the Leonardo Career Development Associate Professor in the Department of Aeronautics and Astronautics at the Massachusetts Institute of Technology, and a Principal Investigator in the Laboratory for Information & Decision Systems (LIDS). He received his PhD from the Polytechnic University of Turin in 2012. He joined LIDS as a postdoctoral associate (2015) and later as a Research Scientist (2016), after spending two years as a postdoctoral fellow at the Georgia Institute of Technology (2013-2015). His research interests include nonlinear estimation, numerical and distributed optimization, and probabilistic inference, applied to sensing, perception, and decision-making in single and multi-robot systems. His work includes seminal results on certifiably correct algorithms for localization and mapping, as well as approaches for visual-inertial navigation and distributed mapping. He is a recipient of the Best Student Paper Award at IROS 2021, the Best Paper Award in Robot Vision at ICRA 2020, a 2020 Honorable Mention from the IEEE Robotics and Automation Letters, a Track Best Paper award at the 2021 IEEE Aerospace Conference, the 2017 Transactions on Robotics King-Sun Fu Memorial Best Paper Award, the Best Paper Award at WAFR 2016, the Best Student Paper Award at the 2018 Symposium on VLSI Circuits, and he was best paper finalist at RSS 2015, RSS 2021, and WACV 2023. He is also a recipient of the AIAA Aeronautics and Astronautics Advising Award (2022), the NSF CAREER Award (2021), the RSS Early Career Award (2020), the Sloan Research Fellowship (2023), the Google Daydream Award (2019), the Amazon Research Award (2020, 2022), and the MIT AeroAstro Vickie Kerrebrock Faculty Award (2020). He is an IEEE senior member and an AIAA associate fellow. At MIT, he teaches “Robotics: Science and Systems,” the introduction to robotics for MIT undergraduates, and he created the graduate-level course “Visual Navigation for Autonomous Vehicles”, which covers mathematical foundations and fast C++ implementations of spatial perception algorithms for drones and autonomous vehicles.
Lerrel Pinto
Assistant Professor of Computer Science
Robotics and Machine Learning, New York University
Friday, March 24, 2023
A Constructivist’s Guide to Robot Learning
Abstract:
Over the last decade, a variety of paradigms have sought to teach robots complex and dexterous behaviors in real-world environments. On one end of the spectrum we have nativist approaches that bake in fundamental human knowledge through physics models, simulators, and knowledge graphs, while on the other end we have tabula-rasa approaches that teach robots from scratch. In this talk I will argue for the need for better constructivist approaches to robotics, i.e. techniques that take guidance from humans while allowing robots to continuously adapt in changing scenarios. The constructivist guide I propose will focus on three elements. First, creating physical interfaces to allow humans to provide robots with rich and dexterous data. Second, developing adaptive learning mechanisms to allow robots to continually fine-tune in their environments. Third, architecting models that allow robots to learn from un-curated play. Applications of such a learning paradigm will be demonstrated on mobile manipulators in home environments, industrial robots on precision tasks, and multi-fingered hands on dexterous manipulation.
Bio:
Lerrel Pinto is an Assistant Professor of Computer Science at NYU. His research interests focus on machine learning for robots. He received a Ph.D. from CMU in 2019, after which he did a postdoc at UC Berkeley. His work on large-scale robot learning received the Best Student Paper Award at ICRA 2016 and was a Best Paper Award finalist at IROS 2019 and CoRL 2022. Several of his works have been featured in popular media such as The Wall Street Journal, TechCrunch, MIT Tech Review, Wired, and BuzzFeed, among others. His recent work can be found at www.lerrelpinto.com.
David Fouhey
Assistant Professor
University of Michigan
February 24, 2023
https://www.ri.cmu.edu/event/understanding-the-physical-world-from-images/
Understanding the Physical World from Images
Abstract:
If I show you a photo of a place you have never been to, you can easily imagine what you could do in that picture. Your understanding goes from the surfaces you see to the ones you know are there but cannot see, and can even include reasoning about how interaction would change the scene. My research aims to give computers this same level of physical understanding, and I believe that this physical understanding will be critical for autonomous agents as well as for enabling new insights in a surprisingly wide variety of research fields.
This talk will show my work on understanding the physical world from images, done in conjunction with my students both past and present. I will first show how we can reconstruct 3D scenes, including invisible surfaces, from a single RGB image. We have developed an approach that learns to predict a scene-scale implicit function using realistic 3D supervision that can be gathered by consumers or robots instead of by using artist-created watertight 3D assets. After showing reconstructions from our system in everyday scenarios, I will talk about how measuring the world can unlock new insights in science, from millimeter-sized bird bones to solar physics data where a pixel is a few hundred miles wide. I will conclude by showing work towards understanding interaction, especially focusing on hands and the objects they hold.
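A generic sketch of a scene-scale implicit function (occupancy-network style; the dimensions and conditioning below are my placeholders, not the speaker's model): a network maps a 3D query point, conditioned on image features, to the probability that the point is occupied, which is how surfaces behind the visible ones can be predicted.

```python
import torch
import torch.nn as nn

class ImplicitOccupancy(nn.Module):
    """f(xyz, image features) -> probability that the 3D point is occupied."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + feat_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, 1))

    def forward(self, xyz, feat):
        return torch.sigmoid(self.mlp(torch.cat([xyz, feat], -1)))

f = ImplicitOccupancy()
pts = torch.rand(1024, 3)       # query points, visible or hidden behind surfaces
feats = torch.rand(1024, 64)    # per-point image features (assumed given here)
occ = f(pts, feats)             # (1024, 1); thresholding recovers the surface
```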
Bio:
David Fouhey is an assistant professor at the University of Michigan. He received a Ph.D. in robotics from Carnegie Mellon University and was then a postdoctoral fellow at UC Berkeley. His work has been recognized by an NSF CAREER award and NSF and NDSEG fellowships. He has spent time at the University of Oxford’s Visual Geometry Group, INRIA Paris, and Microsoft Research.
Jorgen Pedersen
Chief Operating Officer
Sarcos Technology and Robotics Corporation
https://www.ri.cmu.edu/event/re2-robotics-from-ri-spinout-to-acquisition/
RE2 Robotics: From RI Spinout to Acquisition
Abstract: In July 2001, Jorgen Pedersen founded RE2 Robotics. It was supposed to be a temporary venture while he figured out his next career move, but the journey took an unexpected course: RE2 became a leading developer of mobile manipulation systems. Fast forward to 2022, when RE2 Robotics exited via an acquisition by Sarcos Technology and Robotics Corporation for $100M. In this talk, Jorgen will share the 20-year journey of RE2 Robotics, which includes bootstrapping a robotics business, leveraging government funding as non-dilutive investment, pivoting at critical moments, raising capital in order to scale, commercializing cutting-edge robotics technology, and most importantly, recognizing the importance of vision, mission, and core values to build a strong culture that can overcome any obstacle.
Bio: Jorgen Pedersen is the Chief Operating Officer of Sarcos Technology and Robotics Corporation. In this role, he contributes to the company strategy, creates and drives operational vision, and streamlines operations across business functions.
Pedersen joined Sarcos in April 2022 in connection with its acquisition of RE2. Prior to joining Sarcos, Pedersen had served as Chief Executive Officer of RE2 since founding the company in 2001. As CEO of RE2, Pedersen was responsible for overseeing all aspects of RE2’s business, including its strategic direction, developing partnerships and alliances, and overseeing day-to-day operations. Prior to founding RE2, Pedersen was at Carnegie Mellon’s National Robotics Engineering Center. Pedersen has served as Chairman and Vice Chairman of the Robotics Division of the National Defense Industrial Association (NDIA) and as a member of the Board of Trustees for NDIA and the Board of Directors for the National Advanced Mobility Consortium.
Pedersen has received numerous awards, including being recognized as the 2016 Carnegie Science Start-up Entrepreneur of the Year recipient. He has also been presented with an Army SBIR Achievement Award and the Tibbetts Award for SBIR Excellence.
Pedersen is currently on the Board of Directors of the Pittsburgh Robotics Network and Catalyst Connection. He holds a Master of Science degree in Robotics and a Bachelor of Science degree in Electrical and Computer Engineering from Carnegie Mellon University.
Russ Tedrake
Professor
Electrical Engineering & Computer Science, MIT
January 27, 2023
Motion Planning Around Obstacles with Graphs of Convex Sets
Abstract: In this talk, I’ll describe a new approach to planning that strongly leverages both continuous and discrete/combinatorial optimization. The framework is fairly general, but I will focus on a particular application: planning continuous curves around obstacles. Traditionally, these sorts of motion planning problems have been solved either by trajectory optimization approaches, which suffer from local minima in the presence of obstacles, or by sampling-based motion planning algorithms, which can struggle with derivative constraints and sample complexity in very high dimensions. In the proposed framework, called Graphs of Convex Sets (GCS), we can recast the trajectory optimization problem over a parametric class of continuous curves into a problem combining convex optimization formulations for graph search and for motion planning.
The result is a non-convex optimization problem whose convex relaxation is very tight — to the point that we can often solve very complex motion planning problems to global optimality using the convex relaxation plus a cheap rounding strategy. I will describe numerical experiments of GCS applied to a quadrotor flying through buildings and robotic arms moving through confined spaces. On a seven-degree-of-freedom manipulator, GCS can outperform widely used sampling-based planners by finding higher-quality trajectories in less time, and in 14 dimensions (or more) it can solve problems to global optimality that are hard to approach with sampling-based techniques. Finally, I’ll discuss new extensions using GCS for planning on manifolds and for task and motion planning.
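To illustrate the convex core of the idea (a stripped-down sketch of mine using CVXPY and made-up boxes, not the Drake implementation): once a sequence of collision-free convex regions is fixed, planning a shortest path through them is a single convex program. Full GCS additionally optimizes over which regions to traverse, via the tight convex relaxation described above.

```python
import cvxpy as cp
import numpy as np

# Hypothetical axis-aligned safe boxes (lo, hi) forming a corridor around an obstacle.
regions = [(np.array([0.0, 0.0]), np.array([2.0, 1.0])),
           (np.array([1.5, 0.0]), np.array([2.5, 3.0])),
           (np.array([2.0, 2.0]), np.array([5.0, 3.0]))]
start, goal = np.array([0.2, 0.5]), np.array([4.5, 2.5])

# One waypoint per region transition; consecutive waypoints share a region, so
# every straight segment stays inside one convex set and is collision-free.
pts = [cp.Variable(2) for _ in range(len(regions) + 1)]
cons = [pts[0] == start, pts[-1] == goal]
for i, (lo, hi) in enumerate(regions):
    cons += [pts[i] >= lo, pts[i] <= hi, pts[i + 1] >= lo, pts[i + 1] <= hi]

length = sum(cp.norm(pts[i + 1] - pts[i]) for i in range(len(regions)))
cp.Problem(cp.Minimize(length), cons).solve()
path = np.array([p.value for p in pts])   # shortest path through the corridor
```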
Brief Bio: Professor Tedrake is the Toyota Professor of Electrical Engineering and Computer Science, Aeronautics and Astronautics, and Mechanical Engineering at MIT, the Director of the Center for Robotics at the Computer Science and Artificial Intelligence Laboratory (CSAIL), and was the leader of MIT’s entry in the DARPA Robotics Challenge. Russ is also the Vice President of Robotics Research at the Toyota Research Institute. He is a recipient of the 2021 Jamieson Teaching Award, the NSF CAREER Award, the MIT Jerome Saltzer Award for undergraduate teaching, the DARPA Young Faculty Award in Mathematics, the 2012 Ruth and Joel Spira Teaching Award, and was named a Microsoft Research New Faculty Fellow.
Professor Tedrake’s research is focused on finding elegant control solutions for interesting (underactuated, stochastic, and/or difficult-to-model) dynamical systems that he can build and experiment with. He is particularly interested in finding connections between mechanics (especially non-smooth mechanics) and machine learning/optimization theory that enable robust control design for complex mechanical systems. These days he is primarily focused on merging more of the powerful tools from systems theory with machine learning for robotic manipulation.
Professor Tedrake received his B.S.E. in Computer Engineering from the University of Michigan, Ann Arbor, in 1999, and his Ph.D. in Electrical Engineering and Computer Science from MIT in 2004, working with Sebastian Seung. After graduation, he joined the MIT Brain and Cognitive Sciences Department as a Postdoctoral Associate. During his education, he has also spent time at Microsoft, Microsoft Research, and the Santa Fe Institute.
https://www.cs.cmu.edu/news/2023/autonomous-zamboni-machine
MRSD Team AIce
https://mrsdprojects.ri.cmu.edu/2022teami/
Story Correction: The Penguins invited the AI on Ice team to watch a game against the Florida Panthers at PPG Paints Arena.
Byron Boots
Amazon Professor of Machine Learning
Paul G. Allen School of Computer Science & Engineering, University of Washington
November 18, 2022
Abstract: In this talk I will discuss several different ways in which ideas from machine learning and model predictive control (MPC) can be combined to build intelligent, adaptive robotic systems. I’ll begin by showing how to learn models for MPC that perform well on a given control task. Next, I’ll introduce an online learning perspective on MPC that unifies well-known algorithms and provides a prescriptive way to generate new ones. Finally, I will discuss how MPC can be combined with model-free reinforcement learning to build fast, reactive systems that can improve their performance with experience. Along the way, I’ll show how these approaches can be applied to the development of high-speed ground vehicles.
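One concrete way to combine a (possibly learned) model with online planning is a sampling-based MPC update in the MPPI family, sketched below (a generic illustration of mine, not the speaker's code): perturb a nominal control sequence, weight the imagined rollouts by exponentiated cost, and average.

```python
import numpy as np

def mppi_step(nominal, model, x0, cost, n_samples=128, sigma=0.3, lam=1.0):
    """One MPPI update: returns an improved control sequence; execute its first row."""
    H, udim = nominal.shape
    noise = np.random.randn(n_samples, H, udim) * sigma
    costs = np.zeros(n_samples)
    for k in range(n_samples):                 # imagined rollouts, not real ones
        x = x0
        for t in range(H):
            u = nominal[t] + noise[k, t]
            x = model(x, u)                    # learned or analytic dynamics
            costs[k] += cost(x, u)
    w = np.exp(-(costs - costs.min()) / lam)   # low cost -> high weight
    w /= w.sum()
    return nominal + np.einsum('k,khu->hu', w, noise)

# Toy usage with a stand-in (could be learned) model and a quadratic cost.
model = lambda x, u: x + 0.1 * u
cost = lambda x, u: float(x @ x + 0.01 * u @ u)
plan = np.zeros((15, 1))
for _ in range(10):                            # replan online, MPC-style
    plan = mppi_step(plan, model, np.array([1.0]), cost)
```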
Bio: Byron Boots is the Amazon Professor of Machine Learning in the Paul G. Allen School of Computer Science and Engineering at the University of Washington. Byron’s group performs fundamental and applied research in machine learning, artificial intelligence, and robotics with a focus on developing theory and systems that tightly integrate perception, learning, and control. His work has been applied to a range of problems including localization and mapping, motion planning, robotic manipulation, quadrupedal locomotion, and high-speed navigation. Byron has received several awards including “Best Paper” Awards from ICML, AISTATS, RSS, and IJRR. He is also the recipient of the RSS Early Career Award, the DARPA Young Faculty Award, the NSF CAREER Award, and the Outstanding Junior Faculty Research Award from the College of Computing at Georgia Tech. Byron received his PhD from the Machine Learning Department at Carnegie Mellon University.
CMU, Berkeley Researchers Design System Creating Robust Legged Robot
Aaron Aupperlee
This little robot can go almost anywhere.
Researchers at Carnegie Mellon University’s School of Computer Science and the University of California, Berkeley, have designed a robotic system that enables a low-cost and relatively small legged robot to climb and descend stairs nearly its own height; traverse rocky, slippery, uneven, steep and varied terrain; walk across gaps; scale rocks and curbs; and even operate in the dark.
“Empowering small robots to climb stairs and handle a variety of environments is crucial to developing robots that will be useful in people’s homes as well as search-and-rescue operations,” said Deepak Pathak, an assistant professor at the Robotics Institute. “This system creates a robust and adaptable robot that could perform many everyday tasks.”
Read the whole story here:
https://www.cs.cmu.edu/news/2022/visual-locomotion
Chelsea Finn
Assistant Professor
Computer Science & Electrical Engineering, Stanford University
November 4, 2022
Robots Should Reduce, Reuse, and Recycle
https://www.ri.cmu.edu/event/ri-seminar-chelsea-finn-stanford-university-assistant-professor-2022-11-04/
Abstract: Despite numerous successes in deep robotic learning over the past decade, the generalization and versatility of robots across environments and tasks has remained a major challenge. This is because much of reinforcement and imitation learning research trains agents from scratch in a single or a few environments, training special-purpose policies from special-purpose datasets. In contrast, the rest of machine learning has drawn considerable success from repeatedly reusing broad datasets and recycling pre-trained models for a variety of purposes. Replicating this success in robotics is no easy feat, since robot data doesn’t simply exist in vast quantities on the internet. In this talk, I will discuss how our embodied learning algorithms need to reduce, reuse, and recycle — reducing the need for special-purpose online data collection, reusing existing data, and recycling pre-trained models with various downstream tasks. Towards this goal, I will present research that studies zero-shot robot generalization to new tasks and language commands, using diverse data containing many distinct tasks. I will also discuss how we might develop recyclable pre-trained models for robot learning using large-scale datasets, including language-annotated videos of humans. In all cases, the evaluation will emphasize generalization, including to new objects, new scenes, and new tasks. I’ll conclude by discussing some important open questions and future directions.
Brief Bio: Chelsea Finn is an Assistant Professor in Computer Science and Electrical Engineering at Stanford University, and the William George and Ida Mary Hoover Faculty Fellow. Professor Finn’s research interests lie in enabling robots and other agents to develop broadly intelligent behavior through learning and interaction. Her work sits at the intersection of machine learning and robotic control, including topics such as end-to-end learning of visual perception and robotic manipulation skills, deep reinforcement learning of general skills from autonomously collected experience, and meta-learning algorithms that can enable fast learning of new concepts and behaviors. Professor Finn received her Bachelor’s degree in Electrical Engineering and Computer Science at MIT and her PhD in Computer Science at UC Berkeley.
Nidhi Kalra
Senior Information Scientist
RAND Corporation
October 14, 2022
https://www.ri.cmu.edu/event/ri-seminar-nidhi-kalra-rand-corporation-senior-information-scientist-2022-10-14/
What (else) can you do with a robotics degree?
Abstract: In 2004, halfway through my robotics Ph.D., I had a panic-inducing thought: What if I don’t want to build robots for the rest of my life? What can I do with this degree?! Nearly twenty years later, I have some answers: tackle climate change in Latin America, educate Congress about autonomous vehicles, improve how statistics are used in the criminal justice system, help pass legislation on electric school buses, enable the World Bank to think more clearly about uncertainty in its lending, and face angry citizens in California’s Bay Delta with nothing but a PowerPoint presentation to save you. In this half-career, half-technical seminar, I hope to help you think broadly and boldly about what (else) you can do with a robotics degree. I’ll take a deep dive into two projects, in climate change and criminal justice reform, to illustrate the skills that a roboticist can bring to these problems. I’ll suggest some specific ways you can create opportunities for yourself, and I’ll reflect on both the benefits and costs of a career that arcs away from robotics.
Brief Bio: Nidhi Kalra is a senior information scientist at the RAND Corporation. Her research focuses on how organizations make robust decisions in the face of deep uncertainty, primarily with applications to climate change, autonomous vehicle policy, water resource management, and energy policy. Her clients include national and international leaders in these fields, including the World Bank, the Inter-American Development Bank, the US Bureau of Reclamation, the California Energy Commission, and the California Department of Water Resources. She has testified on autonomous vehicle policy at three congressional hearings. Kalra also helps organizations improve how they make robust decisions, particularly in the face of climate change. In addition to her work at RAND, Nidhi is currently the vice president of the Society for Decision Making Under Deep Uncertainty and an appointed commissioner for California 100, a statewide initiative focused on inspiring a vision and strategy for California’s next century. Previously, in 2018, Kalra served as senior technology policy adviser to then-U.S. Senator Kamala D. Harris. In 2013, she served as a senior decision scientist in the Office of the Chief Economist of Sustainable Development at the World Bank, where she helped launch the World Bank’s portfolio in robust decision making. Kalra developed educational technology tools to promote literacy among blind children in India, a project that went on to receive the Louis Braille Touch of Genius Prize for Innovation. She holds a Ph.D. in robotics from Carnegie Mellon University’s Robotics Institute.
Ankur Mehta
Assistant Professor & Samueli Fellow
Electrical & Computer Engineering, UCLA
October 7, 2022
https://www.ri.cmu.edu/event/ri-seminar-ankur-mehta-ucla-assistant-professor-samueli-fellow-2022-10-07/
Towards $1 robots
Abstract: Robots are pretty great: they can make some hard tasks easy, some dangerous tasks safe, or some unthinkable tasks possible. And they’re just plain fun to boot. But how many robots have you interacted with recently? And where do you think that puts you compared to the rest of the world’s people? In contrast to computation, automating physical interactions continues to be limited in scope and breadth. I’d like to change that. But in particular, I’d like to do so in a way that’s accessible to everyone, everywhere. In our lab, we work to lower the barriers to robot design, creation, and operation through material and mechanism design, computational tools, and mathematical analysis. We hope that with our efforts, everyone will soon be able to enjoy the benefits of robotics to work, to learn, and to play.
Brief Bio: Prof. Ankur Mehta is an assistant professor of Electrical and Computer Engineering at UCLA, and directs the Laboratory for Embedded Machines and Ubiquitous Robots (LEMUR). Pushing towards his visions of a future filled with robots, his research interests involve printable robotics, rapid design and fabrication, control systems, and multi-agent networks. He has received the DARPA Young Faculty Award, the NSF CAREER Award, and a Samueli Fellowship; he has also received best paper awards from the IEEE Robotics & Automation Magazine and the International Conference on Intelligent Robots and Systems (IROS). Prior to joining the UCLA faculty, Prof. Mehta was a postdoc at MIT’s Computer Science and Artificial Intelligence Laboratory, investigating design automation for printable robots. Before that, he conducted research as a graduate student at UC Berkeley on wireless sensor networks and systems, small autonomous aerial robots and rockets, control systems, and micro-electro-mechanical systems (MEMS). When not in the lab, Ankur enjoys puzzles, ultimate frisbee, board games, and social dancing.
Soon-Jo Chung
Bren Professor of Aerospace and Control and Dynamical Systems
Department of Aerospace, Caltech
September 23, 2022
https://www.ri.cmu.edu/event/ri-seminar-soon-jo-chung-caltech-bren-professor-of-aerospace-and-control-and-dynamical-systems-2022-09-23/
Safe and Stable Learning for Agile Robots without Reinforcement Learning
Abstract: My research group (https://aerospacerobotics.caltech.edu/) is working to systematically leverage AI and machine learning techniques to achieve safe and stable autonomy of safety-critical robotic systems, such as robot swarms and autonomous flying cars. Another example is LEONARDO, the world’s first bipedal robot that can walk, fly, slackline, and skateboard. Stability and safety are traditionally problems of control theory, while conventional black-box AI approaches lack the robustness, scalability, and interpretability that are indispensable to designing control and autonomy engines for safety-critical aerospace and robotic systems. I will present some recent results that use contraction-based incremental stability tools to derive formal robustness and stability guarantees for various learning-based and data-driven control problems, with illustrative examples including learning-to-fly control with adaptive meta-learning, learning-based swarm control and planning synthesis, and optimal motion planning with stochastic nonlinear dynamics and chance constraints. Recent results on neural-network-based contraction metrics (NCMs) as a stability certificate for safe motion planning and control will also be discussed.
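For readers new to contraction analysis, the central certificate can be stated compactly (this is the standard condition from the literature; the talk's exact formulation may differ). For a system \dot{x} = f(x, t), a uniformly positive definite metric M(x, t) \succ 0 establishes exponential incremental stability if

\dot{M} + M \frac{\partial f}{\partial x} + \left( \frac{\partial f}{\partial x} \right)^{\top} M \preceq -2\alpha M

for some rate \alpha > 0, since the differential length \delta x^{\top} M \, \delta x then decays at rate 2\alpha, so all trajectories of the system converge toward one another. A neural-network-based contraction metric is, roughly, a learned model of such an M(x, t) that can be queried online as a stability certificate.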
Brief Bio: Soon-Jo Chung is Bren Professor of Aerospace and Control and Dynamical Systems at the California Institute of Technology. Prof. Chung is also a Senior Research Scientist of the Jet Propulsion Laboratory, which Caltech manages for NASA. From 2009 to 2016, Prof. Chung was a faculty member at the University of Illinois at Urbana-Champaign. Professor Chung’s research focuses on distributed spacecraft systems, space autonomous systems, and aerospace robotics, and in particular, on the theory and application of control, estimation, learning-based control and planning, and navigation of autonomous space and air vehicles. He is the recipient of the University of Illinois Engineering Dean’s Award for Excellence in Research, the Arnold Beckman Faculty Fellowship of the U of Illinois Center for Advanced Study, the AFOSR Young Investigator Program (YIP) award, the NSF CAREER award, a 2020 Honorable Mention for the IEEE Robotics and Automation Letters Best Paper Award, three best conference paper awards, including at the AIAA Guidance, Navigation, and Control Conference and AIAA InfoTech, and five best student paper awards or finalist awards. Prof. Chung is an Associate Editor of the IEEE Transactions on Automatic Control and the AIAA Journal of Guidance, Control, and Dynamics. He was an Associate Editor of the IEEE Transactions on Robotics, and the Guest Editor of a Special Section on Aerial Swarm Robotics published in the IEEE Transactions on Robotics.
They drove the heavily instrumented ATV aggressively at speeds up to 30 miles an hour. They slid through turns, took it up and down hills, and even got it stuck in the mud, all while gathering data from seven types of sensors, including video, the speed of each wheel, and the amount of suspension shock travel.
The resulting dataset, called TartanDrive, includes about 200,000 of these real-world interactions. The researchers believe it is the largest real-world, multimodal, off-road driving dataset, in terms of both the number of interactions and the types of sensors. The five hours of data could be useful for training a self-driving vehicle to navigate off road.
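To make “multimodal” concrete: each interaction in such a dataset pairs synchronized sensor readings with the driver’s action and the resulting vehicle state, so a learned dynamics model can be trained to predict the latter from the former. The sketch below is hypothetical; the field names are inferred only from the sensors described above, and the actual TartanDrive schema may differ.

from dataclasses import dataclass
import numpy as np

# Hypothetical layout of one off-road interaction record; names and shapes
# are illustrative, not the published TartanDrive format.
@dataclass
class OffRoadSample:
    rgb: np.ndarray           # forward camera frame, e.g. (H, W, 3)
    wheel_rpm: np.ndarray     # per-wheel speeds, shape (4,)
    shock_travel: np.ndarray  # suspension displacement per corner, shape (4,)
    imu: np.ndarray           # linear accelerations and angular rates
    action: np.ndarray        # throttle and steering commands
    next_state: np.ndarray    # resulting pose/velocity, the prediction target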
Ross Hatton
Associate Professor
Robotics & Mechanical Engineering, Oregon State University
May 2, 2022
Snakes & Spiders, Robots & Geometry
https://www.ri.cmu.edu/event/snakes-spiders-robots-geometry/
Abstract: Locomotion and perception are a common thread between robotics and biology. Understanding these phenomena at a mechanical level involves nonlinear dynamics and the coordination of many degrees of freedom. In this talk, I will discuss geometric approaches to organizing this information in two problem domains: undulatory locomotion of snakes and swimmers, and vibration propagation in spider webs. In the first section, I will discuss how differential geometry and Lie group theory provide insight into the locomotion of undulating systems through a vocabulary of lengths, areas, and curvatures. In particular, a tool called the “Lie bracket” combines these geometric concepts to describe the effects of cyclic changes in the locomotor’s shape, such as the gaits used by swimming or crawling systems. Building on these results, I will demonstrate that the geometric techniques are useful beyond the “clean” ideal systems on which they have traditionally been developed, and can provide insight into the motion of systems with considerably more complex dynamics, such as locomotors in granular media. In the second section, I will turn my attention to vibration propagation through spiders’ webs. Due to poor eyesight, many spiders rely on web vibrations for situational awareness. Web-borne vibrations are used to determine the location of prey, predators, and potential mates. The influence of web geometry and composition on web vibrations is important for understanding spiders’ behavior and ecology. Past studies on web vibrations have experimentally measured the frequency response of web geometries by removing threads from existing webs. We have constructed physical artificial webs and computer models to better understand the effect of web structure on vibration transmission. These models provide insight into the propagation of vibrations through the webs, the frequency response of the bare web, and the influence of the spider’s mass and stiffness on the vibration transmission patterns.
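For orientation, the geometric objects mentioned above have compact standard forms (sign conventions vary across the literature, and the talk's notation may differ). For two vector fields g_1 and g_2, the Lie bracket is

[g_1, g_2](x) = \frac{\partial g_2}{\partial x} g_1 - \frac{\partial g_1}{\partial x} g_2,

which measures the net displacement produced by cycling infinitesimally along g_1, then g_2, then backward along each in turn. For an undulatory system whose body velocity \xi is related to its shape velocity \dot{r} by a local connection, \xi = -A(r)\,\dot{r}, the net displacement over one gait cycle enclosing a region \Omega of shape space is approximated, to leading order, by an area integral of the connection's curvature,

\Delta g \approx \iint_{\Omega} \big( -\mathrm{d}A + [A_1, A_2] \big),

so candidate gaits can be compared by inspecting plots of this integrand in terms of lengths, areas, and curvatures.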
Brief Bio: Ross L. Hatton is an Associate Professor of Robotics and Mechanical Engineering at Oregon State University, where he directs the Laboratory for Robotics and Applied Mechanics. He received PhD and MS degrees in Mechanical Engineering from Carnegie Mellon University, following an SB in the same from Massachusetts Institute of Technology. His research focuses on understanding the fundamental mechanics of locomotion and sensory perception, making advances in mathematical theory accessible to an engineering audience, and on finding abstractions that facilitate human control of unconventional locomotors. Hatton’s group also works with local industry to transfer modern developments in robotics from the lab to the factory or commercial production.
Jeannette Bohg
Assistant Professor of Computer Science
Stanford University
April 14, 2022
Teruko Yata Memorial Lecture
Leveraging Language and Video Demonstrations for Learning Robot Manipulation Skills and Enabling Closed-Loop Task Planning
https://www.ri.cmu.edu/event/teruka-yata-memorial-lecture/
Abstract: Humans have gradually developed language, mastered complex motor skills, and created and utilized sophisticated tools. The act of conceptualization is fundamental to these abilities because it allows humans to mentally represent, summarize, and abstract diverse knowledge and skills. By means of abstraction, concepts that we learn from a limited number of examples can be extended to a potentially infinite set of new and unanticipated situations. Abstract concepts can also be more easily taught to others by demonstration.
I will present work that gives robots the ability to acquire a variety of manipulation concepts that act as mental representations of verbs in a natural language instruction. We propose to use learning from human demonstrations of manipulation actions as recorded in large-scale video data sets that are annotated with natural language instructions. In extensive simulation experiments, we show that the policy learned in the proposed way can perform a large percentage of the 78 different manipulation tasks on which it was trained. We show that this multi-task policy generalizes over variations of the environment. We also show examples of successful generalization over novel but similar instructions.
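As a rough illustration of this policy class, the sketch below conditions a visuomotor policy on a natural language instruction by embedding both the instruction and the current image and mapping their concatenation to an action. The encoders, dimensions, and tokenization are assumptions for illustration, not the architecture from the talk.

import torch
import torch.nn as nn

class LanguageConditionedPolicy(nn.Module):
    """Toy language-conditioned visuomotor policy."""
    def __init__(self, vocab=10000, embed_dim=256, action_dim=7):
        super().__init__()
        self.vision = nn.Sequential(                 # tiny image encoder
            nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, embed_dim),
        )
        self.text = nn.EmbeddingBag(vocab, embed_dim)  # stand-in text encoder
        self.head = nn.Sequential(
            nn.Linear(2 * embed_dim, 256), nn.ReLU(),
            nn.Linear(256, action_dim),              # e.g., end-effector command
        )

    def forward(self, image, instruction_tokens):
        z = torch.cat([self.vision(image), self.text(instruction_tokens)], dim=-1)
        return self.head(z)

policy = LanguageConditionedPolicy()
action = policy(torch.randn(1, 3, 128, 128),         # current observation
                torch.randint(0, 10000, (1, 6)))     # tokenized instruction

Because only the instruction embedding changes between tasks, a single set of weights can in principle cover many training tasks and be queried with novel but similar instructions.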
I will also present work that enables a robot to sequence these newly acquired manipulation skills for long-horizon task planning. Specifically, I will focus on work that uses the same human video demonstrations annotated with natural language to ground symbolic pre- and postconditions of manipulation skills in visual data. I will show how this enables closed-loop task planning involving a large variety of skills, objects and their symbolic states.
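The resulting closed-loop pattern is short enough to sketch. Every function below is a toy placeholder standing in for the planner, the learned skills, and the visually grounded postcondition classifiers; it is the control flow, not the implementation, that mirrors the idea.

import random

def symbolic_plan(state, goal):
    return ["pick(cup)", "place(cup, shelf)"]   # stand-in planner output

def execute(skill):
    pass                                        # would command the robot

def perceive():
    return {"image": None}                      # would grab a camera frame

def postcondition_holds(skill, state):
    return random.random() > 0.2                # stand-in visual classifier

def closed_loop_execute(goal, state, max_replans=5):
    for _ in range(max_replans):
        for skill in symbolic_plan(state, goal):
            execute(skill)
            state = perceive()
            if not postcondition_holds(skill, state):
                break                           # grounding check failed: replan
        else:
            return True                         # every postcondition held
    return False

closed_loop_execute(goal="on(cup, shelf)", state=perceive())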
I will close this talk by discussing the lessons learned and interesting open questions that still remain.
—
Jeannette Bohg is an Assistant Professor of Computer Science at Stanford University. She was a group leader at the Autonomous Motion Department (AMD) of the MPI for Intelligent Systems until September 2017. Before joining AMD in January 2012, Jeannette Bohg was a PhD student at the Division of Robotics, Perception and Learning (RPL) at KTH in Stockholm. In her thesis, she proposed novel methods for multi-modal scene understanding for robotic grasping. She also studied at Chalmers in Gothenburg and at the Technical University in Dresden, where she received her Master in Art and Technology and her Diploma in Computer Science, respectively.
Her research focuses on perception and learning for autonomous robotic manipulation and grasping. She is specifically interested in developing methods that are goal-directed, real-time and multi-modal such that they can provide meaningful feedback for execution and learning. Jeannette Bohg has received several Early Career and Best Paper awards, most notably the 2019 IEEE Robotics and Automation Society Early Career Award and the 2020 Robotics: Science and Systems Early Career Award.
—
About the Lecture: The Yata Memorial Lecture in Robotics is part of the School of Computer Science Distinguished Lecture Series. Teruko Yata was a postdoctoral fellow in the Robotics Institute from 2000 until her untimely death in 2002. After graduating from the University of Tsukuba, working under the guidance of Prof. Yuta, she came to the United States. At Carnegie Mellon, she served as a post-doctoral fellow in the Robotics Institute for three years, under Chuck Thorpe. Teruko’s accomplishments in the field of ultrasonic sensing were highly regarded and won her the Best Student Paper Award at the International Conference on Robotics and Automation in 1999. It was frequently noted, and we always remember, that “the quality of her work was exceeded only by her kindness and thoughtfulness as a friend.” Join us in paying tribute to our extraordinary colleague and friend through this most unique and exciting lecture.
April 8, 2022
This quickly made rough edit is in chronological order and includes some behind-the-scenes troubleshooting and test runs.
Jing Xiao
Professor and Department Head
Robotics Engineering Department, Worcester Polytechnic Institute (WPI)
April 1, 2022
https://www.ri.cmu.edu/event/ri-seminar-jiang-xiao-worcester-polytechnic-institute-wpi-professor-and-department-head-2022-04-01/
Perception-Action Synergy in Uncertain Environments
Abstract: Many robotic applications require a robot to operate in an environment with unknowns or uncertainty, at least initially, before it gathers enough information about the environment. In such a case, a robot must rely on sensing and perception to feel its way around. Moreover, it has to couple sensing/perception and motion synergistically in real time, such that perception guides motion, while motion enables better perception. In this talk, I will introduce our research in combining perception and motion of a robot to achieve autonomous contact-rich assembly, object recognition, object modeling, and constrained manipulation in uncertain or unknown environments, under force/torque, RGBD, or touch sensing. I will also introduce our recent work on integrated semantic SLAM and accurate loop closure detection, SmSLAM+LCD.
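The coupling described above can be caricatured in a few lines: each incremental motion is computed from the most recent sensor reading, and each motion in turn produces a fresh reading. The one-dimensional model and gains below are purely illustrative, not the assembly system from the talk.

import random

def read_force():                        # stand-in force/torque sensor
    return random.uniform(-1.0, 1.0)

def align_peg(max_steps=100, tol=0.05, gain=0.5):
    pose = 1.0                           # lateral misalignment (toy model)
    for _ in range(max_steps):
        f = -pose + 0.1 * read_force()   # contact force hints at misalignment
        pose += gain * f                 # perception guides motion...
        # ...and the new pose yields a more informative next reading
        if abs(pose) < tol:
            return True                  # aligned; insertion can proceed
    return False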
Brief Biosketch: Jing Xiao received her Ph.D. degree in Computer, Information, and Control Engineering from the University of Michigan, Ann Arbor, Michigan. She is the Deans’ Excellence Professor, William B. Smith Distinguished Fellow in Robotics Engineering, Professor and Head of the Robotics Engineering Department, Worcester Polytechnic Institute (WPI). She is also the Site Director of the NSF Industry/University Cooperative Research Center on Robots and Sensors for Human Well-being. She joined WPI as the Director of the Robotics Engineering Program in 2018 from the University of North Carolina at Charlotte, where she received the College of Computing Outstanding Faculty Research Award in 2015. She led the Robotics Engineering Program to become the Robotics Engineering Department in July 2020. Jing Xiao is an IEEE Fellow. Her research spans robotics, haptics, and intelligent systems. She has co-authored a monograph and published extensively in major robotics journals, conferences, and books.
Zackory Erickson
Assistant Professor
Robotics Institute,
Carnegie Mellon University
March 25, 2022
Haptic Perspective-taking from Vision and Force
Abstract: Physically collaborative robots present an opportunity to positively impact society across many domains. However, robots currently lack the ability to infer how their actions physically affect people. This is especially true for robotic caregiving tasks that involve manipulating deformable cloth around the human body, such as dressing and bathing assistance. In this talk, I will introduce haptic perspective-taking: the act of predicting a person’s haptic sense of touch during physical contact. We will discuss robot learning methods for haptic perspective-taking that leverage vision and haptic data. These methods aim to enable a robot to make decisions according to how its actions would apply pressure to the human body. We will also explore generalizing this concept of haptic perspective-taking to scenarios where cloth interacts with everyday objects other than the human body.
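Framed as a learning problem, haptic perspective-taking maps what the robot can observe, such as a depth image of the contact and its own force/torque measurements, to an estimate of the pressure the person feels. The model below is an illustrative sketch under those assumptions, not the RCHI Lab's architecture.

import torch
import torch.nn as nn

class PressurePredictor(nn.Module):
    """Toy model: depth image + measured net force -> body pressure map."""
    def __init__(self, n_taxels=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Sequential(
            nn.Linear(32 + 3, 128), nn.ReLU(),
            nn.Linear(128, n_taxels), nn.Softplus(),  # pressure is nonnegative
        )

    def forward(self, depth, force_xyz):
        z = torch.cat([self.encoder(depth), force_xyz], dim=-1)
        return self.head(z)              # predicted pressure at each body point

model = PressurePredictor()
pressure = model(torch.randn(1, 1, 96, 96), torch.randn(1, 3))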
Brief Biosketch: Zackory Erickson is an Assistant Professor in The Robotics Institute at Carnegie Mellon University, where he leads the Robotic Caregiving and Human Interaction (RCHI) Lab. His research focuses on developing new computational, control, and sensing methods for intelligent physical human-robot interaction and healthcare robots. Zackory received his PhD in Robotics and M.S. in Computer Science from Georgia Tech and his B.S. in Computer Science from the University of Wisconsin–La Crosse.
Carnegie Mellon is going to the Moon. The culmination of many years of work by hundreds of individuals, over more hours than we can count, has brought us to this pivotal moment in history: we will launch two lunar rovers over the next two years. Success will depend on a number of factors, one of which is outfitting a Mission Control Center where mission-critical operations will be monitored. With this campaign, we turn to this much-needed Mission Control Center, which will be located right here at CMU.
Carnegie Mellon's rover and spacecraft teams have identified a location on campus for a mission control center, but it will need significant upgrades to be mission-ready in time to go to the Moon.
To help set up this facility, our goal is to raise $80,000. These funds will be used for equipment for the room and will support the purchase of essential devices such as servers, computers and communications hardware.
Please visit https://crowdfunding.cmu.edu/campaigns/cmu-mission-control#/ to contribute.
RI Seminar: Leila Bridgeman
Assistant Professor of Mechanical Engineering & Materials Science
Duke University
March 18, 2022
Abstract: Despite its diverse areas of application, the desire to optimize performance and guarantee acceptable behavior in the face of inevitable uncertainty is pervasive throughout control theory. This creates a fundamental challenge, since the necessity of robustly stable control schemes often favors conservative designs, while the desire to optimize performance typically demands the opposite. Many applications hinge on the ability to robustly and reliably regulate system behavior, but the large-scale nature of modern power systems, combined with the increasingly significant nonlinearities introduced by distributed, renewable power generation, presents a major challenge. This talk will discuss how a distinctive perspective on the foundational results of input-output stability theory can lead to new controller design methods that aid in solving this and other key modern control problems.
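As one concrete instance of the input-output results referred to above, the classical small-gain theorem states that, under mild well-posedness assumptions, the feedback interconnection of two systems with finite \mathcal{L}_2 gains \gamma_1 and \gamma_2 is \mathcal{L}_2-stable whenever

\gamma_1 \gamma_2 < 1.

The condition is robust precisely because it holds for every pair of systems with those gains, and conservative for the same reason, which is the tension the abstract describes. Such conditions are typically verified computationally; for example, \dot{x} = Ax is certified exponentially stable by any P \succ 0 satisfying the linear matrix inequality

A^{\top} P + P A \prec 0,

and the synthesis methods mentioned in the biography below search over families of LMIs of this kind. These are textbook statements included for orientation, not the specific results of the talk.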
Brief Biosketch: Leila Bridgeman earned B.Sc. and M.Sc. degrees in Applied Mathematics in 2008 and 2010 from McGill University, Montreal, QC, Canada, where she completed her Ph.D. in Mechanical Engineering, earning McGill’s 2016 D.W. Ambridge Prize for outstanding dissertation in the physical sciences and engineering. Her graduate studies involved research semesters at University of Michigan, University of Bern, and University of Victoria, along with an internship at Mitsubishi Electric Research Laboratories (MERL) in Boston, MA. She is now an assistant professor of Mechanical Engineering and Materials Science and a member of the Robotics Group at Duke University. Through her research, Leila strives to bridge the gap between theoretical results in robust and optimal control and their use in practice. She explores how the tools of numerical analysis and input-output stability theory can be applied to the most challenging of controls problems, including the control of delayed, open-loop unstable, and nonminimum-phase systems. Her focus has been on the development of readily applicable controller synthesis and stability analysis methods based on the evaluation of linear matrix inequalities (LMIs). Resulting publications have considered applications of this work to robotic, process control, and time-delay systems.
Matthew Johnson-Roberson
Professor / Director of RI
Robotics Institute,
Carnegie Mellon University
February 4, 2022
Lessons from the Field: Deep Learning and Machine Perception for Field Robots
Abstract: Mobile robots now deliver vast amounts of sensor data from large unstructured environments. In attempting to process and interpret this data, there are many unique challenges in bridging the gap between prerecorded datasets and the field. This talk will present recent work on applying machine learning techniques to mobile robotic perception. We will discuss risk assessment for self-driving vehicles, thermal cameras for object detection and mapping, and finally object detection, grasping, and manipulation in underwater contexts. Real field data will guide this process, and we will show results from deployed field robotic vehicles.
Brief Bio: Matthew Johnson-Roberson is the Director of the Robotics Institute at Carnegie Mellon University and a Professor in the School of Computer Science. He received a PhD from the University of Sydney in 2010. He has held prior postdoctoral appointments with the Centre for Autonomous Systems (CAS) at KTH Royal Institute of Technology in Stockholm and the Australian Centre for Field Robotics at the University of Sydney. He co-founded Refraction AI, a last-mile autonomous vehicle delivery company. He has worked in robotic perception since the first DARPA Grand Challenge, and his group focuses on enabling robots to better see and understand their environment.
Stefanos Nikolaidis
Assistant Professor
Computer Science, University of Southern California
January 28, 2022
Towards Robust Human-Robot Interaction: A Quality Diversity Approach
https://www.ri.cmu.edu/event/owards-robust-human-robot-interaction-a-quality-diversity-approach/
Abstract: The growth of scale and complexity of interactions between humans and robots highlights the need for new computational methods to automatically evaluate novel algorithms and applications. Exploring the diverse scenarios of interaction between humans and robots in simulation can improve understanding of complex human-robot interaction systems and avoid potentially costly failures in real-world settings. In this talk, I propose formulating the problem of automatic scenario generation in human-robot interaction as a quality diversity problem, where the goal is not to find a single global optimum, but a diverse range of failure scenarios that explore both environments and human actions. I show how standard quality diversity algorithms can discover surprising and unexpected failure cases in the shared autonomy domain. I then discuss the development of a new class of quality diversity algorithms that significantly improve the search of the scenario space and the integration of these algorithms with generative models, which enables the generation of complex and realistic scenarios. Finally, I discuss applications in procedural content generation and human preference learning.
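To make the quality diversity formulation concrete, the sketch below implements MAP-Elites, a standard algorithm of this class (the talk's algorithms build beyond it): candidate scenarios map to cells of a behavior-descriptor grid, and each cell keeps the highest-quality, here most failure-inducing, scenario found so far. The evaluation function is a toy stand-in; in scenario generation it would run a simulated human-robot interaction and score the failure.

import random

def evaluate(scenario):
    # Toy stand-in: quality rewards extreme scenarios; the two behavior
    # descriptors are clipped coordinates of the scenario vector.
    quality = sum(x * x for x in scenario)
    desc = (min(max(scenario[0], 0.0), 1.0), min(max(scenario[1], 0.0), 1.0))
    return quality, desc

GRID = 20       # cells per descriptor dimension
archive = {}    # cell index -> (quality, scenario)

def cell_of(desc):
    # Discretize descriptors in [0, 1] into grid cells.
    return tuple(min(int(d * GRID), GRID - 1) for d in desc)

def try_insert(scenario):
    q, desc = evaluate(scenario)
    c = cell_of(desc)
    if c not in archive or q > archive[c][0]:
        archive[c] = (q, scenario)

for _ in range(100):                  # random initialization
    try_insert([random.random() for _ in range(8)])

for _ in range(10000):                # select-mutate-evaluate loop
    parent = random.choice(list(archive.values()))[1]
    try_insert([x + random.gauss(0, 0.1) for x in parent])

print(len(archive), "distinct scenario cells retained")

The output is not a single worst case but an archive of qualitatively different failures, which is exactly the diverse range of failure scenarios the abstract targets.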
Brief Bio: Stefanos Nikolaidis is an Assistant Professor of Computer Science at the University of Southern California and leads the Interactive and Collaborative Autonomous Robotic Systems (ICAROS) lab. His research draws upon expertise in artificial intelligence, human-robot interaction, procedural content generation, and quality diversity optimization, and leads to end-to-end solutions that enable deployed robotic systems to act robustly when interacting with people in practical, real-world applications. Stefanos completed his PhD at Carnegie Mellon’s Robotics Institute and received an MS from MIT, an MEng from the University of Tokyo, and a BS from the National Technical University of Athens. His research has been recognized with an oral presentation at the Conference on Neural Information Processing Systems and with best paper awards and nominations from the IEEE/ACM International Conference on Human-Robot Interaction, the International Conference on Intelligent Robots and Systems, and the International Symposium on Robotics.