MultiFab: Vision-Assisted Multi-Material 3D Printing | MIT CSAIL | 2024-10-11

DiffuseBot: Making robots with genAI & physics-based simulation | MIT CSAIL | 2024-01-10
Paper: DiffuseBot: Breeding Soft Robots With Physics-Augmented Generative Diffusion Models - openreview.net/pdf?id=1zo4iioUEs
Publication: NeurIPS
Lead authors: Tsun-Hsuan Johnson Wang (tsunw@mit.edu), Yilun Du (yilundu@gmail.com & yilundu@mit.edu), Chuang Gan (chuangg@mit.edu), and Daniela Rus (rus@csail.mit.edu)
Lab: Distributed Robotics Laboratory - https://www.csail.mit.edu/research/distributed-robotics-laboratory
Funding: The NSF EFRI program, the MIT Watson AI Lab, and the CSAIL - GIST program
Videographer: Mike Grimmett | Director: Rachel Gordon | PA: Alex Shipps

AI advances robotic dexterity w/ in-hand reorientation | MIT CSAIL | 2023-12-14
Visual dexterity: In-hand reorientation of novel and complex object shapes
Publication: Science Robotics - https://www.science.org/doi/10.1126/s...
Lead authors: Tao Chen (taochen@mit.edu) & Pulkit Agrawal (pulkitag@mit.edu) - https://taochenshh.github.io/projects...
Improbable AI Lab: https://people.csail.mit.edu/pulkitag/
Videographer: Mike Grimmett | Director: Rachel Gordon | PA: Alex Shipps

Inside the lab: MIT CSAIL | MIT CSAIL | 2023-11-29
For more on MIT CSAIL and the Stata Center: https://www.csail.mit.edu/
Videographer: Mike Grimmett | Director: Rachel Gordon | PA: Alex Shipps

MIT App Inventor: Using AI to democratize mobile tech | MIT CSAIL | 2023-11-16
https://appinventor.mit.edu
Hal Abelson - http://groups.csail.mit.edu/mac/users/hal/hal.html
Evan Patton - evanpatton.com
Videographer: Mike Grimmett | Director: Rachel Gordon | PA: Alex Shipps

MIT CSAIL Office Hours Episode 1: Robotics | MIT CSAIL | 2023-11-07
Three labs, different robotic solutions of the future.
Videographer: Mike Grimmett | Director: Rachel Gordon | PA: Alex Shipps

54 Questions with an MIT AI researcher | MIT CSAIL | 2023-10-26
Aspen Kennedy Hopkins - aspenhopkins.com
Aleksander Madry - https://madry.mit.edu
Videographer: Mike Grimmett | Director: Rachel Gordon | PA: Alex Shipps

Using AI to protect against AI image manipulation | MIT CSAIL | 2023-08-18
Article: https://news.mit.edu/2023/using-ai-protect-against-ai-image-manipulation-0731
Paper: arxiv.org/abs/2302.06588
Authors: Hadi Salman, Aleksander Madry, Alaa Khaddaj, Guillaume Leclerc, Andrew Ilyas
Videographer: Mike Grimmett | Director: Rachel Gordon | PA: Alex Shipps

MIT’s DribbleBots vs. The New England Revolution | MIT CSAIL | 2023-08-08
Improbable AI Laboratory: https://cap.csail.mit.edu/improbable-ai-lab-lab-tours
DribbleBots: https://www.csail.mit.edu/news/four-legged-robotic-system-playing-soccer-various-terrains
Director: Rachel Gordon | Videographer: Mike Grimmett

54 Questions with an MIT Hacker | MIT CSAIL | 2023-07-25
Article: https://news.mit.edu/2022/researchers-discover-hardware-vulnerability-apple-m1-0610
MIT Secure Hardware Design: http://csg.csail.mit.edu/6.S983/
twitter.com/0xjprx
Videographer: Mike Grimmett | Director: Rachel Gordon | PA: Alex Shipps

Sensor skin gives robots a ‘human touch’ | MIT CSAIL | 2023-07-19
Paper: https://is.gd/MuCa_Finger
Conference: ICRA 2023
Sponsors: GIST
MIT CSAIL authors: Cedric Honnet, Yunyi Zhu, Martin Nisser, Chao Liu, Byungchul Kim, Daniela Rus, Stefanie Mueller
GIST authors: Jae Hun Seol, Jongho Lee
In collaboration with GIST: https://www.eecs.mit.edu/tag/gwangju-institute-of-science-and-technology-gist/
Video Director: Mike Grimmett | Producer: Rachel Gordon

Lupe Fiasco, Fox Harrell, & Nick Montfort: AI, Computational Creativity, & the Art of Teaching | MIT CSAIL | 2023-06-21
Video Director: Rachel Gordon | Videographer: Mike Grimmett

This is CSAIL | MIT CSAIL | 2023-06-06
For more: https://www.csail.mit.edu/
Videographer: Mike Grimmett | Producer: Rachel Gordon

Inventing liquid neural networks | MIT CSAIL | 2023-05-09
Paper: science.org/doi/10.1126/scirobotics.adc8892
Publication/Event: Science Robotics
Key authors: Makram Chahine, Ramin Hasani, Mathias Lechner, Alexander Amini, and Daniela Rus
MIT News article: https://news.mit.edu/2023/drones-navigate-unseen-environments-liquid-neural-networks-0419
Video Director: Rachel Gordon | Videographer: Mike Grimmett

MIT CSAIL Explains: Large Language Models: Part 2 | MIT CSAIL | 2023-05-03
Jacob Andreas/MIT CSAIL research group: https://lingo.csail.mit.edu
Videographer: Mike Grimmett | Producer: Rachel Gordon

MIT CSAIL Explains: Large Language Models: Part 1 | MIT CSAIL | 2023-04-25
Jacob Andreas/MIT CSAIL research group: https://lingo.csail.mit.edu
Videographer: Mike Grimmett | Producer: Rachel Gordon

Drones navigate unseen environments with liquid neural networks | MIT CSAIL | 2023-04-19
Key authors: Makram Chahine, Ramin Hasani, Mathias Lechner, Alexander Amini, and Daniela Rus
Date: April 20, 2023
Publication/Event: Science Robotics
Paper: bit.ly/3Aai6GW
MIT News article: bit.ly/3AaUYIr
Videographer: Mike Grimmett | Producer: Rachel Gordon

A soccer-playing robot equipped for various terrains | MIT CSAIL | 2023-04-03
MIT News spotlight: https://news.mit.edu/2023/legged-robotic-system-playing-soccer-various-terrains-0403
Key authors: Yandong Ji (yandong@mit.edu)*, Gabriel B. Margolis (gmargo@mit.edu)*, and Pulkit Agrawal (pulkitag@mit.edu)
Videographer: Mike Grimmett | Producer: Rachel Gordon

A design tool for color-changing mosaics | MIT CSAIL | 2023-03-23
Authors: Ticha Melody Sethapakdi (ticha@mit.edu) and Stefanie Mueller (stefanie.mueller@mit.edu)
Paper: http://groups.csail.mit.edu/hcie/files/research-projects/polagons/Polagons.pdf
Conference: CHI 2023
Videographer: Mike Grimmett | Producer: Rachel Gordon

Detecting AV failures in MIT’s MiniCity | MIT CSAIL | 2023-03-21
Paper: Infrastructure-based End-to-End Learning and Prevention of Driver Failure - arxiv.org/abs/2303.12224
Authors: Noam Buckman, Shiva Sreeram, Mathias Lechner, Yutong Ban, Ramin Hasani, Sertac Karaman, Daniela Rus
Conference: ICRA 2023
Sponsors: Toyota Research Institute

MIT’s robot dog isn’t afraid of a little snow! | MIT CSAIL | 2023-03-10
Authors: Gabriel B. Margolis & Pulkit Agrawal
Conference: Conference on Robot Learning (CoRL 2022)
Sponsorship info: Supported by the DARPA Machine Common Sense Program, the MIT-IBM Watson AI Lab, the NSF AI Institute for Artificial Intelligence and Fundamental Interactions, the United States Air Force Research Laboratory, and the US Air Force AI Accelerator.
Walk These Ways: Tuning Robot Control for Generalization with Multiplicity of Behavior arxiv.org/abs/2212.03238
GitHub repository (open-source code for this controller): github.com/Improbable-AI/walk-these-ways

Behind MIT’s Robot Dog | MIT CSAIL | 2023-01-11
Using machine learning techniques, MIT’s “Robot Dog” can run, climb, and even dance. #shorts #technology #robot

MIT computer scientists give their favorite programming hacks | MIT CSAIL | 2023-01-11
MIT CSAIL grad students talk about the hacks and shortcuts they use to make their programming lives easier.

MIT computer scientists on what you should know before going to MIT | MIT CSAIL | 2022-11-10
MIT CSAIL grad students give their opinions on what they think people should know before going to MIT.

MIT CSAIL Researcher Explains: AI Image Generators | MIT CSAIL | 2022-10-27
MIT CSAIL PhD student Yilun Du discusses the potential applications of generative art beyond the explosion of images that put the web into creative hysterics.
Read more about it here: bit.ly/3FoMrWo

Robotic cubes w/ reprogrammable materials selectively self-assemble | MIT CSAIL | 2022-10-20
Researchers created a method for magnetically programming materials to make cubes that are very picky about who they connect with, enabling more scalable self-assembly.
Read more about it here: bit.ly/405xqQK

MIT system “sees” inner structure of the body during physical rehab | MIT CSAIL | 2022-10-06
Researchers created a motion and muscle engagement monitoring system for unsupervised physical rehabilitation, which could help treat injuries and improve mobility for the elderly and athletes.
Read more about it here: bit.ly/3Dnw3Ed

MIT system lets robots use grasped tools w/ the right amount of force | MIT CSAIL | 2022-09-22
Researchers created a system that lets robots effectively use grasped tools with the right amount of force.
Read more about it here: bit.ly/3fgK5xF

MIT computer scientists talk about their programming pet peeves | MIT CSAIL | 2022-09-15
MIT CSAIL grad students share the things that other programmers do that annoy them the most.

MIT computer scientists explain neural networks in ten seconds | MIT CSAIL | 2022-08-23
MIT CSAIL grad students try to explain neural networks in ten seconds. Emphasis on the word "try."

MIT computer scientists confess their worst programming habits | MIT CSAIL | 2022-08-03
MIT CSAIL grad students admit to the programming habits they know they shouldn't do...but do anyway.

KineCAM: An instant camera for animated paper photos | MIT CSAIL | 2022-07-21
KineCAM is an instant camera that takes kinegrams: paper photos that show animation. Originating as an MIT class project, KineCAM is made of inexpensive parts and is easy to construct.
For more info: https://news.mit.edu/2022/new-twist-old-school-animation-kinecam-0721
Technical paper: https://hcie.csail.mit.edu/research/kinecam/kinecam.html

MIT computer scientists give their opinions on crypto | MIT CSAIL | 2022-07-07
We asked MIT CSAIL grad students their thoughts on cryptocurrency, and whether they personally used it.

Robots learn how to shape Play-Doh | MIT CSAIL | 2022-06-23
Elasto-plastic materials like Play-Doh can be difficult for robots to manipulate. RoboCraft is a system that enables a robot to learn how to shape these materials in just ten minutes.
Technical paper: http://hxu.rocks/robocraft/

An open source simulator for self-driving cars | MIT CSAIL | 2022-06-21
VISTA is a data-driven, photorealistic simulator for autonomous driving. It can simulate not just live video but also LiDAR data and event cameras, and can incorporate other simulated vehicles to model complex driving situations. VISTA is open source and the code can be found below.
Technical paper: arxiv.org/pdf/2111.12083.pdf
Code: github.com/vista-simulator/vista
Documentation: https://vista.csail.mit.edu/

Reconstructing 3D scenes from photos with extremely different views | MIT CSAIL | 2022-06-17
Existing methods that reconstruct 3D scenes from 2D images rely on images that share some of the same features. Virtual correspondence is a method of 3D reconstruction that works even with images taken from extremely different views that do not show the same features.
Technical paper and more information: virtual-correspondence.github.io
News article: https://news.mit.edu/2022/seeing-whole-from-some-parts-0617

MIT computer scientists on the most important unsolved problem in computer science | MIT CSAIL | 2022-06-08
MIT CSAIL grad students speak about what they think is the most important unsolved problem in computer science today.

Wearable assistive robotics with integrated sensing | MIT CSAIL | 2022-05-02
MIT CSAIL has developed a new way to rapidly design and fabricate soft pneumatic actuators with integrated sensing. Such actuators can serve as the backbone of applications such as assistive wearables, robotics, and rehabilitative technologies.
Technical paper: http://pneuact.csail.mit.edu/file/CHI2022.pdf

A new state of the art for unsupervised computer vision | MIT CSAIL | 2022-04-21
Scientists from MIT CSAIL created an algorithm to solve one of the hardest tasks in computer vision: assigning every single pixel of the world a label without any human supervision.
Technical paper: arxiv.org/abs/2203.08414
Project website: mhamilton.net/stego.html

MIT’s Mini Cheetah robot runs faster than ever | MIT CSAIL | 2022-03-17
A new method allows MIT's Mini Cheetah to learn how to run fast and adapt to walking on challenging terrain. This learning-based method outperforms previous human-designed methods and allowed the Mini Cheetah to set a speed record.
More info: https://news.mit.edu/2022/3-questions-how-mit-mini-cheetah-learns-run-fast-0317
Project page: sites.google.com/view/model-free-speed
The work was supported by the DARPA Machine Common Sense Program, Naver Labs, the MIT Biomimetic Robotics Lab, and the NSF AI Institute for Artificial Intelligence and Fundamental Interactions. The research was conducted at the Improbable AI Lab.
Video edited by Tom Buehler

Shapeshifting Robots for Space Exploration | MIT CSAIL | 2022-02-23
ElectroVoxels are robotic cubes that can reconfigure using electromagnets. The cubes need no motors or propellant to move, and can operate in zero gravity.
Technical paper: http://groups.csail.mit.edu/hcie/files/research-projects/ElectroVoxel/Nisser_ICRA_22_CamReady_0207.pdf

Embedding Invisible Codes in Objects for Augmented Reality | MIT CSAIL | 2022-01-28
InfraredTags is a system for fabricating objects with embedded codes that are visible only to infrared cameras. These codes can carry metadata or enable interaction with devices through augmented reality.
Technical paper: https://groups.csail.mit.edu/hcie/files/research-projects/infraredtags/2022-CHI-InfraredTags-paper.pdf

MIT computer scientists on the research paper that most influenced them | MIT CSAIL | 2021-12-14
MIT CSAIL grad students and postdocs talk about their favorite computer science research paper and how it influenced their work.

Robots that Evolve like Animals | MIT CSAIL | 2021-12-06
Scientists from MIT CSAIL created a new system for co-optimizing the brain and body of soft intelligent robots, and found that the robots often grew to resemble existing natural creatures while outperforming hand-designed robots.
Code and more info: evolutiongym.github.io
News article: https://news.mit.edu/2021/system-designing-training-intelligent-soft-robots-1207

Teaching Robots Dexterous Hand Manipulation | MIT CSAIL | 2021-11-05
A new model-free framework reorients over 2,000 diverse objects with the hand facing both upward and downward, a step toward more human-like manipulation.
Technical paper: taochenshh.github.io/projects/in-hand-reorientation
Winner of Best Paper at the 2021 Conference on Robot Learning

Roboat III: A Robotic Boat Transportation System | MIT CSAIL | 2021-10-27
Roboat III is the latest phase of a robotic boat system that can autonomously navigate crowded urban waterways. Roboat can be adapted to several different use cases, such as personal transport, package delivery, waste disposal, and on-demand infrastructure.
For more info: roboat.org

Designing Custom Wearables for Health and Motion Sensing | MIT CSAIL | 2021-09-22
EIT-kit is a toolkit for designing wearable devices that use electrical conductivity to sense motion and monitor health.
Technical paper: https://groups.csail.mit.edu/hcie/files/research-projects/eit-kit/2021-UIST-eit-kit-paper.pdf

3D Printing Devices with Embedded Sensing | MIT CSAIL | 2021-09-14
MetaSense is a method for 3D printing mechanisms with embedded sensing capabilities.
More info: https://news.mit.edu/2021/3d-printed-objects-sense-interaction-0914
Technical paper: https://hcie.csail.mit.edu/research/metasense/metasense-paper.pdf

A Smart Laser Cutter that Automatically Identifies What It’s Cutting | MIT CSAIL | 2021-08-19
A smart laser cutter system that uses machine learning to identify materials.
Technical paper: https://hcie.csail.mit.edu/research/sensicut/sensicut.html
More info: https://news.mit.edu/2021/smart-laser-cutter-system-detects-different-materials-0819

Finding the optimal shape for robots | MIT CSAIL | 2021-07-13
Machine learning is used to find the best shape for a robot to complete a given task.
Technical paper: https://people.csail.mit.edu/jiex/papers/DiffHand/paper.pdf
More info: https://news.mit.edu/2021/contact-aware-robot-design-0719

Robot-assisted Dressing | MIT CSAIL | 2021-07-12
Project page: safe-dressing.github.io
Technical paper: http://www.roboticsproceedings.org/rss17/p050.pdf
Basic safety needs of the Paleolithic era have largely evolved with the onset of the industrial and cognitive revolutions: we interact a little less with raw materials and interface a little more with machines.
Robots don’t have the same hardwired behavioral awareness and control, so safe collaboration with humans requires methodical planning and coordination. You can safely assume a friend can refill your morning coffee cup without spilling on you, but for a robot, this seemingly simple task requires careful observation and comprehension of human behavior.
Scientists from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) recently created a new algorithm to help a robot find efficient motion plans that ensure the physical safety of its human counterpart. In this case, the bot helped put a jacket on a human, a capability that could prove to be a powerful tool in expanding assistance for people with disabilities or limited mobility.
“Developing algorithms to prevent physical harm without unnecessarily impacting the task efficiency is a critical challenge,” says MIT PhD student Shen Li, a lead author on a new paper about the research. “By allowing robots to make non-harmful impact with humans, our method can find efficient robot trajectories to dress the human with a safety guarantee.”
Proper human modeling -- how the human moves, reacts, and responds -- is necessary to enable successful robot motion planning in human-robot interactive tasks. A robot can achieve fluent interaction if the human model is perfect, but in many cases, there’s no flawless blueprint.
A robot shipped to a person at home, for example, would have a very narrow, “default” model of how a human could interact with it during an assisted dressing task. It wouldn’t account for the vast variability in human reactions, dependent on a myriad of variables such as personality and habits. A screaming toddler would react differently to putting on a coat or shirt than a frail elderly person, or those with disabilities who might have rapid fatigue or decreased dexterity.
If that robot is tasked with dressing, and plans a trajectory solely based on that default model, the robot could clumsily bump into the human, resulting in an uncomfortable experience or even possible injury. However, if it’s too conservative in ensuring safety, it might pessimistically assume that all space nearby is unsafe, and then fail to move, something known as the "freezing robot" problem.
To provide a theoretical guarantee of human safety, the team's algorithm reasons about the uncertainty in the human model. Instead of having a single, default model where the robot only understands one potential reaction, the team gave the machine an understanding of many possible models, to more closely mimic how a human can understand other humans. As the robot gathers more data, it will reduce uncertainty and refine those models.
To resolve the freezing robot problem, the team redefined safety for human-aware motion planners as either collision avoidance or safe impact in the event of a collision. Often, especially in robot-assisted tasks of activities of daily living, collisions cannot be fully avoided. This allowed the robot to make non-harmful contact with the human to make progress, so long as the robot's impact on the human is low. With this two-pronged definition of safety, the robot could safely complete the dressing task in a shorter period of time.
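This two-pronged definition can be sketched as a simple predicate: a state is acceptable either because contact is avoided entirely, or because any contact that does occur stays below a harm threshold. The sketch below is illustrative only; the force threshold and the scalar contact model are hypothetical stand-ins, not values or formulations from the paper.

```python
def is_safe(min_distance_m: float, contact_force_n: float,
            force_limit_n: float = 10.0) -> bool:
    """Two-pronged safety check for human-aware motion planning.

    A robot state is safe if EITHER condition holds:
      1. collision avoidance: positive clearance from the human, or
      2. safe impact: contact occurs, but the force stays below a
         harm threshold (the 10 N default here is a hypothetical
         placeholder, not the paper's value).
    """
    no_collision = min_distance_m > 0.0
    safe_impact = contact_force_n <= force_limit_n
    return no_collision or safe_impact

# Clearance alone is safe; gentle contact is also safe; hard contact is not.
print(is_safe(min_distance_m=0.05, contact_force_n=0.0))   # True
print(is_safe(min_distance_m=0.0, contact_force_n=4.0))    # True
print(is_safe(min_distance_m=0.0, contact_force_n=50.0))   # False
```

Because low-force contact counts as safe, a planner using this predicate can keep making progress through tight spaces around the person instead of freezing.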
For example, let’s say there are two possible models of how a human could react to dressing. “Model One” is that the human will move up during dressing, and “Model Two” is that the human will move down during dressing. With the team’s algorithm, when the robot is planning its motion, instead of selecting one model, it will try to ensure safety for both models. No matter if the person is moving up or down, the trajectory found by the robot will be safe.
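In code, planning over multiple human models amounts to accepting only trajectories that pass the safety check under every model. A toy sketch of the two-model example above (all positions, timesteps, and the clearance threshold are hypothetical illustrations, not the paper's actual formulation):

```python
# Toy sketch: keep only robot trajectories that stay clear of the human's
# predicted arm position under BOTH reaction models ("up" and "down").
# All numbers here are made up for illustration.

CLEARANCE = 0.10  # required robot-human clearance in meters (hypothetical)

human_models = {
    "moves_up": [0.50, 0.55, 0.60],    # predicted arm height per timestep
    "moves_down": [0.50, 0.45, 0.40],
}

candidate_trajectories = {
    "direct": [0.52, 0.52, 0.52],      # fast, but passes close to the arm
    "overhead": [0.80, 0.80, 0.80],    # keeps clearance above both predictions
}

def safe_under_all_models(robot_path, models, clearance=CLEARANCE):
    """Accept a path only if it keeps the required clearance from the
    human prediction at every timestep of EVERY model."""
    return all(
        abs(r - h) >= clearance
        for human_path in models.values()
        for r, h in zip(robot_path, human_path)
    )

safe_plans = [name for name, path in candidate_trajectories.items()
              if safe_under_all_models(path, human_models)]
print(safe_plans)  # only "overhead" survives both models
```

The "direct" path is rejected because it violates clearance under the "moves up" model even though it would be fine under "moves down"; requiring safety under all models is what makes the plan robust to not knowing which reaction the person will actually have.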
To paint a more holistic picture of these interactions, future efforts will investigate subjective feelings of safety, in addition to physical safety, during robot-assisted dressing.
“This multifaceted approach combines set theory, human-aware safety constraints, human motion prediction, and feedback control for safe human-robot interaction,” says Zackory Erickson, an incoming assistant professor in The Robotics Institute at Carnegie Mellon University (Fall 2021). “This research could potentially be applied to a wide variety of assistive robotics scenarios, towards the ultimate goal of enabling robots to provide safer physical assistance to people with disabilities.”