Flat Panel Haptics: Embedded Electroosmotic Pumps for Scalable Shape Displays
Future Interfaces Group
2023-04-22 | We present a new, miniaturizable type of shape-changing display using embedded electroosmotic pumps (EEOPs). Our pumps, controlled and powered directly by applied voltage, are 1.5mm in thickness, and allow complete stackups under 5mm. Nonetheless, they can move their entire volume's worth of fluid in 1 second, and generate pressures of +/-50kPa, enough to create dynamic, millimeter-scale tactile features on a surface that can withstand typical interaction forces. These are the requisite technical ingredients to enable, for example, a pop-up keyboard on a flat smartphone. We experimentally quantify the mechanical and psychophysical performance of our displays and conclude with a set of example interfaces.
Citation: Shultz, Craig and Harrison, Chris. 2023. Flat Panel Haptics: Embedded Electroosmotic Pumps for Scalable Shape Displays. To appear in Proceedings of the 41st Annual SIGCHI Conference on Human Factors in Computing Systems (April 23 – 30, 2023). CHI '23. ACM, New York, NY.
SmartPoser (ACM UIST 2023 Talk)
Future Interfaces Group
2023-11-06 | Demo Video: youtu.be/AHh2vYQVb_8
Abstract: The ability to track a user’s arm pose could be valuable in a wide range of applications, including fitness, rehabilitation, augmented reality input, life logging, and context-aware assistants. Unfortunately, this capability is not readily available to consumers. Systems either require cameras, which carry privacy issues, or utilize multiple worn IMUs or markers. In this work, we describe how an off-the-shelf smartphone and smartwatch can work together to accurately estimate arm pose. Moving beyond prior work, we take advantage of more recent ultra-wideband (UWB) functionality on these devices to capture absolute distance between the two devices. This measurement is the perfect complement to inertial data, which is relative and suffers from drift. We quantify the performance of our software-only approach using off-the-shelf devices, showing it can estimate the wrist and elbow joints without the user having to provide training data.
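To make the complementary roles of the two signals concrete, here is a minimal, illustrative fusion step in Python: the IMU dead-reckons a relative watch-to-phone position (which drifts), and each UWB ranging pulls that estimate back onto a sphere of the measured radius. The function names, the simple spherical correction, and the blending weight are assumptions for illustration, not the SmartPoser pipeline.

```python
import numpy as np

def fuse_step(pos, vel, accel, dt, uwb_range=None, alpha=0.8):
    """One fusion step: dead-reckon with the IMU, then pull toward the UWB range.
    Illustrative sketch only -- not the SmartPoser pipeline.
    pos, vel  : (3,) watch position/velocity relative to the phone, shared frame (m, m/s)
    accel     : (3,) gravity-compensated acceleration (m/s^2)
    uwb_range : absolute watch<->phone distance in meters, when a UWB ranging arrives
    """
    # Relative, drift-prone part: integrate acceleration.
    vel = vel + accel * dt
    pos = pos + vel * dt
    # Absolute, drift-free part: UWB gives true distance (not direction),
    # so pull the estimate onto the sphere of that radius around the phone.
    r = np.linalg.norm(pos)
    if uwb_range is not None and r > 1e-6:
        pos = pos * ((1 - alpha) + alpha * uwb_range / r)
    return pos, vel
```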
Citation: Nathan DeVrio*, Vimal Mollyn*, and Chris Harrison. 2023. SmartPoser: Arm Pose Estimation with a Smartphone and Smartwatch Using UWB and IMU Data. In Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology (UIST '23). Association for Computing Machinery, New York, NY, USA, Article 79, 1–11. doi.org/10.1145/3586183.3606821 *Equal contribution
Pantœnna (ACM UIST 2023 Talk)
Future Interfaces Group
2023-11-06 | Demo video: youtu.be/ya_KWEJTKsU
Abstract: Methods for faithfully capturing a user's holistic pose have immediate uses in AR/VR, ranging from multimodal input to expressive avatars. Although body-tracking has received the most attention, the mouth is also of particular importance, given that it is the channel for both speech and facial expression. In this work, we describe a new RF-based approach for capturing mouth pose using an antenna integrated into the underside of a VR/AR headset. Our approach side-steps privacy issues inherent in camera-based methods, while simultaneously supporting silent facial expressions that audio-based methods cannot. Further, compared to bio-sensing methods such as EMG and EIT, our method requires no contact with the wearer's body and can be fully self-contained in the headset, offering a high degree of physical robustness and user practicality. We detail our implementation along with results from two user studies, which show a mean 3D error of 2.6 mm for 11 mouth keypoints across worn sessions without re-calibration.
Citation: Daehwa Kim and Chris Harrison. 2023. Pantœnna: Mouth pose estimation for AR/VR headsets using low-profile antenna and impedance characteristic sensing. In Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology (UIST '23). Association for Computing Machinery, New York, NY, USA, Article 83, 1–12. doi.org/10.1145/3586183.3606805
Fluid Reality (ACM UIST 2023 Talk)
Future Interfaces Group
2023-10-31 | Academic talk at ACM UIST 2023.
More details: figlab.com/research/2023/FluidReality
Pantœnna: Mouth Pose Estimation for VR/AR Headsets Using Low-Profile Antenna
Future Interfaces Group
2023-10-30 | Methods for faithfully capturing a user's holistic pose have immediate uses in AR/VR, ranging from multimodal input to expressive avatars. Although body-tracking has received the most attention, the mouth is also of particular importance, given that it is the channel for both speech and facial expression. In this work, we describe a new RF-based approach for capturing mouth pose using an antenna integrated into the underside of a VR/AR headset. Our approach side-steps privacy issues inherent in camera-based methods, while simultaneously supporting silent facial expressions that audio-based methods cannot. Further, compared to bio-sensing methods such as EMG and EIT, our method requires no contact with the wearer's body and can be fully self-contained in the headset, offering a high degree of physical robustness and user practicality. We detail our implementation along with results from two user studies, which show a mean 3D error of 2.6 mm for 11 mouth keypoints across worn sessions without re-calibration.
Citation: Daehwa Kim and Chris Harrison. 2023. Pantœnna: Mouth pose estimation for AR/VR headsets using low-profile antenna and impedance characteristic sensing. In Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology (UIST '23). Association for Computing Machinery, New York, NY, USA, Article 83, 1–12. doi.org/10.1145/3586183.3606805
Expressive, Scalable, Mid-Air Haptics with Synthetic Jets
Future Interfaces Group
2023-10-30 | Non-contact, mid-air haptic devices have been utilized for a wide variety of experiences, including those in extended reality, public displays, medical, and automotive domains. In this work, we explore the use of synthetic jets as a promising and under-explored mid-air haptic feedback method. We show how synthetic jets can scale from compact, low-powered devices, all the way to large, long-range, and steerable devices. We built seven functional prototypes targeting different application domains, in order to illustrate the broad applicability of our approach. These example devices are capable of rendering complex haptic effects, varying in both time and space. We quantify the physical performance of our designs using spatial pressure and wind flow measurements, and validate their compelling effect on users with stimuli recognition and qualitative studies.
Vivian Shen, Chris Harrison, and Craig Shultz. 2023. Expressive, Scalable, Mid-Air Haptics with Synthetic Jets. ACM Trans. Comput.-Hum. Interact. doi.org/10.1145/3635150
Fluid Reality: High-Resolution, Untethered Haptic Gloves Using Electroosmotic Pump Arrays
Future Interfaces Group
2023-10-30 | Virtual and augmented reality headsets are making significant progress in audio-visual immersion and consumer adoption. However, their haptic immersion remains low, due in part to the limitations of vibrotactile actuators which dominate the AR/VR market. In this work, we present a new approach to create high-resolution shape-changing fingerpad arrays with 20 haptic pixels per square cm. Unlike prior pneumatic approaches, our actuators are low-profile (5mm thick), low-power (approximately 10mW/pixel), and entirely self-contained, with no tubing or wires running to external infrastructure. We show how multiple actuator arrays can be built into a five-finger, 160-actuator haptic glove that is untethered, lightweight (207g, including all drive electronics and battery), and has the potential to reach consumer price points at volume production. We describe the results from a technical performance evaluation and a suite of eight user studies, quantifying the diverse capabilities of our system. This includes recognition of object properties such as complex contact geometry, texture, and compliance, as well as expressive spatiotemporal effects.
Vivian Shen, Tucker Rae-Grant, Joe Mullenbach, Chris Harrison, and Craig Shultz. 2023. Fluid Reality: High-Resolution, Untethered Haptic Gloves using Electroosmotic Pump Arrays. In Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology (UIST '23). Association for Computing Machinery, New York, NY, USA, Article 8, 1–20. doi-org.cmu.idm.oclc.org/10.1145/3586183.3606771
SmartPoser: Arm Pose Estimation with a Smartphone and Smartwatch Using UWB and IMU Data
Future Interfaces Group
2023-10-30 | The ability to track a user’s arm pose could be valuable in a wide range of applications, including fitness, rehabilitation, augmented reality input, life logging, and context-aware assistants. Unfortunately, this capability is not readily available to consumers. Systems either require cameras, which carry privacy issues, or utilize multiple worn IMUs or markers. In this work, we describe how an off-the-shelf smartphone and smartwatch can work together to accurately estimate arm pose. Moving beyond prior work, we take advantage of more recent ultra-wideband (UWB) functionality on these devices to capture absolute distance between the two devices. This measurement is the perfect complement to inertial data, which is relative and suffers from drift. We quantify the performance of our software-only approach using off-the-shelf devices, showing it can estimate the wrist and elbow joints without the user having to provide training data.
Citation: Nathan DeVrio*, Vimal Mollyn*, and Chris Harrison. 2023. SmartPoser: Arm Pose Estimation with a Smartphone and Smartwatch Using UWB and IMU Data. In Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology (UIST '23). Association for Computing Machinery, New York, NY, USA, Article 79, 1–11. doi.org/10.1145/3586183.3606821 *Equal contribution
WorldPoint: Finger Pointing as a Rapid and Natural Trigger for In-the-Wild Mobile Interactions
Future Interfaces Group
2023-10-30 | Pointing with one's finger is a natural and rapid way to denote an area or object of interest. It is routinely used in human-human interaction to increase both the speed and accuracy of communication, but it is rarely utilized in human-computer interactions. In this work, we use the recent inclusion of wide-angle, rear-facing smartphone cameras, along with hardware-accelerated machine learning, to enable real-time, infrastructure-free, finger-pointing interactions on today's mobile phones. We envision users raising their hands to point in front of their phones as a "wake gesture". This can then be coupled with a voice command to trigger advanced functionality. For example, while composing an email, a user can point at a document on a table and say "attach". Our interaction technique requires no navigation away from the current app and is both faster and more privacy-preserving than the current method of taking a photo.
Citation: Daehwa Kim, Vimal Mollyn, and Chris Harrison. 2023. WorldPoint: Finger Pointing as a Rapid and Natural Trigger for In-the-Wild Mobile Interactions. In Proceedings of the 2023 ACM International Conference on Interactive Surfaces and Spaces (ISS '23). Association for Computing Machinery, New York, NY, USA. doi.org/10.1145/3626478
IMUPoser: Full-Body Pose Estimation using IMUs in Phones, Watches, and Earbuds
Future Interfaces Group
2023-04-24 | Tracking body pose on-the-go could have powerful uses in fitness, mobile gaming, context-aware virtual assistants, and rehabilitation. However, users are unlikely to buy and wear special suits or sensor arrays to achieve this end. Instead, in this work, we explore the feasibility of estimating body pose using IMUs already in devices that many users own – namely smartphones, smartwatches, and earbuds. This approach has several challenges, including noisy data from low-cost commodity IMUs, and the fact that the number of instrumentation points on a user's body is both sparse and in flux. Our pipeline receives whatever subset of IMU data is available, potentially from just a single device, and produces a best-guess pose. To evaluate our model, we created the IMUPoser Dataset, collected from 10 participants wearing or holding off-the-shelf consumer devices and across a variety of activity contexts. We provide a comprehensive evaluation of our system, benchmarking it on both our own and existing IMU datasets.
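One way to picture a pipeline that accepts whatever subset of IMU data is available is a fixed-size input with per-device presence flags, so missing devices are zero-filled rather than breaking the model. The sketch below is an assumption about how such packing could look (device slots, feature counts, and names are invented), not the published IMUPoser implementation.

```python
import numpy as np

DEVICES = ["phone", "watch", "earbuds"]   # assumed device slots
FEATS_PER_DEVICE = 12                     # e.g., 3 accel values + 9 rotation-matrix values

def build_input(available: dict) -> np.ndarray:
    """Pack whatever IMU streams are present into one fixed-size vector.
    `available` maps device name -> (FEATS_PER_DEVICE,) feature array.
    Missing devices are zero-filled and flagged, so a single model can
    handle any subset of devices (layout is an illustrative assumption)."""
    chunks = []
    for dev in DEVICES:
        feats = available.get(dev)
        present = feats is not None
        feats = feats if present else np.zeros(FEATS_PER_DEVICE)
        chunks.append(np.concatenate([[float(present)], feats]))
    return np.concatenate(chunks)  # shape: (len(DEVICES) * (1 + FEATS_PER_DEVICE),)

# e.g., only a watch is worn; the vector would then feed a temporal pose model:
x = build_input({"watch": np.random.randn(FEATS_PER_DEVICE)})
```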
Citation: Mollyn, V., Arakawa, R., Goel, M., Harrison, C. and Ahuja, K. 2023. IMUPoser: Full-Body Pose Estimation using IMUs in Phones, Watches, and Earbuds. To appear in Proceedings of the 41st Annual SIGCHI Conference on Human Factors in Computing Systems (April 23 – 30, 2023). CHI '23. ACM, New York, NY.
Surface I/O: Creating Devices with Functional Surface Geometry for Haptics and User Input
Future Interfaces Group
2023-03-10 | Surface I/O is a novel interface approach that functionalizes the exterior surface of devices to provide haptic and touch sensing without dedicated mechanical components. Achieving this requires a unique combination of surface features spanning the macro-scale (5cm~1mm), meso-scale (1mm~200um), and micro-scale (less than 200um). This approach simplifies interface creation, allowing designers to iterate on form geometry, haptic feeling, and sensing functionality without the limitations of mechanical mechanisms. We believe this can contribute to the concept of "invisible ubiquitous interactivity at scale", where the simplicity and easy implementation of the technique allows it to blend with objects around us. While we prototyped our designs using 3D printers and laser cutters, our technique is applicable to mass production methods, including injection molding and stamping, enabling passive goods with new levels of interactivity.
Authors: Yuran Ding, Craig Shultz, Chris Harrison (Future Interfaces Group, Carnegie Mellon University)
Citation: Ding, Y., Shultz, C. and Harrison, C. 2023. Surface I/O: Creating Devices with Functional Surface Geometry for Haptics and User Input. To appear in Proceedings of the 41st Annual SIGCHI Conference on Human Factors in Computing Systems (April 23 – 30, 2023). CHI '23. ACM, New York, NY.
SweepSense: Ad Hoc Configuration Sensing Using Reflected Swept-Frequency Ultrasonics
Future Interfaces Group
2023-01-20 | More info: http://www.gierad.com/projects/sweepsense
DynaTags: Low-Cost Fiducial Marker Mechanisms
Future Interfaces Group
2022-11-30 | Published at ICMI 2022.
Printed fiducial markers are inexpensive, easy to deploy, robust and deservedly popular. However, their data payload is also static, unable to express any state beyond being present. For this reason, more complex electronic tagging technologies exist, which can sense and change state, but either require special equipment to read or are orders of magnitude more expensive than printed markers. In this work, we explore an approach between these two extremes: one that retains the simple, low-cost nature of printed markers, yet has some of the expressive capabilities of dynamic tags. Our “DynaTags” are simple mechanisms constructed from paper that express multiple payloads, allowing practitioners and researchers to create new and compelling physical-digital experiences.
Pull Gestures with Coordinated Graphics on Dual Touchscreen Devices
Future Interfaces Group
2022-11-09 | A new class of dual-touchscreen device is beginning to emerge, either constructed as two screens hinged together, or as a single display that can fold. The interactive experience on these devices is simply that of two 2D touchscreens, with little to no synergy between the interactive areas. In this work, we consider how this unique, emerging form factor creates an interesting 3D niche, in which out-of-plane interactions on one screen can be supported with coordinated graphics in the other orthogonal screen. Following insights from an elicitation study, we focus on "pull gestures", a multimodal interaction combining on-screen touch input with in-air movement. These naturally complement traditional multitouch gestures such as tap and pinch, and are an intriguing and useful way to take advantage of the unique geometry of dual-screen devices.
Authors: Vivian Shen, Chris Harrison
Published at ACM ICMI 2022
EtherPose: Continuous Hand Pose Tracking with Wrist-Worn Antenna Impedance Characteristic Sensing
Future Interfaces Group
2022-10-30 | Published at ACM UIST 2022
EtherPose is a continuous hand pose tracking system employing two wrist-worn antennas, from which we measure the real-time dielectric loading resulting from different hand geometries (i.e., poses). Unlike worn camera-based methods, our RF approach is more robust to occlusion from clothing and avoids capturing potentially sensitive imagery. Through a series of simulations and empirical studies, we designed a proof-of-concept, worn implementation built around compact vector network analyzers. Sensor data is then interpreted by a machine learning backend, which outputs a fully-posed 3D hand. In a user study, we show how our system can track hand pose with a mean Euclidean joint error of 11.6 mm, even when covered in fabric. We also studied 2DOF wrist angle and micro-gesture tracking. In the future, our approach could be miniaturized and extended to include more and different types of antennas, operating at different self resonances.
Authors: Daehwa Kim, Chris Harrison
DiscoBand: Multiview Depth-Sensing Smartwatch Strap for Hand, Body and Environment Tracking
Future Interfaces Group
2022-10-30 | Published at ACM UIST 2022
Real-time tracking of a user’s hands, arms and environment is valuable in a wide variety of HCI applications, from context awareness to virtual reality. Rather than rely on fixed and external tracking infrastructure, the most flexible and consumer-friendly approaches are mobile, self-contained, and compatible with popular device form factors (e.g., smartwatches). In this vein, we contribute DiscoBand, a thin sensing strap not exceeding 1 cm in thickness. Sensors operating so close to the skin inherently face issues with occlusion. To help overcome this, our strap uses eight distributed depth sensors imaging the hand from different viewpoints, creating a sparse 3D point cloud. An additional eight depth sensors image outwards from the band to track the user’s body and surroundings. In addition to evaluating arm and hand pose tracking, we also describe a series of supplemental applications powered by our band’s data, including held object recognition and environment mapping.
Authors: Nathan DeVrio (Carnegie Mellon University), Chris Harrison (Carnegie Mellon University)
TriboTouch: Micro-Patterned Surfaces for Low Latency Touchscreens
Future Interfaces Group
2022-04-27 | Touchscreen tracking latency, often 80ms or more, creates a rubber-banding effect in everyday direct manipulation tasks such as dragging, scrolling, and drawing. This has been shown to decrease system preference, user performance, and overall realism of these interfaces. In this research, we demonstrate how the addition of a thin, 2D micro-patterned surface with 5 micron spaced features can be used to reduce motor-visual touchscreen latency. When a finger, stylus, or tangible is translated across this textured surface, frictional forces induce acoustic vibrations which naturally encode sliding velocity. This acoustic signal is sampled at 192kHz using a conventional audio interface pipeline with an average latency of 28ms. When fused with conventional low-speed, but high-spatial-accuracy 2D touch position data, our machine learning model can make accurate predictions of real-time touch location. Published at CHI 2022.
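A minimal way to illustrate the fusion described above is to forward-predict the stale (but spatially accurate) touch coordinate using the high-rate velocity estimate recovered from the acoustic channel. This sketch uses plain integration and invented names; the paper's actual predictor is a learned model.

```python
import numpy as np

def predict_touch(last_report_xy, report_age_s, velocity_xy_history, dt):
    """Forward-predict the current finger position from a stale touch report.
    last_report_xy      : (2,) last touchscreen coordinate (accurate but ~80 ms old)
    report_age_s        : age of that report, in seconds
    velocity_xy_history : (N, 2) recent high-rate velocity estimates derived from
                          the acoustic signal (newest last), sampled every `dt` s
    Illustrative sketch only: integrate the velocity observed since the stale
    report was captured and add it to the reported position."""
    n = int(round(report_age_s / dt))
    recent = velocity_xy_history[-n:] if n > 0 else np.zeros((0, 2))
    return np.asarray(last_report_xy) + recent.sum(axis=0) * dt
```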
Research Team: Craig Shultz, Daehwa Kim, Karan Ahuja and Chris Harrison
Citation: Shultz, C., Kim, D., Ahuja, K. and Harrison, C. 2022. TriboTouch: Micro-Patterned Surfaces for Low Latency Touchscreens. To appear in Proceedings of the 40th Annual SIGCHI Conference on Human Factors in Computing Systems (April 30 – May 6, 2022). CHI '22. ACM, New York, NY.
ControllerPose: Inside-Out Body Capture with VR Controller Cameras
Future Interfaces Group
2022-04-27 | We present a new and practical method for capturing user body pose in virtual reality experiences: integrating cameras into handheld controllers, where batteries, computation and wireless communication already exist. By virtue of the hands operating in front of the user during many VR interactions, our controller-borne cameras can capture a superior view of the body for digitization. We developed a series of demo applications illustrating the potential of our approach, including more leg-centric interactions such as balancing games and kicking soccer balls. Published at CHI 2022.
Research Team: Karan Ahuja, Vivian Shen, Cathy Fang, Nathan Riopelle, Andy Kong and Chris Harrison
Citation: Karan Ahuja, Vivian Shen, Cathy Fang, Nathan Riopelle, Andy Kong, and Chris Harrison. 2022. ControllerPose: Inside-Out Body Capture with VR Controller Cameras. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (CHI ’22). Association for Computing Machinery, New York, NY, USA, Article 9, 1–12.
ElectriPop: Low-Cost, Shape-Changing Displays
Future Interfaces Group
2022-04-27 | ElectriPop: Low-Cost, Shape-Changing Displays Using Electrostatically Inflated Mylar Sheets
We describe how sheets of metalized mylar can be cut and then “inflated” into complex 3D forms with electrostatic charge for use in digitally-controlled, shape-changing displays. This is achieved by placing and nesting various cuts, slits and holes such that mylar elements repel from one another to reach an equilibrium state. Importantly, our technique is compatible with industrial and hobbyist cutting processes, from die and laser cutting to handheld exacto-knives and scissors. Given that mylar film costs less than $1 per square meter, we can create self-actuating 3D objects for just a few cents, opening new uses in low-cost consumer goods. Published at ACM CHI 2022.
Research Team: Cathy Mengying Fang, Jianzhe Gu, Lining Yao, and Chris Harrison
Citation: Fang, C., Gu, J., Yao, L. and Harrison, C. 2022. ElectriPop: Low-Cost Shape-Changing Displays with Electrostatically Inflated Mylar Sheets. To appear in Proceedings of the 40th Annual SIGCHI Conference on Human Factors in Computing Systems (April 30 – May 6, 2022). CHI '22. ACM, New York, NY.
Mouth Haptics in VR using a Headset Ultrasound Phased Array
Future Interfaces Group
2022-04-27 | Today’s consumer virtual reality (VR) systems offer limited haptic feedback via vibration motors in handheld controllers. Rendering haptics to other parts of the body is an open challenge, especially in a practical and consumer-friendly manner. The mouth is of particular interest, as it is a close second in tactile sensitivity to the fingertips, offering a unique opportunity to add fine-grained haptic effects. In this research, we developed a thin, compact, beamforming array of ultrasonic transducers, which can render haptic effects onto the mouth. Importantly, all components are integrated into the headset, meaning the user does not need to wear an additional accessory, or place any external infrastructure in their room. We explored several effects, including point impulses, swipes, and persistent vibrations. Our haptic sensations can be felt on the lips, teeth and tongue, which can be incorporated into new and interesting VR experiences.
Vivian Shen, Craig Shultz, and Chris Harrison. 2022. Mouth Haptics in VR using a Headset Ultrasound Phased Array. In CHI Conference on Human Factors in Computing Systems (CHI ’22), April 29-May 5, 2022, New Orleans, LA, USA. ACM, New York, NY, USA, 14 pages. doi.org/10.1145/3491102.3501960
LRAir: Non-Contact Haptics Using Synthetic Jets
Future Interfaces Group
2022-03-23 | Craig Shultz and Chris Harrison. 2022. LRAir: Non-Contact Haptics Using Synthetic Jets. In 2022 IEEE Haptics Symposium (March 21 – 24, 2022). HAPTICS '22. IEEE, Washington, D.C. (Best Paper Award)
We propose a new scalable, non-contact haptic actuation technique based on a speaker in a ported enclosure that can deliver air pulses to the skin. The technique is low cost, low voltage, and uses existing electronics. We detail a prototype device's design and construction, and validate a multiple domain impedance model with current, voltage, and pressure measurements. A non-linear phenomenon at the port creates pulsed zero-net-mass-flux flows, so-called "synthetic jets". Our prototype is capable of 10 mN time averaged thrusts at an air velocity of 10.4 m/s (4.3W input power). A perception study reveals that tactile effects can be detected 25 mm away with only 380 mVrms applied voltage, and 19 mWrms input power.
FarOut: Extending the Range of ad hoc Touch Sensing with Depth Cameras
Future Interfaces Group
2021-11-09 | The ability to co-opt everyday surfaces for touch interactivity has been an area of HCI research for several decades. Ideally, a sensor operating in a device (such as a smart speaker) would be able to enable a whole room with touch sensing capabilities. Such a system could allow for software-defined light switches on walls, gestural input on countertops, and in general, more digitally flexible environments. While advances in depth sensors and computer vision have led to step-function improvements in the past, progress has slowed in recent years. We surveyed the literature and found that the very best ad hoc touch sensing systems are able to operate at ranges up to around 1.5 m. This limited range means that sensors must be carefully positioned in an environment to enable specific surfaces for interaction. In this research, we set ourselves the goal of doubling the sensing range of the current state-of-the-art system. To achieve this goal, we leveraged an interesting finger "denting" phenomenon and adopted a marginal gains philosophy when developing our full-stack. When put together, these many small improvements compound and yield a significant stride in performance. At 3 m range, our system offers a spatial accuracy of 0.98 cm with a touch segmentation accuracy of 96.1%, in line with prior systems operating at less than half the range. While more work remains to be done to achieve true room-scale ubiquity, we believe our system constitutes a useful advance over prior work.
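For readers unfamiliar with depth-camera touch sensing, the core test is whether a fingertip sits within a few millimeters of a per-pixel model of the empty surface. The sketch below shows only that basic segmentation step with assumed thresholds; the paper's contribution is the stack of refinements (including the finger "denting" cue) that make this workable at 3 m.

```python
import numpy as np

def background_model(empty_frames):
    """Per-pixel median depth over frames captured with no user present (mm)."""
    return np.median(np.stack(empty_frames, axis=0), axis=0)

def touch_mask(depth_frame, background_depth, hover_mm=10.0, noise_mm=3.0):
    """Mark pixels where something sits just above the surface, i.e., a likely touch.
    depth_frame, background_depth : (H, W) depth images in millimeters.
    Thresholds are illustrative assumptions, not the paper's tuned values."""
    height_above = background_depth - depth_frame   # positive = closer to the camera
    return (height_above > noise_mm) & (height_above < hover_mm)
```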
Shen, V., Spann, K. and Harrison, C. 2021. FarOut: Extending the Range of ad hoc Touch Sensing with Depth Cameras. To appear in Proceedings of the 9th ACM Symposium on Spatial User Interaction. (November 9 - 10, 2021). SUI '21. ACM, New York, NY.
Retargeted Self-Haptics for Increased Immersion in VR without Hand Instrumentation
Future Interfaces Group
2021-10-25 | Future Interfaces Group: figlab.com Cathy Fang: cathy-fang.com Chris Harrison: chrisharrison.net
Today’s consumer virtual reality (VR) systems offer immersive graphics and audio, but haptic feedback is rudimentary – delivered through controllers with vibration feedback – or non-existent (i.e., the hands operating freely in the air). In this paper, we explore an alternative, highly mobile and controller-free approach to haptics, where VR applications utilize the user’s own body to provide physical feedback. To achieve this, we warp (retarget) the locations of a user’s hands such that one hand serves as a physical surface or prop for the other hand. For example, a hand holding a virtual nail can serve as a physical backstop for a hand that is virtually hammering, providing a sense of impact in an air-borne and uninstrumented experience. To illustrate this rich design space, we implemented twelve interactive demos across three haptic categories. We conclude with a user study from which we draw design recommendations.
Future Interfaces Group, Carnegie Mellon University
Fang, C. and Harrison, C. 2021. Retargeted Self-Haptics for Increased Immersion in VR without Hand Instrumentation. In Proceedings of the 34th Annual ACM Symposium on User Interface Software and Technology (October 10 - 13, 2021). UIST '21. ACM, New York, NY.
3D Hand Pose Estimation on Conventional Capacitive Touchscreens
Future Interfaces Group
2021-10-25 | Contemporary mobile devices with touchscreens capture the X/Y position of finger tips on the screen and pass these coordinates to applications as though the input were points in space. Of course, human hands are much more sophisticated, able to form rich 3D poses capable of far more complex interactions than poking at a screen. In this paper, we describe how conventional capacitive touchscreens can be used to estimate 3D hand pose, enabling richer interaction opportunities. Importantly, our software-only approach requires no special or new sensors, either internal or external. As a proof of concept, we use an off-the-shelf Samsung tablet flashed with a custom kernel. After describing our software pipeline, we report findings from our user study and conclude with several example applications we built to illustrate the potential of our approach.
Choi, F., Mayer, S. and Harrison, C. 2021. 3D Hand Pose Estimation on Conventional Capacitive Touchscreens. In Proceedings of the 23rd International Conference on Human-Computer Interaction with Mobile Devices and Services (October 5 - 8, 2021). MobileHCI ’21. ACM, New York, NY. 1-13.
www.figlab.com
EyeMU Interactions: Gaze + IMU Gestures on Mobile Devices
Future Interfaces Group
2021-10-20 | As smartphone screens have grown in size, single-handed use has become more cumbersome. Interactive targets that are easily seen can be hard to reach, particularly notifications and upper menu bar items. Users must either adjust their grip to reach distant targets, or use their other hand. In this research, we show how gaze estimation using a phone’s user-facing camera can be paired with IMU-tracked motion gestures to enable a new, intuitive, and rapid interaction technique on handheld phones. We describe our proof-of-concept implementation and gesture set, built on state-of-the-art techniques and capable of self-contained execution on a smartphone. In our user study, we found a mean Euclidean gaze error of 1.7 cm and a seven-class motion gesture classification accuracy of 97.3%.
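The interaction loop pairs two independent estimates: where the user is looking and what motion gesture the IMU just saw. A small, hypothetical dispatcher like the one below captures that pairing; the target structure, action names, and the selection radius are illustrative assumptions, not the EyeMU implementation.

```python
def dispatch(gaze_xy, gesture, targets, radius_cm=2.5):
    """Pair a gaze estimate with a motion gesture (illustrative glue code only).
    gaze_xy : (x, y) estimated on-screen gaze point, in cm from a screen corner
    gesture : label from the IMU gesture classifier, e.g. 'flick_left', 'pull', 'none'
    targets : list of dicts like {'name': 'notification', 'xy': (1.0, 2.0),
              'actions': {'flick_left': 'dismiss', 'pull': 'open'}}"""
    if gesture == "none":
        return None
    # Pick the on-screen target nearest the gaze point, within a tolerance that
    # absorbs the ~1.7 cm mean gaze error reported above (radius is an assumption).
    def dist(t):
        return ((t["xy"][0] - gaze_xy[0]) ** 2 + (t["xy"][1] - gaze_xy[1]) ** 2) ** 0.5
    candidates = [t for t in targets if dist(t) <= radius_cm]
    if not candidates:
        return None
    target = min(candidates, key=dist)
    return target["name"], target["actions"].get(gesture)
```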
Citation: Andy Kong, Karan Ahuja, Mayank Goel, and Chris Harrison. 2021. EyeMU Interactions: Gaze + IMU Gestures on Mobile Devices. In Proceedings of the 2021 International Conference on Multimodal Interaction (ICMI '21). Association for Computing Machinery, New York, NY, USA, 577–585. DOI:doi.org/10.1145/3462244.3479938
Vibrosight++: City-Scale Sensing Using Existing Retroreflective Signs and Markers
Future Interfaces Group
2021-05-12 | Today's smart cities use thousands of physical sensors distributed across the urban landscape to support decision making in areas such as infrastructure monitoring, public health, and resource management. These weather-hardened devices require power and connectivity, and often cost thousands of dollars just to install, let alone maintain. In this paper, we show how long-range laser vibrometry can be used for low-cost, city-scale sensing. Although typically limited to just a few meters of sensing range, the use of retroreflective markers can boost this to 1km or more. Fortuitously, cities already make extensive use of retroreflective materials for street signs, construction barriers, road studs, license plates, and many other markings. We describe how our prototype system can co-opt these existing markers at very long ranges and use them as unpowered accelerometers for use in a wide variety of sensing applications.
Yang Zhang, Sven Mayer, Jesse T. Gonzalez, and Chris Harrison. 2021. Vibrosight++: City-Scale Sensing Using Existing Retroreflective Signs and Markers. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (CHI '21). Association for Computing Machinery, New York, NY, USA, Article 410, 1–14. DOI:doi.org/10.1145/3411764.3445054
Super-Resolution Capacitive Touchscreens
Future Interfaces Group
2021-05-10 | Capacitive touchscreens are near-ubiquitous in today’s touch-driven devices, such as smartphones and tablets. By using rows and columns of electrodes, specialized touch controllers are able to capture a 2D image of capacitance at the surface of a screen. For over a decade, capacitive "pixels" have been around 4 millimeters in size – a surprisingly low resolution that precludes a wide range of interesting applications. In this paper, we show how super-resolution techniques, long used in fields such as biology and astronomy, can be applied to capacitive touchscreen data. By integrating data from many frames, our software-only process is able to resolve geometric details finer than the original sensor resolution. This opens the door to passive tangibles with higher-density fiducials and also recognition of every-day metal objects, such as keys and coins. We built several applications to illustrate the potential of our approach and report the findings of a multipart evaluation.
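The phrase "integrating data from many frames" can be illustrated with classic shift-and-add super-resolution: each low-resolution capacitance frame is placed into a finer grid at its motion-compensated offset, and the accumulated samples are averaged. The sketch below assumes the inter-frame offsets are already known (e.g., from tracking the object) and simplifies the paper's actual pipeline.

```python
import numpy as np

def shift_and_add(frames, offsets, scale=4):
    """Fuse many low-res capacitance frames into one finer image (classic
    shift-and-add super-resolution; a simplification, not the paper's method).
    frames  : list of (H, W) capacitance images (~4 mm pixels)
    offsets : list of (dy, dx) sub-pixel motions between frames, in low-res pixels
    scale   : super-resolution upsampling factor"""
    H, W = frames[0].shape
    acc = np.zeros((H * scale, W * scale))
    hits = np.zeros_like(acc)
    ys, xs = np.mgrid[0:H, 0:W]
    for frame, (dy, dx) in zip(frames, offsets):
        # Place each low-res sample at its motion-compensated high-res location.
        hy = np.clip(np.round((ys + dy) * scale).astype(int), 0, H * scale - 1)
        hx = np.clip(np.round((xs + dx) * scale).astype(int), 0, W * scale - 1)
        np.add.at(acc, (hy, hx), frame)
        np.add.at(hits, (hy, hx), 1)
    return acc / np.maximum(hits, 1)  # average where samples landed
```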
Mayer, S., Xu, X. and Harrison, C. 2021. Super-Resolution Capacitive Touchscreens. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (CHI '21). Association for Computing Machinery, New York, NY, USA.
Authors: Sven Mayer (CMU), Xiangyu Xu (CMU), Chris Harrison (CMU)
Classroom Digital Twins with Instrumentation-Free Gaze Tracking
Future Interfaces Group
2021-05-09 | Classroom sensing is an important and active area of research with great potential to improve instruction. Complementing professional observers – the current best practice – automated pedagogical professional development systems can attend every class and capture fine-grained details of all occupants. One particularly valuable facet to capture is class gaze behavior. For students, certain gaze patterns have been shown to correlate with interest in the material, while for instructors, student-centered gaze patterns have been shown to increase approachability and immediacy. Unfortunately, prior classroom gaze-sensing systems have limited accuracy and often require specialized external or worn sensors. In this work, we developed a new computer-vision-driven system that powers a 3D “digital twin” of the classroom and enables whole-class, 6DOF head gaze vector estimation without instrumenting any of the occupants. We describe our open source implementation, and results from both controlled studies and real-world classroom deployments.
Citation: Karan Ahuja, Deval Shah, Sujeath Pareddy, Franceska Xhakaj, Amy Ogan, Yuvraj Agarwal, and Chris Harrison. 2021. Classroom Digital Twins with Instrumentation-Free Gaze Tracking. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (CHI '21). Association for Computing Machinery, New York, NY, USA, Article 484, 1–9. DOI:doi.org/10.1145/3411764.3445711
Pose-on-the-Go: Approximating User Pose with Smartphone Sensor Fusion and Inverse Kinematics
Future Interfaces Group
2021-05-09 | We present Pose-on-the-Go, a full-body pose estimation system that uses sensors already found in today’s smartphones. This stands in contrast to prior systems, which require worn or external sensors. We achieve this result via extensive sensor fusion, leveraging a phone’s front and rear cameras, the user-facing depth camera, touchscreen, and IMU. Even still, we are missing data about a user’s body (e.g., angle of the elbow joint), and so we use inverse kinematics to estimate and animate probable body poses. We provide a detailed evaluation of our system, benchmarking it against a professional-grade Vicon tracking system. We conclude with a series of demonstration applications that underscore the unique potential of our approach, which could be enabled on many modern smartphones with a simple software update.
Citation: Karan Ahuja, Sven Mayer, Mayank Goel, and Chris Harrison. 2021. Pose-on-the-Go: Approximating User Pose with Smartphone Sensor Fusion and Inverse Kinematics. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (CHI '21). Association for Computing Machinery, New York, NY, USA, Article 9, 1–12. DOI:doi.org/10.1145/3411764.3445582
Vid2Doppler: Synthesizing Doppler Radar Data from Videos for Privacy-Preserving Activity Recognition
Future Interfaces Group
2021-05-02 | Millimeter wave (mmWave) Doppler radar is a new and promising sensing approach for human activity recognition, offering signal richness approaching that of microphones and cameras, but without many of the privacy-invading downsides. However, unlike audio and computer vision approaches that can draw from huge libraries of videos for training deep learning models, Doppler radar has no existing large datasets, holding back this otherwise promising sensing modality. In response, we set out to create a software pipeline that converts videos of human activities into realistic, synthetic Doppler radar data. We show how this cross-domain translation can be successful through a series of experimental results. Overall, we believe our approach is an important stepping stone towards significantly reducing the burden of training such human sensing systems, and could help bootstrap uses in human-computer interaction.
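At its core, the cross-domain translation amounts to asking how fast tracked body points move toward or away from a virtual radar, then binning those radial velocities over time. The sketch below does exactly that on sparse joint tracks; it is a simplification under stated assumptions (the paper operates on dense mesh vertices and adds radar-specific signal modeling), and all names are illustrative.

```python
import numpy as np

def synthetic_doppler(joint_tracks, sensor_pos, fps=30, v_bins=np.linspace(-2, 2, 33)):
    """Turn tracked 3D body points from video into a Doppler-like velocity profile.
    joint_tracks : (T, J, 3) 3D positions of J body points over T video frames (meters)
    sensor_pos   : (3,) location of the virtual radar
    Returns (T-1, len(v_bins)-1): per-frame histogram of radial velocities (m/s).
    Rough sketch of the idea only, not the published pipeline."""
    rel = joint_tracks - np.asarray(sensor_pos)          # vectors from sensor to points
    dist = np.linalg.norm(rel, axis=-1)                  # (T, J) ranges
    radial_vel = np.diff(dist, axis=0) * fps             # (T-1, J) positive = moving away
    return np.stack([np.histogram(v, bins=v_bins)[0] for v in radial_vel])
```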
Citation: Karan Ahuja, Yue Jiang, Mayank Goel, and Chris Harrison. 2021. Vid2Doppler: Synthesizing Doppler Radar Data from Videos for Training Privacy-Preserving Activity Recognition. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (CHI '21). Association for Computing Machinery, New York, NY, USA, Article 292, 1–10. DOI:doi.org/10.1145/3411764.3445138
BodySLAM: Opportunistic User Digitization in Multi-User AR/VR Experiences
Future Interfaces Group
2020-10-31 | Today’s augmented and virtual reality (AR/VR) systems do not provide body, hand or mouth tracking without special worn sensors or external infrastructure. Simultaneously, AR/VR systems are increasingly being used in co-located, multi-user experiences, opening the possibility for opportunistic capture of other users. This is the core idea behind BodySLAM, which uses disparate camera views from users to digitize the body, hands and mouth of other people, and then relay that information back to the respective users. If a user is seen by two or more people, 3D pose can be estimated via stereo reconstruction. Our system also maps the arrangement of users in real-world coordinates. Our approach requires no additional hardware or sensors beyond what is already found in commercial AR/VR devices, such as Microsoft HoloLens or Oculus Quest.
Karan Ahuja, Mayank Goel, and Chris Harrison. 2020. BodySLAM: Opportunistic User Digitization in Multi-User AR/VR Experiences. In Symposium on Spatial User Interaction (SUI '20). Association for Computing Machinery, New York, NY, USA, Article 16, 1–8. DOI:doi.org/10.1145/3385959.3418452
Direction-of-Voice (DoV) Estimation for Intuitive Speech Interaction with Smart Devices Ecosystems
Future Interfaces Group
2020-10-24 | Future homes and offices will feature increasingly dense ecosystems of IoT devices, such as smart lighting, speakers, and domestic appliances. Voice input is a natural candidate for interacting with out-of-reach and often small devices that lack full-sized physical interfaces. However, at present, voice agents generally require wake-words and device names in order to specify the target of a spoken command (e.g., “Hey Alexa, kitchen lights to full brightness”). In this research, we explore whether speech alone can be used as a directional communication channel, in much the same way visual gaze specifies a focus. Instead of a device’s microphones simply receiving and processing spoken commands, we suggest they also infer the Direction of Voice (DoV). Our approach innately enables voice commands with addressability (i.e., devices know if a command was directed at them) in a natural and rapid manner. We quantify the accuracy of our implementation across users, rooms, spoken phrases, and other key factors that affect performance and usability. Taken together, we believe our DoV approach demonstrates feasibility and the promise of making distributed voice interactions much more intuitive and fluid.
Karan Ahuja, Andy Kong, Mayank Goel, and Chris Harrison. 2020. Direction-of-Voice (DoV) Estimation for Intuitive Speech Interaction with Smart Devices Ecosystems. In Proceedings of the 33rd Annual ACM Symposium on User Interface Software and Technology (UIST '20). Association for Computing Machinery, New York, NY, USA, 1121–1131. DOI:doi.org/10.1145/3379337.3415588
Listen Learner: Automatic Class Discovery & One-Shot Interactions for Acoustic Activity Recognition
Future Interfaces Group
2020-04-22 | Acoustic activity recognition has emerged as a foundational element for imbuing devices with context-driven capabilities, enabling richer, more assistive, and more accommodating computational experiences. Traditional approaches rely either on custom models trained in situ, or general models pre-trained on preexisting data, with each approach having accuracy and user burden implications. We present Listen Learner, a technique for activity recognition that gradually learns events specific to a deployed environment while minimizing user burden. Specifically, we built an end-to-end system for self-supervised learning of events labelled through one-shot interaction. Our results show that our system can accurately and automatically learn acoustic events across environments (e.g., 97% precision, 87% recall), while adhering to users’ preferences for non-intrusive interactive behavior.
Paper Citation: Wu, J., Harrison, C., Bigham, J. and Laput, G. 2020. Automated Class Discovery and One-Shot Interactions for Acoustic Activity Recognition. In Proceedings of the 38th Annual SIGCHI Conference on Human Factors in Computing Systems. CHI '20. ACM, New York, NY.
Wireality: Enabling Complex Tangible Geometries in Virtual Reality with Worn Multi-String Haptics
Future Interfaces Group
2020-04-21 | Wireality is a worn VR haptic system that allows for individual joints on the hands to be accurately arrested in 3D space through the use of retractable wires that can be locked. This allows for convincing tangible interactions with large and complex geometries, such as walls, furniture and railings. Our approach is lightweight (11g worn on the hands), low-cost (~$35) and low-power (0.024mWh per actuation).
Citation: Fang, C., Zhang, Y., Dworman, M. and Harrison, C. 2020. Wireality: Enabling Complex Tangible Geometries in Virtual Reality with Worn Multi-String Haptics. To appear in Proceedings of the 38th Annual SIGCHI Conference on Human Factors in Computing Systems (Honolulu, Hawaii, April 25 - 30, 2020). CHI '20. ACM, New York, NY.
Enhancing Mobile Voice Assistants with WorldGaze
Future Interfaces Group
2020-04-14 | Contemporary voice assistants require that objects of interest be specified in spoken commands. Of course, users are often looking directly at the object or place of interest – fine-grained, contextual information that is currently unused. We present WorldGaze, a software-only method for smartphones that provides the real-world gaze location of a user that voice agents can utilize for rapid, natural, and precise interactions. We achieve this by simultaneously opening the front and rear cameras of a smartphone. The front-facing camera is used to track the head in 3D, including estimating its direction vector. As the geometry of the front and back cameras are fixed and known, we can raycast the head vector into the 3D world scene as captured by the rear-facing camera. This allows the user to intuitively define an object or region of interest using their head gaze. We started our investigations with a qualitative exploration of competing methods, before developing a functional, real-time implementation. We conclude with an evaluation that shows WorldGaze can be quick and accurate, opening new multimodal gaze+voice interactions for mobile voice agents.
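The raycasting step described above is, geometrically, a rigid transform of the head gaze ray from the front camera's coordinate frame into the rear camera's frame, followed by a projection into the rear image. The sketch below shows that geometry with placeholder extrinsics and intrinsics; it is an illustration of the principle, not the WorldGaze implementation.

```python
import numpy as np

def head_ray_in_rear_camera(head_pos_f, head_dir_f, R_rear_from_front, t_rear_from_front):
    """Re-express the head gaze ray, estimated by the front camera, in the rear
    camera's frame using the fixed geometry between the two cameras.
    head_pos_f, head_dir_f : (3,) head position and unit gaze direction, front-cam frame
    R_..., t_...           : 3x3 rotation and (3,) translation from front to rear frame
    (Extrinsics here are placeholders; on a real phone they are factory-known.)"""
    origin = R_rear_from_front @ head_pos_f + t_rear_from_front
    direction = R_rear_from_front @ head_dir_f
    return origin, direction / np.linalg.norm(direction)

def project_hit(origin, direction, K_rear, depth=2.0):
    """Project the point `depth` meters along the ray into the rear image, yielding
    the pixel a voice agent could treat as the user's real-world gaze target."""
    p = origin + depth * direction
    uvw = K_rear @ p                  # K_rear: 3x3 rear-camera intrinsics
    return uvw[:2] / uvw[2]
```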
Mayer, S., Laput, G. and Harrison, C. 2020. Enhancing Mobile Voice Assistants with WorldGaze. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (CHI '20). ACM, New York, NY, USA.
LightAnchors: Appropriating Point Lights for Spatially-Anchored Augmented Reality Interfaces
Future Interfaces Group
2019-12-09 | Augmented reality requires precise and instant overlay of digital information onto everyday objects. We present our work on LightAnchors, a new method for displaying spatially-anchored data. We take advantage of pervasive point lights – such as LEDs and light bulbs – for both in-view anchoring and data transmission. These lights are blinked at high speed to encode data. We built a proof-of-concept application that runs on iOS without any hardware or software modifications. We also ran a study to characterize the performance of LightAnchors and built eleven example demos to highlight the potential of our approach.
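Decoding a blinking point light reduces to sampling its brightness every frame, thresholding into bits, and finding a known preamble to align the payload. The sketch below illustrates that idea with an invented preamble and payload length; the actual LightAnchors modulation and error handling differ in detail.

```python
import numpy as np

def decode_blinks(brightness, preamble=(1, 0, 1, 0, 1, 1), payload_bits=8):
    """Decode a small payload from the per-frame brightness of a tracked point light,
    assuming one bit per camera frame. Preamble and payload length are illustrative
    assumptions, not the published LightAnchors scheme."""
    bits = (np.asarray(brightness) > np.median(brightness)).astype(int)
    pre = np.asarray(preamble)
    for i in range(len(bits) - len(pre) - payload_bits + 1):
        if np.array_equal(bits[i:i + len(pre)], pre):            # find frame alignment
            payload = bits[i + len(pre): i + len(pre) + payload_bits]
            return int("".join(map(str, payload)), 2)            # bits -> integer value
    return None   # preamble not found in this window
```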
Karan Ahuja, Sujeath Pareddy, Robert Xiao, Mayank Goel, and Chris Harrison. 2019. LightAnchors: Appropriating Point Lights for Spatially-Anchored Augmented Reality Interfaces. In Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology (UIST '19). ACM, New York, NY, USA, 189-196. DOI: doi.org/10.1145/3332165.3347884
MeCap: Whole-Body Digitization for Low-Cost VR/AR Headsets
Future Interfaces Group
2019-10-29 | Low-cost, smartphone-powered VR/AR headsets are becoming more popular. These basic devices – little more than plastic or cardboard shells – lack advanced features, such as controllers for the hands, limiting their interactive capability. Moreover, even high-end consumer headsets lack the ability to track the body and face. For this reason, interactive experiences like social VR are underdeveloped. We introduce MeCap, which enables commodity VR headsets to be augmented with powerful motion capture (“MoCap”) and user-sensing capabilities at very low cost (under $5). Using only a pair of hemi-spherical mirrors and the existing rear-facing camera of a smartphone, MeCap provides real-time estimates of a wearer’s 3D body pose, hand pose, facial expression, physical appearance and surrounding environment – capabilities which are either absent in contemporary VR/AR systems or which require specialized hardware and controllers. We evaluate the accuracy of each of our tracking features, the results of which show imminent feasibility.
Ahuja, K., Harrison, C., Goel, M. and Xiao, R. 2019. MeCap: Whole-Body Digitization for Low-Cost VR/AR Headsets. In Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology (New Orleans, USA, October 20 - 23, 2019). UIST '19. ACM, New York, NY.
Sozu: Self-Powered Radio Tags for Building-Scale Activity Sensing
Future Interfaces Group
2019-10-19 | Sozu is a low-cost sensing system that can detect a wide range of events wirelessly, through walls and without line of sight, at whole-building scale. Instead of using batteries, Sozu tags convert energy from activities that they sense into RF broadcasts, acting like miniature self-powered radio stations. For more information, please see:
Project paper: https://yangzhang.dev/research/Sozu/Sozu.pdf Open source repo: github.com/figlab/sozu
Citation: Zhang, Y., Iravantchi, Y., Jin, H., Kumar, S. and Harrison, C. 2019. Sozu: Self-Powered Radio Tags for Building-Scale Activity Sensing. In Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology (New Orleans, USA, October 20 - 23, 2019). UIST '19. ACM, New York, NY.
ActiTouch: Robust Touch Detection for On-Skin AR/VR Interactions
Future Interfaces Group
2019-10-19 | ActiTouch allows users to use their hands and arms as readily available touch input surfaces for AR and VR, opening a new interaction opportunity beyond conventional controllers and in-air gestures. We invented a powerful sensor fusion method which combines an electrical method with computer vision. This enables precise on-skin touch segmentation, which uniquely enables many fine-grained touch interactions such as scrolling and swiping. For more information, please refer to:
Citation: Zhang, Y., Kienzle, W., Ma, Y., Ng, S., Benko, H. and Harrison, C. 2019. ActiTouch: Precise Touch Segmentation for On-Skin VR/AR Interfaces. In Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology (New Orleans, USA, October 20 - 23, 2019). UIST '19. ACM, New York, NY.
Interferi: Gesture Sensing using On-Body Acoustic Interferometry
Future Interfaces Group
2019-05-06 | Interferi is an on-body gesture sensing technique using acoustic interferometry. We use ultrasonic transducers resting on the skin to create acoustic interference patterns inside the wearer’s body, which interact with anatomical features in complex, yet characteristic ways. We focus on two areas of the body with great expressive power: the hands and face.
Published at ACM CHI 2019.
Iravantchi, Y., Zhang, Y., Bernitsas, E., Goel, M. and Harrison, C. 2019. Interferi: Gesture Sensing using On-Body Acoustic Interferometry. To appear in Proceedings of the 37th Annual SIGCHI Conference on Human Factors in Computing Systems (Glasgow, UK, May 4 - 9, 2019). CHI '19. ACM, New York, NY.
BeamBand: Hand Gesture Sensing with Ultrasonic Beamforming
Future Interfaces Group
2019-05-06 | BeamBand is a wrist-worn system that uses ultrasonic beamforming for hand gesture sensing. Using an array of small transducers, arranged on the wrist, we can ensemble acoustic wavefronts to project acoustic energy at specified angles and focal lengths. This allows us to interrogate the surface geometry of the hand with inaudible sound in a raster-scan-like manner, from multiple viewpoints. In our paper, we describe our software and hardware, and future avenues for integration into devices such as smartwatches and VR controllers.
Published at ACM CHI 2019.
Iravantchi, Y., Goel, M. and Harrison, C. 2019. BeamBand: Hand Gesture Sensing with Ultrasonic Beamforming. To appear in Proceedings of the 37th Annual SIGCHI Conference on Human Factors in Computing Systems (Glasgow, UK, May 4 - 9, 2019). CHI '19. ACM, New York, NY.
Sensing Fine-Grained Hand Activity with Smartwatches
Future Interfaces Group
2019-05-06 | Details: http://www.gierad.com/projects/handactivities
As philosopher Immanuel Kant argued, "the hand is the visible part of the brain." However, most prior work has focused on detecting whole-body activities, such as walking, running and bicycling. In this research, we explore the feasibility of sensing hand activities from commodity smartwatches, which are the most practical vehicle for achieving this vision. We show that our deep learning classification stack achieves 95.2% accuracy across 25 hand activities. Our work highlights an underutilized, yet highly complementary contextual channel that could unlock a wide range of promising applications.
Published at ACM CHI 2019.
Laput, G. and Harrison, C. 2019. Sensing Fine-Grained Hand Activity with Smartwatches. In Proceedings of the 37th Annual SIGCHI Conference on Human Factors in Computing Systems (Glasgow, UK, May 4 - 9, 2019). CHI '19. ACM, New York, NY. Paper 338, 13 pages.
SurfaceSight: A New Spin on Touch, User, and Object Sensing for IoT Experiences
Future Interfaces Group
2019-05-06 | Project details: http://www.gierad.com/projects/surfacesight
SurfaceSight is an approach that enriches IoT experiences with rich touch and object sensing, offering a complementary input channel and increased contextual awareness. For sensing, we incorporate LIDAR into the base of IoT devices, providing an expansive, ad hoc plane of sensing just above the surface on which devices rest. We can recognize and track a wide array of objects, including finger input and hand gestures. We can also track people and estimate which way they are facing. We evaluate the accuracy of these new capabilities and illustrate how they can be used to power novel and contextually-aware interactive experiences.
Published at ACM CHI 2019.
Laput, G. and Harrison, C. 2019. SurfaceSight: A New Spin on Touch, User, and Object Sensing for IoT Experiences. In Proceedings of the 37th Annual SIGCHI Conference on Human Factors in Computing Systems (Glasgow, UK, May 4 - 9, 2019). CHI '19. ACM, New York, NY. Paper 329, 12 pages.
Vibrosight: Long-Range Vibrometry for Smart Environment Sensing
Future Interfaces Group
2018-10-15 | We present Vibrosight, a new approach to sense activities across entire rooms using long-range laser vibrometry. Our sensing principle was inspired by the spy technology used by the KGB in the 1950s. Unlike a microphone, our approach can sense physical vibrations at one specific point, making it robust to interference from other activities and noisy environments.
Ubicoustics: Plug-and-Play Acoustic Activity Recognition
Future Interfaces Group
2018-10-15 | Learn more: http://www.gierad.com/projects/ubicoustics
Despite sound being a rich source of information, computing devices with microphones do not leverage audio to glean useful insights about their physical and social context. In this project, we present a novel, real-time, sound-based activity recognition system. We start by taking an existing, state-of-the-art sound labeling model, which we then tune to classes of interest by drawing data from professional sound effect libraries traditionally used in the entertainment industry. These well-labeled and high-quality sounds are the perfect atomic unit for data augmentation, including amplitude, reverb, and mixing, allowing us to exponentially grow our tuning data in realistic ways. We quantify the performance of our approach across a range of environments and device categories and show that microphone-equipped computing devices already have the requisite capability to unlock real-time activity recognition comparable to human accuracy.
Wall++: Room-Scale Interactive and Context-Aware Sensing
Future Interfaces Group
2018-04-23 | More information: http://yang-zhang.me/research/Wall/Wall.html
We present Wall++, a low-cost sensing approach that allows walls to become a smart infrastructure. Instead of merely separating spaces, walls can now enhance rooms with sensing and interactivity. Our wall treatment and sensing hardware can track users’ touch and gestures, as well as estimate body pose if they are close. By capturing airborne electromagnetic noise, we can also detect what appliances are active and where they are located.
Pulp Nonfiction: Low-Cost Touch Tracking for Paper
Future Interfaces Group
2018-04-23 | More information: http://yang-zhang.me/research/Pulp/Pulp.html
In this work, we present a new technical approach for bringing the digital and paper worlds closer together, by enabling paper to track finger input and also drawn input with writing implements. Importantly, for paper to still be considered paper, our method had to be very low cost. This necessitated research into materials, fabrication methods and sensing techniques. We describe the outcome of our investigations and show that our method can be sufficiently low-cost and accurate to enable new interactive opportunities with this pervasive and venerable material.
UIST 2017 Student Innovation Contest: Robotic Arm
Future Interfaces Group
2017-07-06 | In this UIST Student Innovation Contest (SIC), we explore how novel input, interaction, actuation, and output techniques can augment experiences that “reach out” into the world! In partnership with Arduino.org, we are seeking students to help us push the boundaries of input and output techniques on an Arduino Braccio, a desktop-sized and fully-customizable, multi-DOF robotic arm! Join the UIST SIC and turn your ideas into reality! Win fabulous prizes!
Apply here: http://goo.gl/JEqXdW
Synthetic Sensors (Gierad Laput - ACM CHI 2017)
Future Interfaces Group
2017-06-15 | Gierad Laput's talk at the ACM CHI 2017 Conference (May 2017)
Deus EM Machina (Robert Xiao - ACM CHI 2017)
Future Interfaces Group
2017-05-12 | Video of Robert Xiao's Deus EM Machina talk at the ACM CHI 2017 conference in Denver, USA. See youtu.be/eInfzdZ-9fE for the accompanying project video.