Lumipen: Projection Mapping on a Moving Object
Ishikawa Group Laboratory, 2013-06-19 | Projection mapping is a highly anticipated technology, which has so far been used for mere optical effects or interactive AR applications. Until now, however, mainly static objects, such as things placed on tables, walls, floors, or desk surfaces, have served as projection targets; dynamic scenes and high-speed objects have not been dealt with. Even if this were attempted with a traditional projection mapping system, delay in the system would cause a misalignment between the target and the projection.
Therefore, we propose an unprecedented projection mapping technology aimed at moving targets, achieved by means of a high-speed vision system capable of capturing a thousand images per second and a high-speed optical device called the Saccade Mirror. This device was originally designed to keep the camera gaze fixed on a dynamic target (cf. 1 ms Auto Pan-tilt, URL: http://www.youtube.com/watch?v=9Q_lcFZOgVo). In our projection mapping system, the projector and camera are coaxially aligned through the Saccade Mirror, which provides a misalignment-free projection that was previously considered difficult. This technology is named "Lumipen" (URL: http://ishikawa-vision.org/mvf/Lumipen/index-e.html) after an imaginary pen that uses illumination instead of ink, with which arbitrary patterns can be depicted.
We expect Lumipen to have great potential for various interactive applications. "Visual and Tactile Cues for High-Speed Interaction", launched previously, is primarily based on Lumipen technology. (URL: http://www.youtube.com/watch?v=DWeAXUVrqjE)

Fully Automated Beads Art Assembly based on Dynamic Compensation Approach
Ishikawa Group Laboratory, 2023-09-27 | Towards the goal of realizing next-generation manufacturing, we demonstrate made-to-order bead art as a simplified smart-manufacturing scenario. Specifically, by mounting a 3-DoF position compensation module, controlled with 1,000 Hz visual feedback, on the end-effector of an industrial robot arm, the proposed system accurately picks up moving beads from a rotating stage resembling a belt conveyor and creates bead art according to the designs of online orders. The robot achieves good positioning accuracy even under large uncertainties in the moving targets thanks to the dynamic compensation framework: the high-bandwidth dynamics of the targets are absorbed locally by the compensation module using high-speed visual feedback control, while their low-bandwidth dynamics are handled coarsely and globally by the robot arm. We are therefore able to achieve accurate, smart robotic assembly in an unstructured work environment without human intervention.
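The coarse global loop plus fine local loop described above can be sketched in a few lines of simulation. This is a generic illustration of the dynamic compensation idea, not the laboratory's controller: the update rates, gains, and target trajectory are all invented for the example.

```python
# Sketch of coarse-to-fine dynamic compensation: a slow robot-arm loop
# tracks the large, low-bandwidth motion, while a fast compensation module
# (1 kHz here, standing in for high-speed visual feedback) absorbs the
# residual error. All numbers are illustrative assumptions.
import math

def track(duration_s=1.0, arm_hz=50, comp_hz=1000):
    arm = comp = 0.0
    dt = 1.0 / comp_hz
    steps = int(duration_s / dt)
    errors = []
    for k in range(steps):
        t = k * dt
        # target: slow drift plus fast vibration the arm alone cannot follow
        target = 0.5 * t + 0.005 * math.sin(2 * math.pi * 30 * t)
        # coarse loop: the arm updates only at its own, slower rate
        if k % (comp_hz // arm_hz) == 0:
            arm += 0.5 * (target - arm - comp)
        # fine loop: the compensator corrects at every visual-feedback sample
        comp += 0.8 * (target - arm - comp)
        errors.append(abs(target - (arm + comp)))
    return max(errors[steps // 2:])  # worst steady-state error

print(track())
```

Running the sketch shows the steady-state error staying small even though the arm alone could never follow the 30 Hz vibration, which is the essence of the two-bandwidth split.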
ishikawa-vision.org/fusion/BeadsArtAssembly/index-e.html

A sealant dispensing robot for applications with moving targets
Ishikawa Group Laboratory, 2023-09-20 | Towards the goal of realizing next-generation smart manufacturing, we developed a sealant dispensing robot that deals with moving targets about which it has no prior knowledge (shape, placed pose and position, moving speed, etc.), based on our previously proposed dynamic compensation framework. To realize accurate sealant dispensing for workpieces of unknown shape randomly placed on a moving conveyor, the robot was designed with a coarse-to-fine strategy: an arm part (2-DOF) provides long-term adaptation to uncertainties in a global manner (global planning), and a hand part (2-DOF) provides real-time adaptation in a local manner (real-time error absorption). Specifically, the hand part is driven by 1,000 fps visual feedback control. The developed robot can work either in an auto mode without any human intervention, or in a human-robot mode to collaborate with a human. http://ishikawa-vision.org/fusion/DispensRobot/index-e.html

Accurate and Robust Inter-vehicle Distance Estimation with Stereo High-speed Vision
Ishikawa Group Laboratory, 2023-07-05 | We propose an accurate and robust inter-vehicle distance estimation method using high-speed stereo vision. The framework involves two mechanisms. The first performs accurate and stable tracking with an algorithm optimized for stereo high-speed vision, even under intense vibration. The second estimates inter-vehicle distance via highly accurate scale estimation and aggregates multiple scale-based distance estimates so that the result remains accurate and robust even when the scale changes rapidly (e.g., during emergency braking).
We demonstrated the proposed system in three different scenes. In the first, we followed a truck on the highway, the most common situation in truck platooning. The bounding box of the truck and its distance from the ego vehicle are shown, and the estimated relative velocity and acceleration are displayed in the speedometers below. Although the front truck was slightly tilted when approaching a curve, the bounding box remains quite stable and the distance is estimated accurately. In the second scene, a parked truck more than 100 m away was approached to about 10 m by sudden acceleration and braking. The distance, velocity, and acceleration are estimated stably, without losing track, even though the relative velocity and acceleration are quite high. In the last scene, both the front vehicle and the ego vehicle were subject to intense vibration, and the slope changed in the middle of the scene. Even in this challenging situation, the front vehicle is tracked stably and accurate distance estimation is achieved.
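The scale-based distance idea above can be made concrete with a pinhole-camera sketch. This is a generic illustration, not the published algorithm: under a pinhole model, a vehicle of known true width W imaged at apparent width w pixels by a camera of focal length f pixels lies at distance d = f * W / w, and aggregating several per-frame estimates (a median here, as one simple choice) suppresses outliers. All numeric values are invented.

```python
# Hedged sketch of scale-based distance estimation and aggregation.
# f_px: focal length in pixels; true_width_m: assumed vehicle width;
# widths_px: apparent widths measured over several frames.
from statistics import median

def distance_from_scale(f_px, true_width_m, apparent_width_px):
    # pinhole model: distance = focal_length * true_width / apparent_width
    return f_px * true_width_m / apparent_width_px

def robust_distance(f_px, true_width_m, widths_px):
    # aggregate multiple per-frame estimates to reject outliers
    return median(distance_from_scale(f_px, true_width_m, w) for w in widths_px)

# e.g., a 2.5 m wide truck, f = 1400 px, one outlier measurement (120 px)
print(robust_distance(1400, 2.5, [70.0, 70.5, 69.8, 120.0, 70.2]))
```

The single outlier width barely shifts the median, whereas a mean would be pulled far off, which motivates aggregating estimates when the scale changes rapidly.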
Visit our project page for more information. Project page: ishikawa-vision.org/fusion/InterVehicleDistance/index-e.html

Robot System for Manipulating a Randomly Placed Towel-like Object
Ishikawa Group Laboratory, 2022-06-02 | In recent years, there has been increasing demand for robotic handling of towel-like objects in linen and laundry services. One such task involves picking up a randomly placed towel and aligning it to a predetermined position and posture. However, it is difficult for robots to handle such flexible objects because they deform during manipulation, and manipulation based on estimating the deformation state is challenging and infeasible for most robotic systems. To solve this problem, we propose a method that continuously performs object picking, spreading, and alignment with a robot arm system, aiming at further automation.
Laboratory web page: http://ishikawa-vision.org/index-j.html

1 Millisecond Vision Pioneers a New Era
Ishikawa Group Laboratory, 2021-06-16 | Our laboratory at the University of Tokyo moved from the Graduate School of Information Science and Technology (IST) to the Information Technology Center (ITC) in April 2020. This video shows the history and recent progress of our high-speed vision research up to March 2020, as well as the basic concept and active attitude of our research.
Since April 2020, our affiliation is Ishikawa Group Laboratory (Project Professor Masatoshi Ishikawa), Information Technology Center, the University of Tokyo. We are continuing our research on high-speed vision and its wide range of application systems without interruption.
URL: http://ishikawa-vision.org

VarioLight 2: Wide Dynamic Projection Mapping in Rhythmic Gymnastics
Ishikawa Group Laboratory, 2021-04-06 | We have combined wide dynamic projection mapping with a rhythmic gymnastics performance. The "VarioLight 2" system realizes dynamic projection mapping onto a moving ball surface without any projection misalignment. Sticky, high-resolution projection is possible even for dynamic, irregular, and wide-ranging ball motion. We can observe clear, high-contrast projected content and visually grasp the ball's rotation. Lively rhythmic gymnastics and wide dynamic projection mapping create a beautiful visual harmony. This technology will lead to further live applications such as media art, entertainment, and sports training.
VarioLight: Dynamic Projection Mapping for a Wide Range Performance http://ishikawa-vision.org/mvf/VarioLight youtube.com/watch?v=XEseo-orRDI

High-speed color projector toward the world of 1,000fps vision
Ishikawa Group Laboratory, 2021-03-23 | 1,000 fps sensing, image processing, and projection enable real-time overwriting of objects. We humans will come to know the world of 1,000 fps vision, and how slow our eyes are.
BGM: Rhythm and Booze by Twin Musicom is licensed under a Creative Commons Attribution license (creativecommons.org/licenses/by/4.0) Artist: http://www.twinmusicom.org

Tracking Projection Mosaicing for Wide High-resolution Display
Ishikawa Group Laboratory, 2020-12-16 | We have developed a tracking projection mosaicing system for wide, high-resolution projection with proper geometric alignment. From the viewpoint of our gaze, static projectors suffer from a trade-off between projection angle and resolution due to their limited pixel count. A wide-area, high-resolution display is theoretically possible by performing active projection that follows the gaze position. We focused on tracking projection guided by laser pointing, standing in for the gaze, and realized dynamic high-resolution projection through high-speed visual feedback and appropriate synchronization between a high-speed projector and rotational mirrors. The synchronization enables mosaicing of projected images without any misalignment while the projector's optical axis is dynamically controlled by the rotational mirrors. Tracking projection mosaicing has advantages in displaying high-resolution photographs, maps, and text over a wide area: the system enables both overall understanding of the wide area and detailed observation of local regions. In the future, we will integrate high-speed eye-tracking technology into this projection system for a highly immersive display.
http://ishikawa-vision.org/mvf/TrackingProjectionMosaicing/index-e.html

ElaMorph Projection: Deformation of 3D Shape by Dynamic Projection Mapping
Ishikawa Group Laboratory, 2020-11-03 | We propose a method of illusion named "ElaMorph Projection", in which a rigid object appears to be deformed by dynamic projection mapping. In this example, a plaster cast of a human head appears to have elasticity. The proposed method deforms the geometry and renders it in less than 2 ms; this is so fast that humans cannot notice the system's delay. In addition, we refined a conventional rendering algorithm based on principles of human perception to present effective illusions on 3D solids. Moreover, since environmental lighting is estimated in real time, the illusion can be maintained where lighting changes over time, such as in party or concert halls. ElaMorph Projection expands the range of entertainment possible with projection mapping.
Kentaro Fukamizu, Leo Miyashita, and Masatoshi Ishikawa: ElaMorph Projection: Deformation of 3D Shape by Dynamic Projection Mapping, Int. Symposium on Augmented and Mixed Reality (ISMAR 2020) (2020.11.9-13)

Proximity sensor-based High-speed Tracking and High-precision Depth Scanning
Ishikawa Group Laboratory, 2020-10-27 | We developed a single-board, USB-powered, high-speed, high-precision proximity sensor. The sensor element, amplifier circuit, and communication circuit are mounted on one board (a custom CMOS chip and an FPGA). The sensor can measure distance and two tilt angles within 1 ms. It is suitable for robot applications such as precise, high-speed control of fingertip position, and it enables high-precision depth scanning of a target surface. This work was conducted as a joint research project between Ishikawa Group Laboratory and Toyoda Gosei Co., Ltd.
URL: http://ishikawa-vision.org/fusion/prox2/index-e.html

Bilateral Motion Display: Multiple Visual Perception Using Afterimage Effects for Specific Motion
Ishikawa Group Laboratory, 2020-09-23 | The "Bilateral Motion Display" produces user-oriented visual perception by using afterimage effects for specific motion. At first, the displayed patterns do not seem to reveal any information; however, when seen by a user moving his or her gaze in a certain direction and at a certain speed, they are spatially integrated and appear as 2D afterimages. The system expands the range of display expression and has various potential applications, such as road signs.
* Abstract: http://ishikawa-vision.org/perception/bilateral/index-e.html * Details: dl.acm.org/doi/10.1145/3359996.3364241 H. Ikeda, T. Hayakawa, and M. Ishikawa: Bilateral Motion Display: Strategy to Provide Multiple Visual Perception Using Afterimage Effects for Specific Motion, The 25th ACM Symposium on Virtual Reality Software and Technology (VRST2019) / Proceedings, Article No. 17.

Dynamic Perceptive Compensation for Optical Illusions Synchronized with Eye Movement
Ishikawa Group Laboratory, 2020-09-08 | Our dynamic perceptive compensation system controls illusory perception depending on eye movement. The system detects the user's eye position with an eye tracker and determines how much compensation each illusory image needs based on its illusory characteristics; the perception is then compensated by temporarily changing the displayed image. From psychophysical experiments, we determined the compensation parameters and observed the temporal dependence of the illusory perception. The technology of controlling gaze-dependent illusions is not limited to understanding the dynamics of visual perception; it also enables us to eliminate visual perception discrepancies, which can be applied in the engineering field. *You can partially experience our system's compensation in the movie.
Result 1: doi.org/10.36463/idw.2019.1652 ・(Result 1) Yuki Kubota, Tomohiko Hayakawa, Masatoshi Ishikawa: Reduction of Moving Optical Illusion through Synchronization with Eye Movement, International Display Workshops 2019 (IDW ’19) (Sapporo, 2019.11.27) / Proceedings, INP1-5L (2019).
Result 2: dl.acm.org/doi/abs/10.1145/3379156.3391344 ・(Result 2) Yuki Kubota, Tomohiko Hayakawa, Masatoshi Ishikawa: Quantitative Perception Measurement of the Rotating Snakes Illusion Considering Temporal Dependence and Gaze Information, Symposium on Eye Tracking Research and Applications (ETRA '20 Short Papers) (online, 2020.5.27) / Proceedings, No.45, pp. 1-4 (2020).

High-resolution Focused Tracking of Freely Swimming Fish
Ishikawa Group Laboratory, 2020-09-01 | We have developed a continuous high-resolution focused imaging system for freely swimming fish using high-speed image processing and optical control. Two high-speed vision systems with rotational mirrors track medaka fish continuously, and high-speed liquid-lens control driven by triangulation achieves high-resolution focused imaging of one target. High-speed tracking using ellipse self-windowing is robust against intersections between individuals, such as crossing tails, and improves tracking efficiency. Triangulation with the stereo vision system controls the high-speed liquid lens and realizes continuous high-resolution focused imaging beyond what fixed-focus imaging allows. We can observe fish body textures and the movement of gills and fins in detail, even while the fish are swimming. Such observation will lead to individual identification and health management for multiple fish in aquaculture and aquariums, and we also expect this imaging system to find further applications in observing other dynamic animals, or their dynamically moving parts, including humans.
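The triangulation-to-focus pipeline mentioned above can be illustrated with a toy calculation. This is not the laboratory's implementation: it assumes a rectified stereo pair where depth follows Z = f * B / d (focal length f in pixels, baseline B, disparity d), and a thin-lens approximation in which refocusing from infinity to depth Z needs an optical-power change of roughly 1/Z diopters. All parameter values are hypothetical.

```python
# Sketch: stereo triangulation produces a depth estimate, which is then
# converted into a focus command for a variable-focus (liquid) lens.
def triangulate_depth(f_px, baseline_m, disparity_px):
    """Depth of a tracked target from stereo disparity (rectified pair)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return f_px * baseline_m / disparity_px

def focus_command_diopter(depth_m):
    # thin-lens approximation: power needed to refocus from infinity
    return 1.0 / depth_m

z = triangulate_depth(f_px=1200, baseline_m=0.1, disparity_px=60.0)
print(z, focus_command_diopter(z))
```

In a real high-speed loop this pair of computations would run per frame, feeding the lens driver at the camera rate.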
http://ishikawa-vision.org/mvf/FishTracking/index-e.html

High-Speed Focal Tracking Projection Based on Liquid Lens
Ishikawa Group Laboratory, 2020-08-25 | We present a high-speed projection system with dynamic focal tracking based on a variable-focus lens. Traditional projection has been limited to 2D surfaces by its narrow depth-of-field. Our system includes a high-speed projector, a high-speed variable-focus lens, and high-speed visual feedback, so that depth and rotation information is detected and fed back to correct the focal length and update the projected content at high speed. As a result, the content remains well focused even on a dynamically moving 3D object.
Furthermore, it is expected that any physical surface in the real world can be repainted with projection, its appearance freely manipulated, and interactive information dynamically presented. Our system provides an essential technology for expanding such dynamic projection mapping applications.
Lihui Wang, Hongjin Xu, Satoshi Tabata, Yunpu Hu, Yoshihiro Watanabe, and Masatoshi Ishikawa: High-Speed Focal Tracking Projection Based on Liquid Lens, ACM SIGGRAPH 2020 Emerging Technologies (SIGGRAPH '20) (Virtual Event, USA, 2020.8.24-28)
http://ishikawa-vision.org/mvf/dyna_dof/index-e.html

VarioLight 2: Dynamic Projection Mapping for Ball Sports
Ishikawa Group Laboratory, 2020-07-08 | "VarioLight 2" is a novel dynamic projection mapping system for a sphere moving over a wide range. One key technology is the circumferential markers on the sphere, which enable robust 500 fps ball tracking despite interactive occlusions (e.g., hands and fingers) and lower resolution than dot markers would require. The other key technology is the earlier VarioLight system, with its high-speed projector and rotational mirrors, which allows dynamic projection mapping onto a widely moving planar object. "VarioLight 2" thus realizes high-speed projection mapping onto a dynamic ball with interaction, wide motion, and no projection misalignment. We can play football, basketball, and volleyball with attractive, beautiful visual content as sports entertainment, and can also enjoy sports training using visual feedback such as visualization of the rotational speed. http://ishikawa-vision.org/mvf/VarioLight2/index-e.html
VarioLight: Dynamic Projection Mapping for a Wide Range Performance http://ishikawa-vision.org/mvf/VarioLight/index-e.html youtube.com/watch?v=XEseo-orRDI

High-speed UAV Delivery System with Non-stop Parcel Handover Using High-speed Visual Control
Ishikawa Group Laboratory, 2020-05-19 | Although research on physical distribution using unmanned aerial vehicles (UAVs) has attracted increasingly significant interest, the task of automatically loading a parcel onto a UAV has not been researched adequately. In this study, toward an automatic UAV delivery system, we achieved non-stop handover of a parcel to an airborne UAV. For the handover task, we developed a novel tracking system with high-speed, multi-camera vision using cameras with different frame rates. The proposed system demonstrates that it is feasible to combine high-speed object tracking (1,000 fps) with distant object tracking.
http://ishikawa-vision.org/fusion/UAVdelivery

High-speed Projection Feedback for Golf Swing Training
Ishikawa Group Laboratory, 2020-03-20 | We propose a high-speed projection method for golf swing training. Golf club motion information such as the swing plane is useful for training, and immediate feedback of this information increases training efficiency because of the temporal consistency between the motion experience and the feedback. A 1,000 fps mirror-based tracking system measures the swing motion, and a 1,000 fps high-speed projector casts the shaft intersection point and the swing-plane line even during the swing. The method has extremely low latency and, thanks to the high-speed tracking and projection, can show predictive projection of the swing motion. This low-latency projection feedback will be applied to dynamic golf training and will be combined with more sophisticated projected representations.
High-speed Projection Feedback for Golf Swing Training http://ishikawa-vision.org/mvf/GolfProjection/index-e.html

Dynamic Viewpoint-dependent Projection with Dynamic Projection Mapping
Ishikawa Group Laboratory, 2020-03-06 | We propose a method for projecting viewpoint-dependent images onto a lenticular lens using a dynamic projection mapping (DPM) technique. Our system projects at 1,000 fps with a latency of about 4.84 ms. By estimating the system latency, we realize more accurate control of the projection position than conventional DPM. By increasing the resolution of the viewpoint-dependent images in the future, movable stereoscopic video can be presented.

Allowable Limits of Latencies in Delay Control Visual Feedback System
Ishikawa Group Laboratory, 2020-02-28 | In the current information-driven society, input devices with visual feedback are widely used. However, latency from the moment of input to the appearance of visual feedback on a display device (e.g., a touch screen) is inevitable.
Hence, we conducted subjective experiments to measure the effects of latency on human performance in visual feedback systems. The results confirmed that even low latency (i.e., below 100 ms) negatively affects human performance on some types of task. The results suggest an acceptable range of latencies for input devices and will make it possible to further examine specific user performance, helping to develop meaningful guidelines for GUIs and other fields of user-oriented research with interactive elements.
Web Page: http://ishikawa-vision.org/perception/delay

Brobdingnagian Glass: A Micro-Stereoscopic Telexistence System
Ishikawa Group Laboratory, 2019-11-14 | We propose a system called "Brobdingnagian Glass" that realizes the binocular perspective of a miniature human using a vibrating hemispherical mirror and a camera, in order to remove the lower limit of realizable scale. We reproduced binocular stereovision with an interpupillary distance of 1.72 mm, corresponding to a human about 5 cm tall.
Web Page: http://ishikawa-vision.org/vision/BrobdingnagianGlass

High-speed Grasping of a Card using New Actuator MagLinkage
Ishikawa Group Laboratory, 2019-09-16 | "MagLinkage" is a new actuator that is compact, low-friction, and high-torque. It consists of a compact DD motor (MTL, Inc.), a low-reduction-ratio gearbox (Shindensha, Inc.), and a magnetic gear (FEC, Inc.). MagLinkage enables 1 ms torque control and sensing, and highly backdrivable motion; impact forces are absorbed by this high backdrivability. The robot hand equipped with MagLinkage can slide and grasp a thin object on a table. By sliding the object with high speed and a soft touch, the hand succeeded in grasping a single card from a pile of cards, a difficult task for a conventional robot hand.
Web Page: http://ishikawa-vision.org/fusion/MagLinkage_hand/index-e.html

High-Speed Ring Insertion by Dynamic Observable Contact Hand
Ishikawa Group Laboratory, 2019-05-20 | We propose a new multifingered robotic hand, called the dynamic observable contact (DOC) hand, and realize high-speed, high-precision ring insertion. The clearance between the shaft and the ring is 0-36 micrometers. Experimental results show that the DOC hand performs high-precision ring insertion faster than a human: the average cycle time is 2.42 s for the robot versus 2.58 s for a human. To reduce the impact force at insertion and compensate for position error, the DOC hand has the following two properties. 1) 6-DOF dynamic passivity: the grasp system exhibits passivity with respect to impacts in any direction. 2) Object-pose observability: the pose of the object within the grasp can be observed by the hand. This system was developed in collaboration with OMRON Corporation.
http://ishikawa-vision.org/fusion/doc_hand

Dynamic Depth-of-Field Projection for 3D Projection Mapping
Ishikawa Group Laboratory, 2019-04-30 | This video introduces a dynamic depth-of-field projection system for 3D projection mapping. The prototype consists of a high-speed projector, a high-speed variable-focus lens, and a depth sensor. In a comparison experiment, the two fixed-focus projectors (left and middle) became blurry when the board moved out of focus, while our dynamic projection (right) stayed in focus as the board moved from 0.5 to 2.0 meters. In a second experiment, volumetric medical data was projected and observed in a virtual 3D space. A CT head scan was projected in a 1.0-meter virtual space; as the board was moved, each slice image could be clearly observed. An MR scan could also be projected, with detailed information observable as the board moved. This provides a friendly interactive platform for doctors and patients and makes it easier for patients to understand a doctor's explanations.
Lihui Wang, Hongjin Xu, Yunpu Hu, Satoshi Tabata, Masatoshi Ishikawa, Dynamic Depth-of-Field Projection for 3D Projection Mapping, ACM CHI Conference on Human Factors in Computing Systems (CHI'19) (Glasgow, Scotland, UK. 2019.05.04-09)

MIDAS Projection: Markerless and Modelless Dynamic Projection Mapping for Material Representation
Ishikawa Group Laboratory, 2018-11-29 | The MIDAS projection system enables markerless and modelless projection mapping onto dynamically moving targets at 500 fps with millisecond-order latency.
The visual appearance of an object can be disguised by projecting virtual shading, as if overwriting its material. However, conventional projection mapping methods depend on markers on the target or on a model of the target's shape, which limits both the types of targets and the visual quality. In this research, we focus on the fact that the shading of a virtual material in a virtual scene is mainly characterized by the surface normals of the target, and we realize markerless and modelless projection mapping for material representation. To deal with various targets, including static, dynamic, rigid, soft, and fluid objects, without any interference with the projection light, we measure surface normals in the infrared region in real time and project material shading with a novel high-speed, screen-space texturing algorithm. The proposed method realizes dynamic and flexible material overwriting for unknown objects.
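The core observation above, that material shading can be computed from surface normals alone, can be sketched with a minimal per-pixel shading function. This is a generic Lambertian illustration, not the laboratory's high-speed texturing algorithm; the albedo, ambient term, and light direction are invented parameters.

```python
# Minimal screen-space shading sketch: given a measured per-pixel surface
# normal, compute Lambertian shading for a chosen virtual material and light.
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def lambert_shade(normal, light_dir, albedo=0.8, ambient=0.1):
    """Brightness in [0, 1] for one pixel, from its surface normal."""
    n = normalize(normal)
    l = normalize(light_dir)
    diffuse = max(0.0, sum(a * b for a, b in zip(n, l)))
    return min(1.0, ambient + albedo * diffuse)

# a pixel whose normal faces the light is brightest;
# a pixel facing away receives only the ambient term
print(lambert_shade((0, 0, 1), (0, 0, 1)))
print(lambert_shade((0, 0, -1), (0, 0, 1)))
```

Because only the normal enters the computation, no marker or shape model of the target is needed, which is exactly why normal measurement suffices for material overwriting.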
Web page http://ishikawa-vision.org/vision/MIDAS

Dynamic Human-Robot Interaction: Realizations of Collaborative Motion and Peg-in-Hole
Ishikawa Group Laboratory, 2018-11-21 | We developed a dynamic human-robot interactive system consisting of high-speed vision and a high-speed robot hand. The high-speed vision measures the position and orientation of a board manipulated by a human and the robot, and the robot hand reacts based on this board information. The system can respond to random human motion at high speed and with low latency. It compensates for the pitch angle and follows the yaw and roll angles of the human motion. In addition, we successfully achieved a collaborative peg-in-hole task; the diameters of the peg and the hole are 6.350 mm and 6.325 mm, respectively. We can thus perform high-accuracy peg-in-hole using the developed human-robot interactive system.
http://ishikawa-vision.org/fusion/HumanRobotCollaboration/index-e.html

Rubik's Cube Manipulation Using a High-speed Robot Hand
Ishikawa Group Laboratory, 2018-09-26 | We realized manipulation of a Rubik's Cube using a three-fingered high-speed robot hand. The experimental system consists of high-speed vision and a high-speed robot hand; the high-speed vision calculates the center-of-gravity position and angle of the Rubik's Cube at 500 fps. The manipulation realized in this research comprises three operations: two kinds of regrasping and one-face turning of the Rubik's Cube. By combining these three operations, all the faces can be turned. In the experiment, the three operations were performed in sequence within 1 second, and we succeeded in 30 continuous operations in 10 seconds.
http://ishikawa-vision.org/fusion/RubikManipulation/index-e.html

High-speed, Non-deformation Catching with High-speed Vision and Proximity Feedback
Ishikawa Group Laboratory, 2018-09-06 | We demonstrated high-speed, non-deformation catching of a marshmallow, a very soft object that is difficult to grasp without deforming its surface. For the catching, we developed a 1 ms sensor fusion system comprising a high-speed active vision sensor and a high-speed, high-precision proximity sensor. Generally, tactile feedback is used to grasp various kinds of soft objects without deforming them. However, with tactile feedback alone, a robot hand tends to deform the object's surface; slowing the grasp reduces the deformation but lengthens the grasping time. The 1 ms sensor fusion system enables seamless, highly sensitive sensing from the non-contact to the contact state. The robot hand controls its fingertip position dynamically and precisely based on the visual and proximity feedback: contact with the object is detected via the proximity feedback before the surface deforms, and the grasping motion is stopped. The hand could catch the marshmallow even when its position and posture varied.
http://ishikawa-vision.org/fusion/vision_proximity1/index-e.html

Human-Robot Collaboration Based on Dynamic Compensation
Ishikawa Group Laboratory, 2018-08-31 | This video summarizes our recent studies on human-robot collaboration based on the dynamic compensation framework, which aims to optimally combine the cognitive capabilities of humans with the accurate motion capabilities of robots. Under the dynamic compensation approach, a human operator is responsible for cognitive global motion without caring much about accuracy, while fine local motion is actively realized by a dynamic compensation robotic module based on high-speed visual feedback. Dynamic compensation rests on the idea that the robotic module has a much higher dynamic bandwidth than an average human. Application scenarios with a cell-production background, ranging from micro-manipulation to macro-manipulation, are implemented. More details can be found on the website: http://ishikawa-vision.org/fusion/DynaCobot/index-e.html

Portable Lumipen: Mobile Dynamic Projection Mapping System Using a 3D-stacked Vision Chip
Ishikawa Group Laboratory, 2018-07-25 | Portable Lumipen is a portable dynamic projection mapping system that tracks a target at 1,000 fps with a 3 ms response time.
In recent years, projection mapping systems for dynamically moving targets have been proposed, and this field is called "dynamic projection mapping." In dynamic projection mapping, the moving target is tracked by a high-speed camera, and the projection is controlled at high speed. By achieving high responsiveness in sensing and feedback, these systems enable dynamic and immersive applications. However, they are not truly dynamic, because the systems themselves remain fixed in a room.
In this research, we use a 3D-stacked vision chip and an optical gaze controller to realize a portable dynamic projection mapping system. The 3D-stacked vision chip eliminates the need for a bulky workstation and inefficient data transfer, while the optical gaze controller enables high-speed tracking shots as well as tracking projection. The portable system enables a variety of applications, including wearable user interfaces, projector-based makeup, and combination with a drone, and this device will take projection mapping a step further.
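The misalignment that system delay causes in conventional projection mapping can be estimated with simple arithmetic: the projected image trails a moving target by roughly its speed times the end-to-end latency. A minimal sketch (the speeds and latencies below are illustrative, not measured values from this system):

```python
def misalignment_mm(speed_m_per_s: float, latency_ms: float) -> float:
    """Approximate projection offset for a moving target.

    offset = target speed x end-to-end latency (sensing + processing + projection).
    Units: (m/s) * (ms) = mm.
    """
    return speed_m_per_s * latency_ms

# A ball crossing the scene at 5 m/s:
# a ~100 ms video-rate pipeline trails it by ~500 mm,
# while a 3 ms high-speed pipeline reduces the error to ~15 mm.
print(misalignment_mm(5.0, 100.0))  # 500.0
print(misalignment_mm(5.0, 3.0))    # 15.0
```

This is why sub-frame latency, not just frame rate, is the quantity that determines whether the projection appears "stuck" to the target.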
Web page: http://ishikawa-vision.org/vision/Portable_lumipen http://ishikawa-vision.org/vision/vck

High-Speed Catching of a Paper Balloon using High-Performance Proximity Sensor
2018-05-31 | We developed a fingertip-sized, high-performance proximity sensor for high-speed, super-soft-touch catching. The proximity sensor detects the distance to, and the tilt angle of, the surface of an object with a resolution more than 20 times higher (50 micrometers) and a measurement time less than one tenth (1 ms) of those of existing sensors. This high-speed, high-precision sensing enables accurate position control of the finger and contact detection independent of the contact force. Conventionally, contact between a tactile sensor and an object is defined as the sensor output (contact force or pressure value) exceeding a threshold. With this approach, however, robot hands grasping at high speed tend to break or damage fragile objects whose reaction force is extremely small. In our approach, on the other hand, we define contact as zero distance to the object. The hand was able to catch a paper balloon with a deformation equal to or less than that achievable by a human performing the same catching task.
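The contrast drawn above, between force-threshold contact detection and distance-based detection, can be sketched in a few lines. The sensor readings and thresholds below are hypothetical, not the laboratory's actual control code:

```python
def detect_contact_by_force(force: float, threshold: float = 0.5) -> bool:
    """Conventional: contact when the measured force exceeds a threshold.
    A fragile object may already be deformed before this fires."""
    return force > threshold

def detect_contact_by_distance(distance_mm: float, resolution_mm: float = 0.05) -> bool:
    """Proximity-based: contact defined as (near-)zero distance to the surface,
    so the grasp can stop before any reaction force builds up."""
    return distance_mm <= resolution_mm

# Closing on a paper balloon: successive proximity-sensor readings (mm)
readings = [5.0, 2.0, 0.8, 0.2, 0.04]
stop_at = next(i for i, d in enumerate(readings) if detect_contact_by_distance(d))
print(stop_at)  # the fingers stop at the first (near-)zero-distance sample: 4
```

The point of the distance definition is that it is independent of the object's stiffness: the stop condition fires at the same moment for a marshmallow, a paper balloon, or a steel ball.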
http://ishikawa-vision.org/fusion/prox1/index-e.html

VarioLight: Dynamic Projection Mapping for a Wide Range Performance
2018-05-24 | "VarioLight" is a novel dynamic projection mapping system that combines two key technologies. "Lumipen" (Type 1) is a high-speed optical-axis controller that allows a target to move over a wide range with sufficient projection resolution, but it restricts the shape and deformation of the target because conventional projectors are slow. "DynaFlash" (Type 2) is a high-speed, low-latency projector that allows dynamic rotation and deformation of the target, but it trades resolution against the angle of projection. "VarioLight" (Type 3) effectively exploits the advantages of both previous systems: it handles a spatially wide range as well as small rotations and deformations with sufficient projection resolution. Performers can dance dynamically around a stage and perform acrobatics under projection mapping.
Dynamic Projection Mapping; Now and the Future at Ishikawa Watanabe Laboratory youtube.com/watch?v=Ca8SmIDjPOY

ACHIRES: Robust Bipedal Running Based on High-speed Visual Feedback
2018-05-09 | ACHIRES has been improved in terms of robustness. Posture stabilization control enables the bipedal robot to keep running on rough terrain and under disturbance forces applied to the trunk. The control is based on instantaneous recognition of, and reaction to, impending falls by an integrated system composed of high-speed vision and high-speed actuators. In this demonstration, robust running is achieved without any information about incoming obstacles, using only the robot's body posture, detected by high-speed vision, for balance.
http://ishikawa-vision.org/fusion/BipedalRunningForwardBent

DynaFlash v2 and Post Reality
2018-03-05 | The high-speed color projector "DynaFlash v2" will open up new possibilities toward "Post Reality". Details can be found on the page below. http://ishikawa-vision.org/vision/dynaflashv2

Accurate pick-and-place under uncertainties by a dynamic compensation robot
2018-02-20 | It is challenging to realize accurate pick-and-place of tiny bearing balls under uncertainties, which may be attributed to environmental disturbances as well as to positioning errors of a typical industrial robot. We propose to realize the task with a dynamic compensation robot, which consists of a commercial industrial robot and an add-on module with 2-DOF compensation actuators. The former performs fast, coarse global motion realized either by coarse teaching-playback programming or by motion planning using computer vision. The latter conducts real-time local compensation under high-speed visual feedback. In the demonstrations, random disturbances are exerted on the working stage. While the main robot conducts coarse global motion, fine positioning is realized by the compensation module under 1,000 fps visual feedback. http://ishikawa-vision.org/fusion/PickandPlace/index-e.html

ACHIRES: Improved Running Taking Dynamically Unstable Posture Achieved with High-Speed Vision
2017-11-24 | ACHIRES, the "Actively Coordinated High-speed Image-processing Running Experiment System", is a bipedal running system consisting of a high-power bipedal robot and high-speed vision. The high-speed vision recognizes the posture of the running robot at 600 fps, which realizes posture control in response to real-time changes in the situation without prediction. In the previous version of ACHIRES, the posture information was used only when both legs were off the ground, and the control was open-loop otherwise.
In this version, we improved the control method to apply visual feedback throughout the whole running process. In the movie, ACHIRES is given a human-like forward-bent trajectory as a reference to achieve fast running. Such a running gait, which passes through a dynamically unstable region, is difficult for the widely used ZMP-based control because of its stability-oriented approach. The improved ACHIRES can recover its balance instantly with high-speed visual feedback and run reliably within the dynamically unstable region. Web: http://ishikawa-vision.org/fusion/BipedalRunningForwardBent/index-e.html

Active Assistant Robot - human-robot cooperation based on a new high-speed vision
2017-11-24 | We propose an active assistant robot to realize high-performance manipulation that is traditionally difficult for humans. In this line-following demonstration, a human operator moves the assistant robot (2 DOFs) to realize coarse global motion while keeping the target within the robot's limited work range. A projected square area is aligned with the robot's work range and serves as a visual indication for the human operator. With the robot's active local assistance, based on 1,000 Hz visual feedback, the tracking error (between the image center and the line center) is reduced dramatically compared with that of human-controlled motion. The robot is built on the dynamic compensation approach with a new high-speed vision system: 1,000 fps imaging and processing are implemented simultaneously within a single vision chip. This technology can be used in a broad range of application scenarios where the required accuracy is beyond traditional human capability, such as laser cutting, welding, sealing, and assembly.
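The division of labor described above, coarse human-driven global motion plus fine 1,000 Hz visual compensation, can be sketched as a simple proportional correction on the pixel error. The gain, the pixel scale, and the simulated hand-held drift are illustrative assumptions, not the laboratory's controller:

```python
import random

def compensation_step(target_px, module_px, gain=0.5):
    """One 1 ms cycle: move the 2-DOF module a fraction of the pixel error."""
    return (gain * (target_px[0] - module_px[0]),
            gain * (target_px[1] - module_px[1]))

# Simulate 1,000 cycles (1 s at 1,000 fps) of keeping a drifting line centered.
random.seed(0)
module = [0.0, 0.0]      # compensation module position (pixel-equivalent)
target = [40.0, -25.0]   # line center as seen in the image
for _ in range(1000):
    target[0] += random.uniform(-0.2, 0.2)   # hand-held drift (disturbance)
    target[1] += random.uniform(-0.2, 0.2)
    dx, dy = compensation_step(target, module)
    module[0] += dx
    module[1] += dy

err = ((target[0] - module[0]) ** 2 + (target[1] - module[1]) ** 2) ** 0.5
print(f"residual tracking error: {err:.2f} px")
```

Because the loop runs at 1 kHz, a modest gain is enough to keep the residual error well below a pixel even under continuous disturbance; the human only needs to keep the target somewhere inside the module's work range.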
http://ishikawa-vision.org/fusion/ActiveAssistant/index-e.html

Tracking Background-oriented Schlieren: shock-wave image measurement of high-speed flying objects
2017-05-09 | Tracking background-oriented schlieren (Tracking BOS) of high-speed flying objects for forensic investigation. We control the camera's gaze direction toward the flying object using high-speed rotational mirrors and visual feedback. Moreover, we introduce a striped background to visualize the shock waves around the object, following the background-oriented schlieren technique. Long-duration, high-resolution image measurement becomes possible thanks to the high-speed mirror-based tracking, and the shock waves can be clearly visualized with appropriate image processing. We applied Tracking BOS to three different types of objects, and for one of them we succeeded in observing unsteady shock-wave phenomena on the actual flying object.
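Background-oriented schlieren works by measuring the apparent displacement of a known background: the density gradient around the shock wave refracts light, shifting the stripes between a reference image and the distorted one. A toy 1-D illustration of that displacement estimation via cross-correlation (synthetic stripes and integer shifts only; not the laboratory's actual processing):

```python
import numpy as np

def stripe(n=200, period=10, shift=0):
    """Synthetic 1-D striped background, optionally shifted by `shift` pixels."""
    x = np.arange(n)
    return (np.sin(2 * np.pi * (x - shift) / period) > 0).astype(float)

def estimate_shift(reference, distorted, max_shift=4):
    """Find the integer pixel shift that best aligns `distorted` to `reference`."""
    best, best_score = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        score = np.dot(np.roll(distorted, -s), reference)
        if score > best_score:
            best, best_score = s, score
    return best

ref = stripe()
warped = stripe(shift=3)   # apparent displacement caused by the density gradient
print(estimate_shift(ref, warped))  # 3
```

In real BOS the same matching is done per image patch in 2-D (typically with sub-pixel optical flow), and the resulting displacement field is proportional to the integrated refractive-index gradient along each ray.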
Tracking BOS: shock-wave image measurement of high-speed flying objects http://ishikawa-vision.org/mvf/TrackingBOS/index-e.html

SENSECASE: Crafting Deformable Interfaces to Physically Augment Smartphones
2017-04-18 | Users can already personalize their smartphones by "clothing" or "disguising" them inside a case of their own choice. SENSECASE goes beyond simple disguise, using the case's shape to enable more meaningful interactivity. In our method, a deformable case filled with transparent and black gels is placed over the camera of a smartphone. The resulting complex light reflections can be used to recognize patterns of deformation or grasping and, via a machine learning algorithm, map them to different UI actions. Using SENSECASE, we demonstrate three example applications: pictograph input, volume control, and 3D animation.
Music used in the scene introducing the volume control application: MusMus, http://musmus.main.jp

Advanced Inspection System on Expressways Using Pixel-wise Deblurring Imaging
2017-04-13 | As an application of "Pixel-wise Deblurring Imaging", developed in our laboratory, we propose an alternative method of advanced inspection for the maintenance and management of infrastructure. Motion blur is one of the main factors degrading image quality, and various methods, such as shake correction and limiting the exposure time, have been used to eliminate it. However, in some situations it is difficult to close roads to vehicular traffic when inspecting infrastructure for maintenance, so structures must be photographed while the camera moves along the expressway at high speed, and it is extremely difficult to capture a high-resolution image without motion blur. To solve this problem, we propose a newly developed method using a motion-blur compensation system. This work was conducted as a joint research project between Ishikawa Watanabe Laboratory and Central Nippon Expressway Co., Ltd.
1) Research on advanced inspection of highways: http://ishikawa-vision.org/perception/AdvancedInspection/index-e.html 2) Pixel-wise Deblurring Imaging: http://ishikawa-vision.org/perception/Pixel-wiseDeblurringImaging/index-e.html 3) Ishikawa Watanabe Laboratory: http://ishikawa-vision.org 4) Central Nippon Expressway Co., Ltd.: http://global.c-nexco.co.jp/en

Dynamic Projection Mapping; Now and the Future at Ishikawa Watanabe Laboratory
2017-03-22 | Dynamic projection mapping targets a dynamic object while avoiding the geometric misalignment of projected content caused by system latency. The object's dynamics consist of large translation and small deformation, which can be handled separately by high-speed devices. For large translation, a high-speed optical-axis controller reduces the projection misalignment: using high-speed rotational pan-tilt mirrors, Lumipen 2 shows high tracking performance against a bouncing ball. For small deformation, a high-speed projector fits the projection to the shape or rotation of the object: with its high frame rate and low latency, DynaFlash shows high tracking performance against a rotating board, a deforming sheet of paper, and a T-shirt. In the near future, perfect dynamic projection mapping will be realized by combining these two technologies. The hybrid projector system can be applied to various fields such as stage performance and sports.
Dynamic projection mapping onto deforming non-rigid surface using a high-speed projector http://ishikawa-vision.org/vision/DPM/index-e.html youtube.com/watch?v=-bh1MHuA5jU

History of Vision Chip at Ishikawa Watanabe Laboratory
2017-03-09 | At the Ishikawa Watanabe Laboratory of the University of Tokyo, the vision chip was devised in 1992 and has since developed greatly. The first vision chip in 1992 was implemented with gate arrays and was about as large as a child. The vision chip developed by Sony and our laboratory in 2017, by contrast, realizes both high imaging capability and high functionality in a small 1/3.2-inch chip: it can capture 0.31-Mpixel images at 1,000 fps and execute spatio-temporal image processing simultaneously, with a maximum power consumption of 363 mW. Vision chips have provided, and will continue to provide, high-speed, low-latency, low-power, compact visual feedback systems.

3D Augmented Reality Head-Up-Display for the Advanced Driver Assistance System in-vehicle
2017-03-02 | A head-up display (HUD) enables a driver to view information with the head positioned "up" and looking forward, instead of angled down at lower instruments. A traditional 2D HUD requires the driver to observe the projection along the optical axis from a certain point; when the driver moves his head, a mismatch occurs.
In our 3D HUD, the virtual display is projected into the three-dimensional world, so there is no mismatch when the driver moves. The demo was recorded by two cameras placed at different positions. When the camera was placed along the optical axis, the 2D and 3D markers were all perfectly matched. When the camera was placed at an angle to the optical axis, a mismatch appeared in the 2D HUD, but the 3D HUD remained well matched. Another demo shows that if two messages are projected along the same line but at different distances, they are aligned when viewed straight on, but not when viewed from an angle.
Using this technology, a speedometer can be dynamically projected at a near or far distance according to the car's speed, so that the driver can enjoy driving more.
This work was conducted as a joint research project between Ishikawa Watanabe Laboratory and Konica Minolta Inc.
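The behavior described above follows directly from perspective projection: a 2-D overlay is fixed on the screen plane, while a 3-D-registered marker is anchored to a world point, so only the latter stays aligned when the eye moves. A toy pinhole-camera sketch (the geometry and numbers are illustrative, not the HUD's optics):

```python
import numpy as np

def project(point_3d, eye, f=1.0):
    """Pinhole projection of a world point onto an image plane at focal
    length f, as seen from a translated eye position (no rotation)."""
    rel = np.asarray(point_3d, float) - np.asarray(eye, float)
    return f * rel[:2] / rel[2]

hazard = [0.0, 0.0, 20.0]     # real-world point to annotate, 20 m ahead
marker_3d = hazard            # 3D HUD: marker anchored at the hazard itself
marker_2d = project(hazard, eye=[0, 0, 0])  # 2D HUD: fixed screen position

for eye in ([0.0, 0.0, 0.0], [0.1, 0.0, 0.0]):   # driver shifts head 10 cm
    err_3d = np.linalg.norm(project(marker_3d, eye) - project(hazard, eye))
    err_2d = np.linalg.norm(marker_2d - project(hazard, eye))
    print(f"eye {eye}: 3D-HUD error {err_3d:.4f}, 2D-HUD error {err_2d:.4f}")
```

The 3D-HUD error is identically zero because marker and hazard are the same world point, whereas the 2D overlay's error grows with head displacement, which is exactly the mismatch the demo shows.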
1) 3D Head-Up-Display: http://ishikawa-vision.org/mvf/3d_hud/index-e.html 2) Ishikawa Watanabe Laboratory: http://ishikawa-vision.org 3) Konica Minolta Inc.: http://www.konicaminolta.com

Dynamic Compensation Robot with a Newly Developed High-speed Vision Chip
2017-02-23 | We realize a high-speed, accurate contour-following task under the dynamic compensation scheme using a new vision chip developed by Sony and the Ishikawa Watanabe Laboratory of the University of Tokyo. The vision chip carries out image processing at 1,000 fps. In this demonstration, given an arbitrary contour pattern, the industrial robot's trajectory is first programmed with sparse, roughly chosen teaching points. The compensation module (2 DOFs) then performs real-time motion under 1,000 Hz visual feedback from the new vision chip to keep the target contour at the center of the image, even under systematic uncertainties such as backlash of the main robot or external disturbances of the workpiece. Because image processing is performed on the chip, no separate image-processing device is needed.
New Vision Chip Web: http://ishikawa-vision.org/vision/vck/index-e.html Video: youtube.com/watch?v=C6mz9kQGk0Y Dynamic Compensation Video: youtube.com/watch?v=VWT-Ko8xuGk

ISSCC 2017 New Vision Chip Demo
2017-02-07 | At the International Solid-State Circuits Conference (ISSCC 2017), the University of Tokyo and Sony presented a new vision chip. A vision chip is a high-speed, intelligent image sensor with parallel processing elements that realizes high-speed, low-latency, low-power, compact visual feedback systems. At the ISSCC 2017 demo session, 1,000 fps target recognition and target tracking were demonstrated using the new vision chip with 3D-stacked, 140-GOPS column-parallel SIMD processing elements. The new vision chip achieves both high imaging capability and high functionality, and will enable various applications with high-speed visual feedback.
Web page: http://ishikawa-vision.org/vision/vck

Dynamic projection mapping onto deforming non-rigid surface
2016-10-19 | We realize dynamic projection mapping onto a deforming non-rigid surface based on two original technologies. The first is a high-speed projector, "DynaFlash", that can project 8-bit images at up to 1,000 fps with 3 ms delay. The second is high-speed non-rigid surface tracking at 1,000 fps. Since projection and sensing both operate at 1,000 fps, a human cannot perceive any misalignment between the dynamically deforming target and the projected images. Focusing especially on new paradigms in user interfaces and fashion, we have demonstrated dynamic projection mapping onto a deformed sheet of paper and a T-shirt. We also show that projection onto multiple targets can be controlled flexibly using our recognition technique.
http://ishikawa-vision.org/vision/DPM

High-speed 3D Sensing with Three-view Geometry using a Segmented Pattern
2016-08-20 | We propose a high-speed 3D sensing system that achieves 1,000-fps acquisition. High-speed vision technology, exceeding video rates (30 Hz), has recently been considered an important technology for various applications, such as robotics, vehicle systems, automatic inspection, man-machine interfaces, and sports science. When an object moves at high speed, a low-frame-rate sensing system cannot obtain its 3D shape or use the data for feedback; our 1,000-fps sensing enables real-time observation of such motion in detail. Furthermore, our system is based on the one-shot structured-light method, so it can obtain 3D shape even if the sensing system and/or the target object moves quickly and discontinuously. High-speed 3D sensing is achieved by two approaches: three-viewpoint epipolar constraints and a well-designed segmented pattern. Incorporating the three-view geometric constraints eliminates the need for a given point in the pattern to carry a locally unique feature when identifying corresponding points in different views. Our proposed pattern has a hierarchical structure consisting of bars and dots, so it is easy to detect and identify at high speed. In this way, our method achieves high-speed, fine 3D point-cloud acquisition with low latency, based on hierarchical processing enabled by the segmented pattern and the three-view geometric properties. http://ishikawa-vision.org/vision/SegmentedPattern

Phyxel: Realistic Display using Physical Objects with High-speed Spatially Pixelated Lighting
2016-07-25 | Phyxel is a realistic display that makes a desired physical object appear at spatially pixelated locations. The created image appears essentially real and can be manipulated.
To realize Phyxel, it is essential to closely coordinate lighting and motion for perceptual realism. In the developed system, we manipulate the motion of various objects at high speed and control their perceived locations by projecting a computed lighting pattern with a 1,000-fps, 8-bit high-speed projector.
http://ishikawa-vision.org/vision/Phyxel

ZoeMatrope: A System for Physical Material Design
2016-07-13 | Reality is the most realistic representation. We introduce a material display called ZoeMatrope that can reproduce a variety of materials with high resolution, high dynamic range, and high light-field fidelity by using real objects and characteristics of human vision. ZoeMatrope achieves super-realistic material representation and animation by using the composition and animation principles of a thaumatrope and a zoetrope. In addition, the proposed system realizes a wide gamut by optimizing the basis material set and controlling the strobe emission time on the order of microseconds. ZoeMatrope can also create spatially varying materials, and even augmented materials such as a material with an alpha channel. ZoeMatrope will enhance not only material design but also the creation of new media in art, entertainment, advertising, and augmented reality.
http://ishikawa-vision.org/vision/ZoeMatrope

Fully Automatic Robotic Tracking of Uncertain Contours
2016-06-14 | We propose a fully automatic solution for high-performance robotic tracking of uncertain contour patterns without any teaching. It is implemented with a coarse-to-fine strategy and performs better than our previous semi-automatic approach (youtube.com/watch?v=VWT-Ko8xuGk). First, several key points are automatically extracted from the unknown contour pattern using one image captured with a roughly calibrated low-cost camera. From these key points, a smooth path for coarse tracking is generated using the main robot's controller. Then, during the execution of the main robot's coarse motion, the add-on module conducts fine compensation under 1,000 Hz visual feedback. With this approach, high-performance, fully automatic robotic tracking of unknown contour patterns can be realized, even under systematic uncertainties such as backlash of the main robot or external disturbances of the workpiece. http://ishikawa-vision.org/fusion/dctracking/index-e.html