Identifying objects by sight and touch is something humans do effortlessly; we are born with these senses. Scientists are now trying to build robots that can identify objects by touch and sight nearly as well as humans can, and a new robot developed at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) is attempting to do just that.
For the experiment, the team used a KUKA robot arm fitted with a tactile sensor called GelSight. The collected information was then fed to an AI so it could learn the relationship between visual and tactile information.
MIT’s New Robot
To teach the AI how to identify objects by touch, the team recorded 12,000 videos of 200 objects, such as fabrics, tools and household items, being touched. The videos were broken down into still images, and the AI used this dataset to connect tactile and visual data.
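To make the pipeline concrete, here is a minimal sketch of how recorded videos might be decoded into still frames and paired up. It assumes a hypothetical layout with one camera video and one GelSight video per touch session; the file names and directory structure are illustrative only, not the team's actual tooling.

```python
# Sketch: decode videos into frames and pair visual frames with tactile frames.
import cv2
import os

def extract_frames(video_path, out_dir, prefix):
    """Decode a video into numbered JPEG frames and return the frame count."""
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        cv2.imwrite(os.path.join(out_dir, f"{prefix}_{idx:05d}.jpg"), frame)
        idx += 1
    cap.release()
    return idx

# Pair the i-th camera frame with the i-th tactile frame of the same session
# (file names below are hypothetical placeholders).
n_visual = extract_frames("session_001_camera.mp4", "frames/visual", "vis")
n_tactile = extract_frames("session_001_gelsight.mp4", "frames/tactile", "tac")
pairs = [(f"frames/visual/vis_{i:05d}.jpg", f"frames/tactile/tac_{i:05d}.jpg")
         for i in range(min(n_visual, n_tactile))]
```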
For the research, the team at CSAIL used a KUKA robot arm equipped with a special tactile sensor called GelSight. KUKA robot arms are designed for industrial use, and GelSight technology helps robots gauge an object’s hardness. The idea behind the research is to understand the relationship between touch and sight.
According to Yunzhu Li, a CSAIL PhD student, “By looking at the scene, our model can imagine the feeling of touching a flat surface or a sharp edge.” He added, “By blindly touching around, our model can predict the interaction with the environment purely from tactile feelings. Bringing these two senses together could empower the robot and reduce the data we might need for tasks involving manipulating and grasping objects.”
For the experiment, the team used a web camera to record 12,000 videos of 200 different objects being touched. The videos were broken down into still images, which the system then used to connect tactile and visual data. The resulting dataset, called “VisGel,” contains three million visual/tactile-paired images. The team also used generative adversarial networks (GANs) to teach the system the connection between vision and touch.
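The sketch below compresses the idea of using a GAN to translate between the two senses, in the spirit of image-to-image translation: a generator predicts a tactile (GelSight) image from a visual image, and a discriminator judges whether a visual/tactile pair looks real. The network sizes, the losses, and the vision-to-touch direction shown here are assumptions for illustration, not the team’s actual architecture.

```python
# Sketch: conditional GAN that maps visual images to predicted tactile images.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a 3-channel visual image to a predicted 3-channel tactile image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, visual):
        return self.net(visual)

class Discriminator(nn.Module):
    """Scores whether a (visual, tactile) image pair looks real."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 1, 4, stride=1, padding=1),
        )

    def forward(self, visual, tactile):
        return self.net(torch.cat([visual, tactile], dim=1))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(visual, tactile):
    """One adversarial update on a batch of paired visual/tactile images."""
    # Discriminator: real pairs should score high, generated pairs low.
    fake = G(visual).detach()
    d_real, d_fake = D(visual, tactile), D(visual, fake)
    loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # Generator: fool the discriminator and stay close to the real tactile image.
    fake = G(visual)
    d_fake = D(visual, fake)
    loss_g = bce(d_fake, torch.ones_like(d_fake)) + nn.functional.l1_loss(fake, tactile)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()

# Example call with random tensors standing in for a batch of paired images.
losses = train_step(torch.randn(4, 3, 64, 64), torch.randn(4, 3, 64, 64))
```

In a VisGel-style setup, each training batch would come from the paired still images described above, so the generator learns what a surface should feel like given how it looks.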