People can easily judge the density and texture of an object just by looking at it, and can likewise describe what an object looks like after merely touching it with their eyes closed. Such skills would help robots interact with objects more effectively, but until now robots have lacked them. Researchers at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) addressed this by equipping a KUKA robotic arm with a GelSight tactile sensor, allowing an artificial intelligence system to learn the connection between visual and tactile information and combine the two.
The GelSight tactile sensor was developed in 2014 by a group of engineers led by Ted Adelson. Essentially an electronic copy of a human fingertip, it uses a camera and a sensitive rubber film to build a three-dimensional map of a surface. The device has already been tested in practice more than once; in one case it helped a robot plug a USB cable into a port correctly.
In the new project, the sensor was mounted on a KUKA robot arm and combined with artificial intelligence; in this way, the robotic arm learned to infer the surface relief of objects from sight and to recognize their shape by touch alone. A set of 12,000 videos of roughly 200 objects, including fabrics, tools, and household items, was used to train the system. The videos were split into individual frames, and it was from these frames that the robot learned to link tactile and visual information.
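The data preparation described above, splitting interaction videos into frames where each frame aligns a camera image with a simultaneous GelSight reading, might look roughly like the sketch below. This is an illustrative toy sketch, not the researchers' actual code; all names (`Frame`, `build_pairs`) and the tiny stand-in dataset are invented for the example.

```python
# Hypothetical sketch of the frame-pairing step: each video yields a
# sequence of snapshots in which a visual frame and a tactile reading
# were captured at the same instant, so every extracted frame is
# already an aligned (visual, tactile) training pair.

from dataclasses import dataclass

@dataclass
class Frame:
    video_id: int   # which source video this frame came from
    index: int      # position of the frame within that video
    visual: list    # stand-in for the camera image
    tactile: list   # stand-in for the GelSight surface map

def build_pairs(videos):
    """Split each video into frames of aligned (visual, tactile) data.

    `videos` maps a video id to a list of (visual, tactile) snapshots;
    a model can then be trained to predict one modality from the other.
    """
    pairs = []
    for vid, snapshots in videos.items():
        for i, (visual, tactile) in enumerate(snapshots):
            pairs.append(Frame(vid, i, visual, tactile))
    return pairs

# Tiny toy "dataset": two videos, three frames each.
toy_videos = {
    0: [([0.1], [0.9]), ([0.2], [0.8]), ([0.3], [0.7])],
    1: [([0.5], [0.5]), ([0.6], [0.4]), ([0.7], [0.3])],
}
pairs = build_pairs(toy_videos)
print(len(pairs))  # 6 aligned visual/tactile frames in total
```

In the real system each pair would hold a full camera image and a GelSight height map rather than single numbers, but the alignment principle is the same.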
For now, the robot can operate only in a controlled environment, and only with objects it already knows. The developers plan to expand its capabilities by giving the artificial intelligence more data to learn from.