An RFID (radio-frequency identification) system developed at MIT has allowed robots to track objects with pinpoint accuracy and could supersede computer vision.
Known as TurboTrack, the system places cheap RFID tags on objects and then bounces a wireless signal around the environment to locate them. The signals reflected back from the tags and from other objects are processed by a “space-time super-resolution” algorithm, which factors in each tag’s movement and direction to improve localisation accuracy. And where machine vision for robots relies on line-of-sight, the new system can operate in cluttered environments.
“If you use RF signals for tasks typically done using computer vision, not only do you enable robots to do human things, but you can also enable them to do superhuman things,” said Fadel Adib, an assistant professor and principal investigator in the MIT Media Lab, and founding director of the Signal Kinetics Research Group. “And you can do it in a scalable way, because these RFID tags are only 3 cents each.”
As the tag moves, its signal angle alters slightly, a change that also corresponds to a specific location. The algorithm can then use that change in angle to track the tag’s distance as it moves. By constantly comparing that changing distance measurement against all the other distance measurements from other signals, it can pinpoint the tag in three-dimensional space. This all happens in a fraction of a second. According to the MIT team, the system could have applications in packing and assembly, and could even allow swarms of drones to communicate with each other during search and rescue missions.
“The high-level idea is that, by combining these measurements over time and over space, you get a better reconstruction of the tag’s position,” said Adib.
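The combination Adib describes is, at its core, a form of multilateration: intersecting several distance measurements from known reference points to fix a position in 3-D space. The sketch below is a simplified illustration of that principle only, not MIT's TurboTrack algorithm (which adds super-resolution over time and motion); the anchor positions and function name are hypothetical.

```python
import numpy as np

def locate_tag(anchors, distances):
    """Estimate a tag's 3-D position from distances to known anchor
    points (e.g. reader antennas) via classic multilateration.

    Illustrative sketch only -- not the TurboTrack algorithm.
    """
    anchors = np.asarray(anchors, dtype=float)
    d = np.asarray(distances, dtype=float)
    # Each measurement defines a sphere |x - a_i|^2 = d_i^2.
    # Subtracting the first sphere equation from the rest cancels the
    # quadratic term, leaving a linear system A x = b in the position x.
    A = 2.0 * (anchors[1:] - anchors[0])
    b = (d[0] ** 2 - d[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1)
         - np.sum(anchors[0] ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Hypothetical example: four anchors and a tag at (1, 2, 3).
anchors = [(0, 0, 0), (4, 0, 0), (0, 4, 0), (0, 0, 4)]
tag = np.array([1.0, 2.0, 3.0])
distances = [np.linalg.norm(tag - np.array(a)) for a in anchors]
estimate = locate_tag(anchors, distances)
```

With noise-free distances and enough well-spread anchors the least-squares solution recovers the tag exactly; the real system instead fuses many noisy measurements over time and space to achieve the same effect.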
To validate the system, the MIT team attached one RFID tag to a cap and another to a bottle. A robotic arm located the cap and placed it on the bottle, which was held by another robotic arm. In a second demonstration, the researchers tracked RFID-equipped nanodrones during docking, manoeuvring and flying. In both tasks, the team claims, the system was as accurate and fast as traditional computer-vision systems, but also worked in scenarios where computer vision fails, such as when line-of-sight is broken.