Electrical engineers have developed a faster collision detection algorithm that uses machine learning to help robots avoid moving objects and negotiate rapidly changing environments in real time.
Developed at the University of California San Diego, the Fastron algorithm is said to run up to eight times faster than existing collision detection algorithms.
The engineers, led by Prof. Michael Yip, envision Fastron being useful for robots that operate fluidly in human environments, working with moving objects and people. They are also exploring robot-assisted surgery using the da Vinci Surgical System. In this scenario, a robotic arm would autonomously perform assistive tasks – such as suction, irrigation or pulling tissue back – without obstructing the surgeon-controlled arms or the patient’s organs.
The team also envisions that Fastron can be used for robots that work at home for assisted living applications.
Existing collision detection algorithms spend time specifying all the points in a given space – the specific 3D geometries of the robot and obstacles – and performing collision checks on every single point to determine whether two bodies intersect at any given time.
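The exhaustive approach can be sketched as follows. The point sets, names and clearance threshold here are invented for illustration; production checkers operate on meshes and typically use spatial acceleration structures rather than a raw double loop, but the cost still grows with the amount of geometry that must be compared.

```python
import math

def brute_force_in_collision(robot_points, obstacle_points, clearance=0.05):
    # Exhaustive check: compare every robot point against every obstacle
    # point. The cost grows with the product of the two point counts,
    # and the whole check must be repeated whenever anything moves.
    return any(math.dist(r, o) < clearance
               for r in robot_points for o in obstacle_points)

robot = [(x / 10, 0.0) for x in range(10)]   # toy robot geometry: 10 points
obstacle_near = [(0.42, 0.0)]                # toy obstacle touching the robot
obstacle_far = [(0.9, 0.8)]                  # toy obstacle well clear of it

print(brute_force_in_collision(robot, obstacle_near))
print(brute_force_in_collision(robot, obstacle_far))
```

Every query re-examines all point pairs, which is what makes this style of checking expensive in rapidly changing scenes.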
To lighten the computational load, Yip and his team in the Advanced Robotics and Controls Lab (ARClab) developed a minimalistic approach to collision detection. The result was Fastron, an algorithm that uses machine learning strategies – which are traditionally used to classify objects – to classify collisions versus non-collisions in dynamic environments.
“We actually don’t need to know all the specific geometries and points. All we need to know is whether the robot’s current position is in collision or not,” said Nikhil Das, an electrical engineering PhD student in Yip’s group and the study’s first author.
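That idea – treating collision detection as binary classification of robot configurations – can be sketched with a simple kernel perceptron. This is a minimal stand-in, not Fastron's actual training procedure; the disc obstacle, the label convention (+1 collision, -1 free) and all parameters here are assumptions for illustration.

```python
import math
import random

def collision_check(q, center=(0.5, 0.5), radius=0.2):
    # Hypothetical ground-truth checker: the robot is a point in a 2-D
    # configuration space and the obstacle is a disc. Returns +1 for
    # collision, -1 for collision-free.
    return 1 if math.dist(q, center) < radius else -1

def kernel(a, b, gamma=50.0):
    # Gaussian kernel: nearby configurations receive similar scores.
    return math.exp(-gamma * sum((x - y) ** 2 for x, y in zip(a, b)))

def train(points, labels, epochs=50):
    # Kernel perceptron: learn weights alpha so that the sign of
    # sum_i alpha_i * K(x_i, q) predicts the collision label of q,
    # with no access to the underlying geometry.
    alpha = [0.0] * len(points)
    for _ in range(epochs):
        mistakes = 0
        for i, (x, y) in enumerate(zip(points, labels)):
            score = sum(a * kernel(p, x) for a, p in zip(alpha, points) if a)
            if y * score <= 0:      # misclassified: nudge toward correct side
                alpha[i] += y
                mistakes += 1
        if mistakes == 0:
            break
    return alpha

def predict(alpha, points, q):
    score = sum(a * kernel(p, q) for a, p in zip(alpha, points) if a)
    return 1 if score > 0 else -1

rng = random.Random(0)
points = [(rng.random(), rng.random()) for _ in range(200)]
labels = [collision_check(q) for q in points]
alpha = train(points, labels)
print(predict(alpha, points, (0.5, 0.5)))   # query deep inside the obstacle
```

Once trained, each query costs a handful of kernel evaluations rather than a geometric intersection test, which is the source of the speed-up the article describes.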
An important feature of Fastron is that it updates its classification boundaries very quickly to accommodate moving scenes. Fastron’s active learning strategy works through a feedback loop. It starts out by creating a model of the robot’s configuration space (C-space), which is the space of all possible positions the robot can attain.
Fastron models the C-space with a small number of so-called collision points and collision-free points. The algorithm then defines a classification boundary between the two sets of points – this boundary is essentially a rough outline of where the obstacles lie in the C-space. As obstacles move, the classification boundary changes. Rather than performing collision checks on every point in the C-space, as other algorithms do, Fastron checks only points near the boundary. Once it classifies the collisions and non-collisions, the algorithm updates its classifier and continues the cycle.
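A toy version of that feedback cycle might look like the following. A nearest-neighbour boundary test stands in for Fastron's learned classifier, and the obstacle shape, tolerances and sample counts are invented for illustration; the key point is that each cycle re-checks only points near the current boundary plus a few exploratory samples, not the whole space.

```python
import math
import random

def make_checker(center, radius=0.15):
    # Hypothetical obstacle: a disc in a 2-D configuration space.
    # Returns +1 in collision, -1 free.
    return lambda q: 1 if math.dist(q, center) < radius else -1

def boundary_indices(samples, labels, tol=0.12):
    # Indices of samples near the current classification boundary,
    # i.e. those with an opposite-label sample within `tol`.
    near = set()
    for i, (qi, li) in enumerate(zip(samples, labels)):
        for qj, lj in zip(samples, labels):
            if lj != li and math.dist(qi, qj) < tol:
                near.add(i)
                break
    return near

def update_cycle(samples, labels, checker, rng, n_explore=10):
    # One iteration of the feedback loop: re-check only points near the
    # boundary (cheap), plus a few random "exploration" points so newly
    # appearing obstacles are eventually noticed.
    recheck = boundary_indices(samples, labels)
    recheck.update(rng.sample(range(len(samples)), n_explore))
    for i in sorted(recheck):
        labels[i] = checker(samples[i])

rng = random.Random(0)
samples = [(rng.random(), rng.random()) for _ in range(300)]
labels = [make_checker((0.3, 0.3))(q) for q in samples]

# The obstacle moves; a handful of cycles re-labels the model without
# re-checking all 300 points each time.
moved = make_checker((0.6, 0.6))
for _ in range(10):
    update_cycle(samples, labels, moved, rng)
```

After the cycles, the stale labels around the old obstacle position have been cleared and the model has picked up the obstacle's new position, even though only a fraction of the samples were re-checked per cycle.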
Because Fastron’s models are simpler, the researchers set its collision checks to be more conservative. Since just a few points represent the entire space, Das explained, it is not always certain what is happening in the space between two points, so the team designed the algorithm to predict a collision in that space.
“We leaned toward making a risk-averse model and essentially padded the workspace obstacles,” Das said. This means the robot can be tuned to be more conservative in sensitive environments, such as surgery or assisted-living settings in the home.
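One simple way to realise such padding is to shift the classifier's decision threshold, so that a configuration must be confidently on the free side before it is labelled collision-free. The signed score and numbers below are a toy stand-in, not Fastron's actual training procedure:

```python
import math

def score(q, center=(0.5, 0.5), radius=0.2):
    # Signed "collision score": positive inside the obstacle, negative
    # outside (a stand-in for a learned classifier's decision value).
    return radius - math.dist(q, center)

def classify(q, padding=0.0):
    # padding > 0 shifts the decision threshold, so configurations within
    # `padding` of the obstacle are also treated as collisions, trading
    # some usable free space for a safety margin.
    return 1 if score(q) > -padding else -1

q = (0.5, 0.75)          # 0.05 outside the true obstacle boundary
print(classify(q))       # exact model: free
print(classify(q, 0.1))  # padded, risk-averse model: treated as collision
```

Raising the padding knob is the tuning the article alludes to: a surgical or assisted-living deployment would keep it large, at the cost of ruling out some genuinely free motions.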
The team has so far demonstrated the algorithm on simulated robots and obstacles.
Prof. Yip presented the new algorithm at the first annual Conference on Robot Learning, held in California from November 13 to 15.