MIT researchers have developed a team of autonomous forest search-and-rescue drones that operate under dense forest canopies using onboard computation and wireless communication.
In a paper being presented at the International Symposium on Experimental Robotics (Nov 5-8), MIT researchers describe their autonomous system, which overcomes the limitations of using GPS in forest environments.
Each autonomous quadrotor drone is equipped with laser rangefinders for position estimation, localisation, and path planning. As the drone flies around, it creates an individual 3D map of the terrain. Algorithms help it recognise unexplored and already-searched spots, and an off-board ground station fuses individual maps from multiple drones into a global 3D map that can be monitored by human rescuers.
In a real-world implementation, though not in the current system, the drones would come equipped with object detection to identify a missing hiker. When located, the drone would tag the hiker’s location on the global map. Humans could then use this information to plan a rescue mission.
“Essentially, we’re replacing humans with a fleet of drones to make the search part of the search-and-rescue process more efficient,” said first author Yulun Tian, a graduate student in the Department of Aeronautics and Astronautics.
The researchers tested multiple drones in simulations of randomly generated forests and tested two drones in a forested area. In both experiments, each drone mapped an area of roughly 20 m² in about two to five minutes, and the drones collaboratively fused their maps in real time. The drones are also said to have performed well across several metrics, including overall speed and time to complete the mission, detection of forest features, and accurate merging of maps.
A LIDAR system is mounted on each drone to create a 2D scan of the surroundings. To distinguish between individual trees – and increase exploratory efficiency – the researchers programmed their drones to identify multiple trees’ orientations. With this method, when the LIDAR signal returns a cluster of trees, an algorithm calculates the angles and distances between trees to identify that cluster.
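The idea of turning a cluster of trees into a recognisable "signature" can be sketched in a few lines. The snippet below is illustrative only, not the researchers' exact feature: it uses the sorted pairwise distances between detected tree centres, which stay the same regardless of the heading from which a drone revisits the cluster.

```python
import itertools
import math

def cluster_signature(trees):
    """Build a rotation-invariant signature for a cluster of trees.

    `trees` is a list of (x, y) tree centres extracted from a lidar scan.
    Sorted pairwise distances are unchanged under rotation and translation,
    so a revisited cluster yields (nearly) the same signature.
    """
    dists = sorted(
        math.dist(a, b) for a, b in itertools.combinations(trees, 2)
    )
    return tuple(round(d, 1) for d in dists)  # rounding tolerates scan noise

def same_cluster(sig_a, sig_b, tol=0.3):
    """Compare two signatures; a match suggests a revisited area."""
    return len(sig_a) == len(sig_b) and all(
        abs(x - y) <= tol for x, y in zip(sig_a, sig_b)
    )
```

A drone approaching the same three trees from a different direction would see rotated coordinates, but the distance-based signature would still match, flagging the area as already visited.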
“Drones can use that as a unique signature to tell if they’ve visited this area before or if it’s a new area,” Tian said.
This feature-detection technique helps the ground station accurately merge maps. The drones generally explore an area in loops, producing scans as they go that are continuously monitored by the ground station. When two drones loop around to the same cluster of trees, the ground station merges the maps by calculating the relative transformation between the drones, and then fusing the individual maps to maintain consistent orientations.
“Calculating that relative transformation tells you how you should align the two maps so it corresponds to exactly how the forest looks,” Tian said.
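Aligning two maps of the same tree cluster is a classic rigid-registration problem. As a sketch (using the standard least-squares Kabsch/Procrustes method; the paper's own solver may differ), the rotation and translation between two drones' views of matched tree positions can be recovered like this:

```python
import numpy as np

def relative_transform(map_a, map_b):
    """Estimate the rigid transform (R, t) aligning map_b onto map_a.

    map_a, map_b: (N, 2) arrays of the *same* tree cluster's positions as
    seen by two drones, with rows matched. Returns R and t such that
    map_a ≈ map_b @ R.T + t.
    """
    ca, cb = map_a.mean(axis=0), map_b.mean(axis=0)  # centroids
    H = (map_b - cb).T @ (map_a - ca)                # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))           # guard against reflection
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = ca - R @ cb
    return R, t
```

Applying the recovered (R, t) to one drone's entire map brings it into the other's frame, so the ground station can fuse them with consistent orientations.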
In the ground station, robotic navigation software called “simultaneous localisation and mapping” (SLAM) – which maps an unknown area while keeping track of an agent inside it – uses the LIDAR data to estimate each drone's position, helping the station to accurately fuse the maps.
The result is a map with 3D terrain features. Trees appear as blocks in shades of blue to green, depending on height. Unexplored areas are dark but turn grey as they are mapped by a drone. On-board path-planning software directs each drone to keep exploring these dark, unexplored areas as it flies around.
According to MIT, a current limitation for practical forest search and rescue use is that the drones still must communicate with an off-board ground station for map merging. In their outdoor experiment, the researchers set up a wireless router that connected each drone and the ground station. In the future, they hope to design the drones to communicate wirelessly when approaching one another, fuse their maps, and then cut communication when they separate. The ground station would then be used to monitor the updated global map.