
Mapping the future: how computer vision will keep autonomous cars on the road


Ben Peters, VP of Product and Marketing at FiveAI, looks at how computer vision is defining navigation for autonomous vehicles.

With the growth in digital services, not least the emergence of autonomous vehicles, what we understand as cartography is evolving. Traditionally, maps have been diagrams of an area showing physical features such as the elevation contours of mountains, the buildings in conurbations and the roads between them. Increasingly, however, maps contain a broad range of additional geospatial data.

Maps are most useful when we can derive our position within them, a problem known as localisation. Google’s Street View product added visual imagery to maps in 2007, allowing people to localise themselves by comparing what they can see with the images stored by Google. More recent efforts by Google, TomTom and Here have involved solving the same problem for autonomous vehicles. All three companies have been developing high-definition 3D maps, in which visual imagery and 3D point-cloud data from LIDAR sensors are combined to create a geometric fingerprint that machines can use to locate themselves within those maps to centimetre-level accuracy. This method of localisation is certainly more accurate and reliable than GNSS (global navigation satellite systems such as GPS), but it has its own problems, which we’ll come to later.
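To make the idea concrete, here is a minimal sketch of the scan-to-map matching that underpins this kind of localisation: a single point-to-point ICP (iterative closest point) step in 2D, aligning a live LIDAR scan against a prior map of landmark points. The data is synthetic and the method deliberately simplified; production systems use much richer 3D registration, but the principle is the same.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_step(scan, map_points):
    """One iteration of point-to-point ICP: align a live 2D LIDAR
    scan to a prior map and return the rigid transform (R, t)."""
    # Match each scan point to its nearest neighbour in the map.
    tree = cKDTree(map_points)
    _, idx = tree.query(scan)
    matched = map_points[idx]

    # Kabsch/SVD solution for the best-fit rotation and translation.
    scan_c, map_c = scan.mean(axis=0), matched.mean(axis=0)
    H = (scan - scan_c).T @ (matched - map_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = map_c - R @ scan_c
    return R, t

# Synthetic example: the "map" is a set of landmark points; the
# "scan" is the same scene viewed from a slightly offset vehicle pose.
rng = np.random.default_rng(0)
map_points = rng.uniform(0, 50, size=(200, 2))
true_offset = np.array([0.3, -0.2])      # 30 cm / 20 cm pose error
scan = map_points - true_offset          # vehicle sees the map shifted
R, t = icp_step(scan, map_points)
print("estimated offset:", t)            # approximately equals true_offset
```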

Once we have localised, we can work out how to get from where we are to where we want to be, a problem known as routing. Electronic 2D maps have been available for satellite navigation systems for decades and are regularly used for planning routes from A to B. Where GNSS units fail or are inaccurate, it’s possible to use various SLAM (simultaneous localisation and mapping) techniques to derive a coarse approximation of location that is sufficient for A-to-B navigation.
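As an illustration of the routing step itself, the sketch below runs Dijkstra’s algorithm over a toy road graph. The junctions and edge costs are invented for the example; real planners search far larger graphs with costs such as expected travel time.

```python
import heapq

def shortest_route(graph, start, goal):
    """Dijkstra's algorithm over a road graph: nodes are junctions,
    edge weights are travel costs (e.g. distance in km)."""
    queue = [(0.0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbour, weight in graph.get(node, []):
            if neighbour not in visited:
                heapq.heappush(queue, (cost + weight, neighbour, path + [neighbour]))
    return float("inf"), []

# Hypothetical toy road network: junction -> [(neighbour, cost_km)]
roads = {
    "A": [("B", 2.0), ("C", 5.0)],
    "B": [("C", 1.5), ("D", 4.0)],
    "C": [("D", 1.0)],
    "D": [],
}
print(shortest_route(roads, "A", "D"))   # (4.5, ['A', 'B', 'C', 'D'])
```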

Historically, however, determining the driveable part of the road (what is pavement and what is road) has been problematic for computer vision systems, so an accurate prior map that distinguishes between the two has been required. In turn, this has driven the requirement for accurate localisation to ensure that early autonomous vehicle prototypes stay where they’re meant to, namely on the road.

As humans we plan a route from A to B, using landmarks and waypoints to localise ourselves along that route as we progress. We use maps to help us prepare for what lies ahead. For example, when we know we will be taking the next exit on a highway, we prepare by staying in the lane adjacent to the exit, even before we can see it. We don’t need a particularly accurate idea of our location to do this; coarse relative localisation is enough to navigate along a route. It’s enough that we can see the exit in the distance, rather than knowing we are 367m from the turn. We continually check that we’re driving in the right position on the road, in the correct lane.

[Image: Volvo autonomous car]

Computer vision techniques have progressed remarkably in the last few years, to the point where machines can reliably distinguish, in real time, between pavement and driveable road, even in the presence of reflections and specularities (such as from surface water). This means that autonomous vehicle technology developed today can cope with only approximate localisation, just as humans do. Complete coverage of the road surface (snow, for example) can still cause problems for computer vision perception systems, much as it does for human drivers. However, in such conditions it’s questionable whether vehicles should be driving at all, whether under autonomous or human control.
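A hedged sketch of how a perception stack might expose this capability: per-pixel semantic segmentation, where a network labels every pixel and the driveable surface is read off as a mask. The model interface and the Cityscapes-style class indices below are assumptions for illustration, not any specific vendor’s API.

```python
import torch
import torch.nn.functional as F

# Assumption: `model` is a semantic segmentation network trained on a
# driving dataset such as Cityscapes, returning per-class logits of
# shape (1, n_classes, H, W). Class indices follow the Cityscapes
# train-ID convention (0 = road, 1 = sidewalk/pavement).
ROAD, SIDEWALK = 0, 1

def drivable_mask(model, image):
    """Return a boolean HxW mask of pixels classified as driveable road.

    image: float tensor of shape (3, H, W), normalised as the model expects.
    """
    model.eval()
    with torch.no_grad():
        logits = model(image.unsqueeze(0))        # (1, n_classes, H, W)
        probs = F.softmax(logits, dim=1)
        labels = probs.argmax(dim=1).squeeze(0)   # (H, W) class per pixel
    return labels == ROAD

# Usage (with any compatible segmentation model):
#   mask = drivable_mask(model, frame)
#   print(f"{mask.float().mean():.1%} of the frame is driveable road")
```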

Though we don’t need high-definition maps for accurate localisation, there are other geospatial data that can assist autonomous vehicles.

As human drivers, when we have driven a route before, we build up local knowledge that we use when driving it again. For example, we know where children tend to cross on their way to and from school. We know the entertainment establishments from which people often stumble into the road of an evening. We know the road sections where other drivers often overtake or drive too fast. We’re continually noting potential dynamic agents (pedestrians, cyclists and cars) and their likely behaviours on a geospatial basis, especially when they pose a potential risk.
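One simple way to represent that kind of local knowledge in software is a geospatial prior keyed by map tile, accumulating observed encounters with dynamic agents. The tile size and the risk weighting below are illustrative assumptions, not a description of any production system.

```python
from collections import defaultdict

TILE_M = 25.0  # illustrative tile edge length in metres

def tile_of(x, y):
    """Map a world coordinate to a discrete map tile."""
    return (int(x // TILE_M), int(y // TILE_M))

class BehaviouralPrior:
    def __init__(self):
        # tile -> {agent type -> number of recorded encounters}
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, x, y, agent):
        """Record a dynamic agent (e.g. 'pedestrian', 'cyclist') at (x, y)."""
        self.counts[tile_of(x, y)][agent] += 1

    def risk(self, x, y):
        """Crude prior risk for a tile: more recorded encounters with
        vulnerable road users means drive more cautiously here.
        The 2x cyclist weight is an arbitrary heuristic."""
        seen = self.counts.get(tile_of(x, y), {})
        return seen.get("pedestrian", 0) + 2 * seen.get("cyclist", 0)

prior = BehaviouralPrior()
prior.observe(110.0, 40.0, "pedestrian")   # e.g. near a school crossing
prior.observe(112.0, 44.0, "pedestrian")
print(prior.risk(115.0, 45.0))             # same tile -> elevated prior (2)
```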

We also build up knowledge of how we like to drive on certain roads. When we have local knowledge, we put it to use. When we don’t have that local knowledge, we exercise caution.

[Image: Bosch autonomous car]

Where available, autonomous vehicles will use rich geospatial data to inform their real-time probabilistic model of the world around them and how they should navigate through it. Some of that data will be cartographic (such as roads and junctions), some will be behavioural (likely intentions of other road users, or optimal driving styles), but all will be uncertain.
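A worked example of what folding an uncertain prior into a real-time belief can look like: a single Bayes update in odds form, where a possibly stale map prior is tempered by live sensor evidence. The numbers are invented for illustration.

```python
def fuse(prior_p, likelihood_ratio):
    """Bayes update in odds form: combine a prior-map belief with
    real-time perception evidence.

    prior_p: P(feature present) according to the (possibly stale) map.
    likelihood_ratio: P(observation | present) / P(observation | absent)
                      from the live sensors.
    """
    prior_odds = prior_p / (1.0 - prior_p)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

# Illustrative numbers: the map says a pedestrian crossing is here with
# 90% confidence, but the cameras see little evidence of one (the
# observation is 4x more likely if no crossing is present).
print(f"{fuse(0.90, 1 / 4):.2f}")   # 0.69: belief drops, but not to zero
```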

Every piece of information in a prior map is dynamic to some degree: road topologies change, trees and vegetation can be removed from one day to the next, and spontaneous events cause changes such as contraflows. That uncertainty, along with the demands it places on real-time perception and scene modelling, will drive the need for huge amounts of compute to power the advanced AI required.

On the other hand, such an approach will be robust to dynamic changes in the environment, and it loosens both the requirement for centimetre-level mapping accuracy and the high-bandwidth connectivity needed to deliver those large map files to the vehicle as regular updates.

Our ability as human drivers to successfully navigate a route in the absence of perfect information is an important part of keeping our traffic systems moving. And it’s a must for autonomous vehicles too.

Ben Peters, VP of Product and Marketing, FiveAI