Guest blog: training autonomous vehicles to cope with adverse weather

The safety of an autonomous vehicle relies on its perception of the world around it, but adverse weather conditions can impair object detection. Pawel Jaworski, Innovation Lead: Connected Autonomous Vehicles at HORIBA MIRA, discusses how developments in machine learning are helping to address this challenge.

Rain, snow and fog have the potential to cause havoc for autonomous vehicles and their object detection systems. These adverse weather conditions cause sharp intensity fluctuations in images and video, degrading image quality and making object recognition more complex. Raindrops create patterns on the image, decreasing its intensity and blurring the edges of objects. Heavy snow can increase the image intensity and obscure the edges of objects, while fog can reduce the contrast of the image, again making edge recognition more difficult.

Adverse weather conditions make object recognition more complex for autonomous vehicles - stock.adobe.com

To overcome this challenge, object recognition systems need to be trained on a dataset of images captured in varied weather conditions. Typically, building such a dataset is complex and costly: it requires installing static cameras and capturing images in a variety of lighting and weather conditions, from dry and sunny through to dark, wet and snowy. However, collating and mapping high-quality, paired image-to-image datasets for every possible condition is prohibitively time-consuming and expensive, and is often simply not feasible in adverse weather.

To overcome this issue, HORIBA MIRA has been investigating a data-driven approach to building this varied dataset using Generative Adversarial Networks (GANs). Driving scene videos are captured in clement weather conditions and augmented with seasonal changes using GANs, producing high-quality video data streams that can be used to evaluate and train autonomous vehicle systems.

What are GANs?

GANs are a form of generative modelling that pits two deep neural networks against each other. Generative modelling automatically learns the patterns within input data so that new examples can be created that plausibly could have come from the original dataset. Two networks are trained simultaneously: the ‘generator network’ and the ‘discriminator network’. The generator tries to create fake but plausible images, while the discriminator tries to distinguish between real and generated samples. The two models are trained competitively, with the parameters of both continuously updated, until the discriminator can no longer distinguish real from generated data. At this equilibrium point the discriminator outputs a probability of 0.5 for both real and generated inputs.
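As a rough illustration of this training loop, the sketch below uses PyTorch with toy fully-connected networks and random tensors standing in for real data; the architecture and hyperparameters are placeholder assumptions, not those used by HORIBA MIRA.

```python
# Minimal GAN training loop sketch in PyTorch. The toy fully-connected
# networks, random stand-in data and hyperparameters are illustrative
# assumptions only, not HORIBA MIRA's implementation.
import torch
import torch.nn as nn

latent_dim, data_dim, batch = 16, 64, 32   # toy dimensions for illustration

generator = nn.Sequential(
    nn.Linear(latent_dim, 128), nn.ReLU(),
    nn.Linear(128, data_dim), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(data_dim, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid(),
)

adversarial_loss = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(1000):
    real = torch.randn(batch, data_dim)    # stand-in for a batch of real samples
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # Generator update: try to make the discriminator label fakes as real.
    z = torch.randn(batch, latent_dim)
    generated = generator(z)
    g_loss = adversarial_loss(discriminator(generated), real_labels)
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

    # Discriminator update: separate real samples from generated ones.
    d_real = adversarial_loss(discriminator(real), real_labels)
    d_fake = adversarial_loss(discriminator(generated.detach()), fake_labels)
    d_loss = 0.5 * (d_real + d_fake)
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()
```

At equilibrium the discriminator's output hovers around 0.5 for both real and generated inputs, matching the description above.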

GANs are among the most successful generative models developed in recent years and have become one of the most active research directions in artificial intelligence. Applications to date include handwritten font generation, image blending and manipulation such as face ageing, medical applications, language and speech synthesis, and music generation, to name a few.

GAN variants: CycleGAN for AV applications

Although GANs are a relatively new technology, a great deal of research has been carried out to refine them and create more powerful variants. Through HORIBA MIRA’s investigations, CycleGAN has proven to be the most useful variant for data augmentation in autonomous vehicle development.

CycleGAN enables unpaired image-to-image translation: it captures the special characteristics of one image collection and learns how those characteristics can be translated onto another image collection, without the need for paired training examples. This is achieved with an extension of the GAN architecture that involves the simultaneous training of two generator models and two discriminator models.

Images of a road showing the applied augmentation - HORIBA MIRA

The first generator takes images from the first domain as input and outputs images in the second domain, while the other generator takes images from the second domain as input and generates images in the first domain. A discriminator for each domain judges how plausible the generated images are, and the generator models are updated accordingly.
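A minimal sketch of the four networks involved might look like the following; the tiny convolutional models are placeholders (published CycleGAN implementations typically use ResNet-based generators and PatchGAN discriminators), and the clear/adverse domain names are assumptions for illustration.

```python
# Placeholder sketch of the four CycleGAN networks: two generators and two
# discriminators. The small convolutional models below are illustrative only.
import torch.nn as nn

def make_generator():
    # Image-to-image network: maps a 3-channel image to a 3-channel image.
    return nn.Sequential(
        nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
        nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
    )

def make_discriminator():
    # Image-to-score network: judges whether an image looks real in its domain.
    return nn.Sequential(
        nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        nn.Conv2d(64, 1, 4, stride=2, padding=1),
    )

clear2adverse = make_generator()    # first domain (clear) -> second domain (adverse)
adverse2clear = make_generator()    # second domain (adverse) -> first domain (clear)
d_adverse = make_discriminator()    # judges images in the adverse-weather domain
d_clear = make_discriminator()      # judges images in the clear-weather domain
```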

An additional ‘cycle consistency’ constraint encourages the translations to be reversible. An image output by the first generator is used as input to the second generator, and the resulting output should match the original image; the reverse also applies, with an output from the second generator fed back through the first. An identity loss term can also be added so that the colour composition is maintained between the input and output images.
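Putting the adversarial, cycle-consistency and identity terms together, the combined generator objective might be sketched as follows, assuming the networks defined above; the loss weightings are typical values from the CycleGAN literature, not confirmed settings from this work.

```python
# Sketch of the combined generator objective, assuming the clear2adverse,
# adverse2clear, d_adverse and d_clear networks from the previous sketch.
# The weightings (10.0 and 5.0) are common choices, not confirmed settings.
import torch
import torch.nn as nn

mse = nn.MSELoss()   # least-squares adversarial loss
l1 = nn.L1Loss()     # pixel-wise loss for the cycle and identity terms

def generator_objective(clear, adverse, clear2adverse, adverse2clear,
                        d_adverse, d_clear):
    """Loss for both generators on one batch of unpaired clear- and
    adverse-weather images."""
    fake_adverse = clear2adverse(clear)     # clear -> adverse translation
    fake_clear = adverse2clear(adverse)     # adverse -> clear translation

    # Adversarial terms: each generator tries to fool its discriminator.
    pred_adverse = d_adverse(fake_adverse)
    pred_clear = d_clear(fake_clear)
    adv = (mse(pred_adverse, torch.ones_like(pred_adverse))
           + mse(pred_clear, torch.ones_like(pred_clear)))

    # Cycle consistency: translating there and back should recover the input.
    cycle = (l1(adverse2clear(fake_adverse), clear)
             + l1(clear2adverse(fake_clear), adverse))

    # Identity terms: an image already in the target domain should pass
    # through its generator largely unchanged, preserving colour composition.
    identity = (l1(clear2adverse(adverse), adverse)
                + l1(adverse2clear(clear), clear))

    return adv + 10.0 * cycle + 5.0 * identity
```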

HORIBA MIRA’s DigiCAV

Having established the most effective GAN variant, HORIBA MIRA can use it to augment and extend the training datasets required for autonomous vehicle development. High-quality videos of road driving scenarios are split into their constituent static frames. The trained CycleGAN model is applied to each frame, and a new video is built from the augmented images featuring the different GAN-generated weather conditions.
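A simple sketch of that frame-by-frame pipeline, using OpenCV, might look like the following; the augment_frame function is a hypothetical wrapper around the trained generator, and the file names are placeholders.

```python
# Sketch of the frame-by-frame augmentation pipeline using OpenCV. The
# augment_frame argument is a hypothetical function wrapping the trained
# CycleGAN generator (mapping one BGR frame to its augmented counterpart).
import cv2

def augment_video(src_path, dst_path, augment_frame):
    """Split a driving video into frames, augment each one, and rebuild."""
    reader = cv2.VideoCapture(src_path)
    fps = reader.get(cv2.CAP_PROP_FPS)
    width = int(reader.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(reader.get(cv2.CAP_PROP_FRAME_HEIGHT))
    writer = cv2.VideoWriter(dst_path, cv2.VideoWriter_fourcc(*"mp4v"),
                             fps, (width, height))

    while True:
        ok, frame = reader.read()
        if not ok:
            break                              # end of the source video
        writer.write(augment_frame(frame))     # e.g. clear -> snowy translation

    reader.release()
    writer.release()

# Example usage with a hypothetical generator wrapped as a function:
# augment_video("clear_drive.mp4", "snowy_drive.mp4", snow_generator)
```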

The resulting video data streams enhance the training and enable evaluation of AV perception systems. This technology is now being integrated with HORIBA MIRA’s Digital Connected and Autonomous Vehicle Proving Ground (DigiCAV) platform, allowing us to ‘control’ the weather. Sitting within the wider ASSURED CAV test ecosystem, it supports the simulation and augmented physical testing of connected and autonomous vehicle technologies, enabling accurate object recognition in varied weather and lighting conditions for the safer deployment of autonomous vehicles.

Pawel Jaworski, Innovation Lead: Connected Autonomous Vehicles at HORIBA MIRA