The truth will out

Lie detection could soon become a lot more sophisticated than a few bobbies quizzing suspects in an interview room. Rob Coppinger reports on potential new technologies.

The year is 2010, and an immigration officer at New York’s JFK Airport questions passengers newly arrived from London. ‘What is your business in the United States? How long do you plan to stay?’

But the officer is no longer relying on intuition alone to decide if he is being told the truth. Unseen by the passengers, a thermal imaging camera monitors the bloodflow around their faces and reacts to the changes that occur due to the anxiety of lying.

If this sounds implausibly Orwellian even in the current state of global paranoia, the US Department of Defence would not agree. The Engineer can reveal that the DoD has just begun a concerted effort to develop new lie-detection technologies, committing almost $2m (£1.25m) to its ‘deception-detection programme’ this year alone.

The work will be done by the Defence Advanced Research Projects Agency (Darpa), the US military’s hi-tech thinktank behind such advances as the robot tank and the unmanned aircraft. A Darpa spokeswoman confirmed that the programme’s aim was to develop technologies capable of ‘detecting individuals involved in high-stakes deception’.

She said it would concentrate on non-contact sensor systems that can measure key indicators such as micro-gestures, expressions, facial temperature, eye motion, brain activity and heart rate. ‘The scenario envisaged for this technology is the questioning of an individual at a military checkpoint,’ she added.

This burst of activity in the US is partly prompted by the huge investment in security technologies seen as vital to the ‘war on terror’. Another key goal is to create technologies that can be applied non-invasively and without the consent, or even knowledge, of the subject.

It also underlines, however, the waning star of the polygraph, the most venerable lie-detection technology in widespread use around the world but one whose efficiency has been repeatedly questioned. US confidence in the polygraph was badly shaken earlier this year by a report from the prestigious US National Academy of Sciences (NAS). The report concluded that the nature of the human responses the process relies on 'makes polygraph testing intrinsically susceptible to producing erroneous results'.

NAS also criticised the US government for having never properly researched alternative lie-detection systems. It stated: ‘There has been no serious effort by the US government to develop the scientific basis for psycho-physiological detection of deception by any technique, even though criticisms of the scientific grounding of polygraph testing have been raised prominently for decades.’

Darpa’s hunt for new lie-detection techniques will examine the detailed merits of the polygraph against rival technologies such as facial thermal imaging, MRI scanning and the UK’s own Silent Talker – a video camera attached to an artificial intelligence system (see sidebar).

But even as it begins the search it seems clear that any alternative is likely to be subject to the same doubts as the polygraph. To be any more credible than its predecessor, a replacement to the polygraph would need to achieve a very high accuracy rate. Specialists working in the area agree that the challenge is huge. Dr Ioannis Pavlidis, an associate professor at the University of Texas, has pioneered studies of the use of a thermal imaging camera system for lie detection. His system monitors the warmth from bloodflow in the face, which can change when people are anxious because they are lying.

Pavlidis first investigated the technology's potential as a stress-measurement tool for medical engineering giant Honeywell in conjunction with the Mayo Clinic, a US hospital chain. Its possible application to lie detection gradually became apparent, and the technology is now pressing a strong claim as an alternative to the polygraph. According to Pavlidis, the US authorities are concerned that other types of biometric security technologies are not completely reliable. An effective lie-detection system such as thermal imaging could, they believe, plug the gaps.

Facial-recognition systems are widely acknowledged to be less than foolproof because they can be baffled by beards and ageing. Rock-solid authentication techniques such as fingerprinting, iris recognition and DNA profiling only work if an individual’s sample is already held on a database. Also, many biometrics cannot be applied remotely and without the subject’s knowledge and co-operation.

Thermal imaging, by contrast, can be fully automated and could even be carried out over the internet. ‘The polygraph cannot be administered online,’ said Pavlidis. ‘You have to strap the subject in and you have to have an expert with them throughout the process.’

Although his technology is still firmly in the development phase, with funding for at least three more years of research, Pavlidis is already confident enough to make a vigorous public case for the accuracy levels it can achieve compared to the polygraph. He claimed the latest stage of research had brought the accuracy of the process up to ‘clinical standards’ and that it could reach a point where real-world application of the technology would be viable at locations such as border crossings and in other one-off screening situations.

But despite the ever-improving performance of thermal imaging, Pavlidis accepted that large question marks persist over the theoretical basis of lie-detection technology (see sidebar). 'Further research is needed, and that's what the Darpa project is doing,' he said. He also conceded that thermal imaging, while able to achieve a high level of performance, could never be 100 per cent correct.

In Pavlidis’s view, only brain scanning using MRI technology could achieve extremely high levels of confidence.

There are just three researchers in the world looking at functional MRI scanning and its application to lie detection. One is Dr Daniel Langleben at the University of Pennsylvania, whose research has found that when test subjects lied, scans of their brains displayed a distinctive pattern associated with concentration and errors, suggesting the person had to think hard about lying.

'If truth was the brain's normal default response, lying would require increased brain activity in the regions involved in inhibition and control,' said Langleben. He has just secured funds for further research from several US government agencies including the National Institutes of Health. The new cash will cover two years of research to demonstrate the accuracy and validity of MRI scanning for lie detection.

But Langleben urged caution, stressing that his research is not at a stage where a lie-detection machine can be built. He explained that the percentage change in brain activity between lies and truth was tiny, meaning that the accuracy of the technology depends on working within extremely tight parameters.

Other academics have questioned whether MRI will ever be usable in lie detection. Dr Richard Wiseman, a researcher at the psychology department of the University of Hertfordshire, found MRI inadequate. ‘We did a pilot study but didn’t get the striking results one would want to justify a follow-up,’ he said.

It seems that the polygraph’s status as the best lie-detector system available may be secure for some time yet. Research into new applications of the existing technology certainly continues unabated. One of the most controversial relates to the use of the polygraph on convicted sex offenders when asking them about their recent behaviour.

Led by Prof Don Grubin at the University of Newcastle, researchers claimed that a significant number of offenders who took part in a pilot study had changed their behaviour after being subjected to lie detection. Several local probation services have now begun using the polygraph as a result of the research, and moves are underway to train personnel specifically for the evaluation of sex offenders using the technique.

Whether through existing or emerging technology, and with the heavyweight backing of the US authorities, lie detection looks set to assume an ever more prominent role in the fight against crime and terrorism. But when high technology meets the mysteries of the human brain the results may always be unpredictable.

Prof Ray Bull, a specialist in the psychology of deceit at the University of Portsmouth, poses just one of the big questions confronting researchers in the field. ‘Some people will believe that they are telling you the truth when in fact it’s an untruth. And you’ll never be able to detect that,’ he claimed. True or false?

Sidebar: Comparison of technologies

The many and varied strands of technological research into lie detection all have one thing in common. They rely on ultra-sensitive analysis of the minutiae of the human body, its characteristics and reactions.

The longest-established, the polygraph, monitors an individual’s heart rate and skin conductivity – a measure of how much people are sweating. Since its invention early in the 20th century the polygraph has evolved into the pre-eminent lie-detection technology, albeit one dogged by controversy over how reliable it really is. Accuracy rates are usually put at between 60 and 70 per cent.

The polygraph is used widely in the US and several other countries, and the UK government briefly flirted with its use in the 1980s following a spy scandal at the top-secret GCHQ. The government commissioned the British Psychological Society to produce a report on the polygraph's possible use in the security services, but its conclusions were negative and the idea was subsequently shelved.

Of the emerging technologies thermal imaging is one of the furthest down the road towards practical application. Its developers realised that the heat-detecting properties of a thermal camera could, when applied to the human face, create a 2D or 3D ‘map’ of bloodflow.

The detailed imagery thermal cameras can capture – particularly of the all-important region immediately around the eyes – is processed by software to detect the unconscious and uncontrollable changes in bloodflow that occur due to the anxiety of telling a lie. Developers of thermal imaging claim to have achieved accuracy levels of around the 80 per cent mark.
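The principle described above – comparing warmth around the eyes during questioning with a relaxed baseline – can be sketched in a few lines. This is a toy illustration only: the temperature readings, window sizes and threshold below are invented, and a real system would work on calibrated thermal video, not a handful of numbers.

```python
# Toy sketch of the periorbital-warming cue described above.
# All values and the 0.3 deg C threshold are invented for illustration.

def mean(values):
    return sum(values) / len(values)

def periorbital_warming(baseline_frames, question_frames, threshold=0.3):
    """Compare mean periorbital temperature (deg C) between a relaxed
    baseline window and a questioning window; flag a rise above threshold."""
    rise = mean(question_frames) - mean(baseline_frames)
    return rise, rise > threshold

# Hypothetical readings: slight warming around the eyes under questioning.
baseline = [34.1, 34.0, 34.2, 34.1]
question = [34.6, 34.7, 34.5, 34.6]
rise, flagged = periorbital_warming(baseline, question)
```

Even this crude version shows why the approach is contested: the threshold separating honest anxiety from deceptive anxiety has to be chosen somehow, and no single value fits every subject.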

Magnetic resonance imaging (MRI) has long been one of the most useful of all technologies in the medical world. Now some believe it is powerful enough to uncover deception in the depths of the human brain.

MRI uses extremely powerful magnets and radio-frequency pulses that interact with the hydrogen nuclei in the water molecules of brain tissue. Functional MRI tracks the resulting signal changes linked to blood oxygenation, allowing it to map brain activity in millimetre-scale regions.

MRI has helped neurologists understand exactly which parts of the brain do what. As that understanding has become ever more refined, lie-detection researchers have identified distinctive patterns of activity in the anterior cingulate gyrus and parts of the prefrontal and pre-motor cortex. These areas, they believe, hold the key to determining whether a response is true or false.

UK research into lie detection has concentrated on the monitoring of facial expressions and body language by machine intelligence. A computer attached to a video camera uses artificial intelligence (AI) systems trained to recognise and track minute changes in the face and upper body.

The system's neural network learns the deceit signals carried by the face's micro-gestures, expressions and body language, building them up into a 'deceit pattern'. This forms the basis of Silent Talker, developed by universities in the Northwest of England with accuracy rates in trials of up to 87 per cent.
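The fusion of several behavioural 'channels' into one deceit score can be sketched with the simplest neural building block, a single logistic unit. This is emphatically not Silent Talker's real network – the channel names, weights and inputs below are all invented for illustration – but it shows the basic idea of weighting noisy cues and combining them into a probability-like score.

```python
import math

# Hypothetical channel weights -- NOT Silent Talker's actual model.
WEIGHTS = {"gaze_aversion": 1.2, "micro_gesture_rate": 0.9, "posture_shift": 0.6}
BIAS = -1.5

def deceit_score(channels):
    """Combine per-channel activations (each 0..1) into a score in (0, 1)
    using a single logistic unit, the simplest 'neural' component."""
    z = BIAS + sum(WEIGHTS[name] * value for name, value in channels.items())
    return 1.0 / (1.0 + math.exp(-z))

calm = deceit_score({"gaze_aversion": 0.1, "micro_gesture_rate": 0.2,
                     "posture_shift": 0.1})
agitated = deceit_score({"gaze_aversion": 0.9, "micro_gesture_rate": 0.8,
                         "posture_shift": 0.7})
```

A real system would learn such weights from many labelled examples rather than fixing them by hand – which is precisely where the sceptics' objection bites, since the training labels themselves presuppose a reliable ground truth about who is lying.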

One of the most ambitious projects concerns ‘brain fingerprinting’, developed by Dr Lawrence Farwell at the Human Brain Research Laboratory in Iowa. Farwell’s method scans brain activity in response to a particular fact, and claims to be able to detect the distinction the brain makes between a known fact and something it is seeing or hearing for the first time.

A subject who claims no knowledge of a fact – a detail from a crime scene, for example – while their brain indicates otherwise is identified as a liar.

Sidebar: The science of deception

The massive technical challenges of developing systems that can tell truth from lies reflect the complexity of the whole science of deceit – a blend of psychology, physiology and raw human emotional response.

However, decades of research have enabled psychologists to identify some key physical attributes related to deceit.

This is because lying is believed to be stressful, and certain physiological characteristics are linked with this stress. The technical term used is 'emotional leakage', which occurs via 'channels'. These channels can be eye movement, speech tremor, body posture, heart rate, respiration and general levels of anxiety.

Lie-detection technologies monitor these channels for significant indicators of deceit. But some question whether the whole field of study is anything more than an expensive wild goose chase.

Prof Ray Bull, a specialist in the psychology of deceit at Portsmouth University, pointed out that it is possible for an individual to train themselves to suppress the channels that would reveal their lies. Despite a wealth of theory and research carried out in the area, Bull is highly sceptical about the generalisations that are inevitably needed to construct a lie detection system.

‘You have to make assumptions about how that stress manifests itself which cannot be right all the time, and do not apply to everyone,’ claimed Bull.

Furthermore, like all new technologies, lie-detection systems rely on a lengthy and exhaustive R&D process, a crucial part of which involves ‘live’ testing on actual subjects.

But how reliable can this be, given the complex psychology involved in the whole process of falsehood?

The US National Academy of Sciences raised this issue in its recent report on the subject. 'Laboratory studies suffer from lack of realism,' it noted. 'In the randomised, controlled studies focused on specific incidents using mock crimes, the consequences associated with lying, or being judged deceptive, almost never mirror the seriousness of these actions in real-world settings.'