Could technology help catch out lying politicians?


Voice recognition systems are becoming good enough to understand what people say in real time. Now we need artificial intelligence that can check whether they’re telling the truth.

‘Political language is designed to make lies sound truthful and murder respectable’, wrote George Orwell in 1946. Since then, our faith in politicians to tell the truth certainly doesn’t seem to have increased.

Voter turnout at UK general elections has fallen from 73 per cent to 65 per cent since the end of the Second World War, and it’s easy to speculate that disaffection, fed by a sense that you can’t believe what politicians say, has been a factor in that.

In the recent US election, an entire section of the media appeared to be devoted to fact-checking the candidates’ speeches, with commentators calling out both sides for using data selectively, misleadingly or just plain falsely. Some websites even started fact-checking the fact-checkers.

We’ve seen a similar situation arise in the UK with the likes of Channel 4 News and The Guardian running blogs examining politicians’ claims in the hours after they’ve made them.

Prime minister David Cameron was recently caught out by several publications from across the political spectrum for claiming the government was ‘paying down Britain’s debts’ when it’s actually borrowing more.

But these articles often go unread by the majority of the population, who may only hear the false claims. (Whether they believe them or not is another matter.) So what if there was a way to instantly check facts and reveal whether a politician was telling porkies?

Hermann Hauser, co-founder of Acorn Computers and elder statesman of the British computing industry, thinks the technology could soon exist to provide such a solution.

Speaking at a recent lecture organised by networking group Cambridge Wireless, he revealed he had had conversations with bosses at Google about the possibility of an ‘evidence meter’.

‘The idea is if voice recognition is good enough, which it clearly is now, it can run continuous voice recognition at the bottom of your TV screen whenever they interview David Cameron or the opposition leader,’ he said.

‘You could then have a little graph at the bottom of your screen that varies between plus-one and minus-one so when David Cameron says the unemployment rate in Britain is 7.8, it can go away, find evidence as to whether it is 7.8, and shoot up to the plus-one position.

‘So this running evidence-meter below the news item I think could be a very cool thing to implement. I don’t think it’s that far away.’
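To picture how such a meter might behave, here’s a minimal sketch in Python. Everything in it is an assumption made for illustration: the speech is taken as already transcribed, the reference figure is an invented placeholder rather than a real statistic, and the scoring rule is arbitrary. A real system would have to query live, trusted data sources.

    # A minimal sketch of the 'evidence meter' idea, not a real product.
    # Assumptions: the speech has already been transcribed to text, and the
    # reference figure below is an invented placeholder, not a real statistic.
    import re

    REFERENCE_FACTS = {
        'unemployment rate': 7.8,  # per cent; a real meter would query live sources
    }

    def evidence_score(transcript):
        """Return a score between -1 (contradicted) and +1 (supported)."""
        for topic, reference in REFERENCE_FACTS.items():
            if topic in transcript.lower():
                match = re.search(r'\d+(?:\.\d+)?', transcript)
                if not match:
                    return 0.0  # checkable topic, but no figure to verify
                claimed = float(match.group())
                # The score falls as the claimed figure drifts from the evidence.
                error = abs(claimed - reference) / reference
                return max(-1.0, 1.0 - 2.0 * error)
        return 0.0  # nothing checkable in this sentence

    print(evidence_score('The unemployment rate in Britain is 7.8'))  # 1.0
    print(evidence_score('The unemployment rate in Britain is 12'))   # below zero

Even in this toy form, the hard parts are visible: the meter is only as good as the reference data it can reach, and it has to work out which figure a sentence is actually claiming.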

It sounds like a great idea, and certainly it seems that he’s right about voice recognition technology. Many of us now have smartphones with voice technology good enough to search the internet or dictate and send messages without us touching a button (or screen). And a recent Microsoft demo showed its system offering near-real-time voice translation from English into Chinese.

But there’s a much bigger question over the level of artificial intelligence you would need to create the kind of fact-checking technology Hauser is talking about.

‘The problem is the knowledge that would be required for responding to the queries,’ said Anthony Hunter, professor of artificial intelligence (AI) at University College London (UCL).

‘If the queries were within quite a restricted domain then this is perfectly possible. But [for political speeches] the domains would, by and large, be too broad that you could have a significantly broad knowledge base to check those facts … I think AI and natural language processing have some way to go to address those problems.’

Even if an evidence meter could quickly search the entire internet, it would probably also run into the same problem that human listeners have. A statement might be technically true, but taken out of context it could mislead listeners or leave out important information about the wider situation.

This is something that professional fact-checkers have become quite good at spotting. For example, Channel 4 recently published an article checking a statement by business secretary Vince Cable that the number of people starting apprenticeships had almost doubled in the last two years.

While it found this was correct, the blog also pointed out that the proportion of people finishing apprenticeships had gone down, as had the number of under-18s starting schemes.

And, of course, Cable didn’t say how many of those apprentices were working at the likes of Rolls-Royce and how many were learning to stack shelves in Tesco.
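A toy example makes the limitation concrete. The figures below are invented, not the real apprenticeship statistics: a checker that only tests the literal number waves the claim through, because the context that complicates the picture is never part of the question it asks.

    # Invented figures for illustration, not the real apprenticeship statistics.
    DATA = {
        'starts_2010': 280_000,
        'starts_2012': 520_000,
        'completion_rate_2010': 0.76,
        'completion_rate_2012': 0.72,
    }

    # The literal claim checks out...
    ratio = DATA['starts_2012'] / DATA['starts_2010']
    print("'starts almost doubled' holds:", 1.8 <= ratio <= 2.2)  # True

    # ...but the fuller picture sits in the same dataset, and a naive
    # evidence meter is never asked the second question.
    print('completion rate also rose:',
          DATA['completion_rate_2012'] > DATA['completion_rate_2010'])  # False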

There is a branch of artificial intelligence known as expert systems that deals with the problem of making decisions using a database of knowledge, but the initially ambitious predictions made for it gave way to disappointment and a much more modest approach to its use.

‘In the 1980s there was this idea that AI was going to develop quite rapidly and that by 2000 people like doctors and lawyers and so on would be finding themselves redundant,’ said Hunter. ‘But quite quickly it was realised that for a whole range of reasons it’s very difficult to emulate experts.

‘You can build systems that do very particular tasks but to broaden them out is extremely difficult. And to replace all of the interpretation and common sense and so on that humans bring to bear on problems is very, very difficult.’
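To make the expert-systems idea concrete, here’s a toy sketch of the classic approach: knowledge encoded as explicit if-then rules, which a simple engine applies to known facts until nothing new follows. The rules and facts are invented for illustration only.

    # A toy forward-chaining expert system. Knowledge lives in explicit
    # if-then rules; the engine applies them until no new conclusion appears.
    RULES = [
        # (facts required, conclusion drawn)
        ({'borrowing is rising'}, 'debt is not being paid down'),
        ({'apprenticeship starts doubled', 'completions fell'},
         'headline figure needs context'),
    ]

    def forward_chain(facts):
        """Repeatedly apply the rules until nothing new can be concluded."""
        known = set(facts)
        changed = True
        while changed:
            changed = False
            for required, conclusion in RULES:
                if required <= known and conclusion not in known:
                    known.add(conclusion)
                    changed = True
        return known - set(facts)

    print(forward_chain({'borrowing is rising'}))
    # -> {'debt is not being paid down'}

The brittleness Hunter describes is visible even at this scale: the system can only conclude what someone has already written a rule for, and widening it to a new topic means writing more rules by hand.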

So even with the incredible advances in voice recognition, it doesn’t look likely that journalists like myself will be put out of a job by fact-checking computers any time soon. (Phew!)