Machine vision

Summer is usually the slacker part of the year for technology stories; university vacations mean that researchers take breaks, like the rest of us. But we’ve seen some intriguing ideas this week. The Israeli project looking at the structure of teeth to inspire new composite materials for aerospace made our jaws drop. A paper from mathematicians describing a new cloaking method that shields devices by actively generating electromagnetic fields, rather than relying on ‘metamaterials’, was eye-opening. And a project from Glasgow University to use tiny polymer tubes lined with microscale patterns to heal neural damage touched a nerve.

But it’s a debate among engineering ethicists, technologists and legal experts that focused our attention. The Royal Academy of Engineering’s report on the possible implications of autonomous systems painted a striking vision of a future which, it appears, is set to creep up on us and have a major impact on our lives, yet which sounds like something out of a fictional dystopia.

Artificial intelligence and sensor technologies have advanced to a point where autonomous systems are on the verge of breaking through into society in a big way, the report argues. Unmanned freight trucks could be plying the motorways within ten years, each equipped with its own control system, radar, lidar and cameras. In the home, the technology behind robot vacuum cleaners could be combined with machine intelligence to produce something like a robot pet, which could provide companionship to isolated elderly people. ‘It may seem odd, and it’s certainly not as good as regular human contact, but it’s much better than nothing,’ commented one of the report’s authors, Lambert Dopping-Hepenstal of BAE Systems. Your author knows people who name their household appliances; it’s not too big a jump.

The ethical and legal frameworks underpinning the introduction of these technologies are uncharted ground, and it’s heartening to see serious thought being put in now to guide the development of the devices, rather than regulations being hurriedly cobbled together once the technology is already a fait accompli. Robot surgeons — completely autonomous ones, rather than the teleoperated devices currently in use — could be more accurate and precise than humans, but if something went wrong and the patient died, what would the public reaction be? And who, or what, would be culpable?

There are also ethical implications on the battlefield. If autonomous armed systems are sent into combat against a human adversary, is that ‘fair’ warfare? Should warfare involve risk to humans on both sides? It’s a matter for philosophy as much as technology, and the traditional ‘Two Cultures’ divide between arts and science is ill-equipped to cope with such questions.

So while we’re intrigued by the possibilities of self-driving taxis that turn up at your door and take you wherever you want to go — a logical extension of the technology that operates the new personal transport system at Heathrow’s Terminal Five — we’re also concerned about how such systems will affect society. It seems that, more than ever, we need people to address these concerns; people who government, industry and the public will listen to. We need a Chief Engineer.

Stuart Nathan
Special Reports Editor