Hack to the future

They’re nightmare scenarios. You’re driving your gadget-packed car and suddenly the self-driving system malfunctions and takes you off the road. Or your pacemaker fails due to an errant signal from the modem that connects it to your doctor’s office.

The “internet of things” concept means more and more everyday objects, from fridges to the food inside them, are already connected to the internet or on their way to being. And where there’s a link, there’s a way for hackers to take control or for viruses to find their way in.

Of course, a cyber attack on a fridge is unlikely and, at worst, annoying. But assaults on pacemakers and cars are a very real and dangerous possibility. And it doesn’t take an intentional attack from a cyber criminal, whether one targeting specific individuals or one who is simply bored, for harm to be done.

One of the biggest cyber threats to the public is from the unintended consequences of attacks, according to Andrew Beckett, head of cyber defence services at Cassidian. ‘If you release a cyber weapon over the internet, you don’t know what will be affected and where,’ he says. ‘Even if the target is based on a specific hardware/software configuration, you can’t be sure it’s the only one to have that configuration.’

The Stuxnet worm, for example, was allegedly created by the US government to attack Iran’s nuclear facilities but ended up infecting computer systems around the world including those of US oil company Chevron.

Cyber warfare might seem attractive because it doesn’t involve bloodshed, but the potential collateral damage or the danger of weapons being used against their creators means the consequences could be more far-reaching than traditional conflict. If a country’s power network were disabled, for example, it would throw transport, healthcare and communications systems into chaos and ultimately be responsible for a high number of deaths.

Does such a threat really extend to everyday internet-enabled objects, though? ‘The risks and threats go across everything,’ says Beckett. ‘We were involved in some research showing it is already possible to hack some internet-enabled cars and take them over. We were able to turn off the brakes of a car in motion, and there are similar things you can do. The more things you connect to the internet, unless you secure them properly, the more potential for damage there is.’

This is starting to sound scary. How worried should we be? And will the technology come along to protect us from cyber attacks? Firstly, Beckett points out, there are often simple steps people can take to keep systems secure, such as not using unknown memory sticks or opening attachments from suspicious emails, things that many people still do despite the well-known dangers.

Manufacturers can also build in protections. For example, if a car has a two-way communication capability so a mechanic can assess remotely whether it needs to come in for a service, that link makes the car far more vulnerable if its critical systems can be reached through the same channel, so those systems need to be kept isolated from it.
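One way to enforce that separation is a gateway between the remote link and the vehicle’s internal network that forwards only an explicit allowlist of read-only diagnostic messages and drops everything else. The sketch below illustrates the idea in Python; the message IDs, names and protocol convention are invented for illustration and do not correspond to any real vehicle.

```python
# Hypothetical diagnostic gateway: the remote telematics link may only
# *read* a fixed set of status parameters. Anything else -- including
# any frame that could actuate a critical system -- is dropped.
# All IDs and the "byte 0 == 0x00 means read" rule are assumptions.

READ_ONLY_ALLOWLIST = {
    0x7E0: "engine_status",    # illustrative diagnostic request IDs
    0x7E1: "battery_voltage",
    0x7E2: "service_counter",
}

def gateway_filter(msg_id: int, payload: bytes) -> bool:
    """Return True only if a frame may cross from the remote link
    to the internal bus; fail closed on anything unrecognised."""
    if msg_id not in READ_ONLY_ALLOWLIST:
        return False
    # In this made-up protocol, a leading 0x00 marks a read request;
    # anything else is treated as a write attempt and rejected.
    return len(payload) > 0 and payload[0] == 0x00

# An unknown (potentially actuation) frame is dropped; a status read passes.
assert gateway_filter(0x1A0, b"\x02\xff") is False
assert gateway_filter(0x7E0, b"\x00") is True
```

The design choice that matters is the default: the gateway rejects everything it does not positively recognise, so forgetting to list a message fails safe rather than open.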

But while there are some improved encryption techniques on the horizon, it’s rare for current protection systems themselves to be broken. It’s far more likely that a cyber attack will exploit a weakness in how they are implemented. As with many things, we can’t rely solely on technology to solve our problems.

Readers' comments (3)

  • In the same vein, I seem to remember the FAA in America refusing certification for, or imposing a new regulation on, the Boeing Dreamliner to prevent passengers crossing from the passenger information systems into the flight systems, which shared a physical network link.

  • We need to train our HR personnel, too. The same people who hack into systems for fun were probably refused a job because they lacked formal, documented qualifications. For a technical job the technicians need to make the decision, not some HR staff.
    Intended hacking has a motivation.
    Unintended disruption needs to be handled by systematic and thorough risk management.

  • Complex systems have more failure points. Operating systems and applications have many more of these now.

    Businesses (and users) seem to insist on having monocultures (like a certain operating system I won't name) so any exploit is immediately useful on millions of machines. It's stupid and anyone could see it coming - it's just ignored because it was much easier to lock oneself into a single proprietary system and pretend that company X would keep you safe.

    In the Linux world the solution to the complexity issue is to have a *very* rapid update process so if you use some standard distribution you might be downloading updates every second day. This means that at least exploitable vulnerabilities cease to be useful eventually.

    The problem is the number of people who don't update, sticking with end-of-life versions.

    If you're designing something with a computer in it I would suggest that you should consider a few things:
    1) How will you provide very rapid and effortless updates to the computing components of your product to fix security issues?
    2) If it's internet connected then such updates must also be downloaded from the internet and must be free and automatic. i.e. your response-to-threat mechanism must be at least as powerful/fast as the threat. If you can't stomach rapid software updates then don't connect the damn thing to the internet in the first place.
    3) Don't think merely of OS viruses. You might have a very secure OS but someone's application can be infected if it has a sufficiently capable macro language e.g. javascript in web pages.
    4) Write platform-independent code - don't get stuck on some insecure system with no hope of escape.
    5) Don't believe anyone who tells you their system is totally secure. They are idiots.
    6) Think about security on day 1 of your design.
    7) If you make the human component of security too onerous then people will subvert your measures.
    8) Design with the assumption that parts of your system have been "compromised." Try to limit the worst possible thing that can happen based on that level of "penetration". e.g. adding a physical lock/key which software cannot "get around" to turn something on.
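    Point 2's "free and automatic" update requirement can be sketched as a verify-before-apply check: the device accepts an update package only if it matches a vendor-signed manifest, and refuses it otherwise. The sketch below uses Python's standard library; the manifest format is an assumption, and an HMAC with a shared secret stands in for the asymmetric signature a real vendor would use.

    ```python
    # Sketch of an automatic-update gate: verify integrity and
    # authenticity before applying, and fail closed on any mismatch.
    # Manifest format and key handling are illustrative assumptions.
    import hashlib
    import hmac

    VENDOR_KEY = b"example-shared-secret"  # real systems: public-key signatures

    def verify_update(package: bytes, manifest_digest: str, signature: bytes) -> bool:
        """Accept an update only if it matches the signed manifest."""
        # 1. The package must hash to the digest the manifest claims.
        if hashlib.sha256(package).hexdigest() != manifest_digest:
            return False
        # 2. The digest itself must carry a valid vendor signature
        #    (constant-time comparison avoids timing leaks).
        expected = hmac.new(VENDOR_KEY, manifest_digest.encode(), hashlib.sha256).digest()
        return hmac.compare_digest(expected, signature)

    pkg = b"firmware v2.1"
    digest = hashlib.sha256(pkg).hexdigest()
    sig = hmac.new(VENDOR_KEY, digest.encode(), hashlib.sha256).digest()
    assert verify_update(pkg, digest, sig)             # genuine update is accepted
    assert not verify_update(b"tampered", digest, sig) # altered package is refused
    ```

    This also illustrates point 5: the check doesn't assume the download channel is secure; it verifies every package regardless of where it came from.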
