Safer software

There is now a way to mathematically prove that the software governing critical safety and security systems in aircraft and motor vehicles is free of a large class of errors - long before the plane takes off or the car’s engine starts.

All thanks to researchers at NICTA, Australia’s Information and Communications Technology (ICT) Research Centre of Excellence, who have completed the world’s first formal machine-checked proof of a general-purpose operating system kernel.

The Secure Embedded L4 (seL4) microkernel, which has been designed for real-world use, has potential applications in defence and other safety- and security-critical industries where the flawless operation of complex embedded systems is of critical importance.

'It is hard to comment on this achievement without resorting to clichés,' said Prof Lawrence Paulson at Cambridge University’s Computer Laboratory. 'Proving the correctness of 7,500 lines of C code in an operating system's kernel is a unique achievement, which should eventually lead to software that meets currently unimaginable standards of reliability.'

'Formal proofs for specific properties have been conducted for smaller kernels, but what we have done is a general, functional correctness proof, which has never before been achieved for real-world, high-performance software of this complexity or size,' explained NICTA principal researcher Dr Gerwin Klein, who leads NICTA’s formal verification research team.

The proof also shows that many kinds of common attacks will not work on the seL4 kernel. For instance, the microkernel is impervious to buffer overflows, a common form of software attack where hackers take control of programs by injecting malicious code. 'Our seL4 kernel cannot be subverted by this kind of attack,' said Dr Klein.
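
To illustrate the class of error involved, here is a generic, hypothetical C sketch (not code from seL4; the function and buffer names are invented). A buffer overflow arises when a program copies attacker-controlled input into a fixed-size buffer without checking its length:

    #include <stdio.h>
    #include <string.h>

    /* Vulnerable: if input holds more than 15 characters, strcpy()
     * writes past the end of buf, corrupting adjacent memory; an
     * attacker can exploit such a write to hijack the program. */
    void handle_request_unsafe(const char *input)
    {
        char buf[16];
        strcpy(buf, input);              /* no bounds check */
        printf("%s\n", buf);
    }

    /* Safe: over-long input is rejected before copying, so the
     * write provably stays inside the buffer. */
    void handle_request_safe(const char *input)
    {
        char buf[16];
        if (strlen(input) >= sizeof buf)
            return;                      /* refuse over-long input */
        strcpy(buf, input);
        printf("%s\n", buf);
    }

The seL4 proof rules out the first pattern on every code path in the kernel.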

The outcome is the result of four years' research by Dr Klein's team of 12 NICTA researchers, NICTA/UNSW PhD students and UNSW-contributed staff. They verified the C code and proved more than 10,000 intermediate theorems in more than 200,000 lines of formal proof. The proof is machine-checked using the interactive theorem prover Isabelle, and it is one of the largest machine-checked proofs ever completed.
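
To give a flavour of what "machine checked" means: an interactive theorem prover validates every inference step mechanically, so a proof is accepted only if it is complete down to the definitions. The toy theorem below is written in Lean, a proof assistant comparable to the Isabelle system the team used; it is purely illustrative and has no connection to the seL4 proof itself.

    -- A trivial machine-checked theorem: list append is associative.
    -- The proof is by induction on the first list; the prover checks
    -- each case and rejects anything that does not follow.
    theorem append_assoc' (xs ys zs : List Nat) :
        (xs ++ ys) ++ zs = xs ++ (ys ++ zs) := by
      induction xs with
      | nil => rfl
      | cons x xs ih => simp [ih]

The seL4 effort is the same idea scaled up from a three-line lemma to 200,000 lines of proof about 7,500 lines of C.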

To reach this milestone, the NICTA team invented new techniques in formal machine-checked proofs, made advances in the mathematical understanding of real-world programming languages and developed new methodologies for rapid prototyping of operating system kernels.

'This work goes beyond the usual checks for the absence of certain specific errors,' Prof Paulson said. 'Instead, it verifies full compliance with the system specification. The project has yielded not only a verified microkernel but a body of techniques that can be used to develop other verified software.'
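
The difference is easy to see on a small, hypothetical example. A checker for specific errors would pass the C routine below, since it never crashes, never overflows and never reads out of bounds; a functional correctness proof would reject it, because it does not compute what its specification demands.

    /* Specification: return the smallest element of the
     * non-empty array a[0..n-1]. */
    int array_min(const int *a, unsigned n)
    {
        int m = a[0];
        for (unsigned i = 1; i < n; i++)
            if (a[i] > m)   /* bug: the comparison should be '<' */
                m = a[i];
        return m;           /* actually returns the LARGEST element */
    }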

NICTA will shortly transfer its intellectual property to NICTA spin-out company Open Kernel Labs, whose embedded hypervisor software - also based on NICTA research - is in millions of consumer devices worldwide.

A scientific paper describing this research will appear at the 22nd ACM Symposium on Operating Systems Principles (SOSP). Further details are available from NICTA's L4.verified research project.


Readers' comments (11)

  • The proof does not sound realistic to me.
    You cannot guarantee that the OS kernel will meet its requirements under every circumstance.
    For instance:
    What happens if a cosmic ray hits the memory chips the kernel runs on, randomly flipping byte values?
    Will the OS kernel still be compliant?

  • They have answered that question in their FAQ (http://ertos.nicta.com.au/research/l4.verified/faq.pml), see 'Does seL4 have zero bugs?'

  • That's not what they were proving.

    The formal proof of completeness and correctness of the code does not take into account random hardware failures.

    If you pulled the plug on the machine, it wouldn't work correctly either.

  • Software can rarely make the machine safer than the layer beneath it, that is, the hardware layer. Hardware is increasingly checked using verifiers too, though; assuming functionally correct hardware and untampered software, theorem proving shows that certain conditions cannot occur.

    If cosmic rays hit the hardware (or what not), it is the hardware layer that fails to deliver up to its specification. Consequently, the software and the machine may fail. If the hardware layer has been verified against cosmic rays (say), the machine (and the software) should continue to function.

    Similarly, if code is tampered with through hardware backdoors, the guarantees that formal verification gives will quite naturally cease to apply. Nothing very complicated there.

    Since the theoretical model of software is (by the nature of software) quite perfect, two questions arise, though, regarding "complete safety" against cosmic rays and the like:
    1. How realistic are the theoretical model and the assumptions made about components and environment when verifying the hardware?
    2. How insulated are the software and the hardware? That is, does the software rely on specified hardware features and only those? If the software uses features (or "features") of the hardware that have not been formally verified, then quite naturally the guarantees given by the hardware are not complete.

    That's my two cents.

  • I would agree, not necessarily because of cosmic rays, but because I can't shake the fundamental conviction that a proof of mathematical correctness isn't quite the same as being bug-free.

    It does mean that rather a lot of people have spent time looking at the code, though.

  • "Beware of bugs in the above code; I have only proven it correct, not tried it."
    --D. Knuth

  • Hmmm. I assume they have yet to verify the C compiler, linker, and any other tools used to generate the binaries?

  • Nothing is safe. Only safer.

  • I read GEB:EGB again last year, and I have to say that the description of what they have demonstrated completely lacks any mention of Gödel's Incompleteness Theorem. For a sufficiently complex system...well, now I'm not sure if the OS kernel or the TEST of the OS kernel would be the system in question for Gödel.

    At any rate, I am absolutely certain that their statements about what they have demonstrated are too strong. At most, they have demonstrated the lack of certain faults, and likely only in certain situations.

  • This is a great step forward. Just look at the available RTOSes on the market. Most of them can't even be certified, for lack of decent design documents. Open source does not do much better.
    Formal methods should be a mandatory tool for any software engineer, although they are not the answer to all software engineering issues. Safety and security are system properties, but using a formalised approach from the beginning is a precondition for delivering safe and secure ("dependable") systems. There is also a dependency chain: translating incomplete requirements into software correctly still yields wrong software, no matter how much formal verification is done. And software will often depend on the correctness of the hardware, not to mention its frequently erroneous documentation. Just one example that formal verification does not solve is stack overflow (see the sketch after this comment). It can happen either because the developer made a mistake or because the hardware misbehaves; the result is the same: the system will likely fail.
    This being said, this is not the first RTOS to be formally developed. A recent example is OpenComRTOS (see http://www.openlicensesociety.org/drupal55/system/files/FM+2008+-+OpenComRTOS.pdf). It is a completely different breed of OS, targeting embedded, networked systems, so the L4 kernel and OpenComRTOS are complementary. Another difference is that it is part of a complete systems engineering methodology (http://www.altreonic.com/drupal64/system/files/Whitepaper_OpenComRTOS.pdf). Achieving dependable systems is a process, and using dependable components in that process is a precondition.
    Let's hope that we will see more such developments coming to the market. Dependability is not just about safety; it can also deliver products faster. If we ever want to close the software productivity gap and catch up with Moore's law, formal methods will be part of it.
    Eric Verhulst, CEO/CTO Altreonic
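
    A minimal sketch of the stack-overflow point above (hypothetical C, unrelated to seL4 or OpenComRTOS): the code is perfectly well defined at the source level, and a source-level proof can show it computes the right answer, yet on a machine with a finite stack a long enough list will still crash the program.

        #include <stddef.h>

        struct node { struct node *next; };

        /* Each recursive call pushes a new stack frame, so stack
         * usage grows with the length of the list. Nothing in the
         * C source bounds that growth; the limit is set by the
         * hardware and the linker configuration, outside the
         * source-level proof. */
        unsigned long list_length(const struct node *n)
        {
            if (n == NULL)
                return 0;
            return 1 + list_length(n->next);
        }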
