The flurry of headlines that came out of the LinuxWorld Conference & Expo in August 2002 would lead the casual observer to think that Linux, the open source OS (Operating System), was taking over the corporate computing world.
Oracle Chairman and CEO Larry Ellison is unequivocal in his praise for the seemingly ubiquitous OS. ‘We’re encouraging our customers to pick Linux for the very simple reason that it’s cheaper and faster and more reliable than any other environment around.’
An OS, however, is only part of any computing solution. Deploying it effectively in time-critical design or manufacturing environments is the primary concern, and this has led major vendors such as HP and IBM to offer bespoke cluster packages: groups of machines that run in parallel, allowing programs to spread their work across multiple processors. Some companies, however, have adopted a DIY approach to clustering.
The CFD (Computational Fluid Dynamics) group at Volvo Aero Corporation is a case in point. Running the Linux OS on a 150-node cluster to perform CFD calculations, it is said to have cut the cost of its computational resources by a factor of ten, using compute nodes built from ordinary desktop PCs linked by 100Mbit/sec switched Ethernet.
This combination of cheap hardware and a widely admired OS raises the question: if Volvo is doing it, why isn’t everyone else?
‘You could just go and buy a load of PCs and hook them up with Ethernet and say “I’ve got a supercomputer cluster”, but you haven’t. There’s a lot of set-up required,’ says Dr. Tony Kent of MSC Software.
‘You’ll need a lot of switchgear equipment, Ethernets and LAN connectors to make it all work. Then you need an OS like Linux and also effective load balancing software,’ he adds.
Bob McLatchie, centre manager, Oxford Supercomputing Centre, sounds a similar note of caution. ‘Going for off the shelf parts is not a simple task,’ he says.
‘What we found, when we built systems using off the shelf hardware, is that applications like CFD, which distribute points from various grids across a set of PCs, require relatively low latency communications (the time it takes for a packet to cross a network connection, from sender to receiver) between the boxes if you are to get any reasonable scaling.’
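The latency McLatchie describes is easy to put a number on: time the round trip of a single small message. The sketch below is a rough illustration rather than a benchmark; the port number and ping count are arbitrary assumptions, and it echoes one-byte packets over a loopback TCP connection, where a real cluster test would run the server on a remote node.

```python
# Rough sketch: measure mean round-trip latency of small TCP messages,
# the metric McLatchie says limits CFD scaling on commodity clusters.
# HOST/PORT are illustrative; on a real cluster, HOST would be a node.
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 50007   # assumed test endpoint (loopback)
N_PINGS = 100

def echo_server():
    # Accept one connection and bounce each 1-byte "packet" straight back.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            for _ in range(N_PINGS):
                conn.sendall(conn.recv(1))

threading.Thread(target=echo_server, daemon=True).start()
time.sleep(0.2)   # give the server a moment to start listening

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect((HOST, PORT))
    # Disable Nagle's algorithm so tiny packets are sent immediately.
    cli.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    start = time.perf_counter()
    for _ in range(N_PINGS):
        cli.sendall(b"x")
        cli.recv(1)
    elapsed = time.perf_counter() - start

latency_us = elapsed / N_PINGS * 1e6
print(f"mean round-trip latency: {latency_us:.1f} microseconds")
```

Loopback numbers will be far better than anything 2002-era 100Mbit Ethernet could deliver, which is precisely why a grid-based solver that exchanges boundary data every timestep scales poorly on commodity networking.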
Despite the expertise needed to set up his cluster, McLatchie ‘ended up with a system that is a third of the price of a specialist, shared-memory type of configuration that you might buy from Sun or IBM.’
Alan Priestly, strategic marketing manager, Enterprise Server group, Intel EMEA, is not entirely convinced by DIY clusters.
‘Clusters right now are built using high density rack-mounted servers,’ says Priestly. ‘Rack-mounting gives you packing density. The challenge you face when you go to a high street vendor is that the stuff they sell is not optimised to run a rack-mounted solution.’
‘If you really want to build something that is critical to your business, you’re better off buying something built for the purpose. If you buy standard desktop motherboards then you’ve got to work out how you’re going to mount them. Any time you build your own system you’ve got to make sure that the thing is going to work reliably, and that is the value that OEMs bring.’
It took Jonas Larsson and his colleagues at Volvo Aero Corporation a week to set up their cluster. This included learning Linux administration, installing the operating system on each node as well as setting up compilers, disks and cluster monitoring software. But Larsson agrees that a pre-installed cluster package is the best option for most users.
‘If you have never used Linux and haven’t done any system-administration on UNIX you might want to consider buying a pre-installed rack-based cluster with support included,’ says Larsson.
‘On the other hand, if you have a fair amount of experience from UNIX and like the thought of setting up things yourself, you should definitely buy separate machines and install the cluster yourself.’
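The week of work Larsson describes boils down to repeating the same handful of steps on every node. The dry-run sketch below illustrates that shape; the node names, package list and ssh-based approach are assumptions for illustration, not Volvo’s actual procedure (2002-era tooling varied widely), and the DRY_RUN flag keeps it from touching real machines.

```python
# Dry-run sketch of per-node cluster setup: after the OS is installed
# on each node, push out compilers, an MPI library and the shared node
# list. Hostnames and packages are illustrative assumptions.
import subprocess

NODES = ["node01", "node02", "node03"]   # assumed hostnames
SETUP_STEPS = [
    "apt-get install -y gcc gfortran",              # compilers for the solver
    "apt-get install -y mpich",                     # message-passing library
    "cp /shared/machines.txt /etc/mpich/machines",  # cluster node list
]
DRY_RUN = True   # flip to False only on a real cluster

def run_on(node, command):
    # On a real cluster this would be an ssh call; in dry-run mode we
    # only record what would be executed.
    argv = ["ssh", node, command]
    if DRY_RUN:
        return "would run: " + " ".join(argv)
    return subprocess.run(argv, check=True)

plan = [run_on(node, step) for node in NODES for step in SETUP_STEPS]
for line in plan:
    print(line)
```

Even in this toy form, the pattern shows why Larsson’s advice cuts both ways: the steps are simple to script if you already know UNIX administration, and baffling if you have never done any.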