Crunch time: innovations in data centre technology

Data centres – the engines of our digital economy – are pushing up against the limits of existing technology. Jon Excell looks at some of the technology innovations that will help them keep up with our insatiable demand for information

The observed slowdown of Moore's law is a major driver of innovation. Image: sdecoret / stock.adobe.com

If the smoking factory chimney stack is a potent physical symbol of the industrial revolution, then the data centre – the humming, server-crammed, high-tech temple to the bits and bytes that underpin almost every area of our lives – is perhaps its modern equivalent. And at a time when our demand for data is growing exponentially, these engine-rooms of the digital economy have never been busier.

Even before the pandemic struck, our collective demand for data was insatiable, but in early March, as businesses across the globe switched to remote working; human relationships migrated en masse to the virtual realm; and streamed entertainment became a lifeline for the quarantined masses, this demand was turbocharged. OpenVault's Broadband Insights report points to a surge in internet usage of 47 per cent during the first quarter of 2020, whilst figures from real estate analyst EG Radius Exchange show a corresponding growth in plans to build new data centres to cater for this spiralling demand.

Clearly, the economic ravages of the pandemic cannot be overstated, but the ease with which many businesses and individuals have adapted to our changed circumstances is a compelling illustration of the power of the digital tools at our disposal. Indeed, leaders in a number of sectors (in particular manufacturing) have pointed to the pandemic as a catalyst for a long overdue productivity-boosting digital transformation.

And yet, just as we collectively embrace our digital future, concerns are growing that the technological forces underpinning this revolution are beginning to come unstuck.

Moore's Law – Intel co-founder Gordon Moore's observation that computing power doubles roughly every two years with no increase in energy usage or cost – is predicated on a doubling of the number of transistors on a chip over the same period. The phenomenon has held true for decades, becoming something of a self-fulfilling prophecy, but with engineers increasingly pushing up against the boundaries of what's possible with existing technology, Moore's Law is slowing down. And data centres are at the sharp end of this concerning trend.

Inside Apple's 45,000 square metre data centre in Viborg, Denmark. Image: Apple

Today, data centres are thought to account for an astonishing two per cent of global electricity usage – more than the UK's total electricity consumption. But without a fundamental technological shift, the growing demand for the services they provide is expected to push this figure closer to 15 per cent, making them one of our planet's biggest consumers of electricity. And the quest to keep this number as low as possible whilst delivering improved performance is a major driver of engineering innovation.

Unsurprisingly, with cooling accounting for as much as 40 per cent of a data centre's energy usage, many existing efforts are focussed on the development of improved techniques for removing heat, whether through improvements in conventional fan-driven air cooling systems; the development of new, more effective refrigerants; or the deployment of more advanced systems that circulate coolant around server components.

Alongside new active approaches to cooling, developers are also exploring the benefits of siting data centres in cooler parts of the planet. Indeed, a growing number of facilities are now dotted around the Arctic Circle, where low average air temperatures can help bring down operating costs.

Engineers have even explored the benefits of putting the technology beneath the sea, where the cooling effect of the ocean water can be exploited.

One such initiative, Microsoft's Natick project, has had a shipping-container-sized data centre operating on the seafloor off the coast of the Orkney Islands since 2018.

Developed primarily to explore the potential of building smaller data centres for remote coastal communities, the project has also investigated the natural cooling benefits of the underwater environment – making use of technology originally developed for submarines by project partner, French defence firm Naval Group. The system pipes seawater directly through the radiators on the back of each of the 12 server racks and back out into the ocean.

Microsoft's Natick project saw the development of a shipping-container-sized data centre that was installed underwater off the coast of the Orkney Islands. Image: Microsoft

Recent years have also seen a growing use of renewables to power data centres, with tech giants such as Google, Facebook and Apple investing huge amounts to ensure that their energy requirements are being met with clean energy.

For instance, Apple recently announced plans to build two of the world's largest onshore wind turbines – expected to generate 62GWh each year – to power its 45,000 square metre data centre in Viborg, Denmark. The turbines will support energy supplied by one of Scandinavia's largest solar arrays, located in Thisted, Northern Jutland. The company also recently announced plans to build a 400,000-square-foot, state-of-the-art data centre in Waukee, Iowa, that will run entirely on renewable energy from day one.
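As a rough back-of-the-envelope check on that figure, the quoted annual output converts into an average continuous power of around 7MW. The short sketch below assumes nothing beyond the 62GWh-per-year figure quoted above; the rest is straightforward unit arithmetic.

```python
# Back-of-the-envelope conversion of the quoted 62GWh-per-year output into an
# average continuous power figure. Only the 62GWh value comes from the article;
# everything else is simple unit arithmetic.

annual_output_gwh = 62.0                 # as quoted for the Viborg turbines
hours_per_year = 365.25 * 24             # roughly 8,766 hours

average_power_mw = annual_output_gwh * 1_000 / hours_per_year
print(f"Average continuous output: {average_power_mw:.1f} MW")   # roughly 7 MW
```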

As well as tapping into renewables, there is also increasing interest in using the waste heat generated by data centres to provide energy for district heating schemes. This idea has gained particular traction in Scandinavian countries, where district heating is already widespread. In Stockholm, for instance, grid operator Stockholm Exergi is leading efforts to attract data centres to the city, where they could tap into an existing 2,800km network of district heating and cooling pipes.

In The Engineer's September cover story – which explored efforts to use geologically warmed water trapped in the UK's abandoned coal mines for district heating – Coal Authority innovation chief Jeremy Crooks outlined a related vision for the UK, claiming that waste heat from data centres located close to Britain's abandoned coal mines could be used to "top up" this promising geothermal resource.

Apple recently announced plans to build two of the world's largest onshore wind turbines at its Viborg data centre. Image: Apple

But whilst all of these innovations can play a role in reducing the operating costs and overall impact of data centres, what they can't do is overcome the fundamental constraints of existing approaches to computing. Which is why the world's largest tech firms are investing increasing amounts of money and time into developing fundamentally new approaches that could – it is hoped – ultimately reboot Moore's Law and once again set our digital technologies on a sustainable growth path.

One area of technology thought to hold particular promise for data centres is the rapidly advancing field of optical networking, where fibre optics is used to dramatically speed the flow of information whilst reducing power requirements.

Currently, despite the widespread use of fibre optics in other areas, data centres still use electronic networks, in which information has to be converted to electrons every time it needs to be routed. But according to Dr Georgios Zervas – associate professor of optical networked systems at UCL – replacing these with fibre optic networks could lead to huge improvements in speed and efficiency. Indeed, because certain optical components don't require any power, and therefore don't need cooling, the technology could – he claims – ultimately reduce overall power consumption by as much as 50 per cent.

Zervas explained to The Engineer that optical networking also offers a solution to the latency issues that dog existing technology. Currently, when data is sent between the servers within a data centre, electronic switches route it using a so-called packet switching approach, in which the data is chopped into a number of different parts, sent independently over whichever route is optimum, and then reassembled at the destination.

There is nothing on the horizon using conventional electronic methods that suggests things will get any better

Dr Georgios Zervas, UCL

Whilst this approach is an efficient method of transferring large amounts of data, it does have its limitations, said Zervas. "When a packet leaves a server it's not guaranteed when it's going to arrive, and it's not always going to get the same delay because it might take a different route, or the route might become more congested over time."
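To make the mechanism concrete, the sketch below models packet switching in a few lines; the routes, delays and packet size are invented purely for illustration.

```python
import random

# Toy model of packet switching: a message is chopped into packets, each packet
# independently takes one of several routes (with different, invented delays),
# and the destination reassembles them in order. Illustrative values only.

MESSAGE = b"model parameters being shipped between two servers in a data centre"
PACKET_SIZE = 16                                                  # bytes (arbitrary)
ROUTE_DELAY_US = {"route_a": 5.0, "route_b": 9.0, "route_c": 23.0}  # microseconds

def packetise(message: bytes, size: int):
    """Chop the message into (sequence_number, payload) packets."""
    chunks = [message[i:i + size] for i in range(0, len(message), size)]
    return list(enumerate(chunks))

def send(packets):
    """Each packet picks its own route, so arrival order and delay vary."""
    arrivals = [(random.choice(list(ROUTE_DELAY_US.values())), seq, payload)
                for seq, payload in packets]
    return sorted(arrivals)                        # earliest arrivals first

def reassemble(arrivals):
    """Put the payloads back into sequence order at the destination."""
    return b"".join(p for _, _, p in sorted(arrivals, key=lambda a: a[1]))

arrivals = send(packetise(MESSAGE, PACKET_SIZE))
print(f"worst-case packet delay: {max(d for d, _, _ in arrivals):.1f} us")
assert reassemble(arrivals) == MESSAGE             # the data survives, the timing doesn't
```

The point is not the mechanics but the timing: the data always arrives intact, yet the delay seen by any given packet depends on the route it happened to take.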

The benefit of using optics, he said, is that you can operate a network close to the speed of light and, by using flat topologies (where one hop can get you to any destination node), make much better use of the existing computing and storage technology. "Data centres are highly underutilised. For example, if a server has 64GB of RAM and an eight-core processor, when all the processing power is used but not all the memory, the idle memory cannot be used by any other application, so it is wasted. What many people are exploring is how you can compose a system out of building blocks – so you can have blocks of memory, blocks of processors, accelerators, storage and so on. You can then connect them together and create any type of system you like."
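A toy sketch of that composable idea, assuming nothing beyond Zervas's description, might look like the following; the pool sizes and the example request are invented.

```python
# Toy illustration of composing a logical system from shared pools of
# disaggregated resources, rather than being bound by the fixed CPU/memory
# ratio of any single server. Pool sizes and the request below are invented.

pools = {"cpu_cores": 512, "ram_gb": 4096, "gpus": 32, "storage_tb": 200}

def compose(request: dict) -> dict:
    """Carve a logical system out of the shared pools, if capacity allows."""
    if any(pools[resource] < amount for resource, amount in request.items()):
        raise RuntimeError("not enough free capacity in the pools")
    for resource, amount in request.items():
        pools[resource] -= amount
    return dict(request)

# A memory-heavy job that a fixed 64GB, eight-core server could not host alone.
job = compose({"cpu_cores": 8, "ram_gb": 512, "gpus": 0})
print("composed system:", job)
print("remaining pools:", pools)
```

In this picture, memory left idle on one box is no longer stranded; the catch is that the blocks must be connected by a fabric fast enough to make remote resources behave as though they were local, which is where optics comes in.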

The use of optical networks could – he said – help make such an approach a reality, making the best possible use of a data centre's resources. It will, he explained, enable efficient, large-scale distributed learning of the neural networks that are used to solve problems across different sectors, from health to smart cities, spread across thousands of processors (GPUs and CPUs). It will also allow the formation of systems able to solve vast and complex problems never tackled before, unlocking new applications.

Zervas and his colleagues are working on several projects aimed at turning this vision into reality.

In one recent project the team demonstrated a superfast optical switch that can switch light (and data) on and off in just ~500ps (500 × 10⁻¹² seconds), a key milestone on the route to optical networks. The group has also made breakthroughs in the ability to rapidly switch between different frequencies of light, showcasing technology that is able to switch between 122 wavelengths in less than 1ns. This is key to the development of a switch that can talk to multiple different servers, said Zervas: "If you can switch between the colours extremely fast you can send one packet in one colour, and the other in a different colour to go to a different place. So the other challenge is how you can change this frequency of light extremely fast."
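Conceptually, fast wavelength switching lets a transmitter address a different destination for every packet simply by changing colour. The sketch below is illustrative only; the wavelengths and timings are invented and bear no relation to UCL's hardware.

```python
# Conceptual sketch: if each destination listens on its own wavelength, a
# transmitter that can retune in under a nanosecond can pick a new destination
# for every packet it sends. Wavelengths and timings below are invented.

WAVELENGTH_NM = {"server_a": 1550.12, "server_b": 1550.92, "server_c": 1551.72}
RETUNE_TIME_NS = 1.0          # article: 122 wavelengths, switched in under 1ns

def transmit(packets):
    """Send each (destination, payload) pair on that destination's colour."""
    t_ns = 0.0
    for destination, payload in packets:
        t_ns += RETUNE_TIME_NS                        # retune the transmitter
        print(f"t={t_ns:4.1f}ns  {payload} -> {destination} "
              f"on {WAVELENGTH_NM[destination]}nm")

transmit([("server_a", "packet 0"), ("server_c", "packet 1"), ("server_b", "packet 2")])
```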

Image: Sashkin via stock.adobe.com

UCL researchers have also taken inspiration from the natural world to develop an AI-based switch control method that speeds up switching time by a factor of ten.

In another project, UCL researchers developed a custom network processor to rapidly determine which routes data will take and which colour of light will be used. "The dynamics in a data centre network are very fast – and you have to make decisions very fast," said Zervas. "If you make decisions too slowly you underutilise the network, many of the communications will be delayed and eventually the computational and application performance will suffer."
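Zervas's point about decision speed reduces to simple arithmetic. In the sketch below the 100ns packet duration is an invented figure, and scheduling decisions are assumed not to overlap with transmission.

```python
# Illustrative only: if the scheduler takes longer to decide than a packet takes
# to transmit, the link idles between packets. The 100ns packet duration is an
# invented figure, and decisions are assumed not to overlap with transmission.

packet_duration_ns = 100.0

for decision_time_ns in (10.0, 100.0, 1_000.0):
    utilisation = packet_duration_ns / (packet_duration_ns + decision_time_ns)
    print(f"decision in {decision_time_ns:6.1f}ns -> link utilisation {utilisation:.0%}")
# A one-microsecond decision on a 100ns packet leaves the link roughly 90 per cent idle.
```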

The group has also made significant progress in the area of clock synchronisation, which is considered one of the key hurdles to making optical switching viable.

As Zervas explained, current electronically switched networks are formed of switches with optical links and transceivers (transmitters and receivers) between them. The two transceivers at either end of an optical fibre link are in continuous communication, so their clocks can easily be synchronised and the data correctly recovered. In an optical network, by contrast, one server might receive information from many other servers in quick succession, and each transceiver has its own clock operating at a slightly different frequency and phase. If the clock at the source server and the clock at the receiving server differ in frequency and/or phase – the default condition, since clock generator frequencies vary by a few Hz – then the destination server cannot correctly recover the data. The challenge is to devise a method by which the clocks of all servers remain synchronised in frequency and phase, so that a server can listen to and correctly receive data from all the others, and to maintain this synchronicity under diverse conditions, such as the temperature variations found in data centres.

The UCL team is exploring a number of different approaches to this challenge, including a technique dubbed "clock phase caching" – developed in collaboration with Microsoft Research and coordinated by UCL lecturer Zhixin Liu – that synchronises the clocks of thousands of computers in under a billionth of a second.
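The broad idea of caching phase information can be caricatured in a few lines. What follows is a conceptual sketch only, not the published clock phase caching method, and every value in it is invented.

```python
import random

# Conceptual caricature: measure the clock phase offset to each source link
# once, cache it, and apply the cached correction when data from that source
# arrives. Not the published method; all values below are invented.

random.seed(1)
SOURCES = ["server_a", "server_b", "server_c"]
true_offset_ps = {s: random.uniform(-200.0, 200.0) for s in SOURCES}  # picoseconds

# Calibration pass: measure each link's phase offset and store it.
phase_cache_ps = dict(true_offset_ps)

def recover(source, raw_phase_ps):
    """Apply the cached correction so the receiver samples in phase."""
    return raw_phase_ps - phase_cache_ps[source]

# Later traffic: the cached value is only as good as the offsets are stable,
# so a small invented drift is added to show the residual error.
for source in SOURCES:
    drift_ps = random.uniform(-5.0, 5.0)
    residual = recover(source, true_offset_ps[source] + drift_ps)
    print(f"{source}: residual phase error {residual:+.1f}ps after correction")
```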

Whilst such breakthroughs are key milestones on the route to optical networking, there is – said Zervas - still much to do.

But if there is one certainty in today's uncertain world it is that our reliance on data will continue to grow. And, sooner rather than later, this irresistible force will, he said, drive technologies such as optical networking into the mainstream. "There is nothing on the horizon using conventional electronic methods that suggests things will get any better. Predictions are never easy, but I suspect that from 2025 onwards there will be more serious pressure on cloud providers to migrate from electronics to optics, at least in parts of the data centre.

“It has to happen, it’s just a matter of when, depending on the business case and who is going to make the first step in order to influence everyone else.”