Companies like Google and Microsoft are secretive about their datacentres, the buildings that house the thousands of computer servers that store, process and distribute the vast amounts of data that now permeate our lives.
Information about datacentres is tightly controlled, as is access to them and their employees. This secrecy belies the increasingly important role datacentres play in modern life, a utility some would say is now as vital as energy, water or telecoms. An estimated 2.5 billion people are connected to the internet globally. That number is predicted to grow to 3.6 billion by 2017 - almost half of the world's projected population of 7.6 billion. This growth will dramatically increase the amount of power datacentres use.
Running computer servers requires a great deal of electricity. Electricity produced from fossil fuels creates greenhouse gases and carbon emissions, driving climate change. Experts estimate that datacentres are today responsible for around 2% of global greenhouse gas emissions – roughly the same as the aviation sector. The unfettered growth of datacentres could therefore have serious environmental consequences. According to the Smarter report, datacentres will consume 81% more electricity in 2020 than they did in 2012.
Alex Rabbetts is chief executive of datacentre consultancy Migration Solutions, which specialises in helping companies “move” datacentres, and is a board member of the European Data Centre Association. He says that datacentres are the foundation of our digital age and should now be viewed as “critical infrastructure”.
“If you switched off all the datacentres the world as we know it would stop. Almost everything you do in your daily life requires a datacentre. Everything from transport control to the pesticides and herbicides produced for crops would stop,” he says. “It's a fact that datacentres use a huge amount of power and produce a huge amount of heat, but overall they are good for the environment.”
Rabbetts argues that the internet has reduced the environmental impact of nearly all the businesses it has changed, such as music, film, print media and the retail industry, by reducing physical manufacturing processes, infrastructure and transportation. It's an over-generalised argument, but not without merit. His conclusion is in line with mainstream thinking – that the smart move is to address the problem at source by building more low carbon sources of electricity.
Nevertheless, high energy costs mean there is a commercial imperative to cut power consumption. Energy efficiency is the number one factor driving the sector, says Rabbetts. Between 35% and 50% of a datacentre's energy bill pays for cooling IT equipment. An average datacentre runs at between 18 and 27 degrees C to keep within the servers' recommended operating parameters, although modern servers can frequently run at up to 35 degrees C. Cooling is the primary culprit for power consumption, and engineers face a delicate balancing act to keep IT equipment within its recommended temperature range. The challenge is to remove hot air and push in cold air for the least amount of energy.
It's commonly said that for every kilowatt put into a datacentre, at least another kilowatt is needed to remove the heat generated by its IT equipment. A typical “cascade” cooling process in a datacentre has up to five steps, each requiring plant: a supply fan moves fresh air in, a pump drives a chilled water loop, a compressor drives the refrigerant circuit, another pump drives the condenser water loop, and a further fan sits in the cooling tower. Adrian Jones, director of technical development at CNet Training, says that as the amount of plant and equipment increases, the ratio is often closer to 2kW of cooling power per kilowatt in.
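To make that ratio concrete, here is a minimal back-of-the-envelope sketch in Python. The per-stage power figures are hypothetical placeholders, picked only so the total lands near the 2:1 figure Jones cites; they are not measurements from any real facility.

```python
# Back-of-the-envelope sketch of how cooling overhead accumulates across the
# "cascade" stages described above. All figures are hypothetical placeholders,
# chosen so the total lands near the 2:1 ratio mentioned in the text.

IT_LOAD_KW = 100.0  # heat generated by the IT equipment (assumed)

# Illustrative power draw of each cooling stage, in kW (assumed values)
cooling_stages = {
    "supply fan (fresh air in)": 25.0,
    "chilled water loop pump": 35.0,
    "refrigerant circuit compressor": 80.0,
    "condenser water loop pump": 30.0,
    "cooling tower fan": 20.0,
}

cooling_kw = sum(cooling_stages.values())
ratio = cooling_kw / IT_LOAD_KW

print(f"IT load: {IT_LOAD_KW:.0f} kW")
print(f"Cooling plant: {cooling_kw:.0f} kW "
      f"({ratio:.1f} kW of cooling per kW of IT load)")
```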
The easiest way to improve cooling efficiency is to remove as many steps as possible while moving the heat out of the datacentre as quickly as possible, says Jones. “The best thing to do then would be to recycle the heat into offices or other parts of the building, but in reality you need ducting and pumps to move that air about, which is hard,” he says.
Most cooling systems for large datacentres work by pumping cold air under a pressurised raised floor. This system can be extended into the space between the server “racks” to create a “cold aisle”, a well-established practice. Another fairly widespread technique is the use of “hot aisles”, which focus on controlling the heat at its source, in the servers, and moving it out of the racks by channelling it up into ducts in the ceiling. The techniques can be used independently or combined, but the aim is always to get the cold air in and the hot air out - striking the right balance between the two is key.
Another established technology is “in-row” cooling – the use of variable speed fans to control the pressure and direction of air flow in the server racks. In-row cooling is sometimes effective enough to remove the need for other perimeter cooling techniques.
Some of the most common problems in datacentre cooling management are poor design and an inability to use external air, says Jones. There are also considerable operational demands placed on a datacentre from a variety of sources. The primary demand is from the business it serves, which will nearly always require continuous uptime. But there are also demands from environmental regulations, as well as the pressure to reduce costly energy bills.
Engineers are also effectively “fighting the laws of physics” to ensure a cooling system's effectiveness, says Jones. Specifically, this means avoiding recirculation and bypass, two effects that arise from the different physical behaviours of hot and cold air. Recirculation is where hot air mixes with the cold air stream; bypass is where cold air fails to reach the computer equipment. Another enemy is pressure loss. Drops in pressure can result from obstructions such as cabling, poor design or leaks, and mean the cooling unit has to work harder, increasing its power requirement and costs.
The key to overcoming such operational challenges is to constantly measure and monitor conditions within the datacentre, which means installing sensor systems. “The payback is often huge and rapid,” Jones says. The idea of a datacentre creating large amounts of data about itself may seem circular, but it enables better planning of maintenance and upgrades, and the tweaking of equipment according to load and environmental conditions.
Indicators growing in use by datacentre managers include the Rack Cooling Index and the Return Temperature Index, which allow facilities managers to adjust and maintain cooling services. The widespread use of measurement is also enabling the benchmarking of performance and the setting of efficiency targets. A more recently established metric is Power Usage Effectiveness (PUE), the ratio of a facility's total power draw to the power delivered to its IT equipment, which allows different datacentres to compare performance and share best practice.
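PUE is simply everything a facility draws divided by what actually reaches the IT equipment, so a value closer to 1.0 means less overhead. A minimal sketch, with hypothetical sample figures, shows how the number is calculated and read:

```python
# Power Usage Effectiveness (PUE) = total facility energy / IT equipment energy.
# A PUE of 1.0 would mean every watt reaches the IT kit; real facilities are higher.
# The sample figures below are hypothetical.

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Return the PUE ratio for a given measurement period."""
    return total_facility_kwh / it_equipment_kwh

# Example: a facility that drew 1,800 MWh over a year, of which 1,000 MWh
# reached the IT equipment, has a PUE of 1.8.
print(pue(1_800_000, 1_000_000))  # -> 1.8
```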
There are other cooling technologies in an engineer's armoury. Computational Fluid Dynamics (CFD) simulations can be used to model airflow, both at the initial design stage and when retrofitting a cooling system. Many datacentres use variable speed drives to run motors for fans and pumps more efficiently. Air-side economisers, which draw in outside air and exhaust hot air, are also used. Some datacentres cool their servers with water, mainly in the chiller of the air conditioning system. Some use liquid to cool computer processors directly, although this technique finds more use in high performance computing applications, where the density of servers within a rack is higher.
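The saving from variable speed drives follows from the fan affinity laws: the power a fan draws scales roughly with the cube of its speed, so a modest slowdown cuts consumption sharply. A short illustrative sketch, with an assumed rated fan power:

```python
# Fan affinity laws: airflow scales roughly linearly with fan speed, while the
# power a fan draws scales roughly with the cube of its speed. Figures are
# illustrative only.

def fan_power(rated_power_kw: float, speed_fraction: float) -> float:
    """Approximate power draw of a fan running at a fraction of rated speed."""
    return rated_power_kw * speed_fraction ** 3

rated_kw = 10.0  # hypothetical rated fan power
for speed in (1.0, 0.9, 0.8, 0.7):
    print(f"{speed:.0%} speed -> {fan_power(rated_kw, speed):.1f} kW")
# Running at 80% speed draws roughly half the power of full speed (0.8**3 ~ 0.51).
```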

A further example of liquid cooling, and one of the most recent innovations, is to totally immerse the server rack in a cooling fluid. The market leader for this niche technology is Iceotope, a system developed by an engineering firm in the UK that is finding customers all over the world.
As is always the case, which cooling systems are applied, and how many, should be dictated by commercial necessity rather than technical ambition. “You can have as many cooling systems as you want,” says Jones. “It depends entirely on your IT equipment and your environment.”
Conversely, there is a growing trend within the sector to simply replace servers more frequently and not bother with cooling. Rabbetts says: “Big mechanical and electrical companies like lots of plant and electrical equipment. But the reality is that modern servers can be run at higher temperatures. Fresh air is normally a perfectly adequate solution. Servers don't feel the cold, so why not just run them hot? They are almost consumable items; all the cost is in operating them.”
Running a server outside its recommended operational parameters will shorten its lifetime, but can make economic sense when energy costs are high. A server costs around £800, while running it costs between £3,500 and £4,000 over its lifetime. Cutting its lifetime from five or six years to two or three can therefore often pay off.
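As a rough check on that arithmetic, here is a hedged back-of-the-envelope comparison. The purchase price and lifetime running cost are the figures quoted above, and the 40% cooling share is the midpoint of the 35-50% range cited earlier; the fraction of cooling cost actually avoided by running hot is an assumed placeholder.

```python
# Rough comparison of the server economics described above. The purchase price
# and lifetime running cost come from the figures quoted; the share of the bill
# spent on cooling and the saving from running hot are assumptions.

SERVER_PRICE = 800               # £, quoted above
LIFETIME_RUNNING_COST = 3_750    # £, midpoint of the £3,500-£4,000 range
COOLING_SHARE = 0.4              # midpoint of the 35-50% cooling share
COOLING_SAVED_RUNNING_HOT = 0.5  # assumed: half the cooling cost avoided

annual_running_cost = LIFETIME_RUNNING_COST / 5                  # assume a 5-year life
energy_saved_per_year = annual_running_cost * COOLING_SHARE * COOLING_SAVED_RUNNING_HOT
extra_hardware_per_year = SERVER_PRICE / 2.5 - SERVER_PRICE / 5  # 2.5-year vs 5-year life

print(f"Energy saved per year:        £{energy_saved_per_year:.0f}")
print(f"Extra hardware cost per year: £{extra_hardware_per_year:.0f}")
# Under these placeholder assumptions the two figures are close, so whether the
# swap pays off hinges on how much cooling energy running hot actually avoids.
```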
“Big firms that specialise in mechanical and electrical engineering can lack the IT expertise and miss solutions,” says Rabbetts. “One of the biggest issues this industry faces is dinosaurs from an M&E background. We need innovation in cooling, but datacentres are not mechanical and electrical infrastructure, they are IT infrastructure. You need IT knowledge to produce the best cooling solutions. The focus should always be the IT.”
Whatever the cooling solution, the growing size and importance of datacentres are placing them under ever greater public scrutiny. Undeniably, datacentres have moved from the IT domain into being vital, if not critical, infrastructure. If increased scrutiny results in more transparency, from the privacy of the data datacentres hold to the energy they use, it can only be a good thing.