We’d better think “Green IT” – part 2: Cool down, but not more than necessary

Well, it’s time to talk a bit more about “Green IT”. You know that itelligence operates several data centers around the world, so this is an important topic for us. Of course, we are commercially driven, no doubt about that, but when you run several thousand servers and consume megawatts of energy, then “Green IT” definitely gets on your mind.

Last time I said that “Green IT” is more than energy saving. That’s true, but energy consumption is an important issue in data centers. Reducing that consumption, and with it the CO2 output, is an important contribution on the way to “Green IT”.

How cool is cool enough?

Practically all the energy used by a computer system is converted into heat. Generally, that’s not a bad thing. We all need heat. Unfortunately, this heat is in the wrong place. Computer systems don’t like too much heat; they are only able to operate at moderate temperatures. Thus, we have to get this heat away. We have to cool the systems and, what a pity, this needs even more energy. So the job is to do the cooling as efficiently as possible.
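
To give “as efficiently as possible” a number, data centers are often rated with the Power Usage Effectiveness (PUE) metric: total facility energy divided by IT equipment energy, where cooling is usually the biggest part of the overhead. Here is a minimal sketch of that calculation; all figures are made-up example values, not itelligence numbers.

```python
# Minimal sketch: Power Usage Effectiveness (PUE) as a rough measure of how
# much extra energy (mostly cooling) a data center spends on top of the IT
# load. All figures below are made-up example values.

def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """PUE = total facility power / IT equipment power (1.0 would be ideal)."""
    return total_facility_kw / it_load_kw

it_load_kw = 1000.0        # hypothetical IT load (servers, storage, network)
cooling_kw = 400.0         # hypothetical cooling power
other_overhead_kw = 100.0  # hypothetical UPS losses, lighting, etc.

total_kw = it_load_kw + cooling_kw + other_overhead_kw
print(f"PUE = {pue(total_kw, it_load_kw):.2f}")  # -> PUE = 1.50
```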

The first thing to think about is what temperature my servers actually need. Views on that have changed over the last few years. Data centers used to be places where you had a good chance of catching a chill due to the prevailing low temperatures. But people became aware that servers don’t necessarily need an environment of about 20 or even 16 °C, but work properly at 25 °C or even higher. And it’s like at home: every degree that you don’t need to heat or cool will save energy.
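
To make the “every degree saves energy” point a bit more concrete, here is a small back-of-the-envelope sketch. The cooling power and the saving factor per degree are purely hypothetical assumptions for illustration; the real figures depend entirely on your cooling plant, climate, and load.

```python
# Back-of-the-envelope sketch: how a higher room temperature setpoint could
# reduce annual cooling energy. The saving factor per degree is a hypothetical
# assumption for illustration only, not a measured value.

cooling_power_kw = 400.0   # hypothetical average cooling power at the old setpoint
hours_per_year = 8760
saving_per_degree = 0.03   # ASSUMPTION: 3 % less cooling energy per +1 °C
degrees_raised = 5         # e.g. raising the setpoint from 20 °C to 25 °C

annual_cooling_kwh = cooling_power_kw * hours_per_year
saved_kwh = annual_cooling_kwh * saving_per_degree * degrees_raised
print(f"Estimated cooling energy saved: {saved_kwh:,.0f} kWh per year")
```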

Interestingly enough, there has been a development in the server market on exactly this point: This year, Dell announced a line of servers specially designed to operate even at temperatures of about 40-45 °C. Not continuously as yet, but at least for a couple of days. The idea behind that is the so-called “chiller-less data center” with fresh-air cooling only (I will tell you more about that in another blog entry). And that is, of course, the best trend: don’t just investigate efficient cooling technologies, but do research on IT systems that don’t care about their own heat at all.

The best temperature

Unfortunately, every additional degree in the air around these servers also has negative effects:

  • Reduced hardware durability
  • Less reserve for defects in the cooling infrastructure and for local hot spots (high-temperature spots)
  • Increased internal server cooling power (although that might not be much)

However, most servers today can withstand a maximum temperature of about 35 °C. But be aware that this is not a standard operating temperature. A good level for server room temperature is 20 to 27 °C. Where exactly you end up depends on your individual server room architecture, the stability and control of the cooling technology, the server density in the racks, and the type of electronics.
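
As a simple illustration of how these thresholds might be used, here is a sketch of a monitoring check based on the ranges above (20 to 27 °C recommended, roughly 35 °C as an upper limit). The classification levels and the function itself are illustrative assumptions, not a vendor specification; always check your own hardware specs.

```python
# Illustrative sketch: classify a measured server inlet temperature against
# the ranges discussed above (20-27 °C recommended, ~35 °C as a rough maximum).
# Thresholds follow the text; adapt them to your own hardware specifications.

def classify_inlet_temperature(temp_c: float) -> str:
    if temp_c < 20.0:
        return "below recommended range - possibly overcooling (wasted energy)"
    if temp_c <= 27.0:
        return "within recommended range"
    if temp_c <= 35.0:
        return "above recommended range - watch for hot spots and reduced reserve"
    return "critical - above typical maximum operating temperature"

for reading_c in (18.0, 24.5, 30.0, 36.0):
    print(f"{reading_c:4.1f} °C: {classify_inlet_temperature(reading_c)}")
```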

As continuous operation of many servers will heat up the server room, we have to cool the data center to supply the servers with appropriately conditioned air. The trend is to bring cooling water as close as possible to the servers. That has two advantages. The first is that water can pick up much more heat than air and is thus more efficient for cooling. The second is that you can avoid hot spots more easily with water cooling, because air flows in data center rooms are sometimes difficult to control.
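
The “water picks up more heat than air” advantage can be quantified with standard material properties: per cubic metre and per degree of temperature rise, water absorbs roughly 3,500 times as much heat as air. A minimal sketch of that comparison:

```python
# Rough comparison of how much heat one cubic metre of water vs. air can absorb
# per degree of temperature rise, using standard textbook material properties.

specific_heat_water = 4186.0  # J/(kg*K)
specific_heat_air = 1005.0    # J/(kg*K), near room temperature
density_water = 1000.0        # kg/m^3
density_air = 1.2             # kg/m^3, near room temperature at sea level

heat_per_m3_water = specific_heat_water * density_water  # J per m^3 per K
heat_per_m3_air = specific_heat_air * density_air        # J per m^3 per K

print(f"Water: {heat_per_m3_water / 1e6:.2f} MJ per m^3 per K")
print(f"Air:   {heat_per_m3_air / 1e6:.4f} MJ per m^3 per K")
print(f"Ratio: roughly {heat_per_m3_water / heat_per_m3_air:,.0f}x")
```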

Water and electronics?

In contrast, the classical way of pushing cool air through a raised server room floor by use of CRAC (Computer Room Air Conditioner) units is no longer the latest technology. Although easier to implement, it has the disadvantage of using air, an inefficient medium, with hard-to-control air flow. And don’t be afraid of water in server rooms: Nowadays, water pipes are installed at the raised floor level, while all cable wiring, power and data, is installed in ceiling-mounted cable ducts. Additional sensors will detect water leaks at an early stage in order to avoid any damage to sensitive electronics. And the trend goes on: While we already had water-cooled servers in the past, with central mainframe systems, we will get them back: servers with a direct water connection.

Currently, there are two main options for using water cooling. You can use so-called in-row or side coolers, where the water chillers are placed between the server racks. These work together with hot- or cold-aisle containments that enable the side coolers to work efficiently.

Another option is to install water chillers directly in the back doors of the server racks. Large fans push the hot air from the backs of the servers through the chiller, where it is immediately cooled down. The air leaving the rack is as cool as it was when it entered the rack. The latest developments are passive cooling back doors that work without additional fans. With this technology, the air stream generated by the servers themselves passes directly through full-door chillers and is immediately cooled down. Advantage: no additional fans are necessary in the rack back doors. Disadvantage: you need to carefully seal the back of the rack to avoid hot air flowing back to the rack front.

itelligence uses these kinds of water cooling devices in its data centers. Where we have server racks, we use actively cooled back doors; where we have other racks, SAN storage racks for instance, we use the side-cooler option together with hot-aisle containments.

Okay, that’s all for now. My next blog entry will probably be about central cooling machines, free cooling options, and water chillers. And there’s more to say about, for example, geothermal cooling and hot water cooling (yes, that’s possible).
