Nov 30, 2008
 

We have never worked on a Data Center project, but the topic interests me. I love computers, the Internet, and energy savings, and a Data Center is at the center of all three. [Note: this entry has been updated a couple of times; see ‘Additional Resources’ for more information.]

Although we haven’t worked on Data Centers, our partner Xcel Energy has. As a utility, they have many resources available, including a Data Center Efficiency program. Follow the link or call 1-800-481-4700 for more information. Note: be sure to select a state or the link won’t work properly (I know Minnesota works).

There is a very good overview of the topic in the November 2008 issue of the ASHRAE Journal. The authors look at the entire system and identify ten energy-saving strategies:

  1. Lower power processors *
  2. High-efficiency power supplies **
  3. Power management features
  4. Blade servers
  5. Server virtualization
  6. 415V AC power distribution
  7. Cooling best practices
  8. Variable capacity cooling and variable speed drives
  9. Supplemental cooling
  10. Monitoring and optimization: cooling units work as a team

*AnandTech Low Power CPU Shootout.

** I’ve often wondered if there would be some savings in bringing 12 VDC and 5 VDC power to the servers and putting all the power supplies in a separate area.

Their analysis shows the servers use 52% of total energy consumption, while the systems that support the servers use the remaining 48%.
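
The article doesn’t quote a Power Usage Effectiveness (PUE) figure, but the 52/48 split implies one if you treat the server share as the IT load. A rough back-of-the-envelope sketch in Python (the numbers come from the article’s model; the variable names are mine):

    # Rough PUE estimate from the 52/48 split.
    # PUE = total facility power / IT equipment power.
    total_kw = 1127          # the article's 5,000 ft2 model data center
    it_fraction = 0.52       # the servers' share of total consumption
    it_kw = total_kw * it_fraction

    pue = total_kw / it_kw   # equivalently, 1 / it_fraction
    print(f"IT load ~{it_kw:.0f} kW, implied PUE ~{pue:.2f}")   # roughly 1.9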

The distinction between demand and supply power consumption is valuable because reductions in demand-side energy use cascade through the supply side. For example, in the 5,000 ft² data center model a 1 watt reduction at the server component level results in an additional 1.84 watt savings in the power supply, power distribution system, UPS system, cooling system and building entrance switchgear, and medium voltage transformer with no further action. Consequently, every watt of savings that can be achieved on the processor level creates a total of 2.84 watts of savings for the facility.
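
To put the cascade in code form, here is a minimal sketch using the article’s 2.84 multiplier (the function and variable names are mine):

    # Cascade effect: each watt saved at the server level also avoids upstream losses
    # in the power supply, power distribution, UPS, cooling, and switchgear.
    CASCADE_FACTOR = 2.84   # facility watts saved per watt saved at the processor level

    def facility_savings_w(server_watts_saved: float) -> float:
        """Facility-level savings implied by a demand-side (server) reduction."""
        return server_watts_saved * CASCADE_FACTOR

    print(facility_savings_w(1))     # 2.84 W overall for a 1 W processor-level reduction
    print(facility_savings_w(1000))  # 2840.0 W for a 1 kW reduction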

In addition to detailing all ten strategies, the authors suggest the order in which the strategies might best be implemented, depending on server usage:

24-Hour Operation, I/O Intensive

  • Lowest Power Processor
  • High-Efficiency Power Supplies
  • Blade Servers
  • Power Management

24-Hour Operation, Compute Intensive

  • Virtualization
  • Lowest Power Processor
  • High-Efficiency Power Supplies
  • Power Management
  • Consider Mainframe Architecture

Daylight Operations, I/O Intensive

  • Power Management
  • Low Power Processor
  • High-Efficiency Power Supplies
  • Blade Servers

Daylight Operations, Compute Intensive

  • Virtualization
  • Power Management
  • Low Power Processor
  • High-Efficiency Power Supplies

All Operations & Intensities

  • Cooling Best Practices
  • Variable Capacity Cooling
  • Supplemental Cooling
  • 415V AC Distribution
  • Monitoring and Optimization

The sequential approach can be tailored to the compute load and type of operation.
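
As a sketch of that tailoring, the orderings above can be collapsed into a simple lookup. The structure and key names below are mine, not the authors’:

    # Suggested implementation order keyed by (operating hours, workload type),
    # transcribed from the lists above. The "all operations" items apply to every profile.
    STRATEGY_ORDER = {
        ("24-hour", "io"): [
            "lowest power processor", "high-efficiency power supplies",
            "blade servers", "power management",
        ],
        ("24-hour", "compute"): [
            "virtualization", "lowest power processor", "high-efficiency power supplies",
            "power management", "consider mainframe architecture",
        ],
        ("daylight", "io"): [
            "power management", "low power processor",
            "high-efficiency power supplies", "blade servers",
        ],
        ("daylight", "compute"): [
            "virtualization", "power management",
            "low power processor", "high-efficiency power supplies",
        ],
    }

    ALL_PROFILES = [
        "cooling best practices", "variable capacity cooling", "supplemental cooling",
        "415V AC distribution", "monitoring and optimization",
    ]

    def plan(hours: str, workload: str) -> list[str]:
        """Return the suggested sequence for a given usage profile."""
        return STRATEGY_ORDER[(hours, workload)] + ALL_PROFILES

    print(plan("daylight", "compute"))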

Further Details

Using the model of a 5,000 ft² data center consuming 1,127 kW of power, the actions included in the approach work together to produce a 585 kW reduction in energy use.

Energy Saving Action                Independent   With Cascade  Cumulative  ROI
                                    kW     %      kW     %      (kW)        (Months)
Lower power processors              111    10%    111    10%    111         12-18
High-efficiency power supplies      141    12%    124    11%    235         5-7
Power management features           125    11%    86     8%     321         Immediate
Blade servers                       8      1%     7      1%     328         TCO reduced 38%*
Server virtualization               156    14%    86     8%     414         TCO reduced 63%**
415V AC power distribution          34     3%     20     2%     434         2-3
Cooling best practices              24     2%     15     1%     449         4-6
Variable capacity cooling / VSD     79     7%     49     4%     498         4-10
Supplemental cooling                200    18%    72     6%     570         10-12
Monitoring and optimization         25     2%     15     1%     585         3-6
  (cooling units work as a team)

(“Independent” = savings independent of other actions; “With Cascade” = Energy Logic savings including the cascade effect.)

*Source for blade impact on TCO: IDC
**Source for virtualization impact on TCO: VMware
©2007 Emerson Network Power
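
The “Cumulative” column is just a running sum of the cascade-effect savings; a quick check in Python, with the values transcribed from the table above:

    # Cascade-effect savings (kW) per action, in the order listed in the table.
    cascade_savings_kw = [
        ("Lower power processors", 111),
        ("High-efficiency power supplies", 124),
        ("Power management features", 86),
        ("Blade servers", 7),
        ("Server virtualization", 86),
        ("415V AC power distribution", 20),
        ("Cooling best practices", 15),
        ("Variable capacity cooling / VSD", 49),
        ("Supplemental cooling", 72),
        ("Monitoring and optimization", 15),
    ]

    baseline_kw = 1127   # the model data center's starting load
    running = 0
    for action, kw in cascade_savings_kw:
        running += kw
        print(f"{action:<35} cumulative {running:>4} kW")

    print(f"Total reduction: {running} kW ({running / baseline_kw:.0%} of baseline)")  # 585 kW, ~52%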

Using these strategies, the facility could operate at the same performance level while using less power and space: the model reduces the number of racks from 210 to 60 while maintaining the same computational load, with corresponding reductions in power, cooling, and floor space.

Additional Resources

  • Of Sausage And Servers by Kevin Dickens in the August 2009 edition of Engineered Systems – “The early data center designers weren’t idiots. How did such a counterintuitive approach become the norm? Well, it isn’t necessarily because they didn’t understand the physics. They probably did. But they were working in a raised floor environment which was a product of the IT infrastructure, not of the HVAC infrastructure. So voila, necessity births invention, and we find we can cool relatively low watt densities using a supply plenum approach — albeit inefficiently, but no one cared about energy … until now.”
  • EPA and Department of Energy National Data Center Energy Efficiency Program  – Enterprise Server and Data Center Energy Efficiency Initiatives
  • EPA Report on Server and Data Center Energy Efficiency
  • The Green Grid
  • U.S. Department of Energy – partnering with computer data centers
  • DC PRO – data center diagnostic tool
  • Reduce Data Center Cooling Cost by 75%, by Mike Scofield, Tom Weaver, Keith Dunnavant, and Mark Fisher. Engineered Systems (April 2009)
  • Energy In Data Centers: Benchmarking and Lessons Learned, by Munther Salim. Engineered Systems (April 2009). “If you work in data centers, then the Power Usage Effectiveness matters to you. Look into some ways to reduce the cooling system’s power consumption, understand the impact of climate zone and size, and perhaps improve your facility’s benchmark along the way.”
  • Data Center Retrofit: Heat Containment and Airflow Management, by Mukesh K. Khattar, Ph.D. ASHRAE Journal (December 2010). Oracle was looking to expand their data center. At the same time, they wanted to address a common problem: “Hot air diffuses into the cold supply air aisle near the top of the racks as well as on the end of the aisle, causing mixed air temperature to be unacceptably high.” They solved the problem by enclosing each rack with a plenum, which drew the hot air off the equipment directly into the return. ASHRAE awarded this project second place in its Industrial Facilities or Processes, Existing category. The article can be downloaded for $8.00. According to the article, “At full-load operations, the energy cost savings is measured at $1.2 million annually.”

  2 Responses to “Data Centers”

  1. Ars Technica has an interesting article on Google’s use of shipping containers for self-contained, semi-portable data centers. Just hook up chilled water, electricity, and the Internet. It seems they got the idea from Sun. They are also using a custom motherboard for their custom servers and using 12 volt batteries in place of a UPS system. The custom motherboard is 12 volt only; any 5 volt power needed is converted on the motherboard.

    More on the shipping container data center
    More on the Google servers

  2. My voltage transformer overloaded and I have to find a new replacement before all my fish die.