
Selecting Rack-Mount Power Distribution Units For High-Efficiency Data Centers




By Anderson Hungria

Sr. Product Manager of Power, Electronics & Software

Published June 2014

WHITE PAPER




Power consumption in the data center continues to rise. The need to provide redundant power systems with high reliability and availability of compute resources is a major driver of this increase. Some data centers use as much power for non-compute "overhead energy" such as cooling, lighting and power conversion as they do to power servers. The ultimate goal is to reduce this overhead energy loss, so that more power is dedicated to revenue-generating equipment, without jeopardizing reliability and availability of resources.

There are many methods currently being implemented to reduce unnecessary power consumption in the data center: high-efficiency servers, thermal containment, raising server inlet temperatures and reducing power conversion loss. When used in combination, these approaches can deliver low Power Usage Effectiveness (PUE) values and reduce energy expenses.
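To make the overhead relationship concrete, PUE is the ratio of total facility energy to the energy delivered to IT equipment; a value of 1.0 would mean zero overhead. The sketch below uses illustrative numbers, not measured data from any facility:

```python
# Illustrative PUE calculation (example values, not measured data).
# PUE = total facility energy / IT equipment energy; the ideal is 1.0.

it_energy_kwh = 1_000_000    # annual energy delivered to IT equipment
overhead_kwh = 600_000       # cooling, lighting, power-conversion losses

pue = (it_energy_kwh + overhead_kwh) / it_energy_kwh
print(f"PUE = {pue:.2f}")    # 1.60: 60% as much energy on overhead as on IT

# Halving overhead loss moves PUE toward the efficient range.
pue_improved = (it_energy_kwh + overhead_kwh / 2) / it_energy_kwh
print(f"Improved PUE = {pue_improved:.2f}")   # 1.30
```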

As an added challenge, new trends in data center traffic highlight the importance of implementing energy efficiency techniques in facilities. According to the third annual Cisco Global Cloud Index (2012-2017), global data center traffic is expected to triple over the next five years, a 25 percent compound annual growth rate (CAGR), to 7.7 zettabytes by the end of 2017¹.
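As a quick sanity check on the forecast arithmetic (a sketch only; the zettabyte figures come from the Cisco index cited above):

```python
# Verify the growth rate implied by 2.6 ZB/yr (2012) to 7.7 ZB/yr (2017).
start_zb, end_zb, years = 2.6, 7.7, 5
cagr = (end_zb / start_zb) ** (1 / years) - 1
print(f"CAGR = {cagr:.1%}")   # ~24.3%, which Cisco rounds to 25 percent
```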

Increased virtualization and the use of high-density devices, such as blade servers and switches, require even more power. For these reasons, it is crucial to deploy a reliable and effective power distribution unit (PDU) at the cabinet level.

Figure 1: Global data center traffic is forecast to grow at a 25 percent CAGR, from 2.6 ZB per year in 2012 to 7.7 ZB per year in 2017 (2012: 2.6 ZB; 2013: 3.3 ZB; 2014: 4.2 ZB; 2015: 5.2 ZB; 2016: 6.4 ZB; 2017: 7.7 ZB). (Source: Cisco Global Cloud Index, 2012-2017)


Managing Power Properly in the Data Center of the Future

Power is the biggest expense in the data center, and most of it is used to keep these facilities at a temperature that prevents servers and other devices from overheating. One way to be more energy efficient is to implement a containment strategy and then raise the temperature in the cold aisle from the traditional setting of between 60°F (16°C) and 70°F (21°C) to between 80°F (27°C) and 85°F (29°C). Thermal Guidelines for Data Processing Environments, part of the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) Datacom Series, defines recommended and allowable environmental ranges (classes), as shown in Figure 2² below. Under certain conditions, data centers can save 4 to 5 percent in energy costs for every 1 degree Fahrenheit increase in server inlet temperature, according to the U.S. Environmental Protection Agency and Department of Energy's Energy Star program³.
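As a rough illustration of how those per-degree savings could accumulate (a sketch only: the 4 to 5 percent figure is conditional, treating it as compounding is one simple modeling choice, and real savings depend on the facility):

```python
# Estimate cooling-energy savings from raising server inlet temperature,
# using the Energy Star rule of thumb of ~4-5% per 1°F (conditions apply).
baseline_f, target_f = 70, 80      # raise the cold aisle from 70°F to 80°F
savings_per_degree = 0.04          # conservative end of the 4-5% range

# Apply the per-degree savings compounding on the remaining energy.
remaining = (1 - savings_per_degree) ** (target_f - baseline_f)
print(f"Estimated cooling energy saved: {1 - remaining:.0%}")   # ~34%
```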

[Psychrometric chart: dry-bulb temperature (°F) on the horizontal axis versus dew-point temperature (°F) on the vertical axis, showing allowable environmental classes A1 through A4, the ASHRAE recommended range, and lines of constant wet bulb, constant relative humidity and constant dew point.]

Figure 2: 2011 ASHRAE environmental classes for data center applications support higher equipment inlet temperatures.

(Source: Image based on psychrometric charts in "Data Center Environments: ASHRAE's Evolving Thermal Guidelines," ©ASHRAE, ASHRAE Journal, vol. 53, no. 12, December 2011, and ©ASHRAE, Thermal Guidelines for Data Processing Environments, Third Edition, Chapter 2 and Appendix F, 2012; ASHRAE recommended range added.)


Achieve Power Optimization with Free Cooling

The path to achieving optimization with free cooling begins with good airflow management practices, as described in the U.S. Department of Energy's publication Best Practices Guide for Energy-Efficient Data Center Design⁶. As a pioneer in Passive Cooling® Solutions that promote "free cooling" in data centers, Chatsworth Products (CPI) brings an unmatched level of quality, expertise and efficiency to airflow management. CPI Passive Cooling was one of the first solutions to use comprehensive sealing strategies and a Vertical Exhaust Duct to maximize cooling efficiency at the cabinet level. Now expanded to the aisle level, CPI Passive Cooling and Aisle Containment Solutions (Figure 3) allow data centers to increase heat and power densities by as much as four times their original level and to increase cooling efficiency nearly threefold.


Figure 3: Good airflow management separates cold and hot airflow pathways within the data center, leading to higher temperatures in the hot aisle where rack-mount PDUs are typically placed.


The Solution: CPI eConnect® PDUs

CPI eConnect® PDUs currently have the highest ambient operating temperature rating of any PDU on the market (Figure 5). eConnect PDUs have been specially designed and tested for continuous operation in ambient air temperatures up to 149°F (65°C), exceeding the anticipated temperatures in a typical contained aisle.

Figure 4: Computational Fluid Dynamics (CFD) model showing hot aisle containment and resulting increase in hot aisle temperatures.

The Importance of a PDU with a High-temperature Rating

PDUs are usually installed in the back of cabinets behind the hot air exhaust from equipment, which is potentially the hottest part of any data center (Figure 4). With an expected server ΔT of 25°F to 30°F (14°C to 17°C), temperatures at the rear of cabinets or within hot aisle containment can reach 110°F to 140°F (43°C to 60°C). Very few devices can operate continuously, reliably and efficiently in this type of environment.
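A simple way to see where those hot aisle figures come from (a sketch; actual exhaust temperatures vary with load and airflow, and containment and recirculation can push readings toward the top of the 110°F to 140°F range cited above):

```python
# Rear-of-cabinet temperature is roughly inlet temperature + server delta-T.
# Compare against a PDU's rated ambient limit, e.g. 149°F (65°C).
PDU_RATING_F = 149

for inlet_f in (80, 85):            # raised cold aisle setpoints
    for delta_t_f in (25, 30):      # typical server delta-T range
        exhaust_f = inlet_f + delta_t_f
        margin_f = PDU_RATING_F - exhaust_f
        print(f"inlet {inlet_f}°F + ΔT {delta_t_f}°F -> "
              f"hot aisle {exhaust_f}°F (margin {margin_f}°F)")
```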

Figure 5: Top operating temperatures of rack-mount PDUs (°C). Rated limits across vendors (Geist, APC/ServerTech, Eaton/Raritan, Chatsworth Products) span 40°C to 65°C, with Chatsworth Products highest at 65°C, shown against the estimated range of hot aisle temperatures within contained solutions when inlet temperatures are raised to 80.6°F (27°C) and above.


To keep the product safely operational at such high temperatures, the design includes strategically placed air vents, a larger power supply, high-temperature components and other elements. eConnect PDUs comply with International Electrotechnical Commission (IEC) safety standards and are UL Listed.*

Additionally, the PDU has a small, compact form factor to minimize the space it occupies within the cabinet. On units with multiple breakers, the outlets are arranged in an alternating pattern so that load is distributed more evenly as equipment is plugged in. On intelligent PDUs, the LCD display is centrally located so that it is easy to read once the PDU is installed. The unit can be accessed remotely through a web browser for setup, monitoring and control. IP consolidation allows access to up to 20 PDUs through a single Ethernet connection and IP address. Alternatively, the PDU supports SNMP so that it can be monitored with third-party monitoring software.
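As an illustration of third-party SNMP monitoring (a minimal sketch using the open-source pysnmp library; the address and community string are placeholders, and real outlet-level readings would use OIDs from the PDU vendor's MIB, which are not shown here):

```python
# Poll a PDU's standard MIB-II system description over SNMP v2c.
# Vendor-specific readings (per-outlet load, temperature) come from
# OIDs in the PDU's own MIB, which would replace sysDescr below.
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

error_indication, error_status, error_index, var_binds = next(getCmd(
    SnmpEngine(),
    CommunityData('public', mpModel=1),        # placeholder community, v2c
    UdpTransportTarget(('192.0.2.10', 161)),   # placeholder PDU address
    ContextData(),
    ObjectType(ObjectIdentity('1.3.6.1.2.1.1.1.0'))))  # sysDescr.0

if error_indication or error_status:
    print(f"SNMP error: {error_indication or error_status.prettyPrint()}")
else:
    for name, value in var_binds:
        print(f"{name.prettyPrint()} = {value.prettyPrint()}")
```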

Conclusion

eConnect PDUs are the Simply Efficient™ solution to the ever-increasing demand for reliable power in the data center. eConnect PDUs allow remote access, with optional monitoring and switching capabilities on outlets. With more than 180 standard configurations, including high-density 50A and 60A, 208V models that meet power loads of up to 17 kW per unit, eConnect PDUs withstand the heat loads of any hot aisle containment and are the market's best answer to the growing industry-wide demand for High Performance Computing (HPC), virtualization and cloud computing.
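For reference, the 17 kW figure is consistent with a 60 A, 208 V three-phase feed derated to 80 percent for continuous loads (a sketch of the standard calculation, not a vendor specification):

```python
import math

# Usable power on a three-phase circuit: P = sqrt(3) x V x I x derating.
volts, amps, derating = 208, 60, 0.8   # 80% continuous-load derating (NEC)
watts = math.sqrt(3) * volts * amps * derating
print(f"{watts / 1000:.1f} kW")        # ~17.3 kW, matching the ~17 kW rating
```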


References and Acknowledgements

¹ Cisco Systems, Inc. 2013. "Cisco Global Cloud Index: Forecast and Methodology, 2012-2017." pp. 1-3. http://www.cisco.com/c/en/us/solutions/collateral/service-provider/global-cloud-index-gci/Cloud_Index_White_Paper.html (or http://www.cisco.com/c/en/us/solutions/service-provider/global-cloud-index-gci/index.html).

² American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE), Technical Committee 9.9, Mission Critical Facilities, Technology Spaces and Electronic Equipment. 2012. "Thermal Guidelines for Data Processing Environments, 3rd Edition." pp. 12-15 and Appendix F.

³ U.S. Environmental Protection Agency and Department of Energy, Energy Star program. "Server Inlet Temperature and Humidity Adjustments." http://www.energystar.gov/index.cfm?c=power_mgt.datacenter_efficiency_inlet_temp (or http://www.energystar.gov/datacenterenergyefficiency).

⁴ Facebook. 2011. Reflections on the Open Compute Summit webpage. "Building the Next Open Data Center in North Carolina." https://www.facebook.com/note.php?note_id=10150210054588920.

⁵ Data Center Knowledge. Miller. 2012. "Too Hot for Humans, But Google Servers Keep Humming." http://www.datacenterknowledge.com/archives/2012/03/23/too-hot-for-humans-but-google-servers-keep-humming/.

⁶ U.S. Department of Energy, Office of Energy Efficiency & Renewable Energy, Federal Energy Management Program. "Best Practices Guide for Energy-Efficient Data Center Design." pp. 5-8. http://www1.eere.energy.gov/femp/pdfs/eedatacenterbestpractices.pdf (or http://www.energy.gov/eere/femp/federal-energy-management-program).

Anderson Hungria

Sr. Product Manager — Power, Electronics & Software, Chatsworth Products
