Maintaining Mission Critical Systems in a 24/7 Environment - Peter M. Curtis - Page 54

3.8 The Evolution of Mission Critical Facility Design


To avoid downtime, facilities managers must also understand the trends that affect the utilization of data centers, given the rapid evolution of technology and the demands it places on power distribution systems. The degree of power reliability required in a data center will shape the design of the facility infrastructure, the technology plant, the system architecture, and end‐user connectivity.

Today, data centers are pushed to the limit. Servers are crammed into racks, and their high‐performance processors add up to enormous power consumption. In the early 2000s, data center power consumption increased by about 25% each year, according to Hewlett Packard. In my view, data center loads have since leveled off. Facilities were once designed for 200 or 300 W/sq. ft., yet they never came close to that rating; 100 to 200 W/sq. ft. is more common today and offers sufficient power and cooling for these loads. At the same time, processor performance has increased 500%, and as equipment footprints shrink, the freed floor area is populated with more hardware. Because the smaller equipment still rejects the same amount of heat, however, cooling densities are growing dramatically and cooling equipment is rapidly consuming more floor space. The traditional design metric of watts per square foot continues to climb, and efficiency can also be expressed as transactions per watt. All this processing power generates heat, and if the data center gets too hot, applications grind to a halt.
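To put these density figures in perspective, the following sketch translates a floor-area power density into a total IT load. The W/sq. ft. values come from the discussion above; the room size is a hypothetical assumption for illustration only.

```python
# Rough sizing sketch: translate floor-area power density into total IT load.
# The room area is an illustrative assumption, not a value from the text.

def total_it_load_kw(floor_area_sqft: float, density_w_per_sqft: float) -> float:
    """Total IT load (kW) implied by a design density in W/sq. ft."""
    return floor_area_sqft * density_w_per_sqft / 1000.0

room_sqft = 10_000            # hypothetical raised-floor area
for density in (100, 200):    # the W/sq. ft. range cited as common today
    load = total_it_load_kw(room_sqft, density)
    print(f"{density} W/sq.ft. over {room_sqft:,} sq.ft. -> {load:,.0f} kW")
```

At 100 to 200 W/sq. ft., even a modest 10,000 sq. ft. room implies 1 to 2 MW of IT load, all of which must be removed as heat by the cooling plant.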

Electrical power is relatively easy to design: serve each cabinet with an A and a B UPS power source, each sized to support the IT load within the cabinet. Note that the data center industry is now predominantly dual‐corded equipment fed by 208V power, making it more efficient than the old 120V single‐phase standard. These cabinet feeds are typically 20A, 208V single‐phase circuits, or 3‐phase 30A or 60A, 208V circuits for large IT installations. Cooling, on the other hand, is more complex to design and maintain, since cold air cannot be easily apportioned to each cabinet to neutralize its heat output and satisfy the intake requirements of its data processing boards. Underfloor air distribution with a 36‐ or 48‐inch raised floor may be required for high heat densities. Other data center operators have been successful with equipment cabinets on the floor slab, supplying air overhead into “cold aisles” and returning hot air to the CRAH units through the hot aisles. Another method to increase heat removal and efficiency is cold‐ or hot‐aisle containment: the cold (or hot) aisle is closed off at each end with doors, and a barrier extends up to the ceiling to ensure the maximum amount of cold air reaches the IT equipment intakes. With hot‐aisle containment, the hot aisle is sealed so that exhausted cabinet heat is removed quickly rather than short‐cycling back to the cold aisle.
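The usable capacity of the cabinet feeds mentioned above can be sketched as follows. This is a minimal illustration, assuming the customary 80% continuous-load derating from North American branch-circuit practice (the derating factor is an assumption of common practice, not a figure stated in the text).

```python
import math

def circuit_kw(volts: float, amps: float, phases: int = 1,
               derate: float = 0.8) -> float:
    """Usable power (kW) on a branch circuit.

    Assumes the common 80% continuous-load derating; three-phase
    line-to-line power is V * I * sqrt(3).
    """
    factor = math.sqrt(3) if phases == 3 else 1.0
    return volts * amps * factor * derate / 1000.0

# The typical cabinet feeds mentioned in the text:
print(f"20A/208V single-phase: {circuit_kw(208, 20):.2f} kW")            # ~3.33 kW
print(f"30A/208V three-phase:  {circuit_kw(208, 30, phases=3):.2f} kW")  # ~8.65 kW
print(f"60A/208V three-phase:  {circuit_kw(208, 60, phases=3):.2f} kW")  # ~17.29 kW
```

With A and B feeds, each path would normally be sized to carry the cabinet's full load alone, so that losing one source does not interrupt the dual-corded equipment.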

Many data center designers (and their clients) would like to build for a 20‐year life cycle, yet the reality is that most cannot realistically look beyond 2 to 5 years. As companies push to wring more data‐crunching ability from the same real estate, the lynchpin technology of future data centers will not necessarily involve greater processing power or more servers, but improved heat dissipation and better airflow management.

To combat high temperatures and maintain the current trend toward more powerful processors, engineers are reintroducing old technology: liquid cooling, which was used to cool mainframe computers decades ago. To successfully reintroduce liquid into computer rooms, standards will need to be developed, another arena where standardization can promote reliable solutions that mitigate risk for the industry.

The large footprint now required for reliable power without planned downtime also affects the planning and maintenance of data center facilities. Over the past two decades, the cost of the facility relative to the computer hardware it houses has not grown proportionately. Budget priorities that favor computer hardware over facilities improvement can lead to insufficient performance. The best way to ensure a balanced allocation of capital is to prepare a business analysis that shows the costs associated with the risk of downtime.
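The downtime-cost side of such a business analysis can be reduced to a simple expected-value calculation. This is a minimal sketch; the outage frequency, duration, and hourly cost below are hypothetical placeholders, not figures from the text.

```python
# Minimal sketch of the downtime-risk term in a facilities business case.
# All input figures are hypothetical placeholders.

def expected_annual_downtime_cost(outages_per_year: float,
                                  hours_per_outage: float,
                                  cost_per_hour: float) -> float:
    """Expected yearly cost of unplanned downtime (frequency x duration x rate)."""
    return outages_per_year * hours_per_outage * cost_per_hour

# Example: one 4-hour outage every two years at $100,000 per hour of downtime.
risk = expected_annual_downtime_cost(0.5, 4, 100_000)
print(f"Expected annual downtime cost: ${risk:,.0f}")  # $200,000
```

Comparing this figure against the capital cost of facility improvements gives management a common basis for weighing hardware spending against infrastructure spending.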

