By Christopher M. Johnston, P.E.
Published in the July 2008 issue of Today’s Facility Manager
Power is a critical element in data center design because its capacity and performance affect the proper execution of every other system in the facility. Power must be coordinated with cooling (the other critical element) to ensure that each works hand in glove with the other.
A data center’s power system brings energy into the space in the form of electricity, and the cooling system removes the same energy after it is transformed into heat. If the power system is not up to the task of supporting that system (or vice versa), the entire data center could suffer.
Proper power design should focus on these major areas: appropriate availability and capacity; robust and flexible design and equipment; simplicity; and sustainability.
Availability And Capacity
The power design’s availability should be appropriate for the potential losses (financial or human) due to unexpected downtime. A Tier IV (system + system, or 2N) arrangement with 0.99999 (five 9s) availability is not necessary for every project, even though it is considered today’s state-of-the-art setup. Instead, a Tier II (N+1 redundant components) arrangement with 0.999 (three 9s) availability may be better for a data center that serves a call center backed up by other call centers.
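The difference between those availability figures is easiest to see as annual downtime. A minimal sketch (the function name and hour count are illustrative, not from the article):

```python
# Illustrative only: annual downtime implied by an availability figure.
HOURS_PER_YEAR = 8766  # average year, including leap years

def annual_downtime_hours(availability):
    """Expected unavailable hours per year for a given availability."""
    return (1 - availability) * HOURS_PER_YEAR

for tier, availability in [("Tier II (three 9s)", 0.999),
                           ("Tier IV (five 9s)", 0.99999)]:
    print(f"{tier}: {annual_downtime_hours(availability):.2f} hours/year")
```

Three 9s allows on the order of nine hours of downtime a year; five 9s allows only a few minutes, which is why the premium for Tier IV should be weighed against the real cost of an outage.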
Facility managers (fms) should understand the worst case scenario when deciding how much availability is required. This approach has been used in the financial community for many years, and a corresponding principle is developing in healthcare operations.
If the initial budget is limited, it is better to focus on availability and reduce initial capacity (with the option to add more later as demand grows). Upgrades of availability in operating data centers are often difficult to perform and always subject the computer load to additional risk during the upgrade.
Hard lessons were learned in the late 1990s and early part of this decade: too much available capacity is just as much of a problem as not enough. Uninterruptible Power Supply (UPS) systems do not operate easily or efficiently at low percentage loads, and standby generators encounter similar problems. Electric utility companies are often skeptical about large initial loads and are reluctant to dedicate significant investments that will become stranded if the data center power load doesn’t develop.
Robust And Flexible Designs
In today’s business climate, scalable and modular designs are the best tools to accommodate growth in data center power requirements. For example, the capacity of a parallel redundant UPS system can be increased by adding more modules, as long as proper provisions are included in the original design. Likewise, additional UPS systems can be added if included from the beginning.
If the total UPS forecast is for 1MW for year 0, 2MW for year 3, and 4MW for year 5, then fms should consider designing for two 2MW systems. The first system will accommodate the load through year 3; the second system should be added in year 3 to accommodate growth beyond 2MW. Astute owners appreciate this build and pay as you go strategy, and astute designers will learn to look at their projects through the fm’s eyes.
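The phased build-out above can be sketched as a simple schedule calculation. The figures come from the article’s example; the function and variable names are illustrative, and this is a planning sketch, not a sizing tool:

```python
import math

# Article's example forecast: year -> projected UPS load in MW.
forecast_mw = {0: 1.0, 3: 2.0, 5: 4.0}
MODULE_MW = 2.0  # each installed UPS system is sized at 2 MW

def systems_needed(load_mw, module_mw=MODULE_MW):
    """Smallest number of fixed-size UPS systems covering the load."""
    return math.ceil(load_mw / module_mw)

for year, load in sorted(forecast_mw.items()):
    print(f"Year {year}: {load} MW load -> {systems_needed(load)} system(s) installed")
```

The point of the pay-as-you-go strategy is visible in the output: the second 2MW system is deferred until the load actually grows past the first system’s capacity.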
Robust and flexible design and equipment should be employed, because few fms have an appetite for taking a data center outage whenever maintenance, problem remediation, or growth requires one. Generally, if a problem exists from the outset, the opportunity to fix it may never arrive again.
Some best practices for robustness and flexibility include: using drawout- or plug-in-mounted circuit breakers instead of stationary-mounted circuit breakers (wherever feasible); using switchgears and switchboards that have rear access and barriers between sections; providing physical isolation between redundant systems where necessary; and providing dual inputs for switchgears, switchboards, panelboards, power distribution units (PDUs), and remote power panelboards (RPPs).
A drawout-mounted circuit breaker can be removed from service for maintenance without de-energizing its supply bus and creating a potential outage. However, removing a stationary-mounted circuit breaker requires that the supply bus be de-energized.
Switchgears and switchboards with rear access and section barriers offer increased flexibility and availability over front access only equipment. Physical isolation between redundant systems reduces the risk of a fault in one system being transferred to the other.
A spare input into a panelboard or similar equipment provides a “back door” that can be used to accomplish modifications without downtime and with minimal risk. A prime example is retrofitting a static transfer switch (STS) on the supply side of an existing PDU. This retrofit is much more easily accomplished if the PDU has a second input circuit breaker.
Keep It Simple
Albert Einstein once said, “Everything should be made as simple as possible, but not simpler.” Too often engineers make designs overly complex, ignoring the needs of the operations staff, which must respond to problems (inevitably on short notice) without the luxury of a leisurely drawing review. On the other hand, the design should consider all the requirements and not overlook any for the sake of simplicity alone.
Simplicity in design eliminates unnecessary cost and complexity and provides substantial operating benefits. Some concepts in simplifying electrical system design include: reducing the number of layers in the electrical distribution system; eliminating unnecessary tie circuits; eliminating multiple circuit breakers of the same or similar ratings in series; providing fully selective overcurrent protective device coordination; and providing enhanced labeling and color coding.
Reducing the number of layers in the electrical distribution system makes it easier for staff to understand the system and boosts confidence in operating it. It also lowers the initial cost, reduces space required, and simplifies selective overcurrent protective device coordination.
Eliminating unnecessary tie circuits is another effective minimization strategy. Not using multiple circuit breakers of the same or similar ratings in series simplifies selective overcurrent protective device coordination.
If the circuit breaker is needed only as a disconnect, then it should be supplied with a non-automatic trip unit. Fully selective overcurrent protective device coordination increases availability and reduces staff confusion when a device trips.
Only the device immediately upstream of the fault or overload should trip. Providing clear, concise, and complete labels and color-coding improves staff training and confidence.
Sustainability
Sustainability should be designed into every project to minimize the fm’s total cost of ownership (TCO). Since the cost of electricity continues to escalate at a rate that exceeds inflation, it is not appropriate to base TCO calculations on today’s electricity costs; projections based on that number will already be out of date when the data center goes online.
Particular sustainable opportunities in the power system are: newer UPS technology with increased efficiency; UPS systems arranged for increased efficiency; equipment specified for peak efficiency at normal operating load; 575 VAC distribution rather than 480 VAC; 100% rated circuit breakers; and DC distribution at greater than 48V.
Today’s large capacity, double conversion static UPS systems work at less than 92% peak efficiency. New double conversion technologies operate at above 94% peak efficiency, and other technologies promise above 98% peak efficiency. Depending on the power usage effectiveness (PUE) of the data center, every watt of reduction in UPS losses cuts data center demand by 1.6 to 2 watts.
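The PUE claim above is simple multiplication, but it is worth making explicit because the leverage surprises people. A minimal sketch (names are illustrative):

```python
# Illustration of the claim above: at a given PUE, each watt saved in
# UPS losses avoids roughly PUE watts of total facility demand, because
# the saved watt no longer has to be delivered or cooled.
def facility_savings_w(ups_loss_reduction_w, pue):
    """Total facility demand reduction for a given cut in UPS losses."""
    return ups_loss_reduction_w * pue

for pue in (1.6, 2.0):
    saved = facility_savings_w(1000, pue)
    print(f"PUE {pue}: 1,000 W less UPS loss -> {saved:.0f} W less facility demand")
```

At a PUE of 2.0, a 1kW efficiency improvement inside the UPS shows up as a 2kW reduction at the utility meter.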
Each UPS system in a system + system (or 2N) arrangement normally operates at no more than 50% capacity and frequently at 25% capacity or lower. At 25% capacity or lower, the operating efficiency will be approximately 88%.
If the UPS systems are arranged in 3N/2 and supply the same load, the operating efficiency will improve to over 90%. PDUs in a system + system (or 2N) arrangement normally operate at less than 50% of capacity and should be specified to have peak efficiency at their normal operating point, not at their peak capacity.
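The efficiency difference between 2N and 3N/2 follows from how heavily each system is loaded in normal operation. A hedged sketch of that loading arithmetic (the function and normalization are illustrative, not from the article):

```python
# Per-system loading in the redundancy schemes above. In 2N, two
# full-size systems each carry half the load; in 3N/2, three systems
# each sized at N/2 carry a third of the load apiece, so each runs at
# a higher fraction of its own (smaller) capacity.
def per_system_load_fraction(load, num_systems, capacity_each):
    """Fraction of its own capacity each system carries normally."""
    return (load / num_systems) / capacity_each

N = 1.0  # normalized critical load
print(f"2N:   {per_system_load_fraction(N, 2, 1.0):.0%} of capacity per system")
print(f"3N/2: {per_system_load_fraction(N, 3, 0.5):.0%} of capacity per system")
```

Each system in the 3N/2 arrangement runs at roughly two thirds of its capacity rather than half, which is why it operates closer to the UPS’s efficient region.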
A carefully planned power system design can go a long way toward lowering overall operating costs and minimizing downtime. In this day and age, this type of system is an essential component of any data center project.
Johnston is a senior vice president and national chief engineer for New York-based Syska Hennessy Group’s Critical Facilities Team.
Powerful Tips to Protect Your Mission Critical IT Infrastructure
By Carl Walker, BSEE
Protecting critical information technology (IT) infrastructure is the cornerstone of business continuity planning for organizations of all sizes. In this age of critical computing systems and the Internet, IT infrastructure is vulnerable to damage not only from power outages but also from a myriad of hidden threats typical of the facility environment.
A 2005 study conducted by Berkeley Lab researchers for the U.S. Department of Energy’s Office of Electric Transmission and Distribution found that electric power outages and blackouts cost the nation nearly $80 billion annually, 98% of which was attributed to the commercial and industrial sectors. Large scale outages, such as the blackouts that struck the Northeast in 2003 or those that occurred throughout southeast Florida in February 2008, tend to get the most attention. However, small scale interruptions (those lasting five minutes or less) are the most damaging, costing businesses $52 billion (compared to $26 billion lost in outages lasting longer than five minutes).
Variations in power quality account for much of the damage. A study conducted by PricewaterhouseCoopers found that nearly half of the damage to IT infrastructure can be attributed to hardware failure frequently triggered by power problems, including power failure, power sags, power surges, brownouts, line noise, high voltage, frequency variation, switching transients, and harmonic distortion. IT equipment is also vulnerable to damage that stems from high density, high heat environments, and human error.
For many small businesses, the rack environment is the data center. When planning, it is important to consider the same logistics as in a large data center: access control, thermal management, power protection, power distribution, cable management, flexibility, and monitoring. Generators and surge suppressors are basic components in a power protection strategy, but many businesses make the mistake of stopping there.
One possibility for facility managers (fms) is to consider investing in an uninterruptible power supply (UPS) system. In addition to protecting personal computers, the UPS system can protect mission critical equipment by completely isolating it from raw utility power to deliver the cleanest energy possible.
One UPS can protect a single device, a rack of devices, or several racks of equipment. It can be deployed at an employee’s desktop, in the IT rack or enclosure, or at main power distribution points. For server rooms where there are several racks, it is common to choose a centralized solution as opposed to separate power protection for each rack, because this approach is typically more cost-effective.
In many data centers, racks are loaded with power requirements from 1.5 kW to 3 kW. As equipment gets smaller, blade servers become more popular, and space constraints require companies to pack racks tighter; racks will increasingly need more power in the future.
A rack full of 1U blade servers can potentially draw 20kW or more. The best design for UPS systems is modular, allowing a business to add or remove components as its business needs dictate.
Heat related downtime is a constant threat in high density, high heat equipment environments. The best solution for controlling temperature in the data center is surprisingly low tech: fans. Door mounted fans offer many advantages, including evenly distributing cool air in a horizontal, front to back flow, without taking up any rack unit space. Depending on the rack setup, an exhaust fan (attached to the rear of the rack enclosure), multiple fans, or blanking panels (to partition unused spaces in the enclosure) may also be appropriate. Roof fans are not recommended, since rack mounted equipment is typically designed for front to rear airflow, and employing bottom to top airflow can lead to improper cooling within the rack or enclosure.
A UPS can provide clean, continuous power, and fans can regulate the temperature of the rack environment. But neither of these solutions addresses another problem: human error. One solution is to employ diversified access control strategies to manage entry at the level of function and/or individual, while giving a top-level administrator control of the master key.
Another strategy that helps control costs and improves equipment reliability and longevity is proper cable management. Bundling and routing cables provides easy access in a fluid environment and removes another opportunity for human error when cables are rerouted.
The performance and lifespan of IT equipment can be compromised if the environment is dirty, damp, hot, or insecure. Choosing the UPS components that best meet business needs, maintaining an optimal environment for IT equipment, and monitoring the data center on an ongoing basis will help fms predict potential trouble and avoid costly downtime.