FM Issue: Today’s Data Center

By Melissa Chambal, RCDD, NTS
Published in the February 2011 issue of Today’s Facility Manager

Today’s facility manager (fm) is a key contributor to the successful relocation or expansion of many organizations’ data centers. Fms give the design team insight into how the critical infrastructure systems operate and what every interested party in the facility expects of them. Power, HVAC, and access control are all vital to any data center, large or small, and the fm is responsible for them all. Whether working from an internal facilities perspective or in a commercial building with multiple occupants, the fm is expected to provide consistent and reliable service within the facility for a multitude of disciplines.

How Critical Is Critical?

Not all data centers are alike. Each facility will have its own distinct operating procedures, systems, layout, construction, and occupancy requirements. By understanding these and the organization’s expectations for the availability of the network, today’s fm is better positioned to provide a safe, continuous, consistent, and reliable infrastructure.

Regardless of industry and equipment, all data centers have one thing in common: the cost of downtime. Conservative estimates put this cost for financial institutions in the $7 million per hour range, with credit card and banking operations around $3 million per hour (and climbing). Cross-industry averages are approximately $47,000 per hour, a figure that includes lost revenue, wages, and productivity.
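To put those figures in perspective, here is a quick back-of-the-envelope calculation using the hourly rates cited above; the rates are this article’s estimates, not measured values for any particular organization.

```python
# Rough illustration of how downtime costs scale, using the hourly
# figures cited in the article (estimates, not measured values).
HOURLY_DOWNTIME_COST = {
    "financial_institution": 7_000_000,
    "credit_card_and_banking": 3_000_000,
    "cross_industry_average": 47_000,
}

def outage_cost(industry: str, hours: float) -> float:
    """Estimated total cost of an outage lasting `hours`."""
    return HOURLY_DOWNTIME_COST[industry] * hours

# Even a 15-minute outage at the cross-industry average rate adds up:
print(f"${outage_cost('cross_industry_average', 0.25):,.0f}")  # $11,750
```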

IT departments know the critical network and storage demands of the business applications and software they must support. But questions like “Is it a 24/7 operation?” or “Will ‘remote’ or ‘after hours’ network access be required for continuous support?” should be answered clearly by IT to facilities management (FM) early in the project phase, so fms can, at the very least, plan, integrate, and operate the building infrastructure adequately.

Fms have become more aware of the demand and availability requirements of their mission critical operations. After all, if the data center within the facility is the only one the enterprise has (which is very common for many owner occupied and operated buildings), any outage can be crippling and have a long-term effect. Grasping how an organization will be impacted by an outage will help in exploring solutions for preventive maintenance and even expansion plans.

Technology is changing rapidly, and corporate IT departments across industries are constantly shifting between proactive and reactive situations when it comes to the applications and processes their networks support. Consequently, the fm with one or more mission critical sites will always be shifting between the same situations; the difference is that the fm’s emphasis will be on other specialized disciplines (such as power, HVAC, fire suppression, and access control).

Location, Location, Location

Not all locations are ideal for a data center. In areas subject to seismic instability or prone to high winds (such as hurricanes and tornados), facilities that will house data centers need specialized hardening of structural elements. Depending on the geographic location, these characteristics will be inherent in the initial design of the building, based on local and life safety codes.

Special consideration of data center placement within the building is also important. Computer rooms housing servers and mission critical equipment should not be next to elevator shafts, load bearing walls, or exterior walls or windows; such adjacencies limit expansion and pose security concerns. This industry best practice can cause space planning nightmares, but it is better to build in the capability for expansion early, rather than face the “out of room” possibility later in the lifespan of the facility.

Careful examination is required when a data center is planned for a particular space in a multi-story building. For high density data centers, industry experts call for floor loads of 250 pounds per square foot (with a hanging load of 50 pounds per square foot). These requirements can be difficult to meet in older commercial buildings. The FM team will be pivotal in providing engineers with the information they need to ensure weight loads are within tolerance. Most importantly, it will be the fm who diplomatically runs interference to ensure any subfloor reinforcement work does not inconvenience others in the building.
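As a rough illustration of the kind of check involved, the sketch below compares a hypothetical loaded cabinet against the 250 pounds per square foot guideline; the cabinet weight and footprint are invented examples, and any real assessment belongs to a structural engineer.

```python
# Minimal sanity check against the floor load guideline cited above.
# Cabinet weight and footprint are hypothetical; a structural engineer
# must verify actual loads against the building's rating.
FLOOR_RATING_PSF = 250   # pounds per square foot (high density guideline)
HANGING_RATING_PSF = 50  # overhead/hanging load guideline

def cabinet_load_psf(total_weight_lb: float, footprint_sqft: float) -> float:
    """Distributed load a loaded cabinet places on its footprint."""
    return total_weight_lb / footprint_sqft

# Example: a 2,000 lb loaded cabinet on a 24" x 42" (7 sq ft) footprint.
load = cabinet_load_psf(2_000, 7.0)
print(f"{load:.0f} psf vs. {FLOOR_RATING_PSF} psf rating")  # ~286 psf: over
```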

Industry standards recommend a minimum slab to slab distance of 15′ between floors to accommodate at least an 18″ raised floor; high density data centers require a 24″ raised floor. A minimum 10′ ceiling height (from the finished floor to the ceiling) allows the hot air returning from the servers to rise unobstructed to the plenum space above and return to the computer room air conditioners (CRACs). Unfortunately, many commercial data centers occupy cramped quarters without enough clearance either above or below to allow for cool air circulation.

Anchoring freestanding cabinets and racks to the slab is a best practice regardless of location, with additional bracing where local codes require it. (For more on these issues and others, see the accompanying sidebar.)

Got Power?

Availability of redundant power is most attractive to an organization with large computer processing needs. Eliminating single points of failure in the power chain increases availability.

The farther upstream from the servers a failure occurs, the broader its impact on the entire data center. Thus, for mission critical facilities, redundant power should be fed from separate and distinct substations, entering the building through separate electrical entrance rooms. Feeds arriving from opposite sides of the street are ideal, but also the most costly. Backup generators dedicated to maintaining critical business functions, in addition to life safety/emergency power generator requirements, add an attractive benefit for the enterprise whose entire viability depends on the availability of power.
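A simple reliability calculation shows why separate feeds matter. The sketch below assumes the two feeds fail independently (an idealized assumption, since real feeds can share failure modes), and the availability figures are illustrative only.

```python
# Sketch of parallel redundancy: if two independent feeds each fail
# rarely, the chance both are down at once is the product of their
# individual downtime probabilities.
def combined_availability(a1: float, a2: float) -> float:
    """Availability of two independent feeds in parallel."""
    return 1 - (1 - a1) * (1 - a2)

single = 0.999                       # one feed: ~8.8 hours down per year
dual = combined_availability(single, single)
print(f"{dual:.6f}")                 # 0.999999: ~32 seconds down per year
```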

Most data centers will require their own uninterruptible power supply (UPS) system—whether standalone or within the cabinets themselves—in addition to whatever percentage of the building UPS may be available. This can present a concern with respect to floor loading (as mentioned earlier), since standalone systems can be large and heavy. A modular approach to UPS systems can help organizations that wish to expand their critical network equipment, allowing for either proper shutdown or continuous operation, but it all depends on how the electrical system is designed.
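For a sense of the sizing involved, here is a simplified ride-through estimate; the battery capacity, load, and efficiency figures are hypothetical, and real sizing must follow the manufacturer’s discharge curves and the electrical design.

```python
# Simplified UPS ride-through estimate at a constant IT load.
# All figures below are hypothetical examples.
def ups_runtime_minutes(battery_kwh: float, it_load_kw: float,
                        efficiency: float = 0.9) -> float:
    """Approximate minutes of ride-through at a constant IT load."""
    return (battery_kwh * efficiency / it_load_kw) * 60

# Example: 40 kWh of batteries carrying a 60 kW load.
print(f"{ups_runtime_minutes(40, 60):.0f} minutes")  # ~36 minutes
```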

Grounding and bonding are essential from both life safety and performance perspectives. Today’s sophisticated equipment requires a ground resistance of less than five ohms. Nothing creates sporadic and intermittent problems, failures, and equipment shutdowns more than a poor grounding and bonding system. Proper techniques throughout the facility can eliminate many problems before they even happen. Regular facility inspections are strongly recommended to ensure the grounding infrastructure is intact and that any new cabinets or equipment are properly bonded to the building ground system.

Brrr…Is It Cold In Here?

Cooling is typically a data center’s largest power requirement after the IT load itself. But it actually might be a bit too cold in some data centers, especially if fms are unaware of recent changes in data center cooling guidance.

Delivering the proper amount of cold air can be a challenge, but fortunately, it can be accomplished in a variety of ways. The most common architecture in data centers is underfloor cooling: CRAC units deliver cold air beneath the raised floor, and it emerges through properly placed perforated floor tiles.

This cooling architecture works best when the underfloor space is free of obstructions. Cabinets are aligned in a hot aisle/cold aisle configuration, allowing cool air to rise through the perforated floor tiles into the cold aisles. However, overhead or in row cooling may need to be deployed if no raised floor is present (or if a lack of space or clearance causes concerns). A combination of in row units, CRACs, or individual water cooled cabinets may be needed as the data center migrates to higher density blade servers.

One thing that’s changing strategies is the update to ASHRAE’s Thermal Guidelines for Data Processing Environments (2009). The new recommended temperature range of 64.4˚F to 80.6˚F (18˚C to 27˚C) raises the upper limit by 3.6˚F, and running warmer translates into power savings. Measurements must be taken at the air intake of the equipment. (The temperature indicated on the CRAC unit itself is not indicative of the temperature down the cold aisle where a new blade server is in production.)
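A minimal sketch of that monitoring practice might look like the following; the sensor readings are hypothetical, and in practice they would come from probes placed at the equipment air intakes, not from the CRAC unit’s own display.

```python
# Check server intake temperatures against the ASHRAE recommended
# range cited above. Sample readings below are hypothetical.
ASHRAE_LOW_F, ASHRAE_HIGH_F = 64.4, 80.6  # 18 to 27 degrees C

def f_to_c(temp_f: float) -> float:
    """Convert Fahrenheit to Celsius."""
    return (temp_f - 32) * 5 / 9

for intake_f in (68.0, 77.5, 82.1):  # sample intake readings
    ok = ASHRAE_LOW_F <= intake_f <= ASHRAE_HIGH_F
    status = "OK" if ok else "OUT OF RANGE"
    print(f"{intake_f:.1f} F ({f_to_c(intake_f):.1f} C): {status}")
```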

What Is PM Anyway?

In the world of FM, there may be some confusion over the initials PM. Is it preventive maintenance or project management? Actually, it’s both.

Preventive maintenance is vital to the health of the facility infrastructure. Preventive maintenance procedures must be strictly followed to ensure the safety of those performing the work; they must also ensure that people, property, and systems continue to operate as intended. Unfortunately, many documented data center outages are due to neglected routine maintenance on key infrastructure systems.

Facility project management becomes vital in coordinating planned, scheduled maintenance. Contractors should provide a step-by-step scope of work detailing what is to occur, and many fms require a detailed back-out plan in case the procedure runs into a problem and cannot be completed. This ensures the system will be restored to its pre-maintenance condition and gives the team time to explore solutions without compromising expected services. As tedious as this may seem, it provides contingencies that would not exist had the procedures not been enforced.

Since routine maintenance for infrastructure systems typically falls under the responsibility of FM, it is vital to inform users of procedures, just in case alternate methods of network access must be arranged. Many small- to midsize organizations will require a complete shutdown of network facilities without any alternate means of access. This is common, and when planned and managed accordingly, these routine maintenance projects can—and should—be invisible to users.

What Now?

Technology is getting faster. Enterprises depend on their networks to carry out their business objectives. These networks are housed in facilities with complex and diverse systems that keep the servers humming without a second thought. So what now? How do fms prepare their facilities to meet demands?

Technology links everyone, everywhere. Fms and data center managers must keep this link open in order to communicate what changes have occurred—and will occur—within the enterprise environment.

Is more power in the future? Yes. Good, clean, reliable power will be a commodity many will seek out in their existing or new places of business. Newer buildings are being designed with dedicated power and technology entrance rooms. Energy efficiency programs with local power providers, along with efficient materials, will help minimize a building’s usage and carbon footprint.

Virtualization will free up floor space by consolidating multiple applications onto fewer servers. However, those applications will reside on more power hungry blade servers, churning out large quantities of heat that must be removed.

Regardless of the application, communication must take place between the FM team and the IT department—person to person—around the design table, conference table, or drafting table. Bridges have been established to help IT and fm understand the industry requirements for a productive and functional data center. Resources available through professional networking groups and published industry standards and best practices can offer assistance.

Power and cooling are the focal points of many technical white papers and blog postings. Fms looking to research this topic simply need to search the terms “data center,” “power,” and “cooling,” and numerous resources will turn up. These resources offer suggestions, recommendations, and solutions for the data center manager and illustrate how fms can achieve success.

In the era of technological innovations, many fms will be riding a high speed roller coaster with their building occupants as they migrate to new platforms, speeds, and processes which will be embedded into the normal course of doing business. What’s the best advice for fms entering the data center world? Just hang on and enjoy the ride!

Melissa Chambal, RCDD, NTS, BICSI ITS Technician, has spent over 25 years in telecommunications and data centers. She is currently a Master Instructor with BICSI and a leading expert for the organization’s network and data center design courses.