* This article accompanies a recent episode of the Mass Construction Show
Have you ever wondered what would happen if a utility outage struck while you were on the operating table during major surgery? What about when you step aboard an aircraft and trust that the air-traffic control system will monitor your plane, and every other one in the sky, to keep air travel safe? Do you think about the internet, the “always on” marvel of the modern era that allows commerce, finance, banking, search engines, video streaming, and thousands of other applications to share content instantly over a single connection?
Supporting almost all of these applications is a complex network of data centers that are constantly receiving, sorting, archiving, and mining internet traffic. “Mission critical” is the term used to describe data centers and other high-tech facilities that must maintain “always on” reliability. An outage of any duration could have serious, sometimes life-threatening impacts. One can easily imagine how dangerous it would be for an air traffic control system to suddenly stop working mid-flight. Similarly, consider the financial and business continuity impacts if online banking and credit card transactions could not be processed for even a short period of time. Data centers are essential to modern technology and, as a result, mission critical construction has become an increasingly relevant, multi-billion-dollar sector.
Design of Data Centers
Data center design requires particular attention to achieve “always on” reliability. The two areas of focus are electrical power and mechanical cooling. The electrical system is the most critical because it supports all of the computer servers and also powers the mechanical cooling equipment. Predictable events that must be mitigated include utility outages, maintenance shutdowns, and unforeseen equipment malfunctions. Every facility is built to a different resilience standard based on its function. Critical safety infrastructure, such as the air traffic control system, naturally carries many more layers of controls and redundancy than a data center supporting a less critical function.
N + 1 Design
A common method of incorporating resilience and reliability is Need + 1 (N+1) design. N+1 means that if you “need” one electrical source and one generator of a certain size, then these are provided, but as a back-up you also install a completely separate lineup, with its own electrical source and generator, to meet the “one additional” (+1) criterion. In an N+1 scenario you can drop one entire lineup to perform planned maintenance, handle an unforeseen malfunction, or even expand the system, while the second lineup provides constant power. Figure 1 below shows an elementary N+1 lineup. As you can see, if you take out any part of the system, there is a second pathway for power to continue reaching the server rack. At the equipment level, some equipment is fed from just one source bus, A or B. In that case, yes, those racks would go down, but equipment is usually paired with mirroring capabilities to maintain continuity if one power source fails.
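The N+1 redundancy check described above can be sketched as a simple path search: remove any single component (or an entire lineup) and verify that power can still reach the rack. This is a minimal illustrative model, not the actual topology in Figure 1; all component names are hypothetical.

```python
# Hypothetical two-lineup N+1 topology: each lineup has a utility feed
# and a generator feeding a bus, and both buses feed the server rack.
from collections import deque

EDGES = [
    ("utility_A", "bus_A"), ("generator_A", "bus_A"),
    ("utility_B", "bus_B"), ("generator_B", "bus_B"),
    ("bus_A", "rack"), ("bus_B", "rack"),
]
SOURCES = {"utility_A", "generator_A", "utility_B", "generator_B"}

def rack_is_powered(failed: set) -> bool:
    """BFS from every live source; True if any path reaches the rack."""
    adj = {}
    for a, b in EDGES:
        if a not in failed and b not in failed:
            adj.setdefault(a, []).append(b)
    queue = deque(s for s in SOURCES if s not in failed)
    seen = set(queue)
    while queue:
        node = queue.popleft()
        if node == "rack":
            return True
        for nxt in adj.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# Dropping any single component, or one entire lineup, still powers the rack.
all_parts = SOURCES | {"bus_A", "bus_B"}
assert all(rack_is_powered({part}) for part in all_parts)
assert rack_is_powered({"utility_A", "generator_A", "bus_A"})  # whole A lineup out
```

A real design review does the same exercise against every bus, breaker, and feeder, which is what the “challenge every part of every system” advice later in this article amounts to.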
For some server equipment, a power interruption of just milliseconds can disrupt the processes it is performing and force a lengthy re-boot. In the example in Figure 1, keep in mind that a generator takes time, usually around 30 seconds, to receive the start command, start the engine, and synchronize before it can replace the utility supply. That 30-second gap translates into a power loss at the server, which is unacceptable for many data center applications. To bridge this gap, more resilient systems include a bank of batteries that is constantly being charged while constantly powering the electrical distribution bus. During a loss of power, for example the 30 seconds it takes the generator to start, this battery bank, also called an Uninterruptible Power Supply (UPS), picks up the slack and provides continuous, uninterrupted power to the system.
It should be noted that these UPS set-ups become expensive quickly, adding hundreds of thousands of dollars to a project's capital costs along with significant life-cycle costs for maintenance and replacement. If the function of a data center is critical, however, these costs are a necessary and justified investment.
5 Things to Consider During Design
- Know your client and their redundancy requirements – Every point in the system can be boiled down to “how many points of failure?” Typically in a data center you want to maintain “ring” style designs in all systems, and where that is not achievable, ensure multiple redundant pathways. This applies to piping, electrical, communication, and other systems alike. A good data center designer will challenge every part of every system, at the highest and lowest levels, to minimize single points of failure.
- Energy Efficiency – In the competitive data center market, increasing energy efficiency means delivering more IT power for less overall energy. This results in lower costs for owners and lower, more competitive pricing for their clients. Today’s most energy-efficient data centers achieve a PUE (Power Usage Effectiveness) in the 1.05 – 1.08 range. They utilize evaporative cooling, efficient variable-speed motors for pumps and fans, and higher chilled water and supply air temperatures.
- Site Location & Understanding Resource Constraints – The electrical and mechanical systems installed are highly dependent on the availability of clean power, water, and fuel oil.
- Scalability – In colocation environments, data centers need to be built so that they can scale with the current population of the building. Stranded capacity reduces efficiency, so it is important to plan cost-effective building modules that allow scaling with minimal stranded capacity, from both electrical and cooling perspectives.
- Quality & Testing – The data center is only as effective as its testing. It is important to design equipment so that it can be tested initially and maintained regularly.
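PUE, the efficiency metric cited in the list above, is simply total facility energy divided by IT equipment energy, so 1.0 is the theoretical floor. A quick sketch, using illustrative (not measured) figures:

```python
# PUE = total facility power / IT equipment power.
def pue(it_kw: float, cooling_kw: float, other_kw: float) -> float:
    """Power Usage Effectiveness; lower is better, 1.0 is the ideal floor."""
    return (it_kw + cooling_kw + other_kw) / it_kw

# Hypothetical facility in the highly efficient 1.05 - 1.08 range:
print(round(pue(it_kw=1000, cooling_kw=50, other_kw=15), 3))  # 1.065
```

Every kilowatt shaved off cooling, lighting, and distribution losses moves the ratio toward 1.0, which is why evaporative cooling and variable-speed drives feature so prominently.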
5 Things to Consider During Construction
- There are always extensive equipment purchases with long lead times. Almost always, the first task is to purchase the equipment and release it into production, which often takes 20+ weeks for most of the major gear. By the time the building structure is complete, the equipment needs to be arriving onsite and ready to install.
- You are always trying to squeeze a lot of equipment into small spaces. All trades need to be coordinated in a single BIM model to make sure the utility, mechanical, electrical, fire sprinkler, plumbing, and structural disciplines will work within the given space constraints.
- Almost every data center comes with aggressive schedule expectations. Work with the team to understand all trade-dependent steps required to get each system started up, integrated, commissioned, and accepted in an operational state. There are often dependencies that need to be understood up front and planned into the schedule.
- Data centers have many unique construction elements that are uncommon in other industries. These include major electrical and mechanical infrastructure and specialized fire alarm and suppression systems that enable fire response without damaging sensitive computer equipment (we cover this in detail on the podcast). Much like prisons and secure government facilities, expect highly controlled security, access, and monitoring.
- Computer servers can be destroyed if they ingest metal shavings. During construction, and especially approaching operations and hand-over, all construction activities must prevent dust, debris, and contaminants from entering the data halls. Much like clean rooms in industrial facilities and hospitals, it is important to push a culture of cleanliness onsite and clean as you go with every task.
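The long-lead procurement point at the top of the list above is ultimately a back-scheduling exercise: work backwards from the date equipment must be onsite. A minimal sketch, where the dates, lead times, and buffer are hypothetical examples:

```python
# Back-scheduling a purchase-order release date from a required
# onsite date and a vendor lead time.
from datetime import date, timedelta

def order_by(need_onsite: date, lead_weeks: int,
             buffer_weeks: int = 2) -> date:
    """Latest PO release date, with an assumed schedule buffer."""
    return need_onsite - timedelta(weeks=lead_weeks + buffer_weeks)

# Equipment with a 20+ week lead time, needed onsite when the
# structure is complete (example date):
print(order_by(date(2025, 12, 1), lead_weeks=20))  # 2025-06-30
```

In practice this means major equipment is often ordered before ground is broken, which is why procurement leads the construction schedule on most data center jobs.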
Let’s continue the discussion… What are your thoughts about Mission Critical construction? What did we miss? Where is the industry going? Reach me at email@example.com, and let’s connect on LinkedIn.