The sheer number of moving pieces and parts in a data center is staggering. Cabinets, servers, CRACs, CRAHs, chillers, UPS, routers, and switches, oh my!
As we know from the second law of thermodynamics, the entropy of this closed system only increases over time, leading to a natural state of disorder. The measure of day-to-day success in the data center, uptime, is simply the management of this entropy via flawless execution and masterful planning.
This is why data center operators are intrinsically cautious, conservative, and wary of operational deviation. Relying on time-tested technology and practices has served them well, keeping the internet’s lights on through emergencies and natural disasters. Perhaps even more impressive is how this operational discipline has thrived amid massive data center growth. Some 90% of the data that exists today was created in the last two years! That means providers are continually building new facilities and expanding existing ones. Read that as “more moving parts and more entropy, requiring even more flawless execution at very large scale.”
Paradoxically, the more data we house in these facilities, the more important it becomes to assure its availability. In the throes of the Fourth Industrial Revolution, where data has become currency, we rely on data centers to communicate, pay bills, trade stocks, track health statistics, and drive our businesses and even our cars. Neither our economy nor our daily lives have any tolerance for downtime anymore.
This expectation of instant, reliable data access, combined with entropy’s persistent drive toward chaos and failure, makes disaster recovery an absolute mandate. It’s no surprise that surveys of more than 800 global operators conducted by Schneider Electric and 451 Research found that 97% of providers’ customers are asking for contractual commitments to availability.
Whether it’s extreme weather, grid failures, or a pandemic, data centers must have backup systems to guarantee these commitments. With redundancy and uptime in mind, let’s take a look at essential backup power systems and understand how they are deployed to both protect and advance global IT interests.
Where Does Downtime Begin?
Unfortunately, we don’t get to choose the how, why, and where of an outage. Texas didn’t get to opt out last year when unseasonably harsh winter conditions overwhelmed a power grid that was never designed to handle them. The grid went down. Thanks to properly designed and implemented disaster recovery systems (diesel generators), Texas data centers maintained uptime for hundreds of millions of customers.
Even though the industry is driving toward lights-out data center facilities, we also don’t yet get to remove human error from the uptime equation. Just last year, the Uptime Institute’s data center resiliency survey revealed that 42% of respondents had experienced an outage caused by human error in the last three years; of those incidents, 57% were attributed to staff failing to follow procedures and 44% to incorrect processes themselves.
On the bright side, rapid expansion at scale has forced the industry as a whole to learn, improving operations through process excellence and becoming less susceptible to systemic failure.
Corporate enterprises have also learned that their in-house data center programs cannot compete with cloud service providers’ technology, staff, security, and infrastructure capabilities, and they have moved en masse to these massive offsite facilities. This flight-to-cloud phenomenon over the last 10 years has increased both the stability and the sustainability of many core business systems.
With this in mind, it’s no surprise the global data center generator market was valued at $6.7 billion in 2019 and is expected to grow at a CAGR of 6.5% from 2020 to 2027. Generator revenue has continued to grow dramatically across all segments for the last few years. Data center operators are making investments in their own longevity and viability.
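As a back-of-the-envelope illustration of what that growth rate implies (a minimal sketch assuming simple annual compounding from the $6.7 billion 2019 baseline, not a figure from the underlying report), 6.5% per year lands the market at roughly $11 billion by 2027:

```python
# Rough projection of the data center generator market, assuming the
# cited $6.7B 2019 baseline simply compounds at a 6.5% CAGR.
BASE_2019_USD_BN = 6.7  # 2019 market size, billions of USD
CAGR = 0.065            # compound annual growth rate

for year in range(2020, 2028):
    projected = BASE_2019_USD_BN * (1 + CAGR) ** (year - 2019)
    print(f"{year}: ~${projected:.1f}B")

# 2027 works out to roughly $11.1B under this simple compounding assumption.
```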
Power that Protects — and Propels
The data center version of the old “tree falling in the woods” adage: Is it an outage if customers don’t notice? Contractually, the answer is clearly no. If data center operators design, maintain, and execute their infrastructure properly, a grid failure that is picked up by backup power systems without dropping a bit is an incident, not an outage.
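To make that incident-versus-outage distinction concrete, here is a minimal, hypothetical sketch of the typical failover sequence; the names and timings (UPS_BRIDGE_SECONDS, GENSET_START_SECONDS) are illustrative assumptions, not vendor specifications. The UPS batteries carry the critical load the instant the utility feed drops, and as long as the generators come online before the batteries are exhausted, no bits are dropped and customers never notice.

```python
# Hypothetical sketch of a utility-failure timeline in a data center.
# All names and timings are illustrative assumptions, not vendor specs.
UPS_BRIDGE_SECONDS = 300   # assumed battery ride-through capacity
GENSET_START_SECONDS = 15  # assumed time for generators to start and accept load


def classify_utility_failure(ups_bridge_s: float, genset_start_s: float) -> str:
    """Classify a grid failure from the customer's point of view.

    1. Utility feed drops; UPS batteries pick up the critical load instantly.
    2. Generators start and the transfer switch moves the load onto them.
    3. If the generators are ready before the batteries run out, the load
       never drops and, contractually, customers see nothing at all.
    """
    dropped_load = genset_start_s > ups_bridge_s
    return "outage" if dropped_load else "incident"


print(classify_utility_failure(UPS_BRIDGE_SECONDS, GENSET_START_SECONDS))  # -> incident
```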
Today, thanks to the long track record of stellar performance of which operators are famously fond, backup power is dominated by diesel generators. For more than 100 years, they’ve literally kept the lights on. With the contractual stakes so high, data center operators have little tolerance for untested or inconsistent power solutions. In recent years, however, sustainability has entered the backup power discussion, driven by diesel’s traditionally unapologetic fuel consumption and the particulate emissions that follow. Because of this fossil fuel focus, diesel has garnered a somewhat dirty perception. But mission critical folks love their generators – and aren’t going to give them up easily – so they’ve been marketing and innovating.
On the marketing front, they point out that backup power systems run as little as a few hours per year, limiting their carbon impact. The pace of innovation has also been rapid. Low carbon fuels like hydrotreated vegetable oil (HVO), made from natural oils, are already in production use at many facilities. This kind of continuous improvement of an established technology – evolutionary rather than revolutionary sustainability – allows the industry to keep growing at a torrid pace while still answering the green imperative.
Similarly, the optimization of engine technology is yielding impressive results. One of the latest generator engines was designed from the ground up to shrink the frequency, duration, and load required during testing windows. The result is more than 50% less fuel consumption and emissions – material impact from an evolutionary approach.
It’s just as exciting to see hyperscale data center providers funding and testing revolutionary new approaches to backup power and disaster recovery. Google is doing amazing things with large-scale battery arrays, running 1MW+ production environments at European facilities. Microsoft is working furiously on revolutionary replacement technologies like hydrogen fuel cells to make good on its corporate commitment to a diesel-free future. There’s comfort in knowing that the best, brightest, and most well-funded brains are engaged in building the backup power products of the future.
That’s a snapshot of where the industry is headed. But where will it end up? What about private nuclear? Hydrogen fuel cells sounded like a joke 10 years ago too, so we should never underestimate the power of the creative process combined with the pressure of the sustainability imperative. What we do know is that backup power is a pressing need with tremendous growth projections, guaranteeing keen interest in the segment for many years to come.