The data center industry is in a continual state of evolution, but one factor remains constant: the need to ensure continuity and the availability of business-critical data and applications. Every year developments in the industry create new challenges to business continuity while simultaneously increasing the pressure to eliminate downtime. This year is no exception, with two trends in particular warranting the attention of continuity specialists.

Tackling Carbon Emissions and Water Use

The most significant trend in the data center industry in 2022 is the sense of urgency operators are bringing to the issues of climate change and sustainability.

Most continuity specialists already have a sense of urgency around climate change, as they have had to adapt their disaster plans, including data center operations, to the reality of more frequent and severe climate events. What we are seeing now is the other side of the coin: a movement to dramatically reduce or eliminate the carbon footprint and environmental impact of data center operations.

Catalyzed by the pandemic and enabled by a wave of innovation, large data center operators have set ambitious targets for becoming carbon neutral and water free. Google, for example, announced its goal of running entirely on carbon-free energy sources by 2030, while Microsoft announced plans to be carbon negative and water positive by 2030.

These large operators are paving the way, but they will have plenty of company as this trend builds momentum in the coming year. Few data centers — whether cloud, colocation, or on-premises — will find themselves exempt from the need to take a serious and multi-faceted approach to data center sustainability. And many are expected to follow a similar roadmap.

The first phase of that roadmap is to offset data center energy consumption with renewable energy, including use of renewable energy credits and other financial vehicles. In most cases, this step will need to be supported with additional measures that improve resource utilization and reduce data centers’ dependence on the grid.

Taking the Next Step Toward More Efficient Operations

When it comes to efficiency and resource utilization, most data centers have already harvested the low-hanging fruit. According to the Uptime Institute’s annual Data Center Industry Survey, the average PUE — the main measure of data center energy efficiency — dropped from over 2.0 in 2010 to 1.58 in 2018. Operators then appear to have hit a wall, as PUEs have remained relatively flat for the last three years. To break through that wall, they will need to leverage recent innovations capable of enabling the next wave of improvement.
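PUE itself is a simple ratio — total facility energy divided by IT equipment energy — which is part of why it became the industry's standard yardstick. The sketch below illustrates the calculation; the energy figures are hypothetical, chosen only to land on the survey's 2018 average.

```python
def pue(total_facility_kwh: float, it_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy divided by
    IT equipment energy. 1.0 means zero overhead; 2.0 means cooling,
    power conversion, and lighting consume as much as the IT load itself."""
    if it_kwh <= 0:
        raise ValueError("IT energy must be positive")
    return total_facility_kwh / it_kwh

# Illustrative: a site drawing 15.8 GWh in total against 10 GWh of
# IT load sits exactly at the 2018 industry average.
print(pue(15_800_000, 10_000_000))  # → 1.58
```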

Optimizing the critical power system:
Operators have long accepted a certain amount of inefficiency in the power system to ensure the highest availability of critical systems. However, the precision and intelligence of the new generation of power equipment is creating opportunities to drive up utilization and drive out inefficiencies — without increasing risk.

One source of stranded capacity is oversizing, driven both by derating of power equipment and by sizing systems for infrequent peaks in demand. Precisely manufactured equipment is now available that can operate at 100% of rated capacity, eliminating the need for derating. Operators can also leverage modern UPS systems’ ability to safely operate above rated capacity for short periods, handling brief peaks without oversizing.
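The sizing arithmetic behind that stranded capacity is straightforward. A hedged sketch follows: the load figures, the 0.8 derating factor, and the 10% overload window are all hypothetical placeholders — actual values vary by equipment and vendor.

```python
def required_rating_kw(steady_kw: float, peak_kw: float,
                       usable_fraction: float = 1.0,
                       overload_ratio: float = 1.0) -> float:
    """Smallest nameplate UPS rating that covers the load.

    usable_fraction: portion of nameplate usable continuously
                     (values below 1.0 model derating).
    overload_ratio:  short-duration overload the UPS can absorb
                     (e.g. 1.10 for a 10% peak window).
    """
    # The steady load must fit in the derated continuous envelope;
    # short peaks may ride in the overload window.
    return max(steady_kw / usable_fraction,
               peak_kw / (usable_fraction * overload_ratio))

# Hypothetical 800 kW steady load with 880 kW short peaks:
legacy = required_rating_kw(800, 880, usable_fraction=0.8)   # ~1,100 kW
modern = required_rating_kw(800, 880, overload_ratio=1.10)   # ~800 kW
print(legacy, modern)
```

Under these assumptions the derated design strands roughly 300 kW of nameplate capacity that full-rated equipment with a modest overload window recovers.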

Operators and equipment manufacturers are also working together on more sophisticated redundancy strategies that dramatically increase power equipment utilization to over 90%. Intelligent UPS controls can now safely cut energy losses from power conditioning by up to 50%, depending on utility power quality.

These innovations can work together to increase available capacity within existing data centers, delaying or eliminating the need for expansion, while driving energy losses within the system well below current levels to reduce carbon emissions.

Reducing water use:
One of the ways data center designers drove down PUEs in the past was to use water-intensive technologies to increase the efficiency of data center thermal management systems. That strategy has come into question as water has grown scarce in some regions and municipalities have begun to restrict its use for cooling. The Uptime Institute even introduced a metric similar to PUE, Water Usage Effectiveness (WUE), to help operators measure their water consumption.

Figure 1: An evaluation of thermal management technologies under identical conditions found that DX systems with pumped refrigerant delivered a PUE below 1.12 while achieving a WUE of zero.

As a result, there is a trend away from a singular focus on energy efficiency in the design of thermal management systems to a more balanced approach that optimizes both energy efficiency and water use. This is being accomplished through adoption of water-free direct expansion (DX) systems, which deliver energy efficiency similar to indirect evaporative systems while eliminating the millions of gallons of water used by those systems (Figure 1). Like the changes being implemented in the critical power system, water-free DX systems can reduce the environmental impact of a data center while maintaining a high level of availability.
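WUE follows the same pattern as PUE: annual site water use in liters divided by IT equipment energy in kilowatt-hours. A minimal sketch with illustrative figures (not drawn from the evaluation in Figure 1):

```python
def wue(annual_water_liters: float, it_kwh: float) -> float:
    """Water Usage Effectiveness: liters of water consumed per kWh of
    IT equipment energy. Lower is better; water-free designs score 0."""
    if it_kwh <= 0:
        raise ValueError("IT energy must be positive")
    return annual_water_liters / it_kwh

# Illustrative: an evaporative plant consuming 180 million liters a year
# against 100 GWh of IT load, versus a water-free DX design.
print(wue(180_000_000, 100_000_000))  # → 1.8 (liters per kWh)
print(wue(0, 100_000_000))            # → 0.0
```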

Reducing Dependence on the Grid

For the industry to meet its long-term sustainability objectives, data centers must ultimately reduce their dependence on utility power and diesel and natural gas generators.

In the short term, fuel cells create the opportunity to replace carbon-fueled generators as a source of backup power. Proton-exchange membrane (PEM) fuel cells have excellent power density and can start quickly, even in low temperatures, making them ideal for backup power applications. Two key obstacles restrain the use of PEM fuel cells as a backup power source today: the cost of hydrogen, which will come down as adoption of fuel cells increases across industries, and the challenge of transporting and storing the quantities of hydrogen required to ensure 24 or 48 hours of backup power.
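The scale of that storage challenge is easy to see with back-of-envelope arithmetic. The sketch below assumes hydrogen's lower heating value of roughly 33.3 kWh/kg and a fuel cell efficiency around 50%; the 1 MW load and 48-hour runtime are hypothetical.

```python
H2_LHV_KWH_PER_KG = 33.3  # lower heating value of hydrogen, ~33.3 kWh/kg

def hydrogen_required_kg(load_kw: float, runtime_h: float,
                         fuel_cell_efficiency: float = 0.5) -> float:
    """Approximate mass of hydrogen needed to carry a given load
    for a given runtime through a fuel cell."""
    energy_kwh = load_kw * runtime_h
    return energy_kwh / (H2_LVH if False else H2_LHV_KWH_PER_KG) / fuel_cell_efficiency

# A hypothetical 1 MW facility holding 48 hours of backup:
print(round(hydrogen_required_kg(1000, 48)))  # roughly 2,900 kg of stored H2
```

Nearly three metric tons of hydrogen for a single megawatt of load illustrates why on-site production is more attractive than trucked delivery.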

Ultimately, this second obstacle will be addressed by implementing on-site electrolysis that, when powered by renewable sources, produces enough green hydrogen to enable fuel cells to serve not only as the backup power source, but as the primary source of power when renewables are not producing energy.

Figure 2: Renewable energy powers the data center and produces hydrogen to power fuel cells. When renewables aren’t producing power, the fuel cells enable carbon-free data center operations.

Here’s how it can work. Excess wind or solar energy generated on site powers electrolyzers that produce clean hydrogen for solid oxide fuel cells (SOFCs). This hydrogen can be stored on site, and when the sun stops shining or the wind isn’t blowing, the SOFCs can power the data center (Figure 2). When the hydrogen fuel is depleted, the UPS switches the data center to the grid to maintain continuous operations. The UPS provides energy management capabilities in addition to its power conditioning and backup power functions.

While this would be a major change in how data centers are powered — and the data center industry is rightly leery of major changes — it represents a necessary evolution that will enable data centers to continue to play a critical role in every aspect of our lives without contributing to climate change.

Ensuring Continuity at the Edge

Sustainability isn’t the only data center trend continuity specialists should be tracking in 2022. Data processing and storage are continuing to migrate to the network edge to support latency-sensitive applications such as predictive maintenance, autonomous robots, smart cities and a host of other emerging applications. VMware projects that edge workloads will grow from their current 5% of total enterprise workloads to over 30% in the next five years.

That growth will enable new applications that improve customer experiences, reduce costs and minimize downtime for a wide range of industrial equipment. But it also creates continuity challenges. Many edge use cases will need to deliver close to data center levels of availability in locations that were not designed to support IT systems. In addition, edge locations often lack trained IT personnel and can, unless properly protected, leave IT systems exposed to unauthorized access.

These challenges can be addressed through another trend that has emerged in the industry: increased adoption of integrated micro data centers and other fully integrated solutions. In the past, it was common to take a component-based approach to critical infrastructure for IT systems located outside the central data center. IT specialists did their best to select UPS, power distribution and thermal management components that could work together, and then integrated these components into a system on site.

With integrated solutions, infrastructure components are configured to the workload, application and environment and then fully integrated at the factory. Instead of receiving each component separately and having to build the system on site, integrated solutions arrive as a complete, IT-ready system designed to meet application availability requirements. These systems can also help enterprises address the challenges of managing a growing network of edge sites as they can be configured with integrated monitoring and management technologies that enable centralized management of multiple distributed locations.

In 2022, we expect to see continued traction for micro data center solutions as well as the application of the benefits of integration to larger systems. Power and cooling systems for large data centers can be fully integrated and factory-tested off site, arriving on site on skids ready to be installed and commissioned. Taken a step further, this concept is now being used to streamline the development of new freestanding data centers. In a prefabricated data center solution, the complete data center is engineered as a system, prefabricated in modules at the factory and assembled on site to create a highly efficient, scalable and reliable data center.

Availability Will Not Be Compromised

The major trends discussed in this article — reducing carbon emissions and distributing compute to the edge of the network — represent significant changes to both the data center ecosystem and the way data centers are designed and operated. That could prove concerning for some continuity specialists; however, infrastructure manufacturers are working closely with data center operators on both fronts to ensure these trends can continue to develop without compromising the availability of data center operations.


TJ Faze

TJ Faze serves as the head of ESG strategy and engagement at Vertiv, setting long-term strategies and goals while ensuring program execution and continuous improvement of environmental and social matters across Vertiv’s global operations. He sits on the board of directors of the Ohio Energy Project, a non-profit focused on delivering energy education and resources across the state. Faze uses his 10 years of innovation, startup, non-profit, and sustainable business experience to further the sustainable development of digital infrastructure. He holds a bachelor's degree in business from Miami University (Oxford, Ohio) and a Master of Business Administration from The Ohio State University.
