Blog

Data Center Reliability and Efficiency in the “Zettabyte Era”

The world as we knew it has ended. The new world is defined by exponentially increasing compute loads that are increasingly dynamic. And it demands innovation in data center infrastructure. 

“There is a convergence of large, dense, dynamic power draws on an unstable grid that itself is dynamic. We need to chart a course for innovation of the next generation of data center infrastructure to support this new world.”

– Jakob Carnemark, CEO, Aligned Data Centers at DCD Colo + Cloud, Dallas 2016

It’s the end of the world as we knew it. Welcome to what Cisco calls the “zettabyte era.”

“We've begun the move to digital business, including rich content via mobile devices, where people, their devices and even unattended ‘things’ become actors in transactions,” says Gartner analyst Bob Gill in the report Eight Trends Will Shape the Colocation Market in 2016.

No industry goes untouched by the possibilities of digital business: transportation, healthcare, manufacturing, energy, retail, agriculture, and beyond. According to MIT Sloan research, companies that are adapting to a digital world are 26% more profitable than their industry peers. And the possibilities are growing: according to IDC, actionable data – the data that is analyzed and used to change business processes – will increase almost ten-fold between 2020 and 2025.

Compute loads are increasing exponentially

With the exponential increase in the number of “things” that are actors in transactions and the exponential growth of actionable data, compute loads are growing exponentially as well. Having increased fivefold between 2010 and 2015, by the end of 2016 annual global Internet traffic (a reasonable proxy for compute load trends) will have exceeded one zettabyte, according to Cisco. Just four years later, it will have doubled.

To visualize how much compute loads have grown, think about it this way: in the early days of the Internet, the amount of traffic could be represented by a 2-inch tall mouse. 


In comparison, traffic today is 23 times the diameter of the earth. There isn’t enough space on this page to accurately depict the magnitude of that increase. 


Compute loads are also increasingly dynamic

Compute workloads are dynamic from month to month – an ecommerce site’s traffic spikes from the end of November through the end of December, for example. They’re also dynamic hour to hour – traffic to a search engine’s U.S. data centers is high during the day and low in the middle of the night, when traffic to its Asia data centers peaks instead; demand for compute, in this case, “follows the sun.” Compute workloads also depend on the business cycle – development work might be very high for a period, then drop off, then spike again.

And as workloads driven by the continuous growth of “uncertain data” – IoT sensors and devices, social media, VoIP – rise, unpredictability rises with them. As just one example: spikes in Twitter traffic during the last presidential debate are understandable, and perhaps Twitter even expected them, but regardless, an 18% rise in traffic in the span of an hour (almost 100 million new tweets) is characteristic of this new world – a world of highly dynamic workloads. 


Source: IBM Investor Briefing

IT infrastructure has kept pace, but data center infrastructure has not

IT infrastructure – servers for compute and storage, and networking equipment – has kept pace with the exponential increase in the work it’s asked to do, as well as with the increasingly dynamic nature of that work. In the last five years alone, the compute load an individual server can handle has increased exponentially.


The data center infrastructure that supports those compute loads, however, hasn’t changed much at all (with the obvious exception of hyperscale data centers, where there has been significant innovation). When electricity passes through a transistor, electrical resistance generates heat. All else equal, the more transistors there are in a given space (i.e., the higher the server density), the more heat is generated. Servers have gotten better at mitigating that heat with their own internal fans, but most of the burden still falls on the data center.

That raises the question: How well equipped are traditional on-premises and colocation data centers to support the exponentially increasing, and increasingly dynamic, compute loads?

In a traditional data center, where power and cooling infrastructure is static, there is a tradeoff between reliability and efficiency. The only way to ensure that the servers don’t overheat and fail is to either half-fill the racks or spread them apart. Both approaches create stranded capacity. And when the cooling system is a traditional chiller plant that is only efficient at full load, this approach sacrifices efficiency as well.
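The stranded-capacity arithmetic is simple enough to sketch. The numbers below are hypothetical, purely for illustration:

```python
# Illustrative sketch (hypothetical numbers): how half-filling racks to stay
# within a static cooling envelope strands capacity in a traditional hall.

def stranded_capacity_kw(racks: int, rack_rating_kw: float, fill_fraction: float) -> float:
    """Power capacity that was provisioned but is unusable when racks
    are only partially filled to avoid overheating."""
    provisioned = racks * rack_rating_kw
    usable = provisioned * fill_fraction
    return provisioned - usable

# A 100-rack hall provisioned at 10 kW per rack, half-filled to avoid hot spots:
print(stranded_capacity_kw(100, 10.0, 0.5))  # 500.0 -> half the hall's kW are stranded
```

Half the capital spent on power and cooling sits idle, which is exactly the reliability-versus-efficiency tradeoff described above.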

Furthermore, given the exponential rise in compute loads, and the increasingly dynamic nature of compute loads, “future-proofing” a traditional data center means over-building power and cooling infrastructure, since that infrastructure has a lifespan of several decades, even as servers are refreshed every couple of years. Again, there’s a tradeoff between reliability and efficiency.

However, if the data center were responsive to compute loads, then you could have reliability and efficiency too. At Aligned Data Centers, that’s made possible by our cooling system from Inertech, also an Aligned Energy company. Here’s how it works:

Instead of a computer room air handling (CRAH) unit that forces cold air into the data hall, heat sinks that sit above each rack absorb the heat. Because the heat sinks are close-coupled with the racks, the system is dynamic in real time – it ramps up and down based on server load. Variable speed fans respond to changing loads, and variable frequency drive (VFD) pumps allow flow rate to adjust to load as well. The combination of close-coupled heat sinks and enclosed pods enables us to remove more hot air, more quickly and more efficiently. That allows us to support much higher densities – up to 25 kW or more per rack – than a traditional cooling system can.
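The “ramps up and down with load” behavior can be sketched as a simple proportional control loop. This is a minimal illustration under assumed parameters, not Inertech’s actual control logic:

```python
# Minimal sketch (not Inertech's actual controls): close-coupled cooling that
# scales fan speed and VFD pump flow in proportion to measured rack load.

def cooling_setpoints(load_kw: float, rack_max_kw: float = 25.0,
                      min_fan: float = 0.2, min_flow: float = 0.2):
    """Map instantaneous rack load (kW) to fan-speed and pump-flow
    fractions in [0, 1]. Floors keep minimum airflow/flow even at idle."""
    fraction = max(0.0, min(load_kw / rack_max_kw, 1.0))
    fan = max(min_fan, fraction)    # variable speed fan tracks load
    flow = max(min_flow, fraction)  # VFD pump flow tracks load
    return fan, flow

# A rack idling at 3 kW draws only the minimum cooling; at 25 kW both run flat out:
print(cooling_setpoints(3.0))   # (0.2, 0.2)
print(cooling_setpoints(25.0))  # (1.0, 1.0)
```

Because fan power and pump power both fall sharply at reduced speed, tracking the load this way spends cooling energy only when the servers actually generate heat.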

The system is future-proof as well – increasing density is as simple as adding more heat sinks. Additional heat sinks can be installed at the rack level to allow for increased density in that rack. Likewise, the cooling distribution units (CDUs) can be easily installed in a localized area in the data center to allow for vertical scalability without reconfiguring the data center. Power infrastructure is similarly dynamic – varying power densities can be fed with the same busway infrastructure, so higher densities can be easily accommodated in the future.

As a result, at Aligned Data Centers we can offer a guaranteed PUE of 1.15, no matter the power density or utilization. So despite the exponentially increasing amounts of data, and the pressure they put on compute loads and data center infrastructure, Aligned Data Centers delivers both efficiency and reliability.
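For readers less familiar with the metric: PUE (Power Usage Effectiveness) is total facility power divided by IT equipment power, so a PUE of 1.15 means only 15% overhead on top of the IT load. A quick sketch of the arithmetic (the 1.7 comparison figure is an assumption, roughly the industry averages surveyed in this era):

```python
# PUE = total facility power / IT equipment power, so the non-IT overhead
# (cooling, power distribution losses, lighting) is IT load * (PUE - 1).

def overhead_kw(it_load_kw: float, pue: float) -> float:
    """Non-IT power implied by a given PUE for a given IT load."""
    return it_load_kw * (pue - 1.0)

# For every 1,000 kW of IT load:
print(overhead_kw(1000.0, 1.15))  # ~150 kW of overhead at PUE 1.15
print(overhead_kw(1000.0, 1.7))   # ~700 kW at an assumed legacy PUE of 1.7
```

At the same IT load, the lower PUE cuts the facility’s overhead power by more than three-quarters.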

Here’s what Peter Judge, from Datacenter Dynamics, wrote about his visit to our Plano data center: “The demo pod is designed to replicate the real world – which means it has every conceivable kind of rack, all of them tested with heat load units. I’m not sharing my photos, but I’m betting that aisle would impress any visitors who are real data center customers. It shows something most colo providers say is impossible: running real-world, mixed and messy technology at peak efficiency.”

We couldn’t have said it better ourselves. A data center for the real world.

Learn more: Watch our CEO Jakob Carnemark share his thoughts on this topic at DCD Colo + Cloud, Dallas 2016

Related Resources

Schedule a Data Center Tour