Sustainable Data Centers and Energy
Across the industry, energy use—specifically, lowering it—is a hot topic of conversation. This installment of the roundtable discussion focuses on understanding what affects data center energy usage, energy efficiency, cooling solutions, site selection and the effects a microgrid has on our daily lives.
With increased demand for data centers as consumers continue to depend on digital services, how should the industry address this core usage challenge?
Data centers consume 1-2% of the world’s energy.
The demand for this building type continues to increase as society deepens its dependence on digital infrastructure. In Part 1 of the Strategies for Sustainable Data Centers Roundtable series, DPR Construction, Equinix, Sheehan Nagle Hartray Architects and Brightworks Sustainability address energy and ways data centers can lower their energy use. Watch the video here.
The roundtable discussion panelists included (from left to right in the video):
Ryan Poole
Global Sustainability Leader at DPR Construction
Greg Metcalf
Senior Director, Global Design at Equinix
Joshua Hatch
Principal at Brightworks Sustainability
Denis Blanc
Director of Sustainability at Sheehan Nagle Hartray Architects
The roundtable series will be shared in four parts and will include segments covering energy, water, embodied carbon and what's next.
Transcription
This discussion has been edited for clarity.
What's top of mind in the data center space for energy, water, embodied carbon—for transitioning the built environment into being a more sustainable and resilient resource?
Ryan: Hey everyone, glad to have you all here today. We're here for a roundtable discussion with DPR Construction and a bunch of our partners to talk about sustainable data centers.
[0:23 Watch from here] Our first topic that we're going to get into is energy. That's probably one of the biggest topics that's been a focal point for us for numerous decades now, particularly in the data center space but also all across the industry.
Josh: In a typical data center, 80-90% of the energy is consumed by the computers, the servers that are in the data center. So, that's where so much of the focus has been for a long time—more efficient servers, virtualization, doing more with less—to make servers really efficient. The next biggest component of energy use in data centers is the cooling for those servers; they're heating up as they do that work. The remaining 10-20% of the energy goes to that cooling, and that's been a huge focus too.
Greg: Server fan energy—as servers get denser and denser and CPU power goes up and up, server fan energy goes up with that. As cabinets get denser, it becomes hard to pull the amount of air across the server that's necessary to cool it. So, as a proportion of the load—I can't quote specific numbers—it goes up. And that's work that's not done in compute; it's work done just moving air through the server. And this is behind the PUE meter, so you don't necessarily see it in the figures; it's part of the server load. In a very dense server you might be losing 25% of the power going to the server just moving the air through it using the server fans. That disappears behind the meter, and it's hard for us to make direct comparisons.
Whereas in a liquid-cooled application, you get almost all of that power delivered to the server actually doing the compute. It has a higher direct compute efficiency for the energy put into it. That's the fundamental that helps it in the first place, and beyond that you have the opportunity for chiller-less liquid cooling versus chiller-based air cooling to cabinets. It doesn't mean that air cooling is going away necessarily—a portion of the load in a liquid-cooled unit will still be air cooled—but the majority of it will be liquid-cooled and potentially chiller-free.
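To make the "behind the meter" point concrete, here is a rough back-of-envelope sketch in Python. Every figure in it is an illustrative assumption, not a number quoted in the discussion; it simply shows why fan energy, counted as IT load, barely moves the PUE ratio.

```python
# Back-of-envelope sketch: why PUE hides server fan energy.
# All numbers are illustrative assumptions, not measured data.

def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness = total facility power / IT power."""
    return total_facility_kw / it_load_kw

# Hypothetical dense air-cooled hall: 1,000 kW at the IT meter,
# but 25% of that is server fans just moving air.
it_load_kw = 1_000.0
fan_fraction_air = 0.25        # fans, not compute
overhead_air_kw = 300.0        # chillers, UPS losses, lighting, etc.

pue_air = pue(it_load_kw + overhead_air_kw, it_load_kw)
compute_kw = it_load_kw * (1 - fan_fraction_air)   # 750 kW of real work

# Hypothetical liquid-cooled hall delivering the same 750 kW of compute:
# pumps replace most fan energy and the cooling plant can be chiller-free.
fan_fraction_liquid = 0.05
it_liquid_kw = compute_kw / (1 - fan_fraction_liquid)
overhead_liquid_kw = 100.0

pue_liquid = pue(it_liquid_kw + overhead_liquid_kw, it_liquid_kw)

print(f"Air-cooled:    PUE {pue_air:.2f}, "
      f"{it_load_kw + overhead_air_kw:.0f} kW for {compute_kw:.0f} kW compute")
print(f"Liquid-cooled: PUE {pue_liquid:.2f}, "
      f"{it_liquid_kw + overhead_liquid_kw:.0f} kW for {compute_kw:.0f} kW compute")
```

Under these assumed numbers, both halls do the same 750 kW of compute, but the air-cooled hall draws roughly 1,300 kW against the liquid-cooled hall's roughly 890 kW, and most of that gap never shows up in PUE because the fans sit on the IT side of the meter.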
Josh: Really, the servers can support cooling and the cooling can support servers. If servers are allowed to run in a wider range of temperature and humidity—which is sometimes more possible for larger-scale, hyperscale operators—that can enable cooling technologies that aren't necessarily available to co-location providers, who build to allow customers to bring in whatever technology they want and so have to maintain a tighter, more consistent range for temperature and humidity.
As the industry has really focused on server efficiency, that's been super helpful to drive down the energy use and do more with less. Cooling has been the other big focus, and the PUE statistic mostly addresses cooling and how well that server compute is cooled. Everything else left in a data center, energy-use-wise, adds up to less than 1%—lighting, hot water heating and many other uses that are much bigger in a typical building—a home or commercial building—are largely insignificant in a data center.
I think the industry has not only focused on efficiency in servers and cooling, it has also put a lot of emphasis on the supply side—large-scale renewable energy. Data centers have such a large energy footprint that companies have really acknowledged that and tried to contribute on the supply side, and there's a variety of mechanisms they're using. Onsite generation can be part of the solution, but it's always a very small part because the energy use is so great that there's not enough land area on-site to ever make a dent—you can't even get to 1% with every square inch you have on a site. It's been thought of at an infrastructure scale—large-scale buying, power purchase agreements, large-scale renewable energy certificate purchasing—all of which has helped both create demand for renewable energy and make real projects happen.
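A rough calculation makes the land-area point. The figures below are illustrative assumptions (a hypothetical campus load, roof area and panel output), not data from any site discussed here:

```python
# Sketch of why onsite solar barely registers against a data center's load.
# Every figure is an illustrative assumption.

campus_load_mw = 100.0       # hypothetical hyperscale campus draw
panel_area_m2 = 20_000.0     # usable roof and canopy area on the site
peak_w_per_m2 = 200.0        # typical commercial PV peak output
capacity_factor = 0.20       # average output as a share of peak

avg_solar_mw = panel_area_m2 * peak_w_per_m2 * capacity_factor / 1e6
share_of_load = avg_solar_mw / campus_load_mw

print(f"Average onsite solar: {avg_solar_mw:.2f} MW, "
      f"{share_of_load:.1%} of a {campus_load_mw:.0f} MW campus")
# -> about 0.8 MW, under 1% of the campus load
```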
At the same time, companies in the data center space are trending towards 100% renewable energy supply for their data centers—we're speeding towards it, and most companies are well past 50% on their path to 100% in the coming few years.
Coming back to your question on the densification of land use, a lot of the remaining carbon in the construction and operation of a data center is in the materials used—which brings us back to the point of density. Do you go up? That requires more foundations. As we're trending towards 100% renewable, embodied carbon relates to liquid cooling, it relates to densification—it's all very interrelated. It makes it very fun but also very complex—
Ryan: A continuous balancing act, right? You've got to keep figuring out where you pull the lever here so you're not making too much of an impact there. It continues to balance out.
[5:21 Watch from here] Greg, maybe you can weigh in first from the owner's perspective. What are the first things we look at from an energy perspective: connections with the utility provider; having the capacity available when we're first figuring out where we want to go build a new data center from an infrastructure point of view?
Greg: Sure, so as an interconnection company our needs are a little bit different from some in the industry. We're very much driven by fiber connectivity and where the points of connection are. We often want to expand adjacent to existing facilities to continue that growth in that market. It does create some challenges for us in finding the more sustainable locations.
Our suburban and urban developments limit our ability to do things like onsite renewables because available land area—and the cost of land—is constrained in the locations we're based in because of fiber connectivity. Other providers have the ability to be more out of town, even rural, building in locations where they can take advantage of onsite renewables.
[6:33 Watch from here] Maybe you can tell us a little bit about how those negotiations with an energy provider start up front?
Greg: Well, the world is challenged in terms of getting utility connections these days. There are fewer and fewer sites. The investment that went in a long time ago—globally this is true—left slack capacity in energy systems, and that's been expended. Rarely do sites come up where there is substantial capacity available, easily used and delivered on a fast timeline.
You've got to delineate between getting a connection and where you get your energy from—they're two different things. Getting your connection is just about getting the pipe in place to deliver the water, the power, the fiber. Getting your energy is something that's dealt with separately, potentially at a national level rather than locally.
Ryan: Yeah, absolutely, and thinking about that localization and how we're tying into the community from an infrastructure standpoint.
[7:34 Watch from here] Denis, maybe you could weigh in a little bit on waste heat—waste heat recovery—and the opportunities the data center space has had to use some of that by connecting with communities to provide heating?
Denis: In terms of reusing waste heat, it's something we consider, and it is being done. At the same time, it's sometimes considered low-grade heat, so it can work in some specific locations—I'm not going to get into too much more detail—it's definitely something you want to consider, especially in colder locations where it can be done, but it cannot be done everywhere.
Ryan: Great points. It's good to acknowledge that there are geographical dependencies that are going to drive some of the solutions we're actually able to implement in certain locations. Those can come from climate adaptation, depending on where you're located and the climate zone; they can also come from municipal policies and what we're actually allowed to do.
Josh: Well, I think that's a good bridge back to the waste heat issue. Heat is one of the more significant kinds of waste on site at a data center—you don't see it, but it's a bunch of hot air. In a rural area, it's hard to locate something right next to it that can use that hot air. As you move more towards urban or suburban environments, there might be adjacencies to housing or greenhouses or places that can use the low-grade heat.
You can't take 100-degree air and boil water with it and then use it for industrial processes. But what you can do with it is space heating. But you have to be next to something. You can't package up air in a box and ship it somewhere; it's got to be close. So as you guys move more urban, there might be an opportunity to better co-locate.
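To give a sense of scale (the numbers here are assumptions for illustration, not figures from the panel), essentially every watt a data hall draws leaves the building as low-grade heat, which adds up quickly for space heating:

```python
# Sketch: how much space heating a data hall's waste heat could supply.
# Illustrative assumptions throughout.

it_load_mw = 10.0            # hypothetical data hall IT draw
capture_efficiency = 0.6     # share of rejected heat a network can recover
avg_home_heat_kw = 1.5       # average heating demand per home, cold climate

# Essentially all electrical input is rejected as heat.
recoverable_mw = it_load_mw * capture_efficiency
homes_served = recoverable_mw * 1_000 / avg_home_heat_kw

print(f"{recoverable_mw:.1f} MW recoverable -> roughly {homes_served:,.0f} homes")
# -> about 4,000 homes, but only if they sit close enough to pipe the heat to
```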
Greg: Every time we look at site planning now, we're looking at the neighbors who are around us trying to evaluate what opportunities there are for waste heat reuse. We are doing more of it, we are seeing more opportunities, but these things take a long time to come through to fruition. Often, we're way ahead of what the neighbors are up to and we're way ahead of what jurisdictions are up to in terms of provisioning heat networks to enable us to connect. Right now, we feel like a lot of that is on our own back to go and execute on that stuff—to imagine supplying it to an apartment block that's nearby. We do need the buy-in of city council, Authorities Having Jurisdiction (AHJ), to help us transfer that heat from one location to another.
Ryan: Greg, it's so great you brought that topic up. It applies to the whole built environment, right—our continuous need to focus on driving more passive solutions, on driving down that PUE or EUI, depending on the perspective you're looking at it from. All of that load continually adds stress to the grid and the need for us to generate more and more to answer it. If we're not focused on that reduction piece as we continue to add to the building stock, then we're going to keep adding more and more stress to the grid without relief. It's a great point to bring up.
[10:52 Watch from here] Denis, I think a good place to weigh in here would be to talk about energy models—the collaboration between our teams in developing them—and how they play into the selection of these features?
Denis: Indeed, when you look at the full picture of a data center, anything other than what's dedicated to the IT equipment is negligible. However, we actually have clients—co-location clients, who are renting out a shell—who ask us to look at their facility from a more typical angle, such as you would for an office building. We would [do] some type of early design modeling at the planning stage—we have tools for that—but it would exclude the IT loads, because they don't know what those will be; they come from the firms who rent the space. Even though the majority of the loads are for computing and IT equipment, we also look at the buildings for the data halls. We include lighting loads in our modeling and we address the efficiency of the other spaces—the support spaces, office spaces. It's a small part, but for the bottom line of certain operators that's what they are interested in. They also ask us to exclude the cooling systems associated with those loads because they just don't know what they will be at early planning, when you design a core-and-shell prototype data center.
This is our focus for energy: early design, location-specific obviously, and it may or may not include IT loads—many operators are actually interested in having that level of analysis as well.
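In very simplified form, a core-and-shell tally like the one Denis describes counts only the loads the operator controls. This sketch uses hypothetical placeholder loads; a real model would compute them from the design:

```python
# Minimal sketch of a core-and-shell energy tally that excludes IT loads,
# in the spirit of the early-design modeling described above.
# All loads are hypothetical placeholders, not outputs of a real model.

annual_kwh = {
    "data_hall_lighting": 150_000,
    "office_hvac": 220_000,
    "office_lighting_plugs": 90_000,
    "support_spaces": 60_000,
    # IT equipment and its cooling are intentionally excluded:
    # the tenants' loads are unknown at core-and-shell design stage.
}

floor_area_m2 = 12_000
shell_eui = sum(annual_kwh.values()) / floor_area_m2  # kWh/m2/yr

for use, kwh in annual_kwh.items():
    print(f"{use:24s} {kwh:>9,} kWh")
print(f"Shell-only EUI: {shell_eui:.1f} kWh/m2/yr")
```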
Ryan: Absolutely. One other topic that you pulled on, Josh, that we should dive a little deeper on is resiliency around data centers and how we actually provide backup in the event of grid failure. You mentioned that traditionally that's always been through fossil fuel combustion generators, and that's how it is across the markets. We've seen a lot of transition there, with clients focused on sustainability moving away from fossil fuel generation for backup—whether that's through renewable energy, renewable gas that could be supplied, or battery backup. It would be great to think about that.
[13:52 Watch from here] Greg, I know Equinix has been on the forefront of this and thinking about that transition away from [diesel generators], and [towards fossil fuel-less] or at least reduced combustion for backup. Any thoughts to weigh in there?
Greg: Diesel generators remain the predominant backup energy technology. In some applications there may be opportunity for battery energy storage to augment that and perhaps play some kind of part with the grid—responding to grid systems. That can improve the efficiency of the grid; it doesn't necessarily do anything for the data center. For the diesels themselves, we've switched where we can to hydrotreated vegetable oil (HVO), a lower-carbon diesel solution compared to "dino-diesel," as you might call it. It ameliorates the situation, but it's by no means a complete solution—it still has emissions in the production of that fuel.
With regard [to] gas generation, it's possible, but storing gas onsite is a challenge. Even where we have gas generation—we do have it in some locations—we're tending to do that with dual-fuel machines that still store diesel for emergency use, for autonomous operation, but can operate on gas during normal conditions. We're doing some fuel cells in Silicon Valley that are going to run as primary generation. It's not necessarily 100% renewable, but the point of that exercise is to prove a building can run without calling on its diesels. That building does have diesels, but we're trying to prove the reliability of the gas system and the reliability of a fuel cell system, with the backup of diesel generation if needed. And potentially, if a customer wanted to take a part of that building, we could do part of that building just omitting the diesel plant in the future.
Josh: The trend of data centers being the preferred location for where compute happens has shifted over the last few decades. Initially, the functions that were pulled out and put into a data center—versus in a room in a building—were the most critical functions. The industry has always had this really important drive for uptime. That drive for reliability exists because the compute that's happening is so essential to financial markets and other business transactions, life safety and all these different functions that when they go down, people and dollars and all the things important to us suffer.
As the trend has become to put so much more into purpose-built buildings—because they can be so much more efficient than a server room or a server closet—I think the industry can come back to thinking about how to tier some of those functions into spaces or buildings that don't need to have as much reliability. Ultimately, I think we're providing a very resilient solution to some functions that might not need it.
And I think there's also a grid reliability point here. As the grid shifts to becoming more renewable, there's some benefit to getting more reliability, but renewable energy isn't as dispatchable—you can't call up a windy area and tell it to blow more if you need more power like you can a natural gas plant. With more and more renewable energy across a wider range of geographies—and solar and wind having different profiles—eventually you get a higher level of firmed-up capacity. But still, you can't tell the sun to come out; it's gonna come out when it comes out. You can't tell the wind to blow.
So there's a grid component to this too, is my point. Where the grid is more resilient, data centers have put in less than 100% backup power. There are situations in certain countries with overlapping grids, multiple different providers and lower risks. Some more progressive operators have said we only need backup power for half of it because the grid hasn't gone down here in 20 years, or we don't need any of it. It's a risk calculation, because these are critical functions that are happening.
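As a toy version of that risk calculation (the probabilities below are hypothetical, not statistics anyone quoted), the trade-off can be framed as expected downtime:

```python
# Sketch of the risk calculation behind partial-backup decisions.
# Illustrative probabilities only, not real grid statistics.

grid_outage_hours_per_year = 0.5      # a very reliable dual-feed grid
backup_failure_on_demand = 0.01       # chance the backup fails when called

# Full backup: downtime only when grid AND backup fail together.
full_backup_downtime = grid_outage_hours_per_year * backup_failure_on_demand

# Half backup: half the load rides through, half takes the full grid outage.
half_backup_downtime = (0.5 * full_backup_downtime
                        + 0.5 * grid_outage_hours_per_year)

print(f"Full backup: ~{full_backup_downtime:.3f} h/yr expected downtime")
print(f"Half backup: ~{half_backup_downtime:.3f} h/yr expected downtime")
```

Whether the extra expected downtime is acceptable depends entirely on how critical the affected functions are, which is the tiering question raised above.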
Greg: When you're talking about the core network, that's the bit that must keep operating. It's imperative that it doesn't stop, whether there's a hurricane, an earthquake or a pandemic—that's the bit that has to keep going. Some of the stuff that happens lower down the chain, maybe some of that can be optional, and it can take a break for five minutes or an hour or a day, depending on how important it is. It's crucial that fiber networks and cloud networks and financial networks can continue to exchange without any perceptible interruption.
Ryan: Yeah. We think about resiliency a lot on the front end for the buildings we live in and the ones that serve our health, like hospital systems. But we also have to remember that all of those now run on data center-supplied platforms that house all of the information we need to actually provide that service.
See the Series
Data centers have become a cornerstone of modern life. At the same time, these facilities have traditionally had significant environmental footprints. The question is: how do we support the growth of digital infrastructure while also better managing its energy, water and carbon footprint?
Posted on November 8, 2022
Last Updated March 2, 2023