The rise of AI data centers is reshaping the outlook for US power markets. Forecast to account for nearly a 10th of all US electricity demand by 2035, data centers are gobbling up power more quickly than electric vehicles, hydrogen or any other demand class this decade. A profound concentration of capital has allowed for this rapid expansion, which is now exerting influence over energy infrastructure and planning investment. But what forms do data centers take, and what are the factors and strategies that influence associated decision making? On today’s show, Tom Rowlands-Rees is joined by BloombergNEF’s Head of US Power, Helen Kou, and Senior Associate Nathalie Limandibhratha to discuss their recent note “US Data Center Market Outlook: The Age of AI”.
Complementary BNEF research on the trends driving the transition to a lower-carbon economy can be found at BNEF<GO> on the Bloomberg Terminal or on bnef.com
Links to research notes from this episode:
US Data Center Market Outlook: The Age of AI - https://www.bnef.com/insights/36281
This is Tom Rowlands-Rees and you're listening to Switched On, the podcast brought to you by BNEF. The rapid rise of energy-intensive AI data centers is reshaping the near-term outlook for US power markets. Outpacing the energy demand growth of EVs, hydrogen and all other demand classes out to 2030, data centers will account for 8.6 percent of US electricity demand by 2035. That's almost twice as much as today. Largely owned and operated by a few highly consolidated companies with very deep pockets, this concentration of capital allows for rapid expansion and a significant influence over future energy infrastructure investment. So what strategies are these companies employing to optimize their data center rollout? On today's show, I'm joined by BNEF's Head of US Power Helen Kou and US Power Senior Associate Nathalie Limandibhratha, and together we discuss findings from their note "US Data Center Market Outlook: The Age of AI", which BNEF clients can find at BNEF<GO> on the Bloomberg Terminal or on bnef.com. All right, let's get to talking about the outlook for AI data centers with Helen and Nathalie. Helen, thank you for being here. Thanks, Tom. And Nathalie, thanks for being here as well.
Thank you Tom.
So Nathalie reports up to Helen, and Helen reports up to me, and I'm not saying that to flex. I'm saying it for a couple of reasons. One is, I'm super proud to have such smart people on my team, and I'm also particularly proud of the work they've done around data centers. But this reporting line also means that I get to have catch-ups with them regularly, which has been pretty useful to me personally, because this question around AI and data centers has had a lot of people talking. A lot of people have opinions, and so I've often found myself in situations where people are expressing their opinions, and in those situations I've developed this tactic to differentiate myself from the pack, which is that I just regurgitate whatever Helen and Nathalie last told me about data centers, and everyone thinks that I'm really smart. So first off, let's start with the headline numbers. How much data center build are we expecting in the US, according to the report that we just published?
BNEF's latest outlook has data center demand more than doubling from 35 gigawatts today to close to 80 gigawatts in 2035. This would account for close to 9 percent of total US electricity demand.
Wow. So we're expecting, let me just do the math, something like 45 gigawatts-ish of new demand, which for those of you who are maybe new to the energy space, that's like 20 to 30 nuclear plants, and nuclear plants are really, really big and take a long time to build. That's a lot of demand. So we are forecasting some astronomical amount of data center build. How do we compare to everyone else that has an opinion on this topic?
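Tom's back-of-envelope comparison can be checked with quick arithmetic. The 35 and 80 gigawatt figures come from the episode; the nuclear plant sizes below are rough assumptions for scale only, not figures from the report.

```python
# Rough scale check of the growth discussed above: ~35 GW of data center
# demand today growing to ~80 GW by 2035 (figures as quoted in the episode).
today_gw = 35.0
forecast_2035_gw = 80.0
new_demand_gw = forecast_2035_gw - today_gw  # 45 GW of net new demand

# A large US nuclear plant is on the order of 1.5-2 GW (assumption for scale).
plant_gw_small, plant_gw_large = 1.5, 2.0
plants_upper = new_demand_gw / plant_gw_small  # ~30 plants
plants_lower = new_demand_gw / plant_gw_large  # ~22.5 plants
print(f"{new_demand_gw:.0f} GW of growth is roughly "
      f"{plants_lower:.0f}-{plants_upper:.0f} large nuclear plants")
```

This is only an order-of-magnitude comparison; actual plant capacities vary widely.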
We're relatively conservative.
Relatively conservative?
Yes, yeah. Our overall demand build is fairly low in terms of uptake relative to third-party forecasts.
Okay, so how come they are forecasting something so much more aggressive than us, or how come we are so much more conservative than them?
Well, we don't really know what our third-party counterparts do in terms of their forecasts, but what we do know is how we forecast data centers, and our focus was really to look at how data centers move from one stage to the next. So in our project database, we can see data center stages. We have early stage, which is basically anything that just got announced. We have projects that are committed, which is anything that has some type of land or permitting agreement, then things that are under construction, and then live. And what we did was track how these data centers moved from one stage to the next, and we looked at the probability of those transitions.
From one stage to the next. So typically, how long does it take for a data center to go from early in this pipeline to being commissioned?
Yeah. So what we found, based on data between 2020 and 2024, is that it takes seven years to build a data center, which is a really, really long time.
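The stage-based approach Helen describes can be sketched as a simple transition model. The stage names follow the episode, but the transition probabilities and pipeline capacities below are invented placeholders, not BNEF's actual figures.

```python
# Toy sketch of stage-transition forecasting: capacity advances through
# the pipeline (early -> committed -> construction -> live) with some
# probability each year. All numbers here are made-up assumptions.
stages = ["early", "committed", "construction", "live"]

# P(advance to the next stage in a given year), by current stage (assumed).
advance_prob = {"early": 0.3, "committed": 0.5, "construction": 0.7}

# Capacity (GW) currently sitting in each stage (assumed).
pipeline_gw = {"early": 30.0, "committed": 15.0, "construction": 10.0, "live": 35.0}

def step_one_year(pipeline):
    """Advance the pipeline by one year of expected stage transitions."""
    nxt = dict(pipeline)
    for i, stage in enumerate(stages[:-1]):
        moved = pipeline[stage] * advance_prob[stage]
        nxt[stage] -= moved
        nxt[stages[i + 1]] += moved
    return nxt

state = pipeline_gw
for _ in range(7):  # the ~7-year build timeline mentioned above
    state = step_one_year(state)
print(f"Expected live capacity after 7 years: {state['live']:.1f} GW")
```

Total capacity is conserved; each year simply shifts the expected share of each stage forward, which is the intuition behind probability-weighting a project pipeline rather than counting every announcement at face value.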
Okay, so that's really interesting. In a way, we're stating our forecast with a fair amount of confidence, because 2035 is only just over a decade away, we have data on everything that's getting built, and we know that it takes most of that decade for it to get built. So anyone who's forecasting something more aggressive than us either has a different view on how long it takes to build data centers, or they must have different data, or maybe they're using a completely different methodology. But it's good to know that we're the ones doing it right. So who's building all of these data centers, this colossal volume of new demand?
Yeah, so the data center market is pretty concentrated. There are two main types of owners. There are colo data centers, which have buildings with multiple tenants, and you have these self-build companies, which are typically your large tech companies, or hyperscalers as they're usually referred to. And the hyperscalers, Google, Amazon, Microsoft, and Meta, are close to 50 percent of total operating capacity today, and this is only set to grow. They're building much larger campuses, close to gigawatt scale. Amazon has multiple gigawatt data center campuses in development in Virginia. Meta has another two gigawatts in Louisiana. And just as a point of reference, in the last decade, data centers have typically been in the tens of megawatts. So really, as we're pushing through these hundreds-of-megawatts and gigawatt sizes, the rise of uptake will be much faster and larger.
Okay. And just to make sure I've understood correctly, those four companies are 50 percent of the data center build today, but we think that's going to grow even more, because they're the companies building these really big data centers that are so much of what we're expecting. So you've already alluded to the trend that these data centers are bigger than the ones we've seen in the past. But in what other ways are the data centers we're expecting different from the ones we've seen before?
Yeah, before we jump into that, I think it's important to understand how BNEF categorizes data centers. We categorize data centers in three different ways. First is size, which Nathalie alluded to: retail, wholesale, and hyperscale. That's just based on the project size of a data center. Then there's operator type, which is the ownership, so either self-build or colocation, which Nathalie already explained. And then there's workload. Workload is based on the computing process of a data center, and there are many different types of workload, from cloud or enterprise to telecom and crypto mining, and that determines a lot about the data center's overall infrastructure and also its overall load and power consumption.
So then, one of the other things you've spoken to me about is that when we're talking about AI data centers, there are two main flavors. Can you talk me through those as well?
Within AI, we mainly branch it out into two main workloads, AI training and AI inference. AI training is processing a large amount of data in order to train these large language models, and AI inference is taking those already-trained models and using them for real-time applications, like when you're querying ChatGPT. And those two workloads also have different location constraints for the data center itself. AI inference, since it's theoretically interacting with the end user in real time, will care more about latency and proximity to that end user. With AI training, a lot of that processing happens on site, so theoretically it could be more flexible on location and follow where there's available power or other constraints.
Okay. You used the word theoretically there, which maybe is doing a lot of work, and we'll come back to where people are actually building data centers later on, because I do have a question about that. But roughly, do we have an idea of what proportion of the data center pipeline is for AI, and what proportion of that is training versus inference?
That's actually very tough to ascertain exactly. If we look at a large gigawatt campus, some of its buildings could be used for training today but for inference in the future. Similarly, we talked about different owner types: a colocation building could have multiple tenants, and unless you know exactly who the tenant is and what type of workloads they're running, it's difficult to know if it's for AI training or inference. But we can infer based on where they're getting built.
But these distinctions, I mean, what I'm hearing here is that it's not just that nobody tells us which it is that makes it a bit of a gray area. It's that a data center itself might at one point in its life be doing one thing and at a different point in its life be doing something else.
For power folks, I often frame it like this: a data center is very similar to a battery. Just as batteries can provide multiple types of energy or ancillary services, a data center can run multiple types of workload. It can be doing AI training or cloud, as long as the configuration is correct.
Got it. So just because you've optimized one for one thing doesn't mean it can't do the other thing.
Yeah. And particularly in colocation data centers, where these companies are renting out their IT servers to tenants, those tenants are probably doing different types of workload. There can be multiple applications in one data center.
Yeah, which is often why you see colocation providers optimizing for everything, because they don't know who their tenant will be. I will add, though, that AI data centers are quite different from data centers today. We already talked about these data centers getting larger. A lot of that has to do with the GPUs in those servers being much more power dense.
Can you just explain what GPUs are?
Graphics processing units. They're similar to CPUs, central processing units, but they're much more specialized.
I think we've got to the level of understanding I'm good with. So it's like a chip, but it's a different kind of chip.
Yeah, it's very good at parallel processing, which is optimized for training large language models. I'm sure lots of people have heard of Nvidia and their GPUs. A lot of tech companies are also building their own AI accelerators, which are specialized chips for these models. So a lot of the design of today's data centers, in order to accommodate AI training workloads, is different. A lot of that has to do with these GPUs: the racks they're on are going to be much more power dense, which means they need much more sophisticated cooling technologies to accommodate rack densities that could be ten times what typical data centers are today. We've seen, for example, Meta scrapping data centers that were not AI-ready, so it's quite difficult to retrofit a data center from five years ago for an AI workload.
So while a data center in the future could have multiple uses, it's also very specialized to what it's going to be running.
Got it. Would it be right for me to say that a data center designed for AI can be used for other things too, but a data center not designed for AI probably can't do AI? Is that a fair statement?
I think that's a pretty fair statement. We're basically seeing new data centers being designed in a way that allows for AI training, and so there's an influence of AI on data center design, toward being larger and more power dense.
So these training data centers are building these models. One of the charts in the note that I really loved but haven't quite fully digested is one showing the amount of, and you don't have to explain this unit, the amount of teraflops required. A teraflop, that's just a unit of computing work, isn't it?
Yeah, it's just a basic unit of computation.
Yeah, the amount of teraflops needed to train different AI models as they've become more and more sophisticated. Obviously there's been an expectation that that trend is going to continue, and everyone was freaking out, in both good ways and bad ways, about all the power demand this will entail. And then I remember DeepSeek came along, and a lot of people were like, oh, this changes everything. Can you provide a bit of clarity on all of this?
There are a couple of things I really try to understand about all of this, just in terms of power market fundamentals. When it comes to forecasting power demand, everything comes down to some very basic constructs: usually the quantity of something, and then the energy intensity of that something. For data centers it's a very similar process. And when we think about the energy efficiency of large language model AI training data centers, there are a couple of things to think through. First is the energy intensity of power consumption from a chip's perspective. Chip innovation is often confused with increasing energy efficiency, but that's not necessarily the case. Chip innovation is typically optimized for operations per second, which means every generation of chips tends to draw more power and therefore increases the energy intensity of a data center. There are new chips, like the Nvidia Blackwell, that focus on energy efficiency, but in general the more advanced chips tend to draw more power. The other thing that's really important for AI training is the number of parameters that a training model uses. Parameters are just points of active information for a model to think through.
So it's just the number of different things it thinks about. So if I was deciding, should I go to work today or should I walk to work today, one parameter might be, what's the weather like outside? And a second parameter might be, what day of the week is it? And together those might determine whether or not I go to work. Each of those is a parameter.
Exactly, precisely. And in general, in the large language model community historically, or at least in the power industry historically, the thinking was that more sophisticated models required more parameters. So with every generation of ChatGPT or Gemini or Claude, if you look at their technical reports, you'll see an increasing number of parameters used to train these large language models. So there was this overall consensus that the energy intensity of AI training is on an upward trend: you use more powerful chips, and you're training on more and more parameters, and that's all driving more and more electricity consumption. DeepSeek came out in December 2024, and what was really cool in their technical report was that they had a different training process. It uses something called a mixture-of-experts training process, where instead of plugging in all of their parameters all at once, they pre-categorized their parameters into experts. So certain parameters that specialize in, say, math or colors get pre-categorized, so that when you put in a query, the model knows which specialization to pull the parameters from, which then draws less power. The other thing the technical report showed was that DeepSeek performed at a very high level compared with ChatGPT and other large language models that used a lot more parameters. So it kind of broke the assumption that you needed a whole bunch of parameters to make really sophisticated large language models.
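The mixture-of-experts idea Helen describes can be illustrated with a toy router. The expert names, sizes, and the naive topic-matching rule below are invented for illustration only; real MoE models use learned routing networks, not keyword lookup.

```python
# Toy mixture-of-experts illustration: instead of activating every
# parameter for every query, a router picks a small subset of
# specialized "experts". All names and sizes are assumed.
experts = {
    "math": 20e9,      # parameters held by each expert (assumed)
    "code": 20e9,
    "language": 20e9,
    "vision": 20e9,
}

def route(query_topic, top_k=1):
    """Naive router: activate only the expert(s) matching the topic."""
    chosen = [query_topic] if query_topic in experts else ["language"]
    return chosen[:top_k]

total_params = sum(experts.values())             # a dense model touches all of these
active = sum(experts[e] for e in route("math"))  # an MoE model touches a fraction
print(f"Active share per query: {active / total_params:.0%}")  # 25% here
```

The point is the ratio: only a quarter of the parameters are activated per query in this sketch, which is the mechanism by which MoE training can draw less power per query than an equally large dense model.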
And so do you think that that will massively change the outlook for power demand from AI?
So to answer your question, from a data center demand perspective, not necessarily. In our near-term data center outlook, we use a project-by-project-level forecast. But our New Energy Outlook, which focuses more on long-term forecasts for data center demand, uses that fundamentals-based way of forecasting, which looks long term, in any given market, at what data generation and data usage look like relative to the energy intensity of that data generation.
Right.
And there's a point you make in the report, I recall, and I don't think you were talking about DeepSeek there, I think you were talking about data center efficiency, but you bring up Jevons paradox, which is this idea that if you make something more efficient, it doesn't necessarily mean we save energy; we just do more with what we were going to consume anyway. And could the same logic be applied to DeepSeek? If it's more efficient with parameters, and therefore energy and compute usage, then that just opens the door to doing cooler things with AI than were previously possible, rather than saving energy.
Yeah, the energy intensity curve of AI training data centers, instead of swinging upward, goes down. But we also know it opens up opportunities for a lot more types of business to do AI training, which means you may have more companies doing this. So, Jevons paradox.
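Jevons paradox, as discussed above, is easy to show in miniature: per-model training energy can fall while total energy rises, if cheaper training pulls in more players. All of the numbers below are illustrative assumptions, not figures from the report.

```python
# Jevons paradox in miniature: efficiency per model improves, but total
# demand can still grow. Every number here is an illustrative assumption.
energy_per_model = 10.0   # GWh to train one frontier-style model (assumed)
trainers_before = 10      # companies able to afford training (assumed)

efficiency_gain = 0.5     # DeepSeek-style methods halve per-model energy (assumed)
demand_multiplier = 4     # ...but 4x as many players now train models (assumed)

total_before = energy_per_model * trainers_before
total_after = energy_per_model * efficiency_gain * trainers_before * demand_multiplier
print(f"Total training energy: {total_before:.0f} GWh before, {total_after:.0f} GWh after")
```

With these placeholder numbers, halving per-model energy while quadrupling the number of trainers doubles total consumption, which is the shape of the argument Helen makes.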
Very interesting, and actually I think that brings some real clarity to what this whole DeepSeek thing means for power demand. My main takeaway is: not that much. Ultimately it's very difficult to say, but we shouldn't be saying, oh, this means all this data center demand growth isn't going to happen. So pulling back again: there's all of this data center build that's going to happen in the US, and four companies are going to be behind a little more than half of it. Where are they going to be building all of this, and why in those places?
Yeah. I'll also add, on those four companies that are most of the data center market: they're also the companies building these AI training models, and they have the capacity to train large-scale models, because it's a very costly exercise that is only set to grow. So they're about half of the data center market, but they're also the ones training AI models, because actually not many companies can train AI models.
So a lot of the new big demand is coming from these four companies.
Yeah. Today, when we look at the data center fleet, it's mostly for cloud. But going forward, fewer than ten companies can actually train frontier AI models, and those are going to be the big tech companies.
And so then, where is this happening, and why in those locations?
So in our forecast we see three main markets emerge through 2035. We break them down by power region for power forecasting purposes.
And that's just because that's how we think. I don't see states, I see power regions.
Yeah, so we see PJM, ERCOT, and the Southeast. And within PJM, which spans 14 states, we see Virginia continuing to be one of the biggest markets. Northern Virginia has been the data center capital for the last decade. If Virginia were a country, it would follow only the US and China in having the largest data center market in the world.
Wow. Can I say that again? Wow.
Yeah.
So Northern Virginia has been at the center of the data center build-out. A lot of this has to do with a bit of history: it had the first internet exchange point in the nineties, which was the beginning of small data center build-out, and data centers typically continue to cluster in existing markets. As data centers grow, they build up supporting infrastructure, like fiber optics, utility relationships, and workforce availability, that allows more data centers to keep coming. So over time, Northern Virginia has just gotten hotter and hotter, and we see a lot of that continuing to grow in our project pipeline. We do know a lot of this could be more AI inference rather than AI training, but there's still a lot of data center demand happening there. Another state in PJM is Ohio, which is emerging as one of the main hubs in the Midwest. Google is building data centers in New Albany, just outside Columbus, one of the largest cities in Ohio, as are other colocation and hyperscaler companies.
You know, you've highlighted a couple of major markets within PJM. One thing I thought was really interesting, and maybe slightly paradoxical, in the report you wrote was this chart from a survey of data center developers asking what they prioritize when thinking about where to build a data center. The top three survey results, it wasn't our survey, it was a third party's, all related to energy. It was something like security of supply, how cheap the energy is, how green the energy is. I can't remember exactly, don't quote me, but it was energy related. Yet when you look at the data on where they're currently building and have been building, it kind of completely contradicts that thesis. PJM doesn't have the cheapest or cleanest electricity in the US. I mean, you could say it has good security of supply. So what's behind this paradox?
A lot of the hype right now in data center build-out is AI related, and we did talk about how AI training could be more flexible. We do see some emerging innovations of siting near stranded renewable assets and going against the grain of traditional siting. But as we said, most of them are continuing to build out in existing data center markets, markets that predate AI. One theory is basically that they're investing billions of dollars in a data center for the next decade or so. While in the near term they can plan for AI training, in the future it could be AI inference, or they could retrofit and sell it to a colocation company altogether, so they need to plan for those latency and redundancy requirements today. So even though in the near term they could site it in, say, West Texas, where there's a lot of renewables, we still see Dallas-Fort Worth and Northern Virginia being hotspots, although we are seeing a trend of going a bit outside urban locations, toward where there is power supply.
Got it. I remember earlier in the podcast you said "in theory" about where you could build training data centers, and this is what you're saying happens in practice. We are seeing a bit of a trend toward the energy-ideal locations, just maybe not as much as you would think.
Yeah.
I guess one thing we did notice, to your point on that little contradiction, is that hyperscalers take a dual strategy. They're both building in existing regions and trialing new locations. If you look at our report, you'll see the large four companies building within their own data center clusters and also looking for new markets at the same time. And they're doing that because, for them, speed and scale of development is really critical, particularly because their AI business requires them to take a kind of...
Winner-takes-all approach? It's like there's a computing power arms race happening right now, and so it's all very well saying, oh, we'd ideally build it here, but it's like, we just need this right now, and we're going to do whatever works.
Yeah.
Yeah, it's a winner-takes-all game in the AI business. So they want to build data centers as quickly as possible, and they're keeping all options open.
They want their particular AI model to be the Coca-Cola of AI. So what does all this mean for the power sector? How is this going to affect the supply mix, just keeping up with all this demand? How's it going to affect the regulatory model? It's not designed for suddenly having loads of new demand dropped on it in very concentrated regions, as I've just learned. And what innovations are we seeing in the power sector to try and cope with all of this new demand?
Yeah. So what we're currently seeing is a very strong reaction to all of this new data center demand, particularly from utilities. In Ohio, where there's a lot of new investment in data centers, AEP Ohio, the utility in the region, has proposed a new tariff that would require new data centers to pay for up to 85 percent of their projected energy usage, which makes it a less attractive market for these data centers to build in. In a way, it's to prevent increasing retail rates, from the utility's perspective, but there's been a lot of pushback against it.
Yeah, I mean, that's controversial, because the entire basis of the regulated monopoly is providing equal access to all consumers, and even if the consumers being discriminated against are massive corporations, it still undermines the philosophical basis. So I'm kind of interested to see what's going to happen there. What do you think it's going to mean for the supply mix? You know, wind, solar, gas, what's going to do it?
So I think one of the emerging innovations, in moving to this hyper-load-growth environment where there's not much supply available, is this trend of colocation, or having on-site generation. A lot of traditional grid planning was for peak power, and we saw a couple of years ago how Texas integrated crypto mining: treating it as flexible load and curtailing during hours of peak demand. We know that non-crypto workloads, like AI training or inference, may not necessarily curtail, but we do see one model in which they'll have some sort of on-site generation or colocated supply, where as a whole campus they could interact with grid needs. In terms of the total supply mix, a lot of the short-term need means they'll build whatever technology is fastest, and in our data most of that is wind, solar, and batteries. But reliability is also a huge part of data centers. They're known for having five nines, 99.999 percent uptime, which means that firm capacity, like the natural gas generation or diesel gensets they've used traditionally, will also be a large part of the solution in the short term.
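The "five nines" figure Nathalie mentions translates into a concrete downtime budget, which is quick arithmetic to confirm:

```python
# What "five nines" (99.999% uptime) means in allowed downtime per year.
uptime = 0.99999
minutes_per_year = 365 * 24 * 60  # 525,600 minutes
downtime_minutes = (1 - uptime) * minutes_per_year
print(f"Allowed downtime: about {downtime_minutes:.1f} minutes per year")
```

Roughly five minutes of outage per year is why firm, always-available backup capacity matters so much to data center operators.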
So it's a real open question there: how quickly things can get built and what truly is the fastest solution, versus the long-term needs. Helen, thank you very much for joining us today.
Thank you, Tom.
And Nathalie, thank you so much for joining.
Thanks, Tom.
Today's episode of Switched On was produced by Cam Gray with production assistance from Kamala Shelling.
BloombergNEF is a service provided by Bloomberg Finance LP and its affiliates.
This recording does not constitute, nor should it be construed as, investment advice, investment recommendations, or a recommendation as to an investment or other strategy. BloombergNEF should not be considered as information sufficient upon which to base an investment decision. Neither Bloomberg Finance LP nor any of its affiliates makes any representation or warranty as to the accuracy or completeness of the information contained in this recording, and any liability as a result of this recording is expressly disclaimed.