is using AI worse than driving a car?

Published Jun 4, 2025, 7:10 AM

Dexter doesn't have a car. But he uses AI… a lot. So, is his environmental impact worse than someone who drives a truck to work every day? 

This seemingly simple question led us to a bigger conversation about water, public health, and why we still don't know much about the true environmental cost of artificial intelligence. 

To help us understand all this, we talk to researchers Shaolei Ren and Alex de Vries, who’ve been studying the toll AI takes on the planet.

Got something you’re curious about? Hit us up killswitch@kaleidoscope.nyc, or @dexdigi on IG or Bluesky.

I've lived in LA for a decade, and this whole time, I haven't owned a car. When I tell people that, they usually look at me weird. And yes, riding the bus and walking and using a bike is less convenient, but at this point I'm used to it. But sometimes I do wonder if I should just give in and buy a car like everyone else. So to help me decide, I did what a lot of people do these days when they're weighing options: I asked AI. I opened up Claude.ai and I input all my current transportation costs. I put in my bus fares, my bike, the cost of my occasional Uber, and I asked it to compare that to the average costs of car ownership in Los Angeles: parking, gas, insurance, repairs, all that sort of thing. And I asked it to give me the pros and cons on either side, and it did. The big con is convenience, which I already knew. And on the pro side, it said that I was saving thousands of dollars per year, but it added one extra thing. It said that I could have the nice feeling of knowing that I was also being eco-friendly. And I thought, hold on, wait a second. Eco-friendly? I just spent half an hour running scenarios through an LLM, which I know is built off the back of a massive amount of computing, which in turn means a massive amount of energy. So am I actually helping the environment here, or am I hurting it? So this week I set out to answer what seems like a pretty simple question: how bad is AI's environmental impact, really? And yes, before you ask, I did consider asking Claude, and maybe ChatGPT, about AI's own impact on the environment. But then I figured, you know what, maybe this is a question I should ask actual human beings. And I found a couple of people who've been studying this stuff for a while to help me parse all of this. Is there a way that I can compare? Is my AI usage worse or better than car usage, or worse or better than my impact on the environment from eating meat or something like that? Are we able to make those kinds of comparisons?

That's what carbon footprints were kind of invented for, so you can make this type of comparison. If you're driving a fossil fuel-based car, you know exactly how much gas you're using and what that might mean in terms of carbon emissions. That's pretty straightforward. It's much harder to do this for AI, I'm afraid.

From Kaleidoscope and iHeart Podcasts, this is kill switch. I'm Dexter Thomas.

If you use social media, you've probably seen people being criticized for using AI, and depending on who you hang out with, that criticism can be kind of different in how it shows up. When I see someone post an AI-generated image or some AI-generated text and there are angry comments in the comment section, it's usually one of two types. The first is people saying that it's disrespectful, that by posting AI-generated poetry or drawings you're devaluing the original artists who didn't consent to having their work fed into an LLM. That one's pretty easy to understand, even if you don't agree with it or think it's a big deal. The other comment I see a lot of is people saying that using AI is destroying the environment. Figuring out whether that's a big deal or not is a little bit less straightforward.

What I tried to do in my research, I tried to keep track of how the global electricity consumption of AI is developing.

Alex de Vries is the founder of Digiconomist and a PhD candidate at VU Amsterdam. He's been researching the sustainability of new technologies for about a decade.

The way I do that is by looking at how many of these specialized AI devices are being produced by the AI hardware supply chain, and then, considering their power consumption profile, how much power is now being consumed by all of these devices. Which is a very imperfect way of keeping track of this, but it's kind of like the only tool you have available at the moment.

And even Alex is having a hard time keeping up with this. When I called him, he was in the middle of putting together new research. Back in twenty twenty three, his data showed that by twenty twenty seven, new AI servers sold could use the same amount of energy annually as a country like Argentina or the Netherlands. But things have accelerated. His current research shows that it won't take until twenty twenty seven for that to happen. At this rate, we're going to hit that mark sometime this year.

Simply because now the number of devices being produced by the AI hardware supply chain is way higher than it was two years ago.

So it's even exceeding your pretty bleak estimations that you made a while ago.

Oh yeah, it's just that the hype is so big, and the demand for this type of hardware is so big, that the numbers are going up much faster than could be anticipated just two years ago.

But hold on, before we get too much further, let's just clarify what we're even talking about when we say AI. If you could break it down for me, how does artificial intelligence use natural resources?

Yeah, it's a general umbrella term that includes many different things. But right now, if you're talking to a random person on the street, when they say AI, they're referring to large language models, or maybe image generation models. So these are the generative AI models.

Shaolei Ren is an associate professor of electrical and computer engineering at the University of California, Riverside, and he's kind of a colleague of mine. Our fields are completely different, but a couple of years ago I taught a class in the building right next to his. I'd had no idea that on the same campus there was an expert who'd been researching the environmental impact of generative AI the whole time. And I thought, perfect, this guy's kind of a colleague, so I can stop doing all this research on my own and just go ask him. Can you give me an idea of how, say, car usage compares to usage of an AI model?

I would say, having a large language model, or a medium-sized language model, write roughly ten short emails could consume a quarter of a kilowatt-hour of electricity. So that's roughly enough to drive a Tesla Model 3.

For one mile. Or, as Alex puts it, ChatGPT must be running on something like five hundred megawatt-hours a day, which is enough to power a small city.

Basically, ChatGPT's overall daily energy use is about the same as powering every home, every grocery store, every street light in a small city like San Luis Obispo in California, or Ithaca in upstate New York. But what does that actually mean for me and you? How much energy does it take to just ask ChatGPT one question?

On a per-interaction basis, it's actually not that much. You're talking about something like three watt-hours, maybe, per interaction. That's something like a low-lumen LED bulb that you have running for one hour. It's not a lot of power, but it's nevertheless significantly more than a standard Google search.

One second, just as an aside here: we usually don't think of something like a Google search as using electricity. I mean, your phone or your computer is already on, so what does it matter if you're typing stuff into it or not? But on the other end of that Google search you typed in, there are servers, and those are using energy. So as we keep going in this episode, maybe think about that: on your end, you're not seeing any energy used or environmental effects, but doing a Google search, watching a video, or even downloading this podcast, that does use some amount of energy.

Even Google's CEO at some point commented, like, hey, interacting with these large language models takes ten times more power than a standard Google search. So that would mean that if you're talking about three watt-hours per interaction with a large language model, for a standard Google search it would be like zero point three watt-hours, which is a very, very tiny amount.

Just to explain here: a watt-hour is a unit that tells you how much energy a device uses over time. For example, a sixty-watt light bulb running for an hour uses sixty watt-hours. A single Google search uses about zero point three watt-hours. That's enough to power that same light bulb for around eighteen seconds. But now there's that AI add-on that comes stacked by default on top of every Google search, which takes that number up ten x, up to three full watt-hours per search. That's a little different. Now you're running that same light bulb for three full minutes.
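If you want to check that bulb arithmetic yourself, here it is in a few lines of Python. A quick sketch; the zero point three and three watt-hour per-search figures are the rough estimates quoted in this episode, not official numbers from Google:

```python
# Back-of-the-envelope math for the light bulb comparison above. The 0.3 Wh
# and 3 Wh per-search figures are the rough estimates quoted in this episode,
# not official disclosures.

BULB_WATTS = 60  # the sixty-watt bulb from the example

def bulb_runtime_seconds(watt_hours: float) -> float:
    """How many seconds a BULB_WATTS bulb runs on the given energy."""
    return watt_hours / BULB_WATTS * 3600

print(bulb_runtime_seconds(0.3))  # classic search: 18.0 seconds
print(bulb_runtime_seconds(3.0))  # search with the AI add-on: 180.0 seconds, 3 minutes
```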

But it's of course in the number of interactions where these numbers start to stack up quickly. Because if you're talking about Google scale, you're talking about nine billion interactions a day. Going at three watt-hours per interaction, then, interestingly, the whole company Google would require as much power as Ireland just to serve its search engine, if that was the case.

Wow. Using as much power as a small country sounds wild, but if we think about it, it kind of makes sense. We've, by default, ten-xed our energy use overnight across nine billion searches a day. That energy use is going to add up pretty fast. But there's another thing to consider when we talk about AI's energy use: the difference between training the model, or giving it a bunch of data to teach it how to work, and using it, like when you ask it to write a cover letter, or I ask it if I should buy a car. When we talk about AI and the energy consumption that can go into AI, there's different phases, right? There's the training phase. There's me actually sitting down and asking an agent a question. Can you break that down for me?

The training part, we call it learning. So based on the data, we try to optimize the parameters, so that when we see some new queries from the users, we can give you as accurate an answer as possible. And training is really one time. Of course, later we're going to do some updates, fine-tuning. Inference is when the users actually interact with the model, and depending on the popularity of the model, once it gets trained, it could be used many hundreds of millions of times, or even billions of times. If you train a large language model like Llama three point one, according to the data released by Meta, the air pollutants generated through the training will be roughly equivalent to more than ten thousand round trips by car between LA and New York City.

Ten thousand round trips by car. Yeah, so that sounds bad. That sounds like a lot. But is that a one-time thing? It's just the one time?

It's a one-time thing.

Let's clear something up here. That number, ten thousand round trips from LA to New York by car, it's not just about carbon. It's about air pollution, specifically things like nitrogen oxides and fine particles that come from power plants and can get deep into your lungs. This isn't theoretical. This is stuff that raises risks of diseases like cancer, and it doesn't just affect the people next to the place where all those computers are. Pollution travels, and it lingers. So what Shaolei's talking about here isn't just numbers. His calculations are showing that training a single model the size of Meta's Llama three point one can produce that level of pollution on its own. So yes, training these models is a one-time hit, but it's a big one. If we're talking just about energy usage, using an LLM to, say, write ten emails might be like driving an electric vehicle for a mile. And since an electric vehicle is maybe three times more efficient than a gas vehicle, figure those ten emails might get you a quarter to a third of a mile in a regular car. And yeah, maybe these are relevant numbers for me and my decision about whether or not my AI usage is counterbalanced by me not having a car. But these numbers are just estimates, and we are going to get to that. The bigger issue here is that running those data centers doesn't just use electricity. And this is where Shaolei's research comes in, because we've heard about AI's carbon footprint, but what about its water footprint, which could be a much bigger concern for us living here on earth? That's after the break.
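As a quick aside, here's that car math written out. It's a back-of-the-envelope sketch built entirely on the ballpark estimates above; the quarter-kilowatt-hour-per-ten-emails figure and the three-times-efficiency assumption are rough numbers, not measurements:

```python
# Rough LLM-emails-versus-driving comparison, using the ballpark numbers
# from this episode. These are estimates, not measurements.

KWH_PER_TEN_EMAILS = 0.25   # Shaolei's estimate: ten short emails ~ 0.25 kWh
EV_KWH_PER_MILE = 0.25      # a Tesla Model 3 uses roughly 0.25 kWh per mile
GAS_EFFICIENCY_PENALTY = 3  # assumption: a gas car is ~3x less efficient than an EV

ev_miles = KWH_PER_TEN_EMAILS / EV_KWH_PER_MILE    # ~1 mile in an EV
gas_car_miles = ev_miles / GAS_EFFICIENCY_PENALTY  # ~1/3 mile in a gas car

print(f"Ten emails ~ {ev_miles:.1f} EV mile(s), or ~ {gas_car_miles:.2f} gas-car miles")
```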

So you had a study come out last year called Making AI Less Thirsty: Uncovering and Addressing the Secret Water Footprint of AI Models. What made you want to look at this?

Maybe that was due to my childhood experience. I spent a couple of years in a small town in China where we only had water access for half an hour each day, so we just had to think about how to use water wisely, and use every possible means to save water. Then in twenty thirteen, I saw this issue, and I wanted to find out more about it: what about the water consumption? And nobody knew at that time.

A big environmental impact we don't talk about as often as carbon emissions is water usage, and the impact that water usage has on all of us depends on where that water comes from and where it goes. When it comes to AI, a main use of water is to cool down the data centers, which, as we know, use a lot of energy. This is how they make sure that they don't overheat.

To prevent servers from overheating, usually we use water evaporation, and that's a very efficient way to move the heat, to dissipate the heat to the environment. And this water evaporation could be in the cooling towers that are essentially evaporating water twenty-four seven.

When water evaporates from a data center's cooling system, it goes out into the air and is basically considered gone, at least from the local supply. You might be thinking of the water that you use when you take a shower: how that water goes down the drain, gets treated, and can be reused. But evaporated water rises up into the atmosphere, and you can't reuse it. It can eventually come back down as rain, but that takes a while.

Some tech companies can use over twenty billion liters of water each year.

Twenty billion?

Yeah. That number is basically the same as some major beverage companies' annual water consumption, the water they put into their product, basically the water we drink from bottled water. That's the water consumption for the beverage industry. So in some sense, AI is turning these tech companies into beverage companies, in terms of water consumption.

Except nobody's drinking that bottled water or those sodas. It's just evaporating.

Yes, yes, yes.

One important thing here is that when Shaolei's talking about water, he's talking about a specific kind of water. For example, you might have heard that every kilogram of beef needs fifteen thousand liters of water. But ninety percent of that water is what's called green water. That's water that's naturally stored in soil and used by plants, like rain water. It doesn't have to be clean enough for people to drink. It would be nice if data centers could use that, but that's not really practical for their usage. They rely on what's called blue water, the stuff that's clean enough for humans to drink. So when Shaolei is comparing a tech company's usage of water to, say, Pepsi's global use of water, this is a pretty direct comparison. You used a phrase, when you were evaluating GPT-3, that GPT-3 needs to drink a certain amount of water.

Right, roughly ten to fifty queries per five hundred milliliters of water. So, basically, a bottle of water.
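Quick math on what that works out to per query, using the researchers' GPT-3-era estimate, not anything OpenAI has confirmed:

```python
# Water per query implied by the estimate above: ten to fifty GPT-3 queries
# per half-liter bottle. Researcher estimates, not a company disclosure.

BOTTLE_ML = 500  # one bottle of water, in milliliters

for queries_per_bottle in (10, 50):
    print(f"{queries_per_bottle} queries/bottle -> "
          f"{BOTTLE_ML / queries_per_bottle:.0f} mL per query")
# 10 queries/bottle -> 50 mL per query
# 50 queries/bottle -> 10 mL per query
```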

Let's pause on that number for a second. Ten to fifty queries, the kind of thing you might do in a single session using ChatGPT, could drink half a liter of water. I'm pretty sure that me going back and forth about buying a car probably used about a liter, and that's using conservative estimates. Shaolei and his team were focusing on GPT-3, which was released back in twenty twenty. Even five years later, OpenAI hasn't released all the details researchers would need to give us a clear picture of its environmental impact. Do the companies know how much water they're using?

Of course, I can't really speak on their behalf, but I think they do. They could figure out the water consumption easily, because they know their energy use, they know the water efficiency of the cooling system, they know where they build the data centers. So they have the information, but we're not seeing their own disclosure.

By this point you might be picking up on a recurring theme here. Putting a specific number on the impact of AI is basically impossible, and it's not because the math is too difficult.

The thing is, the tech companies are also refusing to tell us exactly what's going on. So if you take Google's environmental report, it will show you the numbers are bad, because in twenty twenty-three they showed that their carbon emissions were up like fifty percent compared to five years before, and they were pointing to AI as the main culprit. They were saying, okay, data center infrastructure is adding to our carbon emissions, we're using more electricity. And at the same time, they just don't specify exactly what's going on with regard to AI. They say that making distinctions is not meaningful at all, even though, weirdly, Google was the company that just three years ago was in fact making this distinction. They were disclosing that ten to fifteen percent of their total energy costs were related to artificial intelligence. Now they've stopped doing that. They don't want to tell us anymore.

All of a sudden. It seems like something changed there. What do you think changed?

The numbers got big, that's what's changed.

Okay, not to spoil the end here, but it looks like I'm not going to get a direct answer to my question. But at least I have something of a ballpark, even if it's a conservative one. And I also know that we're using AI every day, for everything. We might not know the exact environmental impact of AI, but we do know that it's increasing. So what do we do about it? That's after the break. So, in this episode, we've been having some trouble figuring out the exact environmental costs of AI. But this is a pretty common problem. I mean, my friend Matthew Gault wrote up an article at 404 Media explaining that the Government Accountability Office, which is a nonpartisan group that answers to Congress, is struggling with the exact same thing. They came up with roughly the same numbers that we talked about earlier. They put together a forty-seven-page report that acknowledges that even after interviewing agency officials, researchers, and experts, they're still left with having to do estimates, because, as they said, quote, "generative AI uses significant energy and water resources, but companies are generally not reporting details of these uses." So even the US government has no idea exactly how much carbon we're pumping out, or how much water we're pouring into the sand. And this is an issue, because when researchers like Shaolei and Alex were first looking into AI's environmental impact, the biggest concern was training. That's the one-time process of feeding those massive data sets into powerful machines. That's what was making headlines for energy use. But then came ChatGPT, and suddenly people weren't just training models, they were using them all the time. And that shift changed everything.

As an end user, you can't even manage it properly, because the companies are not telling you. So it's not like when you're interacting with ChatGPT that ChatGPT is going to tell you, okay, be aware, the carbon footprint of this conversation has already exceeded this amount. OpenAI knows this kind of stuff. They could tell you, but they won't. And then other people are left trying to make some kind of guesstimate to figure out what might be going on. We also see that they are kind of downplaying the impact of what they're doing here. I mean, we see their environmental reports are disasters. The carbon emissions are shooting up, and the only thing they're saying is like, okay, don't worry about it, AI will solve this a couple of years from now.

So the thing that's causing the problem is going to solve the problem.

Also, yeah, that's the excuse they're using: AI is going to solve it. It's bad right now, but everything will be better in a couple of years, trust us. But it's one hundred percent wishful thinking. And to be honest, if you look at the whole history of technological development, even if we do end up realizing a lot of efficiency gains with AI, which is definitely not a given, it doesn't mean that our resource use in total is going to go down. This is the infamous Jevons paradox.

Jevons paradox is a concept that comes up a lot in AI recently. Basically, in the Industrial Revolution, coal-powered engines started to get more efficient, and some people assumed that, okay, this is going to mean that now we're going to use less coal overall. But an economist named William Jevons said no, this is going to have the opposite effect. As coal-powered energy gets cheaper, demand will increase, and total consumption of coal won't go down, it'll go up. He was right, and that effect seems to keep repeating.
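Here's the paradox in toy numbers, purely illustrative, to show how doubling efficiency can still mean more total energy use when demand grows faster:

```python
# Jevons paradox with made-up numbers: efficiency doubles, but because the
# service gets cheaper, demand triples, so total energy use still goes up.

wh_per_query = 3.0               # energy per query in watt-hours (illustrative)
queries_per_day = 1_000_000_000  # baseline demand (illustrative)

before = wh_per_query * queries_per_day / 1e9  # 3.0 GWh/day

wh_per_query /= 2     # the model gets twice as efficient...
queries_per_day *= 3  # ...and usage triples because it's cheaper

after = wh_per_query * queries_per_day / 1e9   # 4.5 GWh/day

print(f"before: {before:.1f} GWh/day, after: {after:.1f} GWh/day")
```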

Despite all the efficiency gains that we've had, we're not saving on resources. We are using more resources.

And essentially what you're saying here is: even if we are able to make AI more efficient, we're just going to use it more, and so any efficiency gains are going to be offset by the fact that we're just constantly using this more and more and more.

One thing that's extra annoying with AI is that there's also this bigger-is-better dynamic going on, where if you make the models bigger, you'll actually end up with a better-performing model, but it just means that your efficiency gains are completely negated all the time.

Every chat, every prompt, every AI-generated Ghibli image adds up. We just don't see that impact directly. So let's all just stop using AI, right? Well, that's probably not realistic at this point, and that's not necessarily what everyone's recommending.

So I work on optimization, and I think this is a problem we can optimize. We can make it better, reduce the cost, and there are a lot of opportunities, so we should definitely not panic. I hope the model developers can disclose that cost to the users, so they can figure out: should I use it now, or should I use it later?

Let's say that I log in to ChatGPT and it says, this query is going to use this much energy, this much carbon, and this much water. And if I have that information up front, then I, the user, might decide, maybe I don't need to have it summarize the entirety of the collective works of Shakespeare today.

Yeah.

Maybe. Or they could tell you: if you do it later, in one hour or in the evening, the cost will be different. And you figure it out. Do you want to do it now, or do it later?

What Shaolei is proposing here is that developers could build in a system that would alert users that their query is coming in at a high-impact time of day, and it could suggest that there might be a better time to make that request, when data centers have lower usage. They can use optimization techniques to reduce energy consumption. This concept isn't totally new. Google Flights shows carbon emissions estimates for flights, and it will show you which option has the least impact. So something like this for AI is definitely possible. But I'm not totally convinced people would actually care. The last time I booked a flight, I saw the most carbon-friendly option, but I didn't pick it because it had a long layover. I didn't want to deal with that. Putting the responsibility on users can sound good in theory, but the flip side is that it can just be a way for companies to avoid doing anything themselves. So should this responsibility really fall on us? I mean, sure, you could decide to skip the chatbot and take notes by hand, but that only really works if you know what the trade-off actually is, and right now we don't, because the companies building these tools aren't giving us the data we would need to make informed decisions in the first place. So maybe the responsibility should fall elsewhere, like policymakers. Shaolei is already thinking about what this could look like, and how much of a difference it could make.

We're informing the policymakers, so hopefully when they make decisions they can take into account this public health burden, water consumption, power strain on their infrastructures. These are the costs the local people will be paying for the companies. I think, especially for the big tech companies, they already have the systems ready to do this type of optimization. They are doing it for carbon-aware computing. And we used Meta's locations as an example. If they factor the public health burden into their decision making, for example where they route their workload, they can reduce the public health cost by about twenty-five percent, and reduce the energy bill by about two percent, and also cut the carbon by about one point three percent.

So just by being more intentional about where they route digital traffic, a company like Meta could reduce detrimental impacts on public health, and they'd be saving some cash at the same time. This is called geographic load balancing, and for the user it's totally seamless. You log in, your feed loads, you don't notice anything. But behind the scenes, your request is going somewhere where it's cleaner, cheaper, and less harmful to process. And even beyond where companies route traffic, they can also consider where they build the data centers from a public health perspective.
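To give a sense of the mechanics, here's a minimal sketch of what geographic load balancing looks like. The regions, numbers, and weighting below are hypothetical, invented for illustration; real schedulers at Meta or anywhere else are far more sophisticated:

```python
# Minimal sketch of geographic load balancing: send each request to the
# data center region with the lowest current impact score. All regions,
# numbers, and weights here are hypothetical, for illustration only.

from dataclasses import dataclass

@dataclass
class Region:
    name: str
    grid_carbon: float    # grams of CO2 per kWh on the local grid right now
    health_burden: float  # relative local air-pollution / public-health cost
    water_stress: float   # relative scarcity of local (blue) water

def impact_score(r: Region) -> float:
    # Hypothetical weighting; a real system would tune these trade-offs.
    return 0.5 * r.grid_carbon + 0.3 * r.health_burden + 0.2 * r.water_stress

def route_request(regions: list[Region]) -> Region:
    """Pick the lowest-impact region; the user never notices the difference."""
    return min(regions, key=impact_score)

regions = [
    Region("region-a", grid_carbon=450, health_burden=80, water_stress=70),
    Region("region-b", grid_carbon=200, health_burden=40, water_stress=90),
    Region("region-c", grid_carbon=300, health_burden=20, water_stress=30),
]

print(route_request(regions).name)  # -> region-b with these numbers
```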

When they build data centers in the future, they can take these factors into account, because the decisions that we make today will be impacting the public health, the water consumption, the power infrastructure for many years to come.

Shaolei is thinking about the future, and research on future optimization is a big deal, because the AI boom is already here. Big tech companies are projected to spend three hundred and twenty billion dollars on AI technology and data centers this year, which is nearly one hundred billion more than last year. So where we put these data centers, and where we route the traffic, really matters.

Something that I was not expecting is how widespread this is, because I was thinking, if I live, let's say, five miles away from a data center or a power plant, I wouldn't be affected. That was wrong. These air pollutants are what the EPA defines as cross-state air pollutants. They do travel hundreds of miles along with the wind. We can have a significant impact on public health just by strategically placing the data centers.

What that really highlights is something that we don't think about with tech infrastructure: it doesn't just impact the people who live next door. When air pollution travels hundreds of miles, it turns these data centers into regional issues, not just local ones. I'll give you an example right here. As we were working on this episode, I saw this article in Politico, and I just want to read you the first sentence, quote: "Elon Musk's artificial intelligence company is belching smog-forming pollution into an area of South Memphis that already leads the state in emergency department visits for asthma," end quote. That's probably enough to give you the idea, but just to explain more: xAI, which is the company behind Grok, the AI chatbot that you use on Twitter, set up shop in Memphis with enough methane gas turbines to power two hundred and eighty thousand homes. The company didn't get the required air pollution permits. They run without the emissions controls that federal law usually requires, and in under a year of operation, xAI is now one of the largest emitters of smog-producing nitrogen oxides in the entire county. And this facility is located near predominantly Black neighborhoods that are already dealing with high levels of industrial pollution. These inequalities already existed, and tech development is not making it better, it's making it worse. It is often like this. There are absolutely people who are feeling the impacts of this right now, and there are people who will feel it in the future. Maybe somebody will write an article about them, maybe not.

So I was hoping that I could use this podcast to solve all my personal problems, but apparently we're oh for one here. Because when I started working on this episode, I was thinking that this section right here, the outro, is where I'd say: wow, now I know exactly what impact my use of AI is having on the planet. But I don't. And that's pretty annoying, because, and I guess this is as close to an answer as we're going to get, it's not really about how often I personally decide to use ChatGPT or Gemini or Claude or whatever. It's about what happens when companies build systems that are this powerful but also this resource hungry, and they refuse to tell us what it really costs. And I think we deserve to know, not just so that we can make individual choices about how often to use ChatGPT or Gemini or whatever, but so that we can hold the right people accountable. Because if AI is really going to change the future like they say it will, shouldn't we know how much that future costs?

Thank you so much for listening to kill switch. If you've got any ideas or thoughts about the show, you can hit us at killswitch@kaleidoscope.nyc, or you can hit me at dexdigi, that's d-e-x-d-i-g-i, on Instagram or on Bluesky if that's more your thing. And if you liked this episode, if you're on Apple Podcasts or Spotify, take your phone out your pocket and leave us a review. It really helps people find the show, and in turn, that helps us keep doing our thing.

kill switch is hosted by me, Dexter Thomas. It's produced by Sina Ozaki, Darluc Potts, and Kate Osborne. Our theme song is by me and Kyle Murdoch, and Kyle also mixed the show. From Kaleidoscope, our executive producers are Oz Woloshyn, Mangesh Hattikudur, and Kate Osborne. From iHeart, our executive producers are Katrina Norvell and Nikki Ettore. See you on the next one.
