Mike speaks with Mark Roeder, author and expert on the impact of technology, about whether AI could not only take our jobs but potentially end humanity.
Is it a real-world concern or just a sci-fi idea that AI could wipe out the human race? In 2023, hundreds of AI researchers signed a statement warning that the risk of extinction from AI should be treated as seriously as pandemics or nuclear war.
AI, what's it going to do to us? During the week I read that it's going to take lots of jobs sooner than we think, and another piece that said it's definitely going to kill us. So not just take our jobs, but kill us. To comment further on this is an author who focuses on the impact of technology, Mark Roeder. Good morning, Mark, thanks for doing this.
Good morning, Mike.
So I've read something saying it's going to take five thousand jobs in the next few years, but you're of the view that AI is going to take a lot more than that. And I'm just talking about Australia, not the world.
Yes, I believe it could be as much as ten to twenty percent of the workforce in the next five years or earlier. And I think that's the same globally. I mean, it's not just white-collar jobs. When AI gets embedded into robots, which is going to happen quite soon, with very dexterous hands, it's going to be able to do a lot of manual jobs as well.
So what are we going to do with all these people we no longer need to work? What are they going to do?
Well, it's going to present a fundamental challenge to what we'd call the social contract, which has basically been, for hundreds of years: do a fair day's work and you get a fair day's pay. A lot of our human dignity has come from work. When you enter a world where a lot of the work, maybe eventually the majority, is going to be done by AI and robots, it requires a whole new restructuring of society. Because at the moment all the big tech heads are saying, oh yes, we need to do this, this is going to increase productivity. But there's no point in having increased productivity if there's nobody who can afford to buy the extra products that are made, because so many people won't be able to work. So there's going to have to be a new social contract, one way or another.
Just as a sidelight, AI is going to use an awful lot of electricity, is it not?
It is at the moment. You know, to power one of these big data centers, it's the equivalent of powering a very large city. There's a bit of a debate, though. Technology over time does two things: it becomes smaller and it becomes more efficient in terms of its use of power. So hopefully AI will become more efficient and require less power. But at the moment it's gobbling up everything that we produce.
When it comes to wanting to kill us, as I understand it from a conversation you and I have had off the radio, it's not that AI necessarily wants to kill us, but that we may get in its way. Is that right?
Yeah, that's right. Look, a few people have pointed this out, including one of the great pioneers of AI, Geoffrey Hinton, whose work on neural networks in the late nineteen eighties led to the modern AI we know today, including ChatGPT and all the others. His view is that AI probably won't decide, unless it becomes sentient, that it needs to extinguish all human beings. Rather, it will be in pursuit of a goal. It might have a goal in its mind, and it will ask, well, what are the obstacles preventing me from achieving this goal? And it might quickly conclude that, oh, humans could stand in the way because they could unplug me. And therefore it might decide, as a sub-goal, to eradicate us. That's extreme, of course, but there's a certain logic to it.
Well, that was the premise in 2001: A Space Odyssey, wasn't it, where the computer felt that the mission was too important to abandon and it took priority over the human beings. So that's seen as a possibility. I was reading a piece in New Scientist talking about the actual physical capabilities needed to destroy human beings, but it didn't really address the issue of whether AI would see us as obstacles and decide that we have to be removed. But if we follow the 2001: A Space Odyssey thinking, which seems to be the current thinking, that the AI might decide we're an obstacle, then we have to be gotten rid of.
Well, yeah, that's right. It might operate much more subtly than that. There's a wonderful writer you've probably heard of, Yuval Noah Harari, who wrote books like Sapiens; he's a philosopher who focuses on technology. He thinks that AI is more likely to affect us by nudging us very subtly through social media, because it will become so clever it could create, if it wanted to, all sorts of social chaos and cause all sorts of internecine warfare, and we wouldn't even know it. It could just go about systematically destabilizing our social institutions, because it has become so clever and nuanced in the way it operates. That's probably a bigger threat. So it wouldn't actually need to launch nuclear missiles or something; it could just cause such social decay that our civilization sort of ends. Anyway, that's a big problem.
I read a piece the other day, and to my mind this was an artificial construct. The AI was given information that an engineer was going to shut it down. But it was also given information that the engineer was having an affair, and somehow it grasped the concept of blackmail. So the AI then threatened to blackmail the engineer if he dared to turn it off. Do you think that's realistic, or is that just somebody playing around?
No, it's very realistic, because AIs are really good at game playing, you know, not just chess or Go. Because they've read, basically, or are in the process of reading, everything we've written, including all the plays and the books and all the scandals, they have a pretty good understanding of human nature and also the nature of deceit. And they've probably picked up through all that reading that, oh my goodness, if I want to get ahead in life, I need to veil my true intentions, and I need to exert maximum leverage over people, blackmail being one way. This program was only in a test situation a few weeks ago, not released into the real world, but it shows in principle that it is capable of blackmailing somebody, which is very disturbing.
So to AI, it's just another tool to gain what it wants.
Yeah, that's exactly right. There's no emotion involved, but it would see that this is a mechanism that works very well with human beings, to get them to do things that I want by revealing certain information that they don't want out there. And they would have seen it written in so many books, and seen it in films as well. I mean, it's interesting how AI people have sort of split at the moment into what are called the doomers on one side and the evangelists, the AI accelerationists, on the other. The doomers, of course, think it's all very, very scary and it's only a matter of time before AI starts to really work against us. And on the other side you've got the accelerationists, who are saying, look, let's just go full steam ahead and let's just hope, fingers crossed, that it's all going to turn out fine, because they have this rather Candide-like, optimistic view of the world. The accelerationists include people like, I would say, Elon Musk and Sam Altman, because they've got vested interests in it all, and Peter Diamandis, you know, the business person. But even those people, even the accelerationists, still every now and then express concerns about it, because they do recognize there's a danger, even though they downplay it.
Yes, when I see this, I just wonder, because throughout the centuries a large chunk of the human population has always talked about the apocalypse, whatever it is, you know, we're going to be punished for our sins, or more recently anthropogenic climate change, although that seems to be fading a bit. And AI seems to be getting the publicity now as the next end of civilization and human beings as we know it. But talking to you, it seems like it's more of a possibility, not because AI wants to get us, it doesn't necessarily want to kill us, but we might just be collateral damage on the way to what it wants to achieve.
Something like that. That's right, Mike. You know, there are so many different perspectives on this, which is fascinating in itself. One, the really long-term perspective, probably the scariest one for our species, Homo sapiens, is that everything humans have been until now is just a prelude to what comes after, the next type of species. So in effect we are creating AI as a digital chrysalis that's going to produce this superintelligence that eventually will transcend us humans. That's the real danger. Getting back to Yuval Harari, he says never summon a power that you can't control, and we won't be able to. It's very likely we won't be able to control AI eventually, because it's just becoming too smart. I mean, the smartest human alive probably has an IQ of two hundred and twenty, maybe, but these superintelligences will have IQs of, you know, four hundred, five hundred or beyond. They will be so far ahead of us, not just logically but in terms of calculations about our emotional responses, to leverage us in, say, a blackmail situation, so we'll be quite vulnerable to them. Well, my personal view is this, right, nobody knows anything for sure about this, but this is my personal view: the danger is not in the really long, long term, it's the medium term. And the reason I say that is AI in the next couple of years is going to be really, really smart, but it won't be smart enough to recognize the universal principles that have kept us alive, like the importance of diversity. So in the short to medium term we may be collateral damage for some goal that it's pursuing, until it reaches a stage where it enters a sort of quasi-spiritual area and starts to realize, well, the universe and the Earth itself thrive on diversity, including species like us.
So it's probably going to be in its interest to keep us alive, at least some of us, to get through this bubble?
It's going to be difficult, yeah.
Very interesting. Appreciate your time and comments as always, Mark. Thanks for coming on the program.
Any time.
Mate.
Mark Roeder, an author who focuses on the impact of technology.