OpenAI Part 3: Heaven and Hell, Part 1

Published Jun 13, 2024, 10:03 AM

A group of high-level early employees who had glimpsed the fluidity and power of the technology behind ChatGPT left OpenAI to start a new company. Their exit sparked concerns that OpenAI was not prioritizing safety. Meanwhile, the success of ChatGPT drove a craze within the tech industry but also added fuel to existing apocalyptic fears. In this episode, reporter Ellen Huet examines the frenzy around AI, both its utopian dreams and its visions of impending doom.

Today, we're going to start on a drive in Hawaii.

We're on the North Shore, going deeper into the jungle on the North Shore, so we're passing Twin Falls right now.

I'm driving through the lush green forests of Maui. Annie Altman, Sam Altman's little sister, is sitting in the passenger seat. You heard from her briefly in the first episode.

And I love this because you wouldn't turn here if you didn't know you could turn in.

I certainly would not be driving down this if you didn't tell me, a tiny little road. We're taking a tour of the different places Annie has moved around in the last couple of years, driving down dirt roads to look at cabins and houses hidden behind enormous tropical plants.

We got a huge monstera.

Look at that.

It's so goofy. I know, because then you see it as someone's office plant and you're like, is that the same thing?

For much of the past two years, Annie hasn't been able to afford a stable place to live.

The place you just passed is one of the places I stayed at longer term.

In all of the houselessness, two months in a newly built house, no running water, no electricity, at the far back of the property.

And I think she's an important part of Sam's story.

And at the time I had nowhere to stay and no rent money, certainly no deposit money, barely enough money for rent.

Recently, over the course of just a year, she moved twenty-two times, and that's on average about twice a month. Sometimes she has stayed places for a week at a time, or even just a night or two. Some of them have been illegal rentals without running water. She says she's slept on floors in friends' houses. She stayed with strangers when she didn't have another option.

The man who lived in the front house messaged me on Instagram, and I stayed in his kids' room the week that they weren't there, and then slept on the floor in the common room the week that the kids were there.

I was houseless. I didn't have somewhere to go. I stayed in this cabin with the slanted roof right there for three months.

How many different places have you lived in that didn't have running water?

Maybe five-ish, five or six, I don't know.

Meanwhile, thousands of miles away in San Francisco, her brother Sam was having a spectacular year. In twenty twenty three, the success of ChatGPT had launched OpenAI into the stratosphere. Sam was named CEO of the Year by Time magazine. He spent months flying around the globe talking to world leaders about AI.

So without further ado.

Let's give a big round of applause.

Their special guest.

Sam Altman.

It is with great pleasure that I invite on stage Sam Altman.

The individual transcends introduction: mister Sam Altman, the CEO of OpenAI.

Sure, Sam had been positioning himself as a public intellectual for some time, but for a long while he had merely been Silicon Valley famous. This was a new level of fame. Suddenly it seemed like Sam Altman was a spokesman for this entire AI boom. Onstage, on podcasts, and in interviews, people kept turning to Sam for answers. They were asking him what our AI future would hold. In May of that year, he confidently suggested a future where no one is poor. It's an idea he's talked about for years, and the remarks show that his tune hasn't changed despite growing renown and wealth.

One thing I think we all could agree on is that we just shouldn't have poverty in the world.

So he's saying something like poverty, which is as deeply entrenched as civilization itself, could be fixed in the age of AI. Everyone will have what they need to live, including a home.

I think we are not that far away from being able to eliminate poverty effectively worldwide, certainly in developed countries, and I think there are things that fifty years from now will be basic human rights, like healthcare and enough food to eat and a place to live. I'm confident we're going to get that done.

He sounds so sure of himself. How will these changes happen? Sam says that they could come from AI. He's basically saying the technology his company is working on can eliminate poverty. Here he is at a Bloomberg conference in June twenty twenty three.

I think it'd be good to end poverty. Maybe you think we should stop a technology that can do that. I personally don't.

He's even scolding people who say they want to stop or slow down the development of AI by saying that they want to keep people poor. Solving poverty seems to become part of his personal brand. There's almost a stump speech of sorts, and there's one word he likes to use to encapsulate our bountiful future.

I think it's how we get to this world of abundance, AI and the abundance that comes with that. If we can have abundant and cheap energy and intelligence, that will transform people's lives.

It sounds wonderful, almost utopian. But Sam was saying on stage that everyone should have enough money, enough food, everyone should have a place to live, while his own sister was struggling with homelessness. I want to believe Sam's promises about abundance, but Annie's story complicates a lot of the things Sam has projected about the future. You're listening to Foundering. I'm your host, Ellen Huet.

By the time OpenAI had been around for a few years, its employees began sensing that they were working on something profound. AI was starting to do things even its creators didn't quite understand, and the response divided people. Some people at OpenAI, including Sam, argued that an all-powerful computer intelligence would be good for the world. Some thought it could be very, very bad. These visions were extreme, polarizing, as different as heaven and hell. This period ultimately led up to the release of ChatGPT, which was a defining moment for the company and for the entire AI industry. The two predictions people talked about the most during this time were that AI would either wipe out the human race or completely change the definition of work. The first one is hugely controversial. The second one is less controversial: people will lose jobs and the economy will have to adapt.

I'm going to tell this story in two parts. First, the disputes within OpenAI, the reckoning with what they'd built, and the elevation of Sam as some kind of AI hero. In the second part, we'll hear more from Sam's sister and explore the debate around poverty and what an AI economy might look like. And you'll see exactly how far away Annie's life is from Sam's vision of AI utopia.

We'll be right back.

Let's pick the story up in twenty twenty one. It's about a year and a half before OpenAI launches ChatGPT, and OpenAI's technology had been rapidly gaining new capabilities.

This was around the time that OpenAI was building more and more powerful models such as GPT-2 and GPT-3 that could ingest text and spit out ever more human-sounding text.

That's my colleague Rachel Metz, who covers AI.

They were focused on building increasingly powerful AI models, doing it very rapidly, moving towards this goal that they have long stated of creating artificial general intelligence.

This rapid progress was good. It was great for Sam, great for the company, but it made some OpenAI employees nervous, even ones very high up. Here's Dario Amodei, who at the time was the vice president of research at OpenAI. He's talking on the Logan Bartlett podcast and he sounds genuinely agitated.

So, you know, I really freaked out about this stuff in twenty eighteen or twenty nineteen or so. The first time I looked at GPT-2, I was like, oh my, this is crazy. There's nothing like this in the world. Like, it's crazy that this is possible.

He looked at what he was helping build at open ai, and his overwhelming emotion was fear.

Why am I scared? Okay, this thing's going to be powerful. It could destroy us, and all the ones built so far are at pretty decent risk of doing some random shit we don't understand. If such a model wanted to wreak havoc and destroy humanity or whatever, I think we have basically no ability to stop it.

That's from a tech show called the Dwarkesh Podcast. You can hear Dario's uneasiness, which spread to some of his peers at work. In twenty twenty one, he and six of his OpenAI colleagues all left at once and they started a new rival company focused on building AI that was safe.

We need to make these models safe in a certain way, and you know, we need to do them within an organization where we can really believe that these principles are incorporated top to bottom.

While his tone sounds innocuous, if you read between the lines, he's implying that OpenAI was not that concerned with safety and not following those principles top to bottom. He and his co-founders felt they needed to create a whole new company focused on preventing AI from potentially wiping us all out. This idea was baked into their name, Anthropic, which refers to the existence of humanity on Earth. Anthropic's split from OpenAI was a big deal. The group of employees who walked away included executives and key people who had worked there for a long time. It cast a suspicious pall over OpenAI. Their exit suggested that they didn't trust Sam to do the right thing. So fast forward to late twenty twenty two: OpenAI released ChatGPT. They weren't expecting a huge response, but it drew people in gradually. Then suddenly everyone was trying it out. Here's Rachel again.

I kept an eye on it for a few days and I started noticing people were talking about it. But it started getting more and more traction, and I remember reaching out to my editors and saying, hey, we should write about this.

ChatGPT wasn't new technology. It was basically powered by GPT-3.5, a pre-existing model, but it was a new way of presenting that technology to the world. It was a free, easy-to-use, chat-based AI tool. It felt a little like texting a friend.

This was much more fluid feeling. It gave you a sense that you were communicating with a person, a person that would make a lot of things up, right, whose answers you would have to constantly check.

And people were hooked. ChatGPT reached one hundred million users in just two months, the fastest growth ever at the time. For reference, it took TikTok nine months and Instagram two and a half years to reach the same popularity. ChatGPT felt intelligent even when it got things wrong. It prompted a lot of people to ask themselves, how is AI going to change my life? The technology had some immediate practical uses. People started using ChatGPT to write code more quickly, to translate documents more fluidly, and to draft emails. Students used it for homework help (good) and to cheat on papers (not so good). Needless to say, all this excitement was a huge boost for OpenAI.

With ChatGPT, OpenAI basically skyrocketed to brand awareness and success, and people immediately started thinking of it as the hot AI company, a leader in AI.

It also skyrocketed Sam's public profile. Now AI was the story of twenty twenty three, and Sam was the main character. He became a household name across the tech industry. Investors were desperate to throw money at AI. Companies pivoted their focus to AI, and highway billboards touted new AI startups. And this ChatGPT mania did something else too.

In addition to making all kinds of people interested in and aware of cutting-edge AI, the release of ChatGPT inflamed this small subsection of people that have long been interested in this idea that AI is going to become more and more powerful, and that it will inevitably go rogue and threaten us, and we have to figure out how to save humanity from it.

Fear of AI destroying the world. It's the fear that Dario described feeling before he quit OpenAI to found Anthropic. It might sound a little ridiculous, but to many people it's dead serious. This Silicon Valley subculture that believes AI might destroy us soon has steadily been growing more influential and powerful. These ideas are a major motivating force in AI right now. Many tech workers in AI feel that they are working on the most consequential thing possible for the human race, a matter of life and death for the whole planet. This belief influences where billions of dollars are directed, and which problems get worked on and which ones get ignored. ChatGPT brought these fears into the public consciousness big time. One survey in the UK shows that in the course of one year, right after ChatGPT's release, the percentage of people who believed that AI was a top possible cause of human extinction more than doubled. To be clear, this claim is hotly disputed. A lot of people, even in Silicon Valley, think AI doomerism is overblown or a quasi-religious ideology, but many take it seriously. We'll be right back. I want to talk about how it feels to believe AI might destroy humanity. Here's Qiaochu Yuan. He's just one of many people in tech who held this belief. His introduction to this world was, bizarrely, through Harry Potter fan fiction he found online, written by a very influential AI doomer named Eliezer Yudkowsky. That discovery led him to lots more of Eliezer's writing about AI. He eventually became convinced that the world was probably going to be swallowed by superintelligent AI.

Soon I was like, Okay, it's happening, We're getting there. It's going to happen in my lifetime. It's going to happen like before I retire. I made a conscious decision around this time to not open a retirement account, which some people might think was silly, but I was like, I really do not think money is going to matter by the time I hit retirement age. I think by the time I hit retirement age, the world is going to be unrecognizably strange. I just don't think money is going to be a thing. Either we're all going to be dead, or we're going to be into some crazy post Singularity environment where it's just I just don't think it's going to matter.

The Singularity is the point when robots become smarter than humans. For him, it went beyond skipping out on a retirement account. Once Qiaochu really started believing this, other things in his life just didn't seem to matter. He dropped out of his math PhD program five years in. He drifted away from old friends. He thought they didn't understand the magnitude of the situation.

It was like this black hole in my sense of which things were important, and the closer I got to it, the less important everything else seemed. I was like, oh well, there's this other stuff that doesn't really matter, there's this other stuff that doesn't really matter, and this is maybe the only thing that matters.

I know this sounds scary, extreme, and maybe ridiculous, but if you spend enough time talking to people in AI and around Silicon Valley, you will hear this attitude. Sometimes it's whispered, sometimes it's said loudly. Sometimes they'll use terms like AI safety, AI alignment, or AI existential risk, but they're all talking about the same thing. There are significant numbers of influential people in AI who believe that our world in twenty years will be utterly unrecognizable from the world of today. Some people think it'll be way better, but a lot think it might be way worse, and they feel that they should devote their lives to preventing AI doomsday. In the last five or ten years, a lot of famous, rich tech people have thrown money at this new cause. Facebook co-founder Dustin Moskovitz and crypto fraudster Sam Bankman-Fried each pledged hundreds of millions of dollars toward AI safety projects. Tech leaders have signed public statements about how AI could put our human civilization at risk. Qiaochu and his fellow adherents felt it was deeply important to get other people to accept this threat. They became evangelists. At events, they would trap people in long conversations about AI apocalypse.

And they started off skeptical, and then they ended up somewhere between either really convinced or scared, just like, oh, this is terrifying, I'm kind of shook up about this. This was a very frightening idea. It's like, oh hey, what if everyone dies in ten years? That's a scary idea, and it's sort of safer psychologically to just dismiss it as, that's ridiculous, that could never happen. But if someone really gets to you about that, someone spends four hours talking to you, very carefully convincing you, what if it's possible, though? That's very scary. That's like, oh, how do I, what do I do about that? There's a world of difference between reflexively dismissing that idea and really considering it seriously as a possibility, on an emotional level.

Qiaochu took jobs at AI safety organizations. He volunteered at rationality workshops to try to get more young minds working on AI safety. In this world, when people talk about what's at stake, it's on a galactic scale. Okay, so that's one point of view: that the most important thing right now is to make sure AI is safe in the future. But there are some really smart people, academics and researchers, who find this AI doomsday frenzy very frustrating and harmful. Like Emily Bender, an academic who specializes in computational linguistics.

It's a distraction because there's all kinds of harms that are happening right now, in terms of labor exploitation, in terms of data theft, in terms of discriminatory outcomes, in terms of representational harms. And the more we focus on these fantasy scenarios about existential risks, very exciting in the way that action movies are exciting, the less time and effort goes into actually dealing with the real harms that are happening right now.

Emily and lots of other experts argue that all of this talk of AI doomsday is an enormous, dangerous distraction, like a big flashing alarm screaming we're all going to die. It steals all the air in the room. It makes it easy to ignore issues happening today, like racial bias in AI systems used in criminal justice, the theft of artists' copyrighted work to train models, and so many more.

You've got the surveillance applications of these technologies that are being used to over-police communities, or the poor performance of facial recognition technology, especially for women with darker skin in particular. Google putting identity terms up for sale, so you search for Black girls and you get a whole bunch of porn. There's enormous risks around synthetic media. All of our ability to find trustworthy information, and then trust it when we find it, is threatened. Public health, democracy.

The list goes on and on. Emily reminds us that these are urgent problems, ways that people are being harmed by AI right now.

And yet we are wasting our time talking about these fantasy scenarios because the people with all the money decided to get worried about it. In the meantime, the doomer stuff, the existential stuff, is also another kind of AI hype, because if these systems are so powerful they might destroy the world, then these systems are really powerful.

That hype is a key part of this, because a super powerful AI system is exciting to investors and to employees. Emily thinks AI apocalypse beliefs are harmful and misleading, but she also thinks many of the people concerned with AI safety are coming from a sincere place.

But it does seem to be a genuine belief of some of these folks. The people who get deep into this set of beliefs about superintelligent AI taking over the world see themselves as the heroes of their stories who are going to work to stop that. I think it's genuine. I think it's misplaced, but I think it's genuine.

I think Emily is hitting on something real here, that this urge to be a hero becomes a motivating force in the AI industry. Here's Qiaochu again.

There is this kind of very boyish desire to be a hero. Like, most people don't get a chance to do that in any meaningful sense. Not only just, what if I could be a hero, but like, oh, what if I could smart my way to being a hero? I think it'll be difficult for some people to admit this because it feels immature or something. But I really do think that is part of the appeal of the pitch.

Being a hero by using your brain. Qiaochu is saying the quiet part out loud: that many people in this field are motivated by this pride. They want to feel like they're important and that their work is cosmically significant.

People want to feel like they made a difference. People want to feel like their lives matter, and that's a huge part of the hook: what if this is the most significant era of human history that there ever has been, and the choices we make now are going to reverberate into the future? You know, people on Twitter who started getting really into this stuff will use these very hyperbolic claims. Oh yeah, we're going to conquer the stars, we're going to go create a galactic civilization. I think some people think that those guys are exaggerating and joking. I think they should be taken exactly at face value. Literally, that's what people think the stakes are.

Believing that this is the most significant era in human history because of what we're building with AI. You know who that sounds like? Sam. In speech after speech over the years, he has said that the AI we're working on now is going to be historically significant. Here he is with my colleague Emily Chang.

I think we have an opportunity that comes along only every couple of centuries to redo the socioeconomic contract, and how we include everybody in that, make everybody a winner, and how we don't destroy ourselves in the process is a huge question. You know, what does it mean to build something that is more capable than ourselves? What does that say about our humanity? What's that world going to look like? What's my place in that world? How is that going to be equitably shared? How do we make sure that it's not a handful of people in San Francisco making the decisions and reaping all the benefits?

He says that the AI being built now will reshape the world, our humanity, our social contract. His words are brimming with the heroism that Emily and Qiaochu are pointing out. It's pretty common for people in tech to think they are uniquely smart enough to fix big hairy problems, like the tech billionaires who are creating a new city near San Francisco, or the entire crypto industry, which thinks it has invented a superior financial system. Maybe Sam sees himself as a hero, and he understands the motivating power of a good story. In twenty nineteen, he even hired a fiction writer on contract to write for OpenAI for a few months.

I ended up basically writing a novella full of short stories, science fiction short stories.

That's Patrick House. He's a neuroscientist and an author. He says he has no idea whether OpenAI still uses his novella in any way, but they saw value in commissioning it.

Sam Altman is influenced by certain kinds of fiction. A lot of startups are kind of motivated by story, and that story often comes from science fiction.

A powerful story can make your employees work really hard.

How do you motivate people, especially in the mostly secular San Francisco? Maybe you give them a foundational document in an apocalypse myth and, you know, tell them they're averting the end times, and that's a tried and true, historically known way to motivate people.

All of this made me wonder, does Sam believe that AI might destroy humanity? Honestly, it's hard to tell. His answers have changed over the years. In twenty fifteen, he very clearly wrote that he believes advanced AI is quote probably the greatest threat to the continued existence of humanity. We heard him say this in the past, with a kind of funny tone.

I think AI will probably like most likely sort of lead to the end of the world.

And around that same time, in a New Yorker profile, Sam basically said he was a doomsday prepper. He told a reporter that he had stockpiled guns, gold, ammunition, antibiotics, and gas masks from the Israeli Defense Force. He said he has a big patch of land in Big Sur that he can fly to in case the world crumbles. This was really funny to me, and probably to anyone who has visited Big Sur, because the land out there is literally crumbling right now, leading to rock slides, road closures, and difficulty getting food and supplies in and out. Seems like a precarious place for your apocalypse bunker. When another reporter asked him follow-up questions, Sam tried to blow it off. He said that he does this stuff for fun because it quote appeals to little boys' survival fantasies. In the years since, Sam has notably avoided mentioning the Big Sur property or his stockpile of guns. Annie, his sister, says she never saw the property, but that it's in line with what she knows about him.

He's big into that, or was. Safety, like guns, gold, how to stock up for the worst-case-scenario, apocalyptic-movie-scene events. My guess would be that he has it and stopped talking about it.

Sam is a savvy guy. As his profile has gotten bigger, after he helped build the world's leading AI company, he has stopped saying things like AI will kill us all. Instead, he talks about how society will be profoundly changed, but overall it will be for the better. Since his newfound ChatGPT fame, he has shifted toward presenting himself, and by extension OpenAI, as more middle of the road. Sam is allowed to change his views, but people have also complained to me in private that Sam has a tendency to talk out of both sides of his mouth. He's good at telling people what they want to hear in that moment, so it's not surprising that if it's advantageous for him to seem more moderate, he would start to sound that way. In any case, he definitely knows how powerful an apocalypse story is. We trigger more easily on dramatic fears than boring ones. Sam says this himself. Here he is speaking at an Airbnb conference in twenty fifteen. He's talking about nuclear power, but it's an analogy that maps very clearly onto AI existential risk, and you can hear again Sam's undertone of arrogance, pointing out how he's aware of something rational that other people are blind to.

People are much more sensitive to sort of theatrical extreme risk than they are to sort of boring, slow, plodding risk. Like, nuclear energy has an unbelievable safety record. You know, it's like a thousand or ten thousand times safer than coal. But most people, would they rather live next to a coal plant or a nuclear energy plant? They pick the coal plant all day long, and when they die in thirty years of lung cancer, it doesn't feel as dramatic as dying in a nuclear meltdown. And it's true, this is the human risk miscalculation that always happens, right? People always underweight the boring slow stuff and overweight the quick dramatic stuff.

People underweight the boring slow stuff, like misinformation and racial bias, and they overweight the big dramatic stuff. That may be why the AI industry has been telling us disaster stories. You've just listened to part one of Heaven and Hell. Listen to part two for the rest of this story. Foundering is hosted by me, Ellen Huet. Shawn Wen is our executive producer. Rachel Metz contributed reporting to this episode. Molly Nugent is our associate producer. Blake Maples is our audio engineer. Mark Milian, Anne VanderMey, Seth Fiegerman, Tom Giles, and Molly Schuetz are our story editors. We had production help from Jessica Nix and Antonia Mufarech. Thanks for listening. If you like our show, leave a review, and most importantly, tell your friends. See you next time.

Foundering is an award-winning, serialized podcast from the journalists at Bloomberg Technology.