OpenAI employees were motivated by the dream of building an artificial general intelligence, which could think and solve problems like a human mind. In reality, they sometimes spent their days building bots that played video games. But new research allowed the company to create increasingly powerful AI models, giving them a lead in the AI race. Meanwhile, a power struggle between Sam Altman and Elon Musk led to Musk’s departure from the company. In this episode, reporter Ellen Huet takes a look at OpenAI’s early days and the company’s shift away from its promises to be open-source and nonprofit.
I want to start by talking about a dream, the dream of building an artificial mind. It's something people have imagined and written about for decades, humans working together to construct a new entity more powerful than ourselves, and some researchers think this dream may be within reach.
The day will come when the digital brains that live inside our computers will become as good as and even better than our own biological brains. Computers will become smarter than us. We call such an AI an AGI, artificial general intelligence.
That's Ilya Sutskever, one of the co-founders of OpenAI, in a TED talk he gave last year, and he often sounds like a religious mystic when he talks about the future of artificial intelligence. But right now he's just talking about this quest to build artificial general intelligence, an AI that can think and solve a variety of problems like a person. It could switch between playing games, solving science problems, creating beautiful art, and driving a car. OpenAI's goal is to build AGI. It's a pretty out-there idea in the AI world, or at least it used to be. Ilya frames AGI as this almost mystical, momentous leap forward, like Prometheus stealing fire, and the consequences will be huge. It will usher us into technological glory and at the same time into chaos. In this tape from the documentary film iHuman, he sounds certain of the tidal waves that will come.
AI is a great thing, because AI will solve all the problems that we have today. It will solve employment, it will solve disease, it will solve poverty. But it will also create new problems. The problem of fake news is going to be a million times worse. Cyber attacks will become much more extreme if people have totally automated AI weapons.
Ilya is an incredibly accomplished AI researcher. Before OpenAI, he worked at Google, and he has several passions that I see as a celebration of being human. He plays the piano, he draws and paints. One of his paintings hangs in the OpenAI office. It's a flower in the shape of the company's logo. At the same time, he's also hyper-focused on his AI research. He told a reporter once, I lead a very simple life. I go to work, then I go home. I don't do much else. There are a lot of social activities one can engage in, lots of events one could go to, which I don't. He spends a lot of time looking at the current trajectory of AI and extrapolating to try to predict the future. In particular, Ilya is worried about what happens if AGI gets its own desires and its own goals. You can hear this dreamy quality in his voice.
It's not that it's going to actively hate humans and want to harm them, but it is going to be too powerful. And I think a good analogy would be the way humans treat animals. It's not that we hate animals, but when the time comes to build a highway between two cities, we are not asking the animals for permission.
Imagine it, this huge, unstoppable force. And I think it's pretty likely the entire surface of the Earth will be covered with solar panels and data centers.
I want to pause here for a minute. This is a really intense, powerful image, that we are creating some new kind of being that would view us with interest, but ultimately with indifference, like the way we look at deer. What strikes me most in this audio is that Ilya's tone of voice isn't one of fear. It sounds more like awe. Ilya imagines an AGI that we create that would be likely to bulldoze over us in order to reach its own desires. It's a dramatic vision, hard to really grasp, and it has a religious quality in its conception of a supernatural, all-powerful entity. I should mention this is all totally theoretical. We are still nowhere close to AGI. OpenAI's best efforts are statistical models that convincingly mimic humans, and mimicry is a far cry from AI that can think for itself. Still, OpenAI wants to do this, and do it right, and do it first. Here's Sam Altman testifying in front of Congress in twenty twenty three.
My worst fears are that we cause significant, we, the field, the technology, the industry, cause significant harm to the world. It's why we started the company. I think if this technology goes wrong, it can go quite wrong.
You're listening to Foundering. I'm your host, Ellen Huet, and in this episode we'll take you inside the messy and idealistic early years of OpenAI. We'll discuss this dream of building all-powerful AGI. It's important because this is the destination that OpenAI is speeding toward. It's this generation's race to the moon. We'll discuss how AI technology changed dramatically and quickly, and how that change made this dream of AGI feel closer than ever before. In just a few years, it went from an eccentric idea that people were scoffing at to a milestone some experts think could happen within a few years. Sam Altman has even suggested twenty twenty eight. And we'll examine the compromises OpenAI made in its pursuit of this dream. At first, the company made promises to share its research widely and to not be corrupted by for-profit incentives. But once their technology began to advance and it looked like there was serious power to be had, they made a U-turn.
Then this pivotal moment careened into a power struggle at OpenAI, and Sam Altman took charge.
We'll be right back.
We'll start in twenty fifteen. OpenAI had just been founded. It had a commitment from Elon Musk for a billion dollars in funding, plus some money from other donors as well. It was this small, scrappy research lab. Sam and Elon weren't around much in the early days. At the time, Sam was actually still running Y Combinator, the startup accelerator, but he was beginning to position himself as a thought leader in the AI space. In particular, he was talking about AI doomsday scenarios. In twenty fifteen, he declared on his blog, development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity. He also wrote that AI could destroy every human in the universe. Here he is at a tech event the same year, referencing the founding of OpenAI.
I actually just agreed to fund a company doing AI safety research. You know, I think AI will probably, like most likely sort of lead to the end of the world. But in the meantime there will be great companies created with serious machine learning.
I want to talk about that comment for a second. He's saying AI might kill us all, and he's asking us to trust his conclusions as an expert, but he's also being glib about making money along the way. In the beginning of OpenAI, Sam and Elon weren't around for the day to day. They were out glad-handing, recruiting, and talking to journalists. They would pop in once a week or so to get progress updates. In those days, Sam would swoop into a conversation, then leave.
So he struck me as very, very sharp, incisive, and also super efficient with his time. When the conversation is done, it's done.
That's Pieter Abbeel, a researcher who worked at OpenAI in its first two years. He says in the early days OpenAI looked like a typical startup. They didn't even have an office for a while. They met in the home of one of the co-founders.
When we started out, late twenty fifteen, early twenty sixteen, it was in Greg Brockman's apartment in San Francisco's Mission District. It was, you know, we're sitting essentially on a couch, at a kitchen counter, and on a bed, and that's pretty much where the work is getting done. It's kind of crazy to think that, you know, that's where something big got started. We just had twenty of the world's best AI researchers together, really focused on trying to get some things done that had never been done before.
In the absence of Sam and Elon, the main leaders were two people who aren't famous but who will play a mammoth role later on. There's Greg, whose apartment served as their office, and Ilya, the research scientist we heard from at the beginning of the episode. You can think of Greg as the workhorse in charge of business operations and Ilya as the AI genius, and together they ran OpenAI. Pieter remembers going on weekly walks with Ilya around the neighborhood in San Francisco, talking about big-picture stuff, asking themselves, are we working on the right problems?
I feel like he just kind of saw AI, what it could be doing, what it could be capable of, more clearly and earlier than anybody else. He was seeing it more optimistically than everybody else. He would come up with analogies like, okay, a neural network is just a computer program. It's just a circuit. We're just programming it differently.
Greg, meanwhile, was grinding away.
Greg is somebody who can just apply himself. He can just you know, keep working and keep working and keep working. I've seen some people like that, but very few.
Even after OpenAI moved out of Greg's apartment, he still practically lived at the office. One former employee said he would be hunched over his laptop when they showed up to work in the morning and was still tapping away when they went home at night. Years later, when Greg got married, he even held a civil ceremony in the office, with a big backdrop made of flowers, again in the shape of the OpenAI logo. The ring bearer was a robot hand, and the officiant was Ilya. When Greg and Ilya joined OpenAI, they didn't need money. Ilya had sold a company to Google, and Greg owned a lot of shares of Stripe, and that company was worth tens of billions of dollars. In Silicon Valley, people usually create startups because they think they can build a lucrative business, but OpenAI was a nonprofit. Greg and Ilya were motivated by this dream. Here's Reid Hoffman, one of the earliest donors to OpenAI.
There was no equity upside for that initial crew. It was like, look, we're doing this for humanity.
Doing it for humanity. OpenAI talks like this all the time. Their website says, our mission is to ensure that artificial general intelligence benefits all of humanity. Okay, so it's well known that Silicon Valley loves grandiose mission statements. WeWork wanted to elevate the world's consciousness. But OpenAI's mission statement is even more sweeping, and it has this overtone of altruism. When Sam talks about the company's work, he often discusses potential disasters. Here he is with Rebecca Jarvis on ABC. His voice sounds grave. Again, he's positioning himself as a thought leader in this space.
So what is the worst possible outcome?
There's like a set of very bad outcomes. One thing I'm particularly worried about is that these models could be used for large scale disinformation. I am worried that these systems, now that they're getting better at writing computer code, could be used for offensive cyber attacks.
But you raise an important point, which is the humans who are in control of the machine right now also have a huge amount of power.
We do worry a lot about authoritarian governments developing this.
Putin has himself said, whoever wins this artificial intelligence race is essentially the controller of humankind.
Do you agree with that?
So that was a chilling statement for sure.
Sam is saying this stuff is so valuable that global superpowers are going to fight over it. The cynical take is that if you make what you're working on sound really important, you attract a lot more attention and money. We'll talk more about this dynamic in the next episode. In OpenAI's early years, their humanity-saving plan wasn't that clear. Their strategy was a bit scattered. Here's Pieter again.
We looked at robotics, did some work there. We looked at simulated robotics, did a bunch of work there. We looked at digital agents that navigate the web and do all kinds of tasks online, like booking flights. We looked at video games.
OpenAI said one of its first goals would be to build a robot butler that could set and clear a table, kind of like the maid on The Jetsons.
Coming Sir, Hey William Sir.
The company also built a robot arm that could solve a Rubik's Cube single-handedly, and they put a lot of effort into building bots that could play Dota 2, a massively popular multiplayer video game. They imagined that the complexity of the game environment could lead to an AI that could better navigate the real world. Here's someone testing the bot.
The bot is good. The bot is better than I could have ever imagined.
Those Dota bots even competed against professional players.
The action's kicking off, and OpenAI will claim first blood.
N0tail is to be caught out here.
A bot that could play Dota was technically impressive, but it didn't look very impressive to the average person, and the commercial applications for these products were not immediately clear. Here's how one former employee put it: we were doing random stuff and seeing what would happen. There were not really defined goals. Sometimes it felt like there was a big gap between what was being built and what was being imagined. People would spend their days programming bots that played video games. Then they would sit around the lunch table and talk about saving humanity. The prevailing wisdom in the AI world was that in order to make something powerful, you sometimes have to start with something trivial. Video games and robot maids would pave the way to self-driving cars and cancer-curing AI. Internally at OpenAI, they sometimes compared themselves to the Manhattan Project, the team given the mission to create the first atomic bomb, and they meant it as a good thing, ambitious and important. Here's how one former employee described it to me: it's an arms race. They all want to make the first AGI. They believe they can do it best. I didn't see a lot of fear of AI itself. I just saw excitement to build AI. Back in twenty fifteen, AI looked pretty different from today. It was weaker and harder to train. At the time, the major breakthrough was that a bot had been able to beat the world's best player at Go, a complex strategy board game from China. But that AI could only play Go, it couldn't do anything else. Here's Oren Etzioni, a computer science professor and the former research director for an AI institute.
The thing about these is these were narrow systems, very highly targeted. So the system that played Go couldn't even play chess, certainly could not cross the street or understand language. And the system that understood airfare fluctuations and predicted very well whether airfares were going up or down could not handle text either. Right, So basically, every time you had an application, you'd have to train up a new system. And this took a long time, took a lot of labeled data, etc.
But then came a major breakthrough in AI technology. In twenty seventeen, a group of researchers from Google Brain published a paper called Attention Is All You Need, and in it they described a new kind of AI architecture called the transformer. And the transformer did something huge. At the time, AI systems needed to be fed very specific data. Each piece of data had to be labeled: this is correct, this is incorrect. Spam, not spam. Cancer, not cancer. But the transformer allowed AI to take in messy, unlabeled data, and it could actually do so even more efficiently than expected, using less computing power than before. Now these transformer-based models could just teach themselves. In a way, it was like if you wanted to teach a kid to read, and you used to have to hire a tutor to sit there with flashcards, and now instead you could just let the kid run through a library, and they would emerge knowing how to read and write. This was, as one investor described it to me, a surprising and bitter realization that the best AI would come not from the most specialized training techniques, but from whoever had the most data. Pieter, the early OpenAI employee, says Ilya immediately saw its promise.
Ilya's reaction was pretty affirmative right away. It's like, this is something special, we need to be looking at this. This seems like a big breakthrough.
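To make that self-teaching idea concrete, here's a minimal sketch in Python of how self-supervised, next-word training data gets made. The toy sentence and whitespace tokenizer are invented for illustration, and real systems use subword vocabularies and vastly more text; the point is just that every next word in raw text serves as its own label, with no human annotation needed.

```python
# A minimal sketch of self-supervised next-token training data.
# Toy example: a whitespace tokenizer and one made-up sentence.

raw_text = "the cat sat on the mat"  # unlabeled text, straight from a book or blog

tokens = raw_text.split()

# In the older supervised regime, a human attached a label to each example
# (spam / not spam, cancer / not cancer). Here, every prefix of the text
# becomes an input and the very next token becomes its label, for free.
training_pairs = [
    (tokens[:i], tokens[i])  # (context, next token to predict)
    for i in range(1, len(tokens))
]

for context, target in training_pairs:
    print(f"input: {' '.join(context)!r} -> label: {target!r}")
```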
Even in the early days of OpenAI, Ilya had always had this hunch that big advances in AI wouldn't come from some specific tweak or new invention, but just from more data, pouring more and more fuel into the engine. And now Ilya had the research that backed up his hypothesis.
Here's Oren again. Ilya from OpenAI is known as the person who said, it's the data, and it's the amount of data, and if we just scale that up tremendously, orders of magnitude more, we're going to achieve what we need. That was not the common perception, and some very smart and very famous people did not see it coming. I don't want to cast aspersions, certainly I'm not that smart or that famous, but I'm one of the AI experts who did not see that coming.
Because of Ilya, OpenAI started experimenting with the transformer. They were one of the earliest companies to do so. They made models with the now familiar acronym GPT, Generative Pre-trained Transformer. And in particular, they started experimenting with how the transformer performed with written words, because they could basically feed the model anything written: any book, newspaper article, Reddit post, blog. Humans have spent a lot of time writing things down, and those words now had another purpose, training data. The Internet wasn't created to train AI, but in the end that may become its legacy. OpenAI's models got better and better at generating text, and they weren't limited to just one field of knowledge.
The amazing thing about these GPT systems is that they're very broad. They are actually generalists. You can ask them about virtually any topic and they'll produce surprisingly good answers. And that's because they've been trained on effectively the entire, or at least an approximation of the entire, corpus of text that's available to humanity, billions and billions of sentences. All the books you've read, all the documents, the memos, the silliness, Harry Potter fan fiction. It's all grist for the mill. And then once it's read all that, it's remarkably general. So for the first time we would have a system that you could ask about anything and it would give you a surprisingly intelligent answer. So we went from narrow AI to a kind of general or broad AI.
Through the massive amount of writing that they were feeding into their models, OpenAI found they could create AI that was much, much better at forming convincing-sounding responses to questions. In fact, at some point they started to worry it was maybe too good. When OpenAI announced its language model GPT two, they initially decided not to share the model more openly because they were concerned it could be dangerous. Here's Pieter. He had by that time left OpenAI to start his own company, but he remembers the day of the release.
It was just obvious that it had a much better understanding of language than anything that had been trained before. Its release was indeed accompanied by a lot of, I guess, great marketing or caution or a combination of both. It was headlined as too dangerous to be released, and so I think it was probably one of the first projects where OpenAI decided to not release some of the work, because all of a sudden the thinking had become, well, what if something is so powerful that people could go misuse it in ways that we can't control.
As soon as OpenAI had a product that was actually powerful, they started rethinking their openness.
OpenAI started with that name where the open really stood for, everything's going to be open sourced, you know, anybody else can build on it.
Openness was a crucial part of the company's brand. When they were founded, Sam told the journalist Steven Levy it will just be open source and usable by everyone. He also told him that their AI would be freely owned by the world. Open source software, in its broadest sense, means that the source code is made available to the public freely, and that anyone can tweak the code and distribute it themselves. But the company soon started walking back those commitments.
Obviously that evolved over time into something that is not so open source, if open source at all for anything. I mean, it's definitely not open sourcing its work.
That open source ethos seemed to fade away. Here's Sam giving a talk in Munich in twenty twenty three.
I'm curious, if we stay on the same, like, GPT two to three to four trajectory for five and six, how many of you would like us to open source GPT six the day we finish training it? Wow. Well, we're not going to do that. But that's interesting.
Honestly, Sam sounds pretty arrogant here. He knows OpenAI started off with promises of being open source. Now he's polling the audience about open sourcing models and immediately dismissing their response. Over the years, Sam has subtly changed the meaning of openness. It's become fuzzier. Here he is at a VC firm.
So I think that is, that's why we call OpenAI OpenAI. We want this to be open technology made available to everyone.
Open technology made available to everyone. He says it so plainly, as though it's obvious what open means. But his definition strikes me as so vague that it's essentially meaningless. I mean, Google Search is available to everyone. It seems like OpenAI was happy to let people guess what they meant by open. In an internal email just months after it was founded, Ilya wrote, as we get closer to building AI, it will make sense to start being less open. The open in OpenAI means that everyone should benefit from the fruits of AI after it's built, but it's totally okay to not share the science, even though sharing everything is definitely the right strategy in the short and possibly medium term for recruitment purposes. This email was really interesting because it shows that from the beginning OpenAI had planned not to freely share their science. They didn't want to be open source, as they claimed, but they wanted to keep up the public appearance of openness, because it gave them a recruiting advantage: like, don't go build AI for the bad guys, come work for us, the open, virtuous choice. When we asked about their changing definition of openness, a company spokesperson said our mission has remained the same, but our tactics have had to change. Okay, so let me bring us back to twenty seventeen. It's two years after the company's founding, and another problem was brewing at OpenAI: a power struggle. Elon wanted to take over. He's someone who's used to being in charge. According to OpenAI, he wanted to move the company under Tesla, and he wanted to be CEO, and he wanted majority equity, and if it couldn't be done his way, he was out.
Like with everything Elon, as time goes on, he wants to assert more and more control and make sure the company is operating in exactly the image and way that Elon wants it to operate. And so this is going to create tension.
That's Ashley Vance, my colleague who has written a biography of Elon.
Elon's preferred role in anything is to be the CEO and the dominant force and the one who controls what's going on day to day.
And the guys actually running the day to day, Greg Brockman and Ilya Sutskever, were wary, because Elon was reckless, impulsive, and difficult. But he was also their main source of money. He had pledged them almost a billion dollars. OpenAI had other donors, but nothing close. One option was to go with Elon and keep the money.
The employees weren't all on board with that idea and had some concerns, and so you get to this, you get to this decision point where it's kind of like, you know, are we going to go on with Elon or without him? Almost always in recent years people have kind of put up with Elon and his demands.
Or another option was to split with Elon and figure out how to get a different source of cash. You know who would probably be good at raising money? Sam Altman.
It reached a point where Elon wanted the company to go one way and the employees wanted it to go another, and Sam was picked as the person to lead OpenAI forward.
Sam hadn't been that involved in OpenAI for the first few years. He was still president of YC, actually. But in this jostling for power, Sam beat out Elon, and that's a big deal. Elon was much more famous and experienced, and notably, he hates losing.
Well, in most conflicts, Elon reacts by trying to win at all costs, whatever scorched earth may arise from that. Elon doesn't lose too many battles. Usually, if it's not within a company, he sues somebody into submission. If it is within a company, he throws his weight around until he gets what he wants. It's hard to find too many examples in recent years where he did not get what he wants, and so the turmoil inside of the company must have been quite drastic in order for this not to happen.
So in twenty eighteen, Elon walked away in a huff and took his money with him. Years later, he'll actually end up suing Sam and OpenAI, claiming they broke their original commitment about remaining nonprofit and open source. Soon after Elon left, Sam became CEO of OpenAI. There hadn't been a CEO before, but this power struggle crystallized Sam's new dominance over the company. Remember what the founder of YC once said: Sam is extremely good at becoming powerful. Sam's excitement about OpenAI kept growing. His attention started drifting away from his job running YC. Sure, running a world-famous startup accelerator is a position of a lot of influence, but the race to build AGI was heating up, and if OpenAI succeeded in creating AGI before anyone else, it's hard to imagine a position in the world with more power than being its CEO. But Sam didn't give up his job at YC right away. This situation made some of the people running the accelerator grumble. They felt like Sam was spread too thin, pushing to expand too fast, and prioritizing his own interests above those of YC. It earned him some enemies within his own ranks. In fact, according to a source, Sam's mentor Paul Graham, the guy who put him in the job in the first place, flew in from the UK to ask Sam in person to step down. Paul had lost confidence in his former protégé, but he also didn't want to create public drama, so Sam was ushered out and they kept the backstory quiet. Now focused only on OpenAI, Sam had one big goal: to raise money to train OpenAI's models. They needed a lot of computing power, and computing power is expensive. Sam tried to raise money but wasn't getting traction. Here he is on the Lex Fridman podcast.
We started as a nonprofit. We learned early on that we were going to need far more capital than we were able to raise as a nonprofit to do what we needed to go do. We had tried and failed enough to raise the money as a nonprofit. We didn't see a path forward there. So we needed some of the benefits of capitalism, but not too much. I remember at the time someone said, you know, as a nonprofit, not enough will happen. As a for profit, too much will happen.
They needed something in the middle, and honestly, Sam doesn't sound that hung up about leaving nonprofit life behind. He Frankensteined something together. Basically, he created a for-profit entity that lived under the umbrella of the original nonprofit. The for-profit could do all the things normal companies do, like raise investment and offer equity to employees, but its investors' returns were capped, whereas at other companies they'd be unlimited. This corporate structure was grafted together. OpenAI was essentially now a for-profit controlled by the board of the nonprofit, which sounds a little unstable. OpenAI had spent years saying they would be a nonprofit. Now they had come up with this for-profit workaround. After that change, a lot of people were upset, but OpenAI was more focused on their end goal. They wanted to build AGI, and they needed to raise money to do it. And then in twenty nineteen, Sam the dealmaker made a big, hugely important deal. He raised a billion dollars from Microsoft.
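To make that capped-return idea concrete, here's a toy sketch in Python. The one hundred x multiple is the figure widely reported for OpenAI LP's earliest investors, not a number from this episode, so treat it as an illustrative assumption.

```python
# A toy illustration of a capped-profit payout, not OpenAI's actual terms.
# Assumption: a 100x cap, the figure widely reported for OpenAI LP's
# earliest investors; the episode itself doesn't name a number.

def investor_payout(amount_invested: float, stake_value: float,
                    cap_multiple: float = 100.0) -> float:
    """Investor keeps gains only up to cap_multiple times the investment."""
    return min(stake_value, amount_invested * cap_multiple)

# A hypothetical investor puts in $1M. If the stake grows to $50M, they keep it all...
print(investor_payout(1_000_000, 50_000_000))    # 50000000.0
# ...but at $500M, the payout stops at $100M; the excess stays with the nonprofit.
print(investor_payout(1_000_000, 500_000_000))   # 100000000.0
```

Here's Microsoft CEO Satya Nadella after they signed the deal.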
Hi, I'm here with Sam Altman, CEO of OpenAI.
Today, we are very excited to announce a strategic partnership with OpenAI.
One important thing Microsoft had was lots of raw computing power, and OpenAI could now use it. Remember, OpenAI had originally been conceived as an antidote to Google. They presented themselves as fundamentally different from profit-hungry tech giants, and then overnight they became intimately enmeshed with a tech company worth more than a trillion dollars. Now OpenAI was in many ways an arm of Microsoft. This was a remarkable about-face. Reid Hoffman was on the board of OpenAI and on the board of Microsoft at the time of the deal. He didn't see this as an abdication of OpenAI's initial premise.
There were parties who worried, would this corrupt the mission? But you know, I think that's a little bit of, kind of, a modern naivete, to say corporation equals bad or corrupt. And it's just naive because there's lots of ways that companies are collaborative with humanity. They try to serve the customers well, they hire employees, they have shareholders, they exist within societies.
Okay, so Reid's perspective is that just because you want to make money doesn't mean you're bad, which is on brand for a billionaire venture capitalist. And I guess one way to look at it is that the Microsoft deal may have been the most practical way for OpenAI to continue its mission of creating safe AGI for all of humanity. But it also highlighted an important pattern: OpenAI often walked back its promises when it was convenient to do so. And amid all this, people started to doubt Sam's integrity, both inside and outside the company, and that would lead to a major rift. That's next time on Foundering. Foundering is hosted by me, Ellen Huet. Sean Wen is our executive producer. Rachel Metz contributed reporting to this episode. Molly Nugent is our associate producer. Blake Maples is our audio engineer. Mark Milian, Anne Vandermey, Seth Fiegerman, Tom Giles, and Molly Schuetz are our story editors. We had production help from Jessica Nix and Antonia Mufarech. Thanks for listening. If you like our show, leave a review, and most importantly, tell your friends. See you next time.