If the latest battle in the AI wars is between open-source models and closed ones, Meta CEO and founder Mark Zuckerberg is right on the front line. Since rebranding as Meta in 2021, the trillion-dollar company formerly known as Facebook has been pouring billions into its long-term bets on artificial intelligence and the metaverse. Meta's latest push is a major play to open-source AI, in contrast to closed-source competitors like Google and OpenAI. In an exclusive interview, Bloomberg's Emily Chang sits down with Zuckerberg to discuss how the company's newest AI model, Llama 3.1, will shape the future of business, technology and society. She also visits his retreat in Lake Tahoe and learns how to wakesurf with Zuckerberg and his wife Priscilla Chan.
Hi everyone, I'm Emily Chang, and this is The Circuit. Mark Zuckerberg has fired his latest salvo in the AI wars, and he thinks it's a big one. Of course, you all know him as the face of Facebook, in the spotlight and in the hot seat. For twenty years, he's been on the offensive, buying Instagram, WhatsApp, and Oculus, and the defensive, battling a never-ending cascade of criticisms about the social, political, and business impacts of his expanding empire. Lately, though, Zuckerberg seems to be having something of a Zuckaissance, a Zuck renaissance. He's rocking a different vibe: gold chains, fuzzy jackets, martial arts on a barge in Lake Tahoe. Zuckerberg appears to be reinventing himself and his company. Meta's latest push is a major play to open-source AI, in contrast to closed-source competitors like OpenAI and Google. Zuckerberg just unveiled his company's newest class of AI models, Llama 3.1. By the way, that stands for Large Language Model Meta AI, and he believes this approach to AI will have a profound impact on the progress of tech, business, and maybe the world. I took a trip to Meta headquarters in Menlo Park, California, to meet Zuck and hear about the AI future he says is the path forward. We also talked about what's next for social media, what's at stake in the US presidential election, and his famous side quests. Joining me now, Meta CEO and founder Mark Zuckerberg.
Good to see you again.
Thank you for having us.
I'm excited to do this.
I thought about sending my avatar to meet you, but I decided this was too good to pass up.
So we're gonna walk slowly.
Okay, you want to set the pace. I will set that.
I will try it. Sometimes I walk too fast.
So you will. This is your thing.
So, in the future, will we be meeting as avatars? And will that feel totally normal?
I think probably some of the time, yeah. I mean, you know, you're gonna have meetings where instead of having to use Zoom and look at a screen, you're gonna just have the people you're talking to as kind of 3D avatars, just on your couch or around the table. And that's going to be pretty good. I mean, we'll also probably be able to have AI coworkers who will be embodied as avatars sitting around the table with us too. So that's something that you're not gonna be able to do physically.
But I don't know, I like it, because I'm a big believer in kind of physical presence and interaction, and I just think that the ability to actually be in a physical space, like around a table or at a couch, and kind of have the person there, even if they're not physically there, is a lot better than just looking at a screen.
How much of your day-to-day is meetings versus product reviews versus, like, big strategic thinking?
It really depends on a day-to-day basis. I don't know that there's, like, a normal day. The ideal day, I get to work on all the long-term stuff. But I mean, in this role, you sometimes get sucked into stuff that is not what you want to be focused on within a given day. Yeah.
So yeah, but I view it, and part of this is like, all right, how many days am I actually focused on the things that I want to be, building stuff for the long term?
How much are you still coding?
Oh, I don't code for work at all anymore. I do it for fun.
Yeah, I mean probably most of the coding that I do these days is with my daughters, teaching them.
They love it.
My daughter Aggie, she basically uses code as a medium for art, just to create things. She calls it code art, just, like, telling stories and different things by making, I don't know, it's just coding in Scratch. So I spend a bunch of time doing that with the kids, and that's my coding these days.
So I found an article in the Harvard Crimson going back to two thousand and three where you were talking about open source.
Okay, like over twenty years ago.
Really? So you've been thinking about this for a really long time.
Well, yeah, it's a big part of the tech industry. I mean, you wouldn't have been able to build the early version of Facebook without that. All the stuff that we used, just the early versions of MySQL and PHP and Apache and Linux, all that stuff. There's just no way I would have been able to build the first version without that whole stack.
So it's like super critical.
Oh yeah, yeah, I was a student.
I didn't have access to a lot of capital. I couldn't have gotten, like, a really expensive proprietary Unix system or something. So there's the whole kind of hacker mentality: you just take the code, use it for the thing that you need to. It's more cost-efficient, and that's how you can start something like this in a dorm room.
So speaking of that, some people see you as an unlikely champion of open source today. You're laughing.
Well, I don't know why. I mean, I actually think, well, I get it, but yeah.
You understand, you understand the word unlikely.
Yeah, well, we at Meta have actually been pretty big proponents of open source for a while, and it maybe is a little bit of an accident of history that, you know, we got started after companies like Google had built up infrastructure, but it was never a proprietary advantage for us because we grew up after them. So instead of kind of hoarding it and saying, okay, this is going to be something that we keep as an advantage, because it actually wasn't an advantage relative to Google and other companies.
It was just, we were catching up, like, all right, let's create an open ecosystem and make it so we can benefit from the innovation that other people bring, and it's been a really good formula for us.
So dating back to how we design servers, how we design our data centers, we have this whole Open Compute Project that kind of standardized a lot of the infrastructure for the industry. It was really good for us, because now that all these other companies are using the same stuff as us, the supply chains got more built out around it, which meant that it was cheaper for us. So we've saved billions of dollars by kind of having it out there. And that's going to happen with AI too. If you look at our history, there's all these different projects that we've open-sourced, and I think that the positive experience that we've had is one of the reasons why we're just a little more willing to put ourselves out there and open-source AI models, and generally believe that we're going to get the positive benefits from the ecosystem on that too.
You're really putting a stake in the ground by open sourcing your AI in this attempt to build the AI rails for the future. How much of this is a strategic way to control or own the next technology wave.
It's a few things.
One is, if you just think about, like, our kind of psychology and strategy on this, a lot of how we've grown up over the last ten or fifteen years was building our apps through phone platforms that our competitors controlled. And you know, there are all these analyses that we've done where we would be, like, a lot more profitable, our business would be bigger, if we hadn't gotten all these random taxes or rules that the mobile platforms had put on us.
But honestly, that's not the big thing that bothered me. It was how it limited our creativity to build the best things that we could imagine. It's somewhat soul-crushing to go build something that you think is going to be good and then just get told by Apple that you can't ship it, because they want to put us in a box because they view us as competitive. So you know, we're a big enough company now that one of the things that I've resolved is that for the next generation of technology, I want us to build and have more control over the next set of platforms that we're going to build. So I think AI is a critical one, and I think augmented and virtual reality is another critical one. But the thing is that these platforms are not something that you can just go build in a lab.
It's an ecosystem.
So we can build the best AI model, but over time, like Linux, the way that it becomes valuable is that there's going to be a whole ecosystem of companies that integrate with it and build out all these different capabilities. And if we were to keep that to ourselves, we actually wouldn't benefit.
So it's kind of the way that we can control our own destiny on this and make sure that we have access to leading AI: by building it and having it become an industry standard. So it's somewhat counterintuitive, and I think a lot of people would look at that and say, okay, if you build this thing, why don't you just keep it for yourselves? But it actually gets stronger by being able to share it and have the ecosystem around it.
How much of an opportunity here do you see to take Apple out of the middle?
Interestingly, Apple is not really the biggest player in AI, or at least major AI foundation models, so I'm sure that they'll do great stuff with their Apple Intelligence and the on-device stuff. But I was more using that as an illustrative example that really kind of shaped me, and I think has shaped the company over the last ten or fifteen years, of one of the struggles that we've had building apps in their ecosystem. I mean, they're just not a neutral player in that, right? We're a competitor to them, and we have to deliver our services through a competitor, and that's a very difficult situation to be in. So going forward, I'm not sure that they are going to be the biggest challenge that we have. But I think the lesson from that is: if AI is going to be as important in the future as platforms are, then I just don't want to be in the position where we're accessing AI through, I don't know whether it's Google or whoever the other company is, that may also eventually down the road be a competitor of ours. It's just a thing where we're a technology company. We need to be able to build stuff not just at the app layer, but all the way down, and it's worth it to us to make these massive investments to do that.
You're continuing to improve Meta AI across all of your products, but also as a standalone chatbot. Why should we use Meta AI over ChatGPT?
Well, there's a bunch of things where it's better. I mean, one is there are all these tools for producing content.
You know.
One of the things that we're rolling out soon is the ability to just, like, imagine stuff. You're typing something in real time, I do this with my daughters all the time, and as you're typing and entering the query, it's just generating the images as you enter the keystrokes.
It's just really cool.
The other is just that it's kind of integrated into the experiences that people use. So you can add Meta AI to your chats with your friends in WhatsApp or Messenger or Instagram, so it can be there in group threads. I think that's really neat. But look, at the end of the day, I think the most important product for an AI assistant is going to be how smart it is. And the Llama models that we're building are some of the most advanced in the world, and I think that that's why people want to use them. That, and obviously, you know, we try to build our services and make them free. So a lot of other companies, they take their best models and they charge for them, and we're going to try to take our best models and make them free so that as many people as possible around the world can use them. And it's early, but it's basically working. My goal for the Meta AI launch, which is only really a few months old at this point, was, by the end of the year, to have Meta AI be the most used AI assistant in the world.
And I think we're basically on track for that.
I mean, there's hundreds of millions of people who are. At the end of this year? Yeah, I think we're going to be there before the end of the year. But there are already hundreds of millions of people who are using it. We're not even rolled out in all the languages yet. We have the big launch coming this week, which is just, like, much better models across the board. So for people who are using Meta AI, it's just going to get smarter this week automatically. You don't have to pay for it. It's just going to keep on getting better and better.
So you're releasing Llama 3.1, this family of models big and small, including the biggest open-source model ever, 405 billion parameters.
Yeah, yeah, what does that jump unlock?
In layman's terms, the bigger the model, the more intelligence can be encoded in it, but also the more expensive it is to operate and run, right? So you don't always just want to use the biggest model. You want to use the most sophisticated model for what you're trying to do.
So if you're trying to run it on a phone, where there's limited compute, you actually want a much smaller model, maybe like a two-billion-parameter model, or three billion, or something like that.
People want to run stuff on their own laptops, and you can run, like, a seventy-billion-parameter model on a laptop. The 405-billion-parameter model, the really big, very sophisticated model that we're shipping, is basically competitive with all the state-of-the-art models. People can run it directly if they want, but I actually think the main thing that people are going to do, especially because it's open source, is use it as a teacher to train smaller models that they use in different applications. We've actually done this ourselves. We have smaller models, which are actually the main ones that we use in our products, because again, they're more cost-effective to run. And as soon as we finished training the 405-billion-parameter model, we used it to train a better version of the small models. And I think one of the really powerful things is that if you just think about all the startups out there, or all the enterprises, or even governments that are trying to do different things, they probably all need to, at some level, build custom models for what they're doing. And it's really hard to do that with the closed systems out there, whether that's OpenAI or Gemini, Google's thing, or whatever.
But with open source.
That's really easy, because you have the weights, so you can basically use the model to distill and train whatever size model you want. It gets to a pretty core part of our philosophy: we don't believe there's going to be, like, one AI to rule them all. Our vision is that there are going to be millions or even billions of different models out there, and I think that's really what the Llama 3.1 405B is going to allow. It's just going to be this teacher that allows so many different organizations to create their own models, rather than having to rely on the kind of off-the-shelf ones that the other guys are selling.
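The teacher-student idea described above, a big model's soft outputs guiding the training of a smaller one, can be sketched in a few lines. This is a generic knowledge-distillation illustration, not Meta's actual training code; the toy logits and the temperature value are invented for the example.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """Mean KL(teacher || student) on temperature-softened distributions.

    The temperature > 1 softens the teacher's distribution so the student
    also learns the relative probabilities of the "wrong" answers.
    """
    p = softmax(teacher_logits, temperature)  # soft targets from the teacher
    q = softmax(student_logits, temperature)
    return float(np.sum(p * (np.log(p) - np.log(q)), axis=-1).mean())

# Toy logits standing in for one vocabulary position each.
teacher = np.array([[4.0, 1.0, 0.5]])
student_far = np.array([[0.5, 1.0, 4.0]])   # disagrees with the teacher
student_near = np.array([[3.5, 1.2, 0.6]])  # roughly matches the teacher

# Training would adjust the student's weights to push this loss down;
# the student that matches the teacher already has the lower loss.
assert distillation_loss(teacher, student_near) < distillation_loss(teacher, student_far)
```

In a real pipeline the student's gradient step would minimize this loss (often mixed with a standard cross-entropy term on ground-truth labels) over the teacher's outputs on a large corpus.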
So not one god, but many, Is that the way to think about it?
Well, I don't think they're gonna be gods. I think you're referring to this conversation that I had on Instagram recently, where I do think that, to some degree, if you're an organization that thinks you're going to create this one superintelligence, that does have this feel to me of people trying to create a god. And I find that both the wrong way to look at it and just also very unappealing. I would rather create a world, or help people create a world, where there can just be, like, a lot of different people and services that you interact with. So, you know, one of the things that I'm excited about is making it so that, you know, there's almost two hundred million creators on our platforms, they all are trying to build their community, people want to interact with them, there aren't enough hours in the day. I want to make it so that every single one of them can easily train, like, an AI version of themselves.
That they can make it what they want.
So it's almost like a kind of artistic artifact that they're putting out there for their community, that allows their community to interact with them, but also gives them control over how that interaction happens.
And I think that that's going to be great, and there's going to be millions, eventually hundreds of millions, of those. There are hundreds of millions of small businesses on our platforms, and I want to make it so that any small business can just very easily pull in all their information from social media and their business catalog and get an agent that can help their customers with customer support, help them with sales, and recommend new products. And I think that, just like today every business has a website and email address and social media accounts, in the future every business is going to have an AI business agent too, and I think that's going to be great. So I don't think that there's just, like, one AI use case that we're trying to build. I think what we're trying to help enable the whole community to do is create all these different AIs for all these things that people want to do. And that's kind of how I think this ends up being a good thing for the world.
Some people don't see your AI as so open. So how open are we talking?
Really?
Will you show us how the model was built, or what data you've trained it on?
The model has open weights, so you can take the weights and you can modify them, and you can see how it's built, and you can see the architecture and all that. I'm not aware of anyone else doing more open work than this. The data question, I think, is a kind of sensitive and important one. I mean, there is this question in science where people want to be able to reproduce science experiments, and I think some people would like to be able to do that here. The reality is, you know, it also cost, for the Llama 3 model, at least hundreds of millions of dollars of compute, and going forward it's going to be billions and many billions of dollars of compute. So is anyone actually going to be able to reproduce it?
I'm not sure.
I don't think that that's as important a thing to push on. And for data, even though it's open, we are designing this also for ourselves, and we want to make sure that this works well. And we do work with different organizations to license data in different ways, and some of those sources are proprietary.
Even if the data is public, you know, you don't necessarily have the right, out of the gate, to be able to train on it. You sometimes have to go make deals, and so we can't just go ship all the data, even public data, that we've used to train on. And sometimes it's not even clear that it's the right thing to disclose or tell people what data we've trained on. So yeah, I mean, I get the point on that. You can always take what you're doing and make it open in an even more extreme way. But look, I think what we're doing is state of the art in kind of open-source AI models. And for people who want to use this to build stuff, I think that they're pretty happy with what we do.
You are using Facebook and Instagram data, right? Public data, as I understand it. Does that give you an advantage versus other models, and should those users get something in return for their data being used?
I actually think, I'm not sure how much it's an advantage, because a lot of the public data on those services we allow to be indexed in search engines. So I think Google and others actually have the ability to use a lot of that data too, in some way.
If we didn't use the data, we might actually be at a disadvantage, because if it were in, like, Google Search, and we somehow couldn't use it because we were saying, okay, we're not going to use data from our services even if it's public, but Google can use it because it's not from their services, that kind of doesn't make sense.
But I do think that overall, there's code and technology that you develop, there's compute to train the models on, and there's what your data mix is, and those are probably some of the biggest pieces that contribute to the end quality of the model. So we try to innovate on all of those, and we want to build the best thing because it's the foundation that we use for our products. But we also want to build the best thing because we want it to become the open source standard that then other people adopt and then it makes it even better, which helps us serve the people that we're trying to build for as well.
So let's get into the broader strategy and how this is all going to work. Like, will we have AI-generated influencers with AI-generated captions, and avatars talking to avatars?
Yeah, I think we'll have all of it.
What? And, like, do you want to create the first AI-generated social network?
Well, I think that that will be part of this, but it's not going to be the only thing. I mean, people come to our services because they want to connect with people. But actually, one of the most interesting use cases that people have for Meta AI today, it's, like, in the top four use cases, is role-playing difficult social interactions that they're going to have.
So whether it's in a professional context, like, okay, you want to ask your manager for a raise, or, like, I'm having this hard conversation with my girlfriend or my friend.
They could have role-played this conversation with you today.
I mean, like, hopefully it's not stressful for you. But yeah, I think it's clearly a social tool where there's no judgment, right? The AI isn't sitting there; there's no, like, social repercussion for what you ask it.
But what evidence do you have that people want to live in this virtual world and socialize with avatars or that it's actually good for us?
Well, I think that people want to connect with each other. I actually think, like, all the other stuff is generally noise, but the technology allows you to do that in a better and better way, right? When I got started with Facebook, it was mostly text, and then there's been this whole kind of technological evolution where we got smartphones, so we could primarily be taking photos, and then the mobile networks got good, so then you could be sharing video, and video is a lot richer, and consuming video is a better experience. So I just don't think that's the end of the line. I think it gets more immersive. And that's one of the reasons why I'm so convinced about AR glasses, once you just get to, like, normal, stylish glasses where you can have a sense of presence. I mean, I think within five years we're going to be at a point where we could be having this conversation and I could just be, like, a full kind of 3D avatar here, and it would feel like a much more realistic sense of presence than just talking to each other over a Zoom screen or something like that. That's what it's about, though. I think people want to connect, and we're basically building all the different technology that allows people to do that.
You renamed your company Meta. You're still pouring billions of dollars into the metaverse.
Are we as far along as you thought we'd be a few years down the line? Are there any lessons in the urgency of the pivot?
Well, a lot of the reason why I did that is because I think we were getting pigeonholed as just this, like, social media app company, right? And that was, like, so ingrained, right? We were called Facebook, and Facebook is one of our apps, right, and the others are just as important now. And I've never thought about the company as a social media app company. I mean, we didn't start as that. There weren't apps when we got started. We were a technology company that's there to help people connect and kind of build the future of human connection. And I thought that the rebrand around Meta was healthy, both because we just have more products at this point than just Facebook, and two, it kind of re-anchored people's thinking about the company in terms of, oh, this is a company that's developing kind of longer-term technology around people connecting, which is really how I think about what we should be doing. And when you're trying to push something in a direction, it's helpful for people both inside the company and outside to think about it in terms of what you're actually trying to do. So I've been very happy with how that's gone. The metaverse thing was always going to be a very long-term thing. I think some things have gone better than I thought; some have gone slower. The glasses, I think, are probably the best example of something that is going better. I would have thought a few years ago that AR glasses wouldn't really be a mainstream thing until we got kind of these full holographic displays, like the type of stuff that you've been able to see. It's been one of the most positive surprises that the collaboration that we have with Ray-Ban is going extremely well, and I think part of it is they're stylish, they're good glasses. Part of it is that it's a great form factor for AI. We didn't know that AI was going to be a thing when we started working on that project, or I mean, we thought it was going to be a thing, like, ten years from now. But if you asked me five years ago, I would have guessed that AR would come before AI, not the other way around. So some of this is just about kind of setting yourselves up to ride the different waves when they come in.
I've gotten to try out a bunch of your future-facing technology here, with Quest, with the Ray-Ban Metas, with Orion. How big an opportunity do you see to own your own operating system? And down the line, do you see this reshuffling the power relationships in the tech industry?
I don't always think about things primarily from a business perspective. I mean, I like building stuff. What I've found is that if you build good things, then eventually you're able to build successful products and businesses around them, even if it takes a while. And one of my lessons from the last ten years of building apps that we ship through our competitors' mobile platforms is that I just think we're going to be freed up to be more creative and build better stuff if we can control more of the core technology. So look, we're not going to be the only kind of operating system in augmented and virtual reality. There will be others. We're not going to be the only AI system. There will be others. But I don't necessarily think about it as trying to shuffle the industry. I just want to be able to build good things, and I think that not being dependent on competitors that are trying to put you in a box is an important part of doing that.
But it's also a luxury, right?
It's like when we were a startup, you don't necessarily have the resources to go invest tens of billions of dollars in building out these technologies. But, you know, now we're here, and we have a successful business and we can do this innovation. And it's just one of the things that I want to do. I don't want us to just, like, rest on our laurels and maximize our profits in the near term. I want to pour all the profits and success that we have into building the next generation of things, and to do it in an open way. I think one analogy that people misread from history is, there have always been multiple operating systems. With PCs, it was Windows and Apple, and Windows was the open one back then, and it won, right? So I think that a lot of people now have this massive recency bias, and they have the iPhone-Android analogy in their head, and they're like, oh, well, Apple won that. It's like, yeah, they won this generation of computing, but it's not always that way. For PCs, the open one was actually the primary platform. And part of my goal is to make it so that for the next generation of platforms, whether it's AR glasses or mixed-reality headsets or AI systems, it's not just that there should be an open platform. I think the open platform can actually be the best one, and I think that that's kind of a cool thing for the industry.
Sequoia calls AI the six-hundred-billion-dollar question. There's all this investment in chips and the infrastructure and the data centers, but when does it start paying off?
Like, is it a bubble? And if not, like, when do you start seeing the money?
I think bubbles are interesting, because a lot of the bubbles ended up being things that were very valuable over time, and it's just more of a question of timing, like you're asking, right? Even the dot-com bubble, there was all this fiber laid, and it ended up being super valuable, but it just wasn't as valuable as quickly as people thought.
So is that going to happen here? I don't know.
I mean, it's hard to predict what's going to happen in the next few years. I think AI is going to be very fundamental. If the products are able to grow massively over the next few years, which I think there's a very good chance of, then I'd much rather over-invest and play for that outcome than just try to say, okay, maybe we'll save some money by developing it a little more slowly. I think that there's a meaningful chance that a lot of the companies are overbuilding now, and that you'll look back and say, oh, maybe we all spent some number of billions of dollars more than we had to. But on the flip side, I actually think all the companies that are investing are making a rational decision, because the downside of being behind is that you're out of position for the most important technology for the next ten to fifteen years, whereas if you over-invest, then you're probably just losing some amount of money that's, for these companies, generally an affordable amount to lose for something that's just a really important prize.
So with this economy, does the year of efficiency continue into the AI era? Are we talking years of efficiency, or has the belt loosened?
Well, I'm really glad that we did all the efficiency push, because I think that basically created enough capital for us to go invest in massive amounts of infrastructure. And going forward, I would guess that most of the investment we make is going to be building out AI compute rather than massively growing the number of people at the company.
We are growing, we're going to hire more people, but I think at this point the biggest part of the new investment that we're making is in building these kind of giant AI superclusters to train the future AIs.
So it was an interesting thing that for, like, the first almost twenty years of the company, it was just, like, growing quickly in people every year, and I think it's pretty healthy to focus on efficiency around that. But yeah, I'm definitely glad that we kind of gave ourselves the flexibility to build this.
You said your goal is getting to artificial general intelligence or AGI. How do you define AGI and do you get there first?
That's a good question.
I'm not sure we can answer the second question about who gets there first, but actually, maybe we'll start there. I do think open source is gaining ground pretty quickly. If you look at Llama 2 last year, we were like a whole generation behind the frontier, and for Llama 3, we're basically competitive with the state-of-the-art models, with Llama 3 and the Llama 3.1 release that we're putting out now. We're basically already starting to work on Llama 4, and our goal is to completely close the gap with all the others on that. So I don't know, do we get to AGI first? I think there will probably be some breakthroughs between now and then. It's hard to just predict in a straight line, but we're certainly putting together a world-class effort and we're focused on it.
Then you get to the more complicated question, which is: what is it?
I don't know that there's one specific definition for this, because I think intelligence is multivariate, right? It's not like there's one number that is your intelligence. I think of AGI as intelligence that has different kinds of capabilities, right? So the first set of models could reason over text, and then you added in the ability to do photos, and now you're adding in the ability to do videos, to understand videos and produce videos, and then you're making sure that it works well with audio. I think the ability to reason and to produce 3D worlds and 3D content is going to be really important. I care a lot, and our company cares a lot, about people interacting with each other. So when you think about the human brain, there are whole parts of it that are just focused on basically reading people's expressions. If you move your eyebrow a millimeter, it means something, and I'll pick that up, whereas if something moves over there in the corner a millimeter, I'm not going to notice it. There's probably a specific aspect of intelligence, or modality, which is reading people's faces and emotions, and that's something that I care about, so I think we'll probably try to build that in at some point. So I don't know if it's just increasing the amount of knowledge or some IQ score. It's kind of layering in all these different things, which is why it's a little bit hard to say who gets there first, because I actually think over time, different companies might optimize for different things.
I know you've always been fascinated by China, and you learned to speak Mandarin. What do you know about where China is on AI and AGI?
I don't personally know a ton. On open source, I know their leading models, and Llama 3.1 is well ahead of that. The kind of leading Chinese model before was right at the level of our previous 70-billion-parameter model. But there's been no Chinese version that's anywhere near the frontier, like a 400-billion-parameter model, and the Llama 3.1 70 billion is ahead of where they are.
So I feel good about that. I don't know.
I think that's going to be one of the interesting questions over time. You know, geopolitically, when you think about the rivalry, there's this question of how the US should approach kind of AI competition with China. There's one strain of thought which is like, okay, well, we need to lock it all down, and I just happen to think that that's really wrong, because the US thrives on open and decentralized innovation. I mean, that's the way our economy works. That's how we build awesome stuff. So I think that locking everything down would hamstring us and make us more likely to not be the leaders. And the other question is, if we do lead, what's the chance that we're actually going to prevent them from being able to steal it anyway?
I mean, it fits on like a thumb drive. So I just think that's not a realistic way to approach it. I personally think that the right configuration of this is open source, which I think is going to be the leading ecosystem. It will be very robust, it will eventually be available to everyone in the world, including China. But I think the leading companies should work with the US government and make sure that our national defense and things like that have sort of a perpetual first-mover advantage on the leading technology in the world.
What do you say to the skeptics who think, you know, this could be exploited by our rivals, criminals could get their hands on it? How do you think about those risks?
Well, I think that there are different kinds of risks, so you have to take them one by one. The risk from China, or a very large, sophisticated state that has a lot of resources, I think is different from an individual criminal who might just try to do something bad. To neutralize a criminal, I mean, look, you want to do a lot of testing up front. We do all this work before we release every single one of the models to make sure that we understand what the risks are and that we mitigate them as much as possible.
At the end of the day, the way that we've approached this with social media, and the way that I think society approaches this overall, is you have kind of better AIs, or more resources, to go fight people who have less resources. There are all these sophisticated folks who try to do bad stuff on the social networks, but we invest like billions of dollars a year and have these very sophisticated systems.
So I do think that will play out with AI too, which is that more sophisticated AIs with more compute will be able to generally check less sophisticated folks, whether they're just kind of everyday criminals or whoever trying to use something but with less compute.
So we win the AI wars this way?
Well, I think checking less sophisticated actors with less compute will work most of the time. Now, that doesn't mean it automatically works for everything. I think a lot of attention needs to be paid to it, so it's a thing that we need to focus on. But I do think that's how society has been stable for hundreds of years: you have larger forces, whether it's police or armies or whatever, checking kind of smaller groups that are trying to create harm. How you deal with the geopolitical rivals is a more complicated question, but there, there's the question of what you can hope to achieve. If you're trying to say, okay, should the US try to be five or ten years ahead of China, I just don't know if that's a reasonable goal, because I think China has a lot of resources and a lot of great scientists, and they're great at espionage, right? It's like all the stuff. So I'm not sure if you can maintain that. But what I do think is a reasonable goal is maintaining a perpetual six-to-eight-month lead by making sure that the American companies and the American folks working on this continue producing the best AI systems, and then having a direct effort to try to make it so that those efforts are integrating with the national security establishment, so that way the US government has a kind of perpetual six-to-eight-month advantage on what everyone else in the world gets.
And at some level, six months may not seem like a lot, but part of how I think about it is like, you're using an iPhone, you know, versus all the competitors, and it's like, is the iPhone from three years ago better than the best Samsung phone today?
No way.
But you know, the fact that Apple has just generally, in the eyes of most consumers, been a little bit ahead at each point has meant that over fifteen years, or however long, almost twenty years, they've just had this big compounding lead. And I think if the US can maintain that advantage over time, that's just a very big advantage.
So that's my philosophy.
It's what I think is the most reasonable thing to shoot for, and it optimizes for making sure that Americans continue to lead in this.
Well, look, you're not an AI doomer, but some really influential people are. Like, how can you be so sure there won't be a robocalypse, or that AI will lead us to human extinction?
Well, I don't think you can ever be one hundred percent sure. There are a lot of bad things that could happen. Climate change is terrible, nuclear war is terrible. But I generally think you can manage this in a responsible way, where you just maximize the chance of the net prosperity that's created in society being really good, and where you can manage the risk.
And for the AI risks, especially the existential-type stuff that you're talking about, I just personally think that we will have more of a sense of the different capabilities of these systems with each model that we build, and they're getting better, right? So Llama 3 is better than Llama 2, Llama 4 will be better than Llama 3. And this is why we do all the safety testing up front. We study safety. We try to have as much of a sense as possible of the different capabilities that would be bad. It's like, we don't want it to be able to lie or self-replicate, or intentionally deceive people, or stuff like that. And I just don't know that we're seeing those things yet. So I think there are different questions that happen once you start getting to a level of intelligence that's a lot greater than where we are now. But there is this myth where I think people kind of anthropomorphize and assume that intelligence is going to take the form of something that's conscious or sentient or physical.
And I actually think in some ways this has been one of the most surprising things over the last few years: you can have something that is pretty reasonably intelligent that is actually completely separate from a consciousness. I do think that there's this thing through the history of science where people keep thinking that they're special, or that they're the center of the universe, in different ways. And I think people are very special, and life and kind of our consciousness and the connections we make, I think all that stuff really matters in a deep way. But I'm not sure if the specific intelligence or productivity is actually the thing that makes our lives meaningful. It's obviously one of the defining characteristics of being a person.
But I don't know, I think it's as much the love and connection and camaraderie that you feel with other people. And I don't know, I think that that stuff exists even if you could completely separate out intelligence from kind of the rest of the human experience.
Yeah, there are obviously a lot of players, you know, trying to do some similar things. What do you make of Sam Altman's leadership? Do you trust him and OpenAI to get to AGI responsibly?
I know Sam pretty well.
He's actually on the board of the Biohub at the Chan Zuckerberg Initiative with me and Priscilla, so we kind of sit in there and we grill all the scientists and push them on using AI in more effective ways. And that's kind of a fun way that we get to work together. I mean, look, he called a bunch of stuff early on about what was going to scale with large language models that a lot of other people had written off, and I think he deserves a lot of credit for how that organization has developed, also having gotten a lot of public scrutiny. Myself, I think, look, when you're going through it for the first time, you don't handle it as perfectly as you would like. But I think he's handling it very gracefully. I think he's doing better than I did, and I respect him for that.
Over the long term, there's just the question of which model ends up being the right one. And it's a somewhat ironic thing to have an organization that's named OpenAI be sort of the leader in building closed AI models. It's not necessarily bad, but it's kind of a little funny. But I'm not such an absolutist on open source that I think anything closed is bad. I mean, at Meta we do a lot of open-source stuff, we do a lot of closed-source stuff, right? So it's like, I do both. I think I get the value of it. I would imagine that OpenAI will continue being an important company for a while to come. But personally, I am more optimistic about a more positive AI future where open source is the industry standard. And that's just my view.
I want to talk about the 2024 presidential election. Okay. Facebook has been a flashpoint in many elections around the world, and you personally have been called out, most recently by former President Trump.
This is a big election. What do you think is at stake?
Well, I mean, look, it's obviously very important, and it'll be a historic election. And look, the main thing that I hear from people is that they actually want to see less political content on our services, because they come to our services to connect with people. So, you know, that's what we're going to do. We give people control over this, but we're generally trying to recommend less political content. So I think you're going to see our services play less of a role in this election than they have in the past. And personally, I'm also planning on not playing a significant role in the election. I've done some stuff personally in the past. I'm not planning on doing that this time. And that includes, you know, not endorsing either of the candidates. Now, there's obviously a lot of crazy stuff going on in the world, I mean, the historic events over the weekend. And on a personal note, seeing Donald Trump get up after getting shot in the face and pump his fist in the air with the American flag is one of the most badass things I've ever seen in my life.
At some level, as an American, it's hard not to get kind of emotional about that spirit and that fight, and I think that that's why a lot of people like the guy. But look, we're living in a pretty crazy time, and I view our role here as making it so everyone can express their views on this stuff, but we're going to try to manage it so that the politics doesn't drown out the human connection and the community, which I think is the main thing that people come to our services for. And you know, we're not always going to get that right, but that's what we're going to try to do, and I think that that's probably the best role that we can play.
President Biden signed a bill to ban TikTok into law, but whether it happens is an open question. The reason former President Trump has given to not ban TikTok is that it would cede the market to you. What do you make of that logic?
I don't know.
The national security questions around whether TikTok should be allowed are obviously, you know, above my pay grade, something that the folks in our government and Congress need to go figure out.
From what I see on a day-to-day basis, I think the competition is focusing. It's good. I like competing with different companies. I think we're doing pretty well here. We're gaining market share. So I don't know, they'll go do what they need to do, but I think, you know, we're going to be fine and we're going to continue doing well in this space either way.
Any thoughts on whether or not it should be banned?
I really think that's above my pay grade.
So, looking at the broader social media ecosystem, all these apps, between TikTok and Snap and Instagram and Threads and Facebook, how do you see the social media ecosystem changing in an AI world? Do new players emerge? Is there consolidation? How do you win the battle for the young people?
I think it's going to be all of this stuff. There's definitely going to be an opportunity for people to build new apps. I mean, I think we've seen that it's very competitive. TikTok grew from being very small to now having more than a billion people. I'm sure there will be more apps. Part of the reason why we want to innovate and push on that is that we also want to integrate that stuff into our experiences, right? We want to make it so that creators can create more interesting content, so that there are new ways for people to interact with the people that they care about.
AI has already been so important for just recommending people great content on these platforms. I think that will continue, but also there will be another wave where AI is helping people not just get good recommendations, but also create new content. So I'm pretty optimistic about that. I hope it's not just going to be like one new format, like video or photos. I think there are going to be, I would guess, dozens or hundreds of new types of content formats. Some will be made by startups, and there'll be new apps built around them. Some hopefully we'll pioneer and popularize. But it's going to be a very dynamic space.
We're facing a crisis in mental health, especially among teenagers. The Surgeon General is now calling for a warning label on social media, saying that it's partially to blame. With everything that you know now, does he have a point? How are you thinking about this?
Yeah. So I guess I come at this from a few directions.
One is that there's clearly an issue with mental health in the country, So I think that's like a really important thing, and for kids and teens it's especially important.
And I think the focus on that is right.
You know, I have three young girls, and being a parent is hard, and you just want to make sure that they have good lives. And from that perspective, what I aim for us to do is build our services in a way that's aligned with parents, giving them the controls that they need to basically oversee how the services work for their kids. I mean, I think different families are going to have different rules for how they want this stuff to work, and there's probably not a one-size-fits-all thing that's right. But I think we have a role in making sure that we study the stuff, understand what's good and what's not broadly, but then, for individual families, give the parents the controls that they need.
What the data says today is a little bit different from what the basic meme is that's out there. I think a lot of people kind of act as if there's been this proven connection, and I just don't think that the science supports that today. It's obviously something that will continue to be studied over time. There are nuances. I think social media is different from phones overall. It might be that phones have an issue, right? If something is notifying you or buzzing you and it's preventing you from sleeping, that is different from social media. But I think there are all these other issues too. But in all the most rigorous studies that have been published, including in the most prestigious scientific journals, the link here, on kind of a causal basis, has not been well established. Now, that doesn't mean that there isn't work for us to do. I mean, we want to make sure we do a good job on this. But I think, whether it's the Surgeon General or other folks, kind of jumping to a conclusion that the science doesn't yet support is not the most helpful thing. But look, there are clearly issues here across society, and we want to make sure we're part of the solution to that.
Facebook, and you, have been blamed for a lot of things. Whether you like it or not, whether you agree or not, why should we trust you with AI?
Well, that's a loaded question. We have gotten blamed for a lot of things, and I mean, look, I take our role in all this stuff seriously, and I think we've tried to handle all this as well as possible. I'm not sure that it's all been fair, but look, I'd like to think that we're an important and relevant company, so I think the scrutiny is generally healthy. For AI, I mean, look, compared to the other companies, I think doing this in an open way actually increases the chance that it ends up being done safely. If you look at the history of open source, open-source software counterintuitively ends up being more secure. At the beginning of the open-source movement, a lot of people thought, hey, it's not going to be secure, because anyone can just see where the bugs are. And it's like, yeah, well, anyone can see where the bugs are, and then you fix them, whereas in closed systems you have these holes that are just out there that governments or hackers or whoever can exploit for a long time, because there's less transparency on them.
So I think that's going to be one of the defining things around open source: anyone can scrutinize the work, and because of that, I think it just puts a lot of pressure on making sure that the quality of the work that you're doing gets better really quickly. And so I guess that's just a big difference in our approach here compared to the approach that others are taking, and I think that people should take confidence in that. And look, when we get stuff wrong, then we're going to get called out on it, and I think it'll just generally create these systems that are more hardened and more secure and safer. But I know that there's this whole debate around whether it's going to be safe to do open-source AI. My personal view is that open-source AI is going to be safer than closed for the exact same reasons that open source historically has been safer and more secure than closed-source software.
Let's talk about regulation, because the shape of AI regulation is going to be an issue around the world for the next several years. What should this regulation look like? Do you support something like an independent agency that would oversee AI, like we have for nuclear energy?
I think it's pretty early in thinking through this stuff, and I haven't seen any proposal yet that I look at and think, oh, that's exactly what we should do. I think there's a timing question for this, right? In a lot of things, if you regulate too late, there could be too much harm. If you regulate too early, you could stop innovation. I do think we're seeing this issue in Europe now, where they just put in place a ton of regulations, and a lot of companies are just not launching stuff there. So I mean, that's an issue. So I think there's clearly going to need to be this interaction between governments and companies to oversee the work.
I'm not sure what the right model is.
The thing that I've focused on most is the viability of open source, because I obviously believe in this deeply, and it's both kind of our strategy and approach. But I think it's the best chance for creating an AI future that is positive, where the prosperity that is created can be shared by the most people, and where we end up with the safest outcome. I think that open source will do that.
But there's a big intellectual debate around this, and I think there's a bunch of people who think, okay, well, open source is inherently maybe a little less controllable, because there isn't just a small number of companies that you can go choke off when you want to get them to do something, right? So it is a little more forward-leaning on, okay, we're going to try to enable this kind of decentralized innovation, and that's what has typically worked on the Internet, and I think it's what gives the best shot, and the scrutiny around it I think is the thing that will be the most likely to produce a safe outcome. But you know, there's a real debate around this today, about whether that should be the way that it goes. Now, I think open source is gaining popularity with every day, and with the Llama 3.1 release, with the 405 billion model that is basically going to be kind of this teacher model, where all these different companies or institutions, universities, academics are going to use it to train different models, I think it's just going to keep on gaining more and more popularity, and I would guess it will eventually become sort of the industry standard for how this stuff works. But I just think we need to be careful about not doing things that are going to prevent what is actually the best outcome over time, just because it's maybe a little less deterministic or something like that.
You've been at this now for twenty years. Yeah, it's been a journey.
How would you describe your leadership style before versus now?
I don't know, I probably need to think more about that. I mean, the roles have changed so much at the company.
And when I started early on, I really knew nothing about building a company. I was a kid, and I was an engineer, and I coded the first version myself and did most of the coding for maybe the first couple of years of the company. And then we got all these amazing people, right? I mean, with Sheryl, I often joke that she raised me as a manager like a parent, and I think that that's really true. I mean, I literally think it's hard to overstate how little I knew about running a company. She's just a really special person who has played this hugely important role in the history of the company, and in training me and so many of the other leaders of the company. And then, you know, this funny thing happened over the last fifteen years, where there were all these kids who just kind of grew up and have had this context and these relationships with each other, and, you know, now I think we're all a little bit more sophisticated, hopefully, about managing something like this well and building stuff for the longer term and doing it more responsibly. And I don't know, there are a lot of lessons that you learn along the way. You know, when I talk to other founders or CEOs, they ask how I built the team here, and that I think is really kind of the key to how we get everything done. I mean, there's always a lot of attention placed on the CEO or the top person, but it's never one person, right? It's always a group of people, and I don't know, it's hard to replace some of that.
I think a lot of this is, like, you just grow up and you learn together, and you bond together, and it's a really kind of interesting and special group.
Looking forward, will the next Mark Zuckerberg start the next Facebook, or will AI do it for him?
Yeah?
Well, I think we use technology tools for stuff. I think one of the things that's going to be interesting in the future is, you know, to build Meta and the services that we have now, we have tens of thousands of people working at this company. And one of the things that I think is going to be really powerful in the future is that the next entrepreneur, you know, sitting in their dorm room or high school or whatever it is, is going to have all these tools to be able to have the productivity of a large company, but maybe it's just them doing it, or them and a small group of friends working together.
Part of why I was able to start this is because there's the saying in science that you stand on the shoulders of giants, and I think that's true with open source and technology too. You know, I couldn't have built this myself. I was able to build it because there was all this other technology that I could build on, and I just think that that's going to keep getting better and better.
So, you know, it's like, I watch my kids be creative, and the stuff that they can do today, I wouldn't have been able to do when I was a kid, because it just didn't exist. So I think we're going to live in a profoundly more creative future, where more people have the capacity to do pretty amazing things than has been the case in the past.
What about Elon? You're kind of like the anti-Elon now, and it's working for you. Where do you see your differences?
I don't know.
I know him less well, to be honest. I think he's obviously an amazing entrepreneur and has done a lot of really great stuff, you know. I mean, it takes some courage to go out there and speak your mind as much as he does. I don't think I would be comfortable doing that as much. Maybe I'm just a somewhat more reserved person, but I feel like all my career people are just telling me, like, oh, go out, be more yourself. And so there is something that I think you have to admire about someone who does that and maybe even takes it to an extreme.
But I don't know what that version of me would be like.
We're going to go visit you in Tahoe and follow you on your side quests. And you're going to surf too?
Well, I'm going to try. I'm going to try.
You're going to teach me, right?
I think that's what I was told, but I'm not really, I don't know if I'm a good teacher.
But either way, you're always one-upping yourself, and I'm sure there's a metaphor in there for the company. You're also, you know, doing all these things to make us more virtual, but you also love so many things about the whole world. How do you wrestle with those two ambitions?
Well, I don't know that they're at odds.
I just think that people are very physical beings, and I think that we sort of mythologize intelligence. I don't know, there's a bunch of people in the tech community who think, like, oh, we'll just separate out our consciousness and intelligence and upload it to the cloud, and I'm like, that just sounds ridiculous to me. To me, part of what makes you a person is that you're active and you have energy.
We're not just minds.
I think the energy and the love and all those things are probably as foundational to what makes you a person. And so, I don't know, I think my life has gotten a lot better. You know, in the early days of the company, I just didn't have time to do anything else. We were always about to die, and being a startup is really stressful, right? And obviously there's stress now too, but it's just a little more managed, and we have more good leaders at the company and all that. And my life has just gotten so much better since I took the time to make sure that I can go do physical things all the time, and I think it's made me a better person. I injured my knee last year fighting, and I thought, you know, Priscilla was going to give me a hard time about it. I kind of thought she was just going to be like, you're an idiot, why are you fighting? You're running this company, you shouldn't be doing this. But she actually was like, you know, I know it's a long recovery, but when you're done, you better go fight again, because you're a much better person when you're going and doing all this physical stuff. It's just kind of a family value. I mean, I do it, Priscilla does it with me. She actually hits pads.
That's what I was referring to. She'll surf with you.
But you know, when she's hitting pads, you can hear it from like down the street. So, I mean, she's, I think maybe one day we'll talk her into fighting. But she's quite good at surfing. And we teach the kids too, and it's just a fun thing that we do together. And I don't know, I really believe in that. I think it just makes you a better person.
Tell me about the necklace.
Oh, this is something that I worked with a designer to make, and it has engraved on it the prayer that I sing to my daughters every night when I put them to bed. It's a Jewish prayer called the Mi Shebeirach, and it's basically a prayer for health and courage, and it says, may we have the courage to make our lives a blessing. And I've sung it to them basically every night of their lives since they were born.
Unless I'm out traveling or something, I try to be around for bedtime. That's kind of my thing, when I hang out with the kids. And yeah, I don't know, it's just meaningful for me, in our family.
That's beautiful. Thank you. Thank you for sharing your time with us. Thank you for letting us into your home and, you know, taking the time to explain all of this. I think it's really important. It helps us to get to know you.
Cool.
Thanks so much for listening to this edition of The Circuit, and please watch our video episode with Mark Zuckerberg on Bloomberg Originals. I visit his retreat in Tahoe, we hang with his wife Priscilla, and yes, we go wakesurfing.
You'll see how that works out. I'm Emily Chang.
You can follow me on Twitter and Instagram at @emilychangtv, and watch new episodes of The Circuit on Bloomberg Television, streaming on the Bloomberg app or on YouTube. And check out other Bloomberg podcasts on Apple Podcasts, the iHeartMedia app, or wherever you listen to your shows, and let us know what you think by leaving a review. Those extra reviews make a big difference. I'm your host and executive producer. Our showrunner is Lauren Ellis, and our editor is Alison Casey.
Catch you next time.