Greylock partner and LinkedIn co-founder Reid Hoffman discusses common fears around AI and how AI agents are becoming more advanced than chatbots. He speaks with Bloomberg's Francine Lacqua.
Bloomberg Audio Studios: podcasts, radio, news.
Earlier this month, an AI model called Manus went viral for its apparent ability to act more independently than AI chatbots.
Now, the development of so-called artificial intelligence agents has raised concerns that they will erode the human ability to think. But Reid Hoffman, the LinkedIn co-founder, Microsoft board member, and Greylock partner, thinks the opposite. He's just published a book in which he basically argues that as AI systems gain greater abilities, they will enhance human agency, hence the book's title: Superagency: What Could Possibly Go Right with Our AI Future? And Reid is here with me. Thank you so much, Reid Hoffman, for joining us. I mean, this is a breath of fresh air, because there is a lot of concern and a lot of worry that AI takes over, the computers will be in charge, and we'll basically stop using our critical thinking.
Yes, but actually, in fact, if anyone plays with AI today, it is the most amazing education technology we have created in human history.
If you want to learn anything, I use it to learn everything from quantum mechanics to, like, huh, I wonder what cooking sous vide in this way looks like. It's everything.
But I guess the concern is that, you know, especially people going into a first-time job, or students, or, you know, college kids don't use their critical thinking anymore. Because if you just go into a chatbot and say, write me a song with this, this, and this, it does it for you.
Well, it definitely can do a bunch of things for you, but that can help you elevate your game.
Right.
So it's a little bit like if you were just copying Wikipedia and handing it in as your essay. Sure, you could do that, but actually, in fact, you should use it to inspire you, to make you think better. To say, hey, like, for example, when I was writing Superagency, I would put in sections and say, how would a history-of-technology specialist critique what I've said? And then I understand it, and I can decide whether or not to change my edits, and therefore the book gets better.
So Reid, on reduced cognitive capabilities. Right, this is a big concern, that we stop thinking, that we think less, that we think differently.
Well, I think, just like all technology, you can approach it being lazy. So if you just say, okay, I'm going to outsource it, just like, for example, saying whatever the first search result on Google is, that's the answer, then of course that doesn't help you extend yourself. But if you do anything in terms of having it be a dialogue with you, having it extend your capabilities, asking a question, getting an answer, asking another question, then it greatly amplifies your capabilities.
How do you get rid of the biases? Because if you have too many biases, then it skews, of course, democracy.
Well, so all of the major AI labs are trying to get it as, call it, unbiased as possible. Now, within human perspective and human knowledge, there's always some bias.
We're always learning.
Like, if we look at human beings from fifty years ago, now that we're fifty years past them, we say, oh, they were biased about this. I'm certain humans fifty years from now will be looking at us the same way. So it's an ongoing process with us as well as the technology.
Is there anything that worries you about AI?
The primary thing that worries me is, I call AI the cognitive Industrial Revolution. That's both for the upside, which is that this whole society we live in, middle class, education, medicine, all comes from the Industrial Revolution. That same amplification is coming, but the transitions are difficult. So the thing that primarily worries me is to say, look, we're going to have to navigate this challenging transition, just like the Industrial Revolution was a challenging transition.
But that's how we have our children and our future generations be prosperous and have amazing societies.
And so that's the challenge we need to rise to.
So what's the right way of either designing AI or designing safeguards for AI?
Yeah, well, part of it, there's kind of a two-part audience for Superagency. One is the people who are AI-fearful or concerned, to help them become AI-curious. But it's also for technologists, which is: design for human agency, design for increasing human agency.
That should be your design principle, fundamentally. And the book is also, I hope, helpful for them.
So Reid, this is basically still putting the human, you know, at the center of everything. So how do they fit into the next decade?
Yeah, well, AI, I think, can be amplification intelligence, not just artificial intelligence, and that amplification is the superpowers that we get. And part of superagency is that if you have a superpower, that helps me too. That's how we have superagency together, as long as.
It's democratic and everybody has it? Or is that a question for a second phase?
Well, I think one of the good things about it, and that's part of the reason why the first chapter is about humanity entering the chat with ChatGPT: when we build technologies for hundreds of millions and billions of people, that's broadly inclusive, so that your Uber driver has the same iPhone that Tim Cook has. That's the kind of inclusion that we're targeting.
Reid, I mean, I guess evolution is not necessarily progress, full stop. So how do you make sure that this means progress for the majority of humans?
Well, so I think.
Look, I think as we iterate and we participate, we make progress. For example, even though you say, well, we have a whole bunch of cars and that creates climate change, the cars also created our industrial society. And by the way, the way that we tackle climate change is we curb emissions, we add new kinds of clean energy, and we do EVs. So, you know, as we do iterative deployment and as we bring humanity into the loop, I tend to think we do make progress. Now, again, I think you make better progress by having the right kind of design principles, by accepting criticism.
By talking about it.
So, you know, I describe myself as a bloomer, which is not that technology is just great; it's that technology engaging with people is great.
But it also depends on the people in charge. What do you think of Sam Altman's performance so far in leading OpenAI?
Well, so I think, look, I think Sam's great contribution to humanity will be OpenAI, and that's with having done a number of amazing things before, and done amazing investments like, you know, fusion and all the rest. And I think that his ability to think very big and to have bet very hard on this, you know, technological thesis of scaled compute and scaled learning systems is what matters. And that's why OpenAI has brought this current revolution to us. And it's that these machines learn, and they learn things that we help them learn and help teach them.
Is there someone, I know you've also had your differences with Elon Musk, who's the person in the space that you, if not admire, listen to the most?
Sam Altman is definitely one of them. Kevin Scott at Microsoft is another, Dario Amodei at Anthropic is another, James Manyika at Google is another. I mean, I think part of the thing that's very important about making AI for humanity is people who listen to others and talk to others and accept criticism. And I think that's one of the things that all of these people are very good at.
Do you need to regulate it, or is it something where you need to see how it runs and then think about regulating afterwards?
So I think what you do is you start with the absolute minimum regulation you could do for the things that could be really bad, not for, oh, look, it might have a biased picture or might have a biased statement. Like, we can iterate, we can fix those as we're going.
Really bad is, like, people taking over planes to crash them, things like that, you know, cybercrime, et cetera. Regulate for that, right, and then do iterative deployment. And by the way, with iterative deployment, you eventually get to fair regulations. So, for example, if you tried to make everything perfect with cars before you put them on the road, we'd never have cars. So you put them on the road and you go, oh, this calls for bumpers, this calls for windshield wipers. And occasionally, like, the market didn't want seat belts, the car manufacturers didn't want seat belts, and then of course the regulators come in and say, no, no, seat belts are good, we're going to add those.
I mean, if you regulate for things, I mean, terrorism is bad actors, bad state actors. So how do you regulate it? You have to protect yourself. So it's basically finding the technology that blocks them.
Well, I think the regulation is: if you're releasing the technology to the general public, which could be to the bad actors as well, you're doing red-teaming and safety, you're putting the right security measures in place to make sure that you're not leaking the technology to rogue states, terrorists, et cetera, and that you have a safety plan so you can ask, okay, with the technology I'm building, if it does leak or anything else, why will it still be safe? And how do we continue to have the technology that makes anything that's in the wild as safe as possible?
On Elon Musk, do you think he has too much power, being so close to the president?
Well, so, look, I think he's a celebrated entrepreneur.
But I think that.
Governments are not companies. Like, for example, risk in a company, to say, oh, one of our ten rockets blows up, who cares, it doesn't matter. If the financial system of a country blows up, that's catastrophic, that's terrible. So you actually have to say, we take less risk here, even at the price of some inefficiency, because it's more important for us to not have things blow up.
Do you worry that things are going too quickly with the Trump administration?
Well, I worry that very bad risks are being taken. Speed is not a problem; risks are a problem.
And, you know, for example, it's like, well, we're just going to fire a whole bunch of people. Oh, oops, we fired a whole bunch of nuclear safety inspectors. That's the kind of risk-taking that is unwarranted.
Reid, I also want to talk to you about China, because DeepSeek kind of got everyone on the edge of their seat. Do you have a good understanding about where China is on AI?
I have a reasonable understanding.
I do a fair amount of talking to various people in China in order to make sure. Last year, when I was going around saying there was actually an economic race in AI between the West and China, people were like, oh, no, you're overblowing that because you simply don't want to be regulated.
And I think with DeepSeek and everything else, we see that that race is there, and the Chinese government has said that they want to be, you know, AI leaders, leading the world, by twenty thirty. I think the race is on.
I think it's very important for the US and our industries to actually, in fact, be winning.
Is this the new arms race?
Well, I don't call it an arms race, because it's primarily an economic race. There are arms components to it, but yes, it is an economic race.
Reid, thank you so much for joining us. That was Reid Hoffman, LinkedIn co-founder, Greylock partner, and of course author of Superagency. It's a good book, it's well written, and it's to the point.