Human 3.0—The Skills & Mental Frames Required To Thrive In An AI World

Published Oct 9, 2024, 6:00 AM

Human 3.0 is here.

In this session for the United Nations, Daniel Miessler introduces the Human 3.0 philosophy and the skills and mental frameworks needed to thrive in an AI-driven world.

Learn about:

- The future of work and the Human 3.0 economy.

- How AI will revolutionize startups and entrepreneurship.

- How one-person billion-dollar companies are becoming a reality.

- Creative expression and AI.

- The importance of personal visibility and authenticity.

- How to survive and thrive in today's rapidly evolving technological landscape.

Subscribe to the newsletter at: 
https://danielmiessler.com/subscribe

Join the UL community at:
https://danielmiessler.com/upgrade

Follow on X:
https://twitter.com/danielmiessler

Follow on LinkedIn:
https://www.linkedin.com/in/danielmiessler

See you in the next one!

Welcome to Unsupervised Learning, a security, AI, and meaning-focused podcast that looks at how best to thrive as humans in a post-AI world. It combines original ideas, analysis, and mental models to bring not just the news, but why it matters and how to respond.

So hello everyone, welcome to this AI at WIPO session. We're delighted to have Daniel Miessler here with us today to speak on the skills and mental frames required to thrive in an AI world. Daniel Miessler is the founder of Unsupervised Learning, a company that focuses on building products that help companies, organizations, and people identify, articulate, and execute on their purpose in the world. Daniel has over 20 years of experience in cybersecurity and has spent the last several years focused on applying AI to business and human problems. Daniel has held senior positions at Apple, Robinhood, IOActive, HPE, and many other companies, as well as consulted for or been embedded in hundreds of others in the Fortune 500 and Global 1000. So thank you very much for being here with us today, Daniel. The floor is now yours.

All right. Yeah. Thank you for having me. So what I want to talk about today is what I believe is coming for us with AI, and what we can do to get ready for it. I want to start with some background, some analysis and predictions, and then give some recommendations on how to get ready for this thing that I believe is coming. I don't normally do intro slides because I would rather just get into the ideas, but in this case I'm going to make a little bit of an exception, because I think it's important to see the background of where this is coming from. I won't be sharing any CV stuff, but I do want to talk about some things that I think are relevant. I've been thinking and writing online about the future since around 1999, started a podcast about security called Unsupervised Learning in 2015, and wrote a short book on what I saw as the future in around 2016. In that book, I basically talked about a future of digital assistants helping us as humans, a world made up of APIs, and those digital assistants mediating the world for us, like a broker between us and that API world. And I think this is already starting to happen now. I studied ML pretty deeply in 2017, especially Andrew Ng's full course, which I still recommend. I believe he has a Coursera version now as well, but the YouTube version is quite good. I then worked at Apple starting in 2018 for a few years, starting off in a machine learning group there, and ended up building another product in the security group, and I've been continuing on this path since around the middle of 2022, when I basically went full time into this. So I basically see two problems in the world: humans losing a sense of meaning, and AI making that much, much worse. What I'm essentially doing now is trying to help solve that.
So the point of all this is not to say that I'm right about everything I'm about to share, but rather that I've been thinking about it for quite some time. Another thing worth mentioning: you shouldn't put too much faith in predictions, because it's really hard to predict the future, especially when you're talking about things that haven't happened yet. So I'll talk with a decent amount of confidence here, but that's really because if you hedge all the time, it's not very compelling. And I do add caveats a lot. A couple of things I won't be talking about: I have other presentations on existential risk from AI, and lots to say about AGI and superintelligence specifically, as well as some thoughts on which countries might do what, but those are big topics on their own, so I won't cover them here. With that out of the way, I want to jump in. What I want to talk about is how I see AI affecting companies, startups, the reduction of creative friction, hiring, broadcasting oneself, and an unfortunate separation between groups of people as a result of this. And for each of these, I'm going to start with an observation and then give a prediction. So the first major concept is a new way of thinking about companies, and I think AI is going to magnify this effect quite a bit. One of the major uses of AI will be to optimize processes within businesses, and a company, I would argue, is really just components of action combined with process. So what's going to happen is consultancies are about to come in and basically turn your business into a diagram similar to what you see over here on the right. I've got a full post on this which you can go read, called "Companies Are Just a Graph of Algorithms." And when a company does this, when they make your company visible to you in this way, AI thrives off of that, right?
Once AI can understand what your business can do, it can start making some really powerful predictions. And that's exactly what I think is going to happen. And a really crazy stat here: 40% of McKinsey's business in 2024 is already AI essentially doing this kind of thing. So the prediction here is that AI will find massive waste in most companies, middle management and bureaucracies will come under extreme pressure, and companies will get smaller and far more efficient. And the good news is how this will affect startups. Moving on to startups: a lot of people are talking about the concept of the first one-person billion-dollar company, which I think is a really crazy concept. Just as we saw the efficiency that you can get in a very large company, we're going to see that in tiny companies as well. In fact, it's going to be more magnified in tiny companies because you're not dealing with the previous technical debt. So there will soon be many, many AI-based services for doing every aspect of a business, which will allow a single person to essentially launch the idea. The paperwork can be done, the customer service can be done, sales, marketing, analytics, optimization, all these different things can just be AI services that they add on, and that's going to allow very small teams to do really powerful things. So the prediction here is that the friction to creating a business falls dramatically. There will be a lot more 1-to-10-person companies that don't really have to grow larger than that. And I think the result of this is going to be an extraordinary multiplication of global productivity. Hard to know exactly how much that will be, but I think 5x is probably safe within the next ten years. So one of the most exciting things to me is the reduction of creative friction. If you think about how many Steven Spielbergs there are in the world right now, the number is tiny, right? Very, very few.
But if you factor in that it's the combination of the genius with the fact that they're in LA, they know the people in Hollywood, and they have access to all this money and all these connections, that's the reason the number is low. So the question is, how many more Spielbergs are there on the entire planet with as-good-or-better ideas who simply aren't in LA or don't have access to these resources? Or maybe they don't know how to draw, or they don't know how to write a script or something like that. And I think there's a lot of opportunity there. So what I see happening, starting very soon, is basically an explosion of new Spielbergs. AI will be able to help the people who have the ideas but one or more bottlenecks. Maybe they're in sub-Saharan Africa, maybe they're in some apartment in Scotland, and they just don't have access to this stuff, and they didn't get formally trained in these different skills. So we're talking about an explosion of books, film, poetry. Artists will just be able to talk to their AI and say, hey, I have this thing, here's the story, and go off into extreme detail about the story, and the AI is basically taking notes. You combine that with the ability to generate art and the ability to generate video like we're seeing, and pretty soon the AI is going to be able to come back and say, oh, do you mean like this? And they're like, no, no, more of this influence, more of that influence, and basically have this interactive conversation with an AI which results in an art output that I think is going to rival the best people in the world. And just imagine that for 8 billion people on the planet. So the next one here is hiring. I think hiring is going to change significantly, and there are a number of pitfalls here that we need to watch out for. But we've known for a long time that interviews are not super predictive of performance, so I think it's absolutely ripe for disruption.
And what I see happening here, and I've already seen multiple companies working on this, is basically AI goes out and does a full collection of everything about this candidate. It's looking at all their blog posts, the papers they've written, their GitHub, their commentary, basically anything they've put anywhere in public that's available to the AI to read. And it's turning that into a profile, a psychological profile or a work-history profile. And then it can also do interviews with the person, or will be able to very soon. So first, it knows everything you've ever written online that's public, and then it can have a conversation with you about the pertinent skills the company is looking for. And it's also going to be doing ML magic to say, okay, here are the things we actually know are predictive of success in this job. So this is going to be really powerful, because it's then going to result in a score and a recommendation of, yes, we should hire this person, or no, we shouldn't. Now, one thing to mention here is that anytime you have ML and AI involved, it's a little bit scary to have a thumbs up or thumbs down or a score when you're not exactly sure what the criteria are, and we absolutely need to protect against bias in this case. So transparent AI is going to really help here. The next thing I see happening is a surging requirement to be visible in the world, to be resilient in this world that's coming up. And authenticity essentially becomes signal. So ideas and personality become the things that people care about, and they become more important than medium, more important than, you know, being on a TV show. As we're already seeing, CNN is not really competing with YouTube anymore. So ideas and personality become the things that people care about.
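A minimal sketch of that profile-and-score hiring flow might look like the following. Everything here, the skill weights, the keyword matching, and the scoring rule, is a made-up illustration rather than any real vetting system, and a real version would need exactly the bias protections and transparency the talk calls for.

```python
import re
from collections import Counter

# Hypothetical weights for the skills a role cares about.
ROLE_SKILLS = {"python": 3, "security": 2, "writing": 1}

def build_profile(documents):
    """Collapse a candidate's public writing into keyword counts."""
    words = Counter()
    for doc in documents:
        words.update(re.findall(r"[a-z]+", doc.lower()))
    return words

def score(profile, role_skills):
    """Sum the weight of every role skill the profile shows evidence of."""
    return sum(w for skill, w in role_skills.items() if profile[skill] > 0)

docs = ["I write about Python and security.", "Notes on technical writing."]
profile = build_profile(docs)
print(score(profile, ROLE_SKILLS))  # 6: evidence of all three skills
```

Any real pipeline would of course use far richer signals than keyword counts, which is exactly why the scoring criteria need to be transparent and auditable.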
And it's more important than medium, and it's becoming more important because creator content is going to become more direct. It's going to be like an influencer creates something and it goes directly to the consumer, and they're not even really going to care what medium it was on. And that's going to be even more true when it's actually a person's AI that's going to collect the stuff and bring it back to the principal. So the prediction here is that AI will work for people to find the best content for them to read. So, for example, my AI will be going out while I'm sleeping to collect the coolest new ideas, the coolest new stories, the coolest new opinions. And in order for it to find those for a given person, say somebody who's listening to this, you will have had to put that out there. You have to get your ideas out into the world. So basically, if you're not visible, you're vulnerable. And this brings me to something I'm quite concerned about with AI, which is that we already have a major divide, in terms of overall success in life, around being a voracious reader or not. There's a decent amount of data on this, which I don't have here but can include later, and there are lots of anecdotes around this as well. Charlie Munger has this great quote: "In my whole life I have known no wise people who didn't read all the time. None. Zero." And I think that really resonates. I think it's a powerful concept, and I'm worried that AI is going to be a massive magnifier of this. It's going to be just like reading, but much, much worse. And here's the problem. We're basically going to have two groups, right? One group that has, say, 10,000 AI assistants working for them. They learn the AI, they learn the programming, they deep-dive into this whole world, and they hire all these very, very cheap AIs to do all these different things for them. So they're constantly modifying their finances.
They're constantly finding better books. They're constantly summarizing books for them and basically turning this person they're working for into a super-knowledgeable, almost superhuman brain. And some top percentage of the world is going to function in this way, whether that's 1% or 5% or 10%, whatever you want to call it. But another group is going to essentially use it to find entertainment, or they're going to be influenced by potentially malicious actors through that medium, but they're not going to be doing this thing that the first group is doing, and this is going to separate groups more than ever. So with that, I want to talk about what I think we can do to make ourselves resilient to this. I want to talk about a concept called Human 3.0. And my first recommendation here is that you essentially want to combine all this together to understand your full authentic self, to know what you want, and to share it with the world. We used to essentially be our resumes, and we kind of still are now. I consider what we're in now to be Human 2.0, where your value as a human is what you can put in a CV or a resume. And I think the value of this Human 3.0 concept is that you are your full-spectrum self. Your authenticity, your personality, is no longer a thing that you never include in your CV; it's now the thing that is able to cut through and make people want to listen. So if you don't broadcast your full self, you won't be able to differentiate. And this starts with knowing who you really are. What is your mission? What are your goals? Really importantly, what do you think are the most important problems in the world, and what are your metrics? How do you know if you're doing well as a person or not? If you can't define yourself in this way, you're going to be more replaceable by AI. So the next piece of this is getting in the habit. And this goes back to the broadcasting of self.
This is essentially how you do it. You've got to get in the habit of being able to articulate yourself clearly. One way to think about this is a great quote that I found from Paul Graham, but which is actually from Leslie Lamport: "If you're thinking without writing, you only think you're thinking." I really, really love this. And the way I encapsulate it is: you've got to get good at thinking, writing, and presenting. Because it's no longer just for special people, for authors; this is the only way you're going to be able to become visible and stay visible to the world, especially when AI is the one crawling and looking for signal and content. So you've got to get really good at articulating what you're about, what you're working on, and why you're working on those things. And these are the primary skills I recommend you cultivate. This is really an encapsulation of the whole thing: clear articulation of self; being insatiably curious, which is another big thing Charlie Munger talks about and the reason he reads so much; continuous learning; deep integration of AI to accelerate that learning; discipline and passion toward chasing your curiosity; focusing on useful work; and staying focused on the problems you're solving, using those problems as your guiding light. And in terms of practical skills, being good at programming. You don't have to be an expert programmer; you just have to know the concepts and be able to interact with AI, because AI does a lot of programming, but you still have to be able to direct it and understand the concepts. So I would say programming is important, plus AI tool integration. You want to learn AI and, most importantly, thinking, writing, and presenting, and writing and presenting really are just ways to ensure that your thinking is clear.
And so, bringing it all together, this is what I recommend we do tactically and immediately. Capture yourself in terms of purpose, goals, metrics, mission, that sort of thing. Get really good at explaining your encapsulation of that to other people, so that's writing, speaking, presenting. Learn to program and learn to use AI. Read, probably 2 to 4 hours a day, of high-quality content. Build your presence online, so get your website going and essentially talk about what you're doing. A lot of people say, look, I have nothing to say because I'm still learning. It doesn't matter: be visible, learn in public. And that will allow you to do the next piece, which is to connect with others who are doing the same. Because you can have ten different people talking about the same stuff, but if they are being their full selves, those ten people will look and sound very different and they will find different audiences. And most importantly, you want to do this authentically, as your full self, and share it with the world. Thanks for your time.

Well, thank you so much, Daniel. That was incredibly profound and a lot deeper than I was expecting, which was amazing. I have a few questions myself, but I think I'll start by handing it over to my colleague Daniel, who is WIPO's information security engineer. He will cover some questions that were pre-submitted by people who registered for this talk. So Daniel Jeremiah, the floor is now yours.

All right. Thank you so much for the eye-opening presentation. I don't know about you, Olivia, but I know what I'm going to be busy with for the next couple of years.

Yeah.

We have received many, many questions. Colleagues, thank you for your interest. And I believe that several of them could be grouped under different headings, different buckets: AI and our jobs, AI integration at WIPO, and to what level we can trust AI. So here's the first one: will AI replace human labor in the near future?

Yeah. I think there are a couple of interesting caveats or pivot points inside the question. "Replace" implies a one or a zero, that it's just yes, it's replaced, or no, it's not. It's going to be gradual. And "near future" is like that as well. What does that mean? Does it mean one year? Does it mean three years? The way I would answer is to say: will AI replace human jobs? The answer is yes. The question is how much, how quickly, and what types of jobs. And I would say that, in general, what it's going to replace first is things that can be done easily, that don't have human authenticity and human uniqueness inside the work. And this is why this whole presentation is oriented around getting that into what you do, so that it's harder to replace. So I think there are millions of jobs that will be replaced in the next 5 to 10 years, and many, many more over the course of ten years versus five. The good news is, as we talked about in the presentation, it's also going to create lots more jobs. In fact, I believe it's going to kind of shut down the Human 2.0 economy, where we're basically doing raw tasks, and move us toward this Human 3.0 economy, which is more human-to-human interaction. So the answer is yes, it will replace a lot of jobs, but I don't think that's all bad news.

Well, let's hope for the enhancement of the.

Yes.

Thanks. All right, thank you. The next one would be: what advice would you have for WIPO and other organizations looking to integrate AI into our current work? Because, as you know, we have several initiatives and ideas and explorations, but the substance of the work is not yet touched.

Yeah, really good question. I think the answer is to do what I was describing the AI consultancies are going to do to your company. You essentially want to go to a very large whiteboard and document your company, capture it as a giant graph, and break out every single little piece. Okay, the customer sends this thing in, it does the following, this piece is done by Sarah's team, that gets sent over to Chris's team, and here's what the teams look like, here's exactly what they do, and here's how many people are doing it. You basically want to break that out and understand your business perfectly, the way it is today, before AI. And then you want to say, okay, now what parts of this look inefficient? What parts of this should we not be doing at all? That's the first question AI is going to ask: what can we cut? What are the extra layers? So you want to think of the AI as basically a very harsh lens that's going to look at everything you're doing. The very first step is understanding exactly what you are doing, and that's going to make it clear where you should apply AI.
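The whiteboard exercise described above can be sketched as data: the company as a graph of process steps, with a crude filter for where to look first. All step names, teams, and hour figures here are hypothetical illustrations, not a real methodology.

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    """One node in the company graph: a process step someone runs today."""
    name: str
    team: str
    hours_per_week: float          # rough human effort this step consumes
    feeds: list = field(default_factory=list)  # names of downstream steps

def automation_candidates(steps, min_hours=10.0):
    """Flag the heaviest manual steps as the first places to examine."""
    heavy = [s for s in steps if s.hours_per_week >= min_hours]
    return sorted(heavy, key=lambda s: s.hours_per_week, reverse=True)

company = [
    Step("customer_request_intake", "Sarah's team", 30, feeds=["triage"]),
    Step("triage", "Sarah's team", 12, feeds=["fulfillment"]),
    Step("fulfillment", "Chris's team", 40, feeds=["invoicing"]),
    Step("invoicing", "Finance", 6),
]

for step in automation_candidates(company):
    print(f"{step.name}: {step.hours_per_week}h/week on {step.team}")
```

The point is not the code but the discipline: once the graph is explicit, "what can we cut?" becomes a query instead of a guess.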

Well, it's concerning, but there are a lot of challenges ahead. So, for the next one, we have a batch of questions regarding security, which is close to my heart. To what level can we trust AI? Is cybersecurity really possible in the face of artificial intelligence? And, mainly for me at least, how do we ensure that the information we receive has come from a good source, and how can we best validate that information?

Yeah, it's a great question. I'm actually not too concerned about it. Not because I trust the raw content coming out of an LLM, that is not a good idea, but because what's going to happen, and is already happening, is that content created by an AI will go through a pipeline of steps depending on the importance of the task. So if the task is to say, no, you do not have this disease, that is such a powerful thing to say that it must be checked in multiple places. So what you're going to have is multiple little AIs doing fact-checking. They're going to be checking Google, they're going to be checking official medical publications, maybe consulting with a human expert for a final check. Imagine a result coming out of an AI being evaluated by 15 other AIs, and they all have to have a green check mark before the answer is actually returned to the user. So I don't think we have to worry about a world in which we just randomly trust AI. That might happen for something that doesn't matter, like "tell me a funny joke," but it won't happen for the things that really matter, like should we turn this nuclear control system on or off, should we raise or lower the firewall. Those sorts of things are going to have lots of extra scrutiny added on.
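A minimal sketch of that pipeline-of-checks idea: an answer only goes out if every independent checker gives its green check mark. The checkers here are stand-in functions with made-up criteria; in a real system each would be its own model, data source, or human reviewer.

```python
def sources_present(answer):
    """Stand-in for a checker that verifies citations exist."""
    return bool(answer.get("sources"))

def confident_enough(answer):
    """Stand-in for a checker enforcing a confidence floor."""
    return answer.get("confidence", 0.0) >= 0.9

def human_signed_off(answer):
    """High-stakes answers get a human in the loop; stubbed as a flag here."""
    return answer.get("human_approved", False)

CHECKERS = [sources_present, confident_enough, human_signed_off]

def release(answer):
    """Return True only if every checker gives a green check mark."""
    return all(check(answer) for check in CHECKERS)

draft = {
    "text": "No sign of the disease.",
    "sources": ["journal-x"],
    "confidence": 0.95,
    "human_approved": True,
}
print(release(draft))
```

The unanimity requirement is the key design choice: one failed check blocks release, which matches the "all green check marks before the answer is returned" framing.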

Right. That's the most interesting application of democracy I've heard recently: agents voting on the answers.

Yes, absolutely. And different kinds, this is really important. You'll have one that's like an academic with one type of scrutiny, and another that's like an industry expert with a different type of scrutiny. They can have different backgrounds, but all of them must say yes for the answer to get an actual sign-off.

Great. So thank you very much. I believe that's all I have.


Thank you. It was amazing, and a lot of lessons for me and, I hope, for our colleagues. So I will hand back to Olivia to wrap up. Thank you.

Perfect. Thanks. Also, Daniel, do you have any questions from your side? Because I've got one or two I'd like to ask, and I think we have a few more minutes.


My questions were about security and this thing of trustworthiness and explainability, but I think I have my answers, so I believe I'm good.

Okay, perfect. I have a quick question about what you mentioned, about how personalities and ideas are most important and that we should be online. But what about AI influencers? What are your thoughts on those?

Yeah, it depends how good they get, and whether they're allowed to continue, because it could be that some governments decide they're too good and they're disrupting human capabilities to actually be creative, at which point they could be shut down. But if they are not shut down and they are really, really good, the bad news is that they might quickly become much, much better than any human. They'll just be more attractive, they will speak more attractively, the ideas will be better, they will be funnier. And the human creators, first of all, have to sleep, so they won't be creating as much, and they might move down in the rankings. The most popular influencers and the most popular shows, everything, might be all AI-created. I do believe that being this creative thing is like the last stand for humanity, and if they take that away, I'm not sure what else we have. So I could honestly see that being the place where regulation happens, to say these must be validated humans to be creators.

Well, yeah, it's interesting. That's a scary thought. And also, along the lines of being out there and being visible online, what are your thoughts when it comes to data privacy risks, for example if I put all this information about me out there, and cybersecurity risks as well?

Yeah, it's a great question. I believe the answer is it won't matter that much, unfortunately or fortunately. I think the way this is going to move is that people are going to give so much data to their personal AI, and because those are largely startup companies, they're all going to get hacked. And rather than just having your financial information, it's going to be essentially like your soul: I had these traumatic experiences, here are all my relationships, it's going to know all your health data. When that gets hacked, it's going to be unbelievably traumatic. But only for the first few people, only for the first year or two, because we're not used to having that level of personal exposure online. Soon we will be, and then we're just going to know that we're all the same. We're all equally flawed, we all have these equally weird things about us. So one possible way for this to go is that it becomes so regular for this to happen that it becomes the new baseline and nobody cares. Yeah.

Well, so, yeah, if everything's out there for everyone to see, then they can't use it as a weapon. That makes sense.

Exactly.

Okay, well, that was it from my side. Thank you very much, Daniel, this was fantastic, and thank you for your time today. To everyone watching, we do have an AI at WIPO Teams channel, so if you'd like to join the community, please do so. You can find out about upcoming AI at WIPO sessions and other UN AI events there, as well as industry news, and you can join general AI discussions. And please feel free to reach out or consult the AI at WIPO intranet page if you'd like to learn more. So thank you again, Daniel Miessler, thank you Daniel Jeremiah, and have a good day, everyone.

Unsupervised Learning is produced and edited by Daniel Miessler on a Neumann U87 Ai microphone using Hindenburg. Intro and outro music is by Zomby, with a Y. And to get the text and links from this episode, sign up for the newsletter version of the show at danielmiessler.com/newsletter. We'll see you next time.