In this episode, Daniel Miessler explores how AI can transform our understanding of the present and create actionable paths for a better future.
He talks about:
The Current State, Desired State, and Transition in AI Applications:
How AI frameworks can analyze the current state, define a desired state, and propose action plans to address challenges in education, climate, health, and beyond.
The Infrastructure and Scale of AI:
Why we’re only at the beginning of building the AI infrastructure required for future demands, from GPUs and data centers to startups pushing the boundaries of what’s possible.
The Role of AI in Human and Organizational Development:
How AI can revolutionize personal lives, enhance businesses, and solve societal issues by gathering and analyzing massive amounts of contextual data to provide actionable insights.
Subscribe to the newsletter at:
https://danielmiessler.com/subscribe
Join the UL community at:
https://danielmiessler.com/upgrade
Follow on X:
https://twitter.com/danielmiessler
Follow on LinkedIn:
https://www.linkedin.com/in/danielmiessler
See you in the next one!
Chapters:
0:00 - Introduction to Unsupervised Learning Podcast
1:10 - Concept: Predicting AI Infrastructure Needs
3:45 - The Challenge of Predicting Technology vs Human Desires
6:20 - Exploring AI Infrastructure Metrics (GPUs, Data Centers, Startups)
8:55 - Philosophical Insight: Current State vs Desired State
12:15 - AI’s Role in Learning from the Past and Anticipating the Future
14:50 - Addressing Global Issues with AI (Education, Poverty, Climate)
18:30 - Transitioning from Current State to Desired State
22:05 - Context Gathering: Granularity and Technology Limitations
25:40 - AI's Impact on Individual and Family Contexts
29:10 - AI’s Potential in Business: Granularity and Cost
32:50 - Vision of Life OS and Personalized Assistance
36:15 - AI in Society: Predicting and Preventing Problems
40:00 - Infinite Context and the Scaling of AI Capabilities
44:30 - Predictions on AI Context Size and Infrastructure Demand
48:20 - The Importance of Understanding the Current State
52:10 - Conclusion
Welcome to Unsupervised Learning, a security, AI, and meaning-focused podcast that looks at how best to thrive as humans in a post-AI world. It combines original ideas, analysis, and mental models to bring not just the news, but why it matters and how to respond.

Hey, what's up? It's Daniel with Unsupervised Learning. I'm building AI to upgrade humans, and today I want to talk about a cool idea: trying to figure out how to predict how much AI infrastructure we actually need and how far along that path we are. And I want to say something about predictions real quick, which I've talked about before in other videos. I don't think it's possible to predict tech itself — the way tech happens, who wins, who loses, the timelines. That stuff is basically impossible to predict, so I wouldn't want you to think I'm trying to do that, because I think that's foolish. What I believe we can predict about tech are the human things, the things related to human desires. So the question is not whether we can predict tech itself, but whether we can predict what we want from tech. And I think that's a powerful way to see what might be happening and what could unfold over the next one to three years.

So I want to specifically look at how far along we are in building the infrastructure that we need for AI. Full disclosure: I own a bunch of Nvidia, I think I've got some TSMC, I've got stock related to AI. I'm all in on AI and have been since late 2022. I went independent to work in this field after being in security for around 25 years. So I'm very, I guess, religious about the whole thing, which you probably already know if you've seen any of my videos. I don't think that affects this particular argument — I'm sure it does in some sense, because that's how bias works — but I think the argument I'm about to make stands independent of that.

Essentially what I'm trying to figure out is: how much infrastructure do we need? How many GPUs do we need? How many data centers? How many AI companies and startups? Some of that depends on the actual technical implementations, which we can't predict, so we don't need to go too deep into that. But I'm trying to figure out: are we at, like, 13% of how much AI we need? A lot of people think we're at 85%. They're like, yeah, that was pretty much it in 2023 and 2024, and now it's mostly hype and it's just going to die down — they think it's basically done, it was a cycle, whatever. Other people are like, no, we're just getting started, we're at like 3%, but it's going to grow over the next five years or whatever. And I'll just spoil what I think the answer is: I think we're at something like 0.000 — I don't know how many zeros, maybe eight zeros and a one — percent of where we need to be, or more specifically, where humans will demand that we get.

So that's the point I want to make here, and the way I want to get there is by asking a question: what are we actually trying to do with AI? I want to walk through a few ideas. One cool idea is learning from the past and anticipating the future. We're sitting here in the current present moment, and we could use AI to look into the past and find things that went wrong, for the purpose of adjusting our current behavior.
On the other side, we can try to anticipate, based on everything we currently know, what's going to happen in the future. For what reason? The same exact reason: to adjust our current behavior. I think this is an extremely powerful paradigm. And so the question is: how much more of this do we need? How bad of a job are we currently doing at this, and how much more could we gain if we were doing it much, much better? I won't even give my answer; it's kind of rhetorical.

All right, next idea. I would say that for any given situation, for any given operation or thing you're trying to do, there is a current state, there is a desired state, and there is a delta between those two. I think this is a really powerful concept, because it automatically demands a question: how do we get from the current state to the desired state? So I ask you: how many situations do we have worldwide, human-civilization-wide, where we have a current situation we're not really happy with, and we either need help coming up with a desired state, or we already have one clearly in mind, or we have thoughts about a desired state but could use some help articulating it — which AI could potentially help us do? Either way, we want to get to that desired state. So how are we doing on understanding our current state, articulating our desired state, and figuring out the delta between the two? Think education. Poverty. Climate. Health. Aging. Politics. War. Think of the whole scope of human problems over the course of history. How are we doing on this? How much potential is there to do it better? That's the second one.

And the second one really brings up the third one: what are the actions we should take to make the transition? This is incredibly powerful. Can we use AI to understand the current state? Can we use AI to articulate, structure, build, and communicate the desired state? And most importantly, what's the plan — how do we transition from one to the other? How much do we need this middle piece? How good of a job are we as humans doing at executing on it? I would argue not very well at all, and we haven't been for centuries. Well, you could argue it could go way worse. Totally. It absolutely could go way worse, and we're not all dead yet, so that's good. But if you look at the course of history, it's just folly after folly, and we're in a whole bunch of it right now, and it's a bad situation to be in. So I would argue that this piece in the middle — the yellow piece on the diagram, the transition — has extraordinary potential. So let's break these three down even more.
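Before breaking them down, here's a minimal Python sketch of the current-state / desired-state / transition framing. It's purely illustrative: the class names, example metrics, and the naive gap-to-action logic are all made up for this sketch, not taken from the episode or from any real system.

```python
from dataclasses import dataclass, field


@dataclass
class State:
    """A snapshot of some entity we care about (all metrics hypothetical)."""
    name: str
    metrics: dict[str, float] = field(default_factory=dict)


def delta(current: State, desired: State) -> dict[str, float]:
    """The gap between where we are and where we want to be, per metric."""
    return {
        k: desired.metrics[k] - current.metrics.get(k, 0.0)
        for k in desired.metrics
    }


def plan(gaps: dict[str, float]) -> list[str]:
    """Turn gaps into a naive, ordered action list (largest gap first)."""
    ordered = sorted(gaps.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return [f"Close the gap on '{m}' by {g:+.2f}" for m, g in ordered if g]


# Example: a person's made-up current vs. desired state
current = State("me", {"sleep_hours": 5.5, "close_friends": 1, "savings_rate": 0.02})
desired = State("me", {"sleep_hours": 8.0, "close_friends": 5, "savings_rate": 0.15})

for action in plan(delta(current, desired)):
    print(action)
```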
So when you look at the current state — okay, let's understand the current state — how granular are we talking about? You could go all the way down to, say, a coffee cup with some amount of hot liquid inside it, and ask: what is the current state of the liquid inside this coffee cup? What's the current state of the coffee? You could ask lots of different people about this. Ask a philosopher and it's, oh, half full, half empty. Ask a chemist or a physicist and it's, well, what do you want to know? How many atoms? How excited are they? What's the location of all the electrons? Certain parts of this question, for a given object, are just not knowable — like the state of every atom and subatomic particle in a pen, or a cup of coffee, or a human. If you ask how someone is doing, well, what do you mean? I can't give you a full rundown of every atom in their body.

So the question is: what level of depth do we need to be able to answer that question? And that comes down to how much telemetry we can get, how much data we can get coming off of the situation. I've got an Oura ring on, I've got an Apple Watch on — this is telemetry coming off of me. So very soon, or actually now in my opinion, you'll be able to ask an AI how you're doing. Apple is building Life OS, essentially, in case you didn't know that. So you'll be able to ask, how am I doing? And it will know: okay, are we talking about health, finances, education — what are you talking about? But you have to have that telemetry coming in, and you have to know what the limits are: for a given object type, which questions and which kinds of metrics are even attainable. Can we get someone's current heart rate? Not 15 years ago — not automatically, not every moment of the day. We couldn't do that before; we can now. That's a technology change that enabled that level of metric. And Apple is currently gathering things like mood as well — how are you feeling today? — and associating it with other things, like how much you've exercised. So you can see where this is all going with Life OS.

Now, that's for a person. Let's talk about a family. What's the state of the family? What does that even mean? Do we need all of those metrics for each individual person? Sure, but you also want to know the dynamics between the people, how the family as a whole is doing, the quality of life, what the upward trajectory looks like. That's the state of a family. Now let's talk about businesses. What is the state of a company? What are we talking about — its IT infrastructure, with thousands of Kubernetes pods running? What's the current state of a Kubernetes pod right now, and also right now, and also right now? How often are we polling these things, both for a human and for a company? A lot of these questions come down to cost: sure, we could poll the current state of every single Kubernetes pod at, say, Google every single second, but that might cost billions of dollars, because it's just expensive, and where would you store it all? So you've got three knobs: how granular, what's the update frequency, and how expensive is that to gather based on the current tech.
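Those three knobs — granularity, update frequency, and cost — are easy to put rough numbers on. Here's a back-of-the-envelope sketch; the pod count, payload size, and storage price are made-up placeholders, not figures for Google or anyone else, and real fleets have far more entities and far richer state per entity.

```python
def telemetry_bytes_per_day(entities: int, poll_hz: float, bytes_per_sample: int) -> float:
    """Raw bytes generated per day by polling every entity at the given rate."""
    samples_per_day = poll_hz * 86_400          # seconds in a day
    return entities * samples_per_day * bytes_per_sample


# Hypothetical numbers: 1 million pods, polled once per second, 2 KB of state each
raw = telemetry_bytes_per_day(entities=1_000_000, poll_hz=1.0, bytes_per_sample=2_000)
print(f"~{raw / 1e12:.0f} TB of raw state per day")        # ~173 TB/day

# Assumed storage price of $0.02 per GB-month, keeping 30 days of history
monthly_cost = (raw * 30 / 1e9) * 0.02
print(f"~${monthly_cost:,.0f}/month just to retain 30 days of it")
```

Turn up any of the three knobs — more entities, higher frequency, richer samples — and the number scales linearly with each, which is the ratcheting effect described here.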
And this is tremendously interesting and tremendously powerful, especially for something like a human. The most important driver for all of this AI stuff, number one, is going to be business, no question — that's where the money is. But ultimately the bigger thing is going to be humans using this to feel better, be better, and improve themselves. That's the entire game here. What are people struggling with right now? Job happiness. Can't get a job. Loneliness. A lack of meaning in their lives.

Okay, so how do you solve the current-state-to-desired-state problem for somebody who's lonely? I've got another example of that coming up, but think about what it takes to solve these problems — to go from current state to desired state for a human, for a company — and what level of context you need to be able to answer those sorts of questions. I would argue, just like with the previous ones, that we are nowhere near gathering enough context on the current state of anything. Of anything. A park bench, a tree at the park, the state of a human that you care about — we don't have nearly enough telemetry on that. We want to change a business? We have no idea what's going on in businesses. Most people who work there, and who actually run the business, have no idea what's going on in the business. They don't know the level of mood and happiness, the overall state, how much waste is happening at any given time.

The bottom line here is that things like context size and RAG are absolutely in their infancy, because they have to be, given the sheer scope of the problem they're being signed up to solve. Sam Altman talked about this a while back: at some point, context is basically going to be infinite. Obviously nothing is infinite, so that really means functionally infinite. There's going to be this competition between context size going into models, versus RAG going into models, versus whatever comes out later that maybe makes those less important. But either way, what you have to do is get this current state — forget the tech, it's about a thing that matters to humans — into the brain of the AI, so that all of its understanding of the world can be applied to the larger situation.

Now think about this as a human. Imagine somebody who's 40 years old and has had this amazing life: hardships, people died, they fought in wars, they traveled to Costa Rica, they did ayahuasca, they learned all these things, they know thousands of people, and they're trying to craft an even better life. And you want to ask an AI: tell me what I can work on. Tell me about myself. What do you suggest for me? I'm looking at this career or that career, I'm looking at moving here, I'm looking at marrying this girl or not. What should I do? What does the AI need to know about that person to make that decision, or to offer advice on it? Ideally, it has everything. It's got the genome. It's got all your medical records growing up. Ideally it has journal entries. Ideally, it's been recording you at a deep level of state-gathering for your entire life. So imagine this is 200 years from now and this person is 40 years old, but every moment of their life has been captured. The very moment they ask a question — what should I do, which career, which country do I move to, which guy do I marry — the entirety of their life up to that moment becomes the context for answering it. This is the key point of this entire thing: think of the AI infrastructure that's required to take the entire context of the moment, right up to the point the question is asked, and jam it into that AI's mind. But don't just imagine it for this 40-year-old guy in Costa Rica.
Imagine it for a giant business with 10,000 employees that's been operating for 120 years. How much context, at a deep level, can we grab from that entire 120 years and put into the question, what should I do now? That is big. That is massive. And here's the crazy part: it could be summarized into a ten-page text document, and that would be good. Summarize it into a 200-page report and it would be really good. It could also be terabytes of data, petabytes of data. How quickly can you get petabytes of data into an AI's brain so it can snap back an answer in a really, really powerful way — for a business, for a country, for a city, for a human, for a family? Think of the infrastructure that's required to do that, and think about the fact that this is what humans want. This is what humans will demand. This is why I'm saying forget the tech — the tech does not matter. What matters is what humans want. Humans want the most amazing answer ever to the question: do I do this merger with this company? Do I hire this person as my CFO? And the amount of data needed to get a better and better answer to that question just keeps going up, along with the capabilities of the tech.

That's what's so powerful about this paradigm: there's no foreseeable end to how much better it gets when you give it more. And that's why — okay, 200K tokens of context right now. That's a lot, right? No, it's not a lot. It's not a lot, because watch this. The thing a human wants to do is say, hey, I've got an idea for a concept for this really cool miniseries that's going to be on Netflix. It's got anime, it's got Harry Potter stuff, but it's in this stylized thing, kind of like The Last Samurai, or what was that recent one — Shogun. You give it all these things: I want this art, I want the concept of Star Wars but it's got to be Harry Potter, but I also want deep romance, and I want it to be really, really gritty, and you definitely have to be over 21 to even watch this thing, and there's violence and there's sex and all these different things. And you hit it with that, and you're like, yeah, but I also want it to be kind of like Tolstoy — I really like Tolstoy. And it's going to go write and build you a complete movie, a complete series, and do all these different things with agents and all the stuff that's coming out. That's kind of inevitable, right? How much does it need to know to be able to do that perfectly? It's got to be able to read into everything you're saying, and it's got to have all the capabilities of actually doing the video and the art creation and all that, but the more it knows, the better this thing gets. And that doesn't stop any time soon. So this is needed for business, this is needed for creativity, this is needed for human thriving at an individual and a societal level. There is no end to how much we will demand this thing.
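To put the "200K tokens is not a lot" point in rough numbers, here's a tiny back-of-the-envelope calculation. The roughly-four-bytes-per-token figure is a common approximation for English text, and the petabyte is just the illustrative number from above; both are assumptions for the sketch.

```python
BYTES_PER_TOKEN = 4            # rough average for English text (an assumption)
CONTEXT_WINDOW = 200_000       # tokens, roughly a large context window today

petabyte = 10 ** 15            # bytes
tokens_in_a_petabyte = petabyte / BYTES_PER_TOKEN

print(f"Tokens in 1 PB of text:    ~{tokens_in_a_petabyte:.1e}")                   # ~2.5e+14
print(f"200K-token windows needed: ~{tokens_in_a_petabyte / CONTEXT_WINDOW:.1e}")  # ~1.2e+09
```

On those rough assumptions, a petabyte of accumulated context is on the order of a billion of today's context windows, which is the gap between what exists and what's being asked for here.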
I'll give you another example, because of the assassination of the head of UnitedHealthcare. I talked about this in my book, The Real Internet of Things, which came out in 2016, and I've recently done a whole series modernizing it — it's got video and everything, so you should check it out. The point is, what I talked about in that book was walking down the street and having sensors all around you. This is the vision I'm seeing for everything. You have sensors all around you, and the current state of the world — this thing on the left here — is the thing your AI is always monitoring for you. Your DA, your digital assistant. It can see behind you. It can see above you. Why is it seeing above you? This is dystopian, but this is real: because of drones, that's why. Why is it seeing behind you? Because someone might walk up behind you with a big backpack on and a mask, pull out a gun, and shoot you. So what your AI has to be doing is watching everything, all the time. And guess what — if you're a VIP, or you just have some money, you're going to have little drones flying around looking at everything, looking down. Hey, why is that car moving like that? Hey, move to the left, cross the street, I don't want you over here. Your little earpiece is going to be giving you these little pieces of guidance. It hears your stomach rumble and it's like, hey, there's Thai food up there, it's your favorite restaurant, I already pinged Jeremiah, he's got your favorite table, and I turned on table tennis, so when you get there, table tennis is on.

It is monitoring the state of the world, changing the state of that world, and preemptively watching to make sure it doesn't turn into an undesired state for you — 24/7, like every second. Boom, boom, boom. It's reading every API. It's looking at every person, pulling up their daemon to see if there's information on them — should we go introduce ourselves? This is the parsing of the current state of the world around us at all times, and once again it only gets better and better with more tech. You'll be able to do this a little bit in 2025, 2026, 2027, and that'll be pretty good. But wait till you see it in five years. Wait till you see it in ten years — it will be absolute sci-fi stuff. And again, you don't need to think about the tech. You only need to think about the fact that humans wish they could see around themselves at all times. They wish they had that tech on all of their kids at all times, so when some white kid jumps out of a pickup in front of the school and runs toward the building — and he looks like he's over 18, probably 19 or 20, and he's got a backpack — suddenly the earpiece in your kid's ear, and in the ear of the principal and everyone else: boom, there's data happening. Why? Because there are drones flying over. This is a monitored state of the world that AI parses and hands to the people who care about that state.

And going back to the first point, there is so much you can gather. Everything is going to have an API; you're going to be able to pull all this data. But okay, why am I pulling gigabytes of data all the time? Can I even process it? The answer is no, you can't — which means we won't pull it yet. But that will push us to be able to process it. Oh, now we can? Now we'll pull more data. This just keeps ratcheting up and ratcheting up, until you eventually have something like Neuralink and you just kind of know the state the same way you see color. You will know the state of danger. And I believe all of this is inevitable, but it's so far in the future that you don't need to worry about it.
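That always-watching loop has a simple structural shape, even though the real thing would be wildly more complex. Here's a tiny sketch of it; every sensor, threshold, and alert in it is invented purely for illustration.

```python
import random
import time


def read_sensors() -> dict[str, float]:
    """Stand-in for pulling telemetry from cameras, drones, wearables, and APIs."""
    return {
        "closest_person_behind_m": random.uniform(0.5, 50),   # hypothetical proximity feed
        "heart_rate_bpm": random.uniform(55, 125),            # hypothetical wearable feed
    }


def evaluate(state: dict[str, float]) -> list[str]:
    """Compare the observed state against made-up desired-state thresholds."""
    alerts = []
    if state["closest_person_behind_m"] < 3:
        alerts.append("Someone is very close behind you.")
    if state["heart_rate_bpm"] > 115:
        alerts.append("Heart rate is spiking; consider slowing down.")
    return alerts


if __name__ == "__main__":
    for _ in range(5):                       # the real loop never stops
        for alert in evaluate(read_sensors()):
            print("DA:", alert)
        time.sleep(1)                        # polling interval: the frequency knob again
```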
But what I'm trying to do is show you that it's obvious that this push is coming from humans, because that's what we demand. So that's kind of where we're going. How it will happen, and who will do it? Who knows. Nobody knows. It could be a giant single corporation that does everything, or it could be a million different startups. It's impossible to predict. The point is, we will demand this, and we will demand that it gets better and better. And the deeper the granularity goes, the more often updates are demanded, and the more the cost comes down, the more of it we will have. This thing just goes crazy.

Okay, now we talk about the desired state, and now we talk about how we get from here to there. What's the planning? I'm reading Good to Great right now — how do we take this business from good to great? What are the changes we make? That's also an AI piece. All of these combined are going to push this thing from us barely starting this year, last year, the year before, to — it's not a hockey stick, it's straight up into the sky — in how much context we need, how much memory we need, and how much processing we need. It could be that new processors come out and we can do 100,000 times as much processing with 100,000 times less effort and resources. Doesn't matter. We'll just ask for 100,000 times more. We'll be like, oh cool, so you can pull 40,000 metrics on me as a human every second? Cool — can you get the state of every molecule in my body? And they'll be like, no, that's impossible. Cool, someone will go work on it, and suddenly the current hardware is not good enough, because we could make better predictions if we knew the state of every molecule in the body. And guess what? It just keeps going. It just keeps going. That's the point of this.

So these are the three basic pieces: current state, desired state, and the actions, processes, and SOPs — SOP being a standard operating procedure. I think this is a really, really powerful concept, and I think these three combined are going to be like a framework for making improvements to things. I did a piece in early 2023, around March, called SPQA: State, Policy, Questions, and Actions. State is the state of the world, exactly like this. Policy is the desired state of your company, or your life, or whatever. Questions is what we're asking of this AI body of knowledge. And Actions is what we do as a result of learning the answers. I think that's a really powerful structure for thinking about how to improve anything, and it's all based on the stuff we've been talking about.
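Here's one way the SPQA structure could be laid out as data, using the lonely-human example that comes up below. The field names and sample entries are made up — the original SPQA piece describes this in prose, so this is just one possible encoding, not the actual architecture.

```python
from dataclasses import dataclass, field


@dataclass
class SPQA:
    state: str                                            # what is currently true
    policy: str                                           # the desired state we're steering toward
    questions: list[str] = field(default_factory=list)    # what we ask the AI, given that context
    actions: list[str] = field(default_factory=list)      # what we do once we have the answers


lonely_to_connected = SPQA(
    state="Works remotely, few local friends, mood telemetry trending down for months",
    policy="Regular in-person connection and a felt sense of belonging",
    questions=[
        "Which existing relationships are worth reinvesting in first?",
        "What recurring local activities match this person's interests and schedule?",
    ],
    actions=[
        "Schedule one call with an old friend this week",
        "Join one weekly in-person group and attend at least twice before judging it",
    ],
)

print(lonely_to_connected)
```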
So, some examples of this. A lonely human wants to be a happy human, so the recommendation is going to be to connect — and all of these will be extremely granular, with lots of different recommendations going all the way down, all of it managed by a DA, a digital-assistant AI of some sort. You've got a dying business and you wish you had a thriving business that was actually making money — what are you going to do? You're going to increase revenue by doing the following things. This flow — understand the current thing, understand where you want to be, and have AI figure out the stuff in the middle — at first is going to be recommending things to us, as it does now. But very soon, and it's already starting, it's going to be: do you want me to do that for you?

And the businesses are going to be like, yes, absolutely, because that means I don't have to hire anyone — in fact, it means I can get rid of a lot of people, because you can just do that for me.

So what predictions can we make based on this? Of the three, I think the state piece is the most important, just because you can't really do anything unless you understand what's currently happening. It's hard to talk about improvement if you don't know what's wrong. The state of a human, the state of a company, the state of a family, a cup of coffee — we've talked about this: infinitely complex, changing constantly. And that's why this stuff is going to scale so crazily, in my opinion. Who knows if Nvidia is going to win — I think they'll probably win for at least a couple of years, but I have no idea; they could crash tomorrow or go to the stratosphere in six months. No idea. So this brings us to the question: how much context size do we need? How many tokens do we need to be able to put into context? How big are RAG systems going to get? How many GPUs do we need? How many startups do we need in the AI space? How much AI do we actually need? That is ultimately what we're trying to answer here. And my answer is: enough to constantly poll the state of everything we care about, and then take actions to fix it — to get it into the desired state. And I think that is a metric crap-ton of AI. That's why I think we're at, like I said, a decimal point, ten zeros, and a one, relative to where it's actually going.

See you in the next one.

Unsupervised Learning is produced and edited by Daniel Miessler on a Neumann U87 AI microphone using Hindenburg. Intro and outro music is by Zomby with a Y, and to get the text and links from this episode, sign up for the newsletter version of the show at danielmiessler.com/newsletter. We'll see you next time.