Twilio API Dump, North Korea and Russia, Funny AI Memes, and more…
➡ Check out Vanta and get $1000 off:
vanta.com/unsupervised
Subscribe to the newsletter at:
https://danielmiessler.com/subscribe
Join the UL community at:
https://danielmiessler.com/upgrade
Follow on X:
https://twitter.com/danielmiessler
Follow on LinkedIn:
https://www.linkedin.com/in/danielmiessler
See you in the next one!
Discussed on this episode:
Intro (00:00:01)
Rebranding of Al Qaeda (00:01:06)
Reading Alex Hormozi's Business Books (00:01:14)
Supporting a Struggling Army Veteran (00:01:54)
In-person Sessions with Entrepreneurs (00:02:36)
Real World AI Definitions Resource (00:03:00)
Defining AI (00:04:08)
Machine Learning Definition (00:06:15)
Prompt Engineering (00:08:18)
Retrieval Augmented Generation (00:09:22)
AI Agent Definition (00:10:20)
Chain of Thought (00:11:30)
Prompt Injection vs. Jailbreaking (00:12:23)
Artificial General Intelligence (00:13:24)
Sample Efficient AI (00:14:26)
Levels of AGI (00:15:25)
Artificial Super Intelligence (00:17:21)
The Levels of AI (00:19:40)
Real World AI Definitions (00:21:49)
Cloudflare's New Tool (00:23:51)
Emerging AI Capabilities (00:26:00)
Technology and Business Updates (00:27:54)
Impact of AI on Society (00:32:12)
Ground News Recommendation (00:34:29)
Aphorism of the Week (00:34:29)
Whether you're starting or scaling your company's security program, demonstrating top-notch security practices and establishing trust is more important than ever. Vanta automates compliance for SOC 2, ISO 27001, and more, saving you time and money while helping you build customer trust. Plus, you can streamline security reviews by automating questionnaires and demonstrating your security posture with a customer-facing trust center, all powered by advanced AI. Over 7,000 global companies like Atlassian, Flo Health, and Quora use Vanta to manage risk and prove security in real time. Get $1,000 off Vanta when you go to vanta.com/unsupervised. That's vanta.com/unsupervised for $1,000 off.

Welcome to Unsupervised Learning, a security, AI, and meaning-focused podcast that looks at how best to thrive as humans in a post-AI world. It combines original ideas, analysis, and mental models to bring not just the news, but why it matters and how to respond.

All right, welcome to Unsupervised Learning. This is Daniel. So, saw this funny thing. Some terrorist funding news: Al Qaeda rebrands as AI Qaeda, raises $1 billion. Cool.

I'm rereading Alex Hormozi's two business books. I think Alex Hormozi is the best business content generator right now, at least for my type of business, although he's a gym person, so it's not the same type of business. But he's just extraordinary at user acquisition, all sorts of stuff, and I love the format of his content. He's doing a whole bunch of shorts now; he's got to be making 4 or 5 pieces a day. It's just so much, and it's really good.

My buddy Jacoby is an Army veteran and a really cool security guy, and he's struggling really hard right now. He's got a GoFundMe, so if you could add to that, I would really appreciate it. Fabric made it to the front page of Hacker News.
The next run of Augmented is on July 26th, so go sign up there. Got to spend some in-person time with a couple of fellow entrepreneurs and creators over the weekend. Doing this stuff in person is invaluable. We're talking about wanting to do a digital version of the thing that we built, and I'm reticent to do that because I want it to happen only in person. But we can make it digitally and then just interact with it in person, so that's what we're going to do, because it's too valuable not to be digital. Highly recommend setting up some similar sessions with your friends, because it's really, really powerful. And my friend Monica Verma is running her CISO masterclass soon, and she also has a newsletter, so go check those out.

Just finished a resource I've been working on for over a week called RAID: Real-World AI Definitions. Not Redundant Array of Inexpensive Disks, which is the first thing I think of whenever I hear RAID. Either that or the bug killer. But yeah, Real-World AI Definitions, and I'm actually going to go into it and talk about it.

So I see a lot of definitions for different AI terms out there, and I wanted to put my own thoughts into a long-form version. That's basically why I did this, and it's mostly for me, because I love this quote (I forget the name of the woman; I need to memorize that), but she basically says: if you are thinking without writing, you only think that you're thinking. I think that is such a powerful statement, and this is why I call this a Feynman approach to learning. Feynman said, if you think you know something, try to teach it, and you'll figure out pretty quickly that you probably don't know it as well as you thought you did. I've known this for a very long time: starting back in 1999, I started writing tutorials to myself to explain things.
When I was learning tech, I would literally try to learn it from textbooks, and it wouldn't make sense to me. So I would go write my own explanation and then use that whenever I wanted to remember something. That turned into my tutorials page, which is how I got started on the internet.

So, table of contents: the expanded definitions table, the one-liner definitions table, and then I break out AGI and ASI into two different sections. We're going to start with the one-liner AI definitions table. This is coming off of Hamel Husain's AI Bullshit Knife, which gave violently short definitions for a bunch of AI terms, and this is my expanded version of that. This is made to crystallize in your brain what these things are in very crisp, conversational form, and we'll go into more detail in the later sections.

AI: tech that does cognitive tasks that only humans could do before. The reason I have "that only humans could do before" is because that's a moving bar. Before there was a chess-playing AI, everyone said, well, it can't play chess, it never will be able to play chess, because that's a human thing. And of course AI can do that now. Image identification, being able to tell the difference between a dog and a cat, was a very, very hard problem for a very long time. You have to think in terms of: what is currently a thing that only humans can do that AI cannot do yet? That is the barrier which determines AI or not, because the moment AI crosses over and suddenly becomes able to do that thing, that's what's currently called AI. Now, what's interesting is that as it moves further and further out, people forget that that's AI. Like image recognition: a lot of people are like, that's not AI, that's ML. Well, it's still AI, first of all because machine learning is a subset of AI, but also because it's doing something that only humans could do before.
So I love that definition. Very flexible.

Machine learning: AI that can improve just by seeing more data. We'll talk about this more later on.

Prompting: clearly articulate what you want from an AI.

RAG: provide context to an AI that's too big or expensive to fit in a prompt.

Agent: an AI component that does more than just an LLM call to respond. So it's not just call-and-response (that's just regular AI); an agent does more than that, taking on more of the work.

Chain of thought: tell the AI to walk through its thinking and steps.

Zero-shot: ask an AI to do something without any examples.

Multi-shot: ask an AI to do something and provide multiple examples.

Prompt injection: tricking an AI into doing something bad.

Jailbreaking: bypassing security controls to get full execution ability, or at least as full as possible; it doesn't necessarily need to be full.

AGI: general AI smart enough to replace an $80,000 white-collar worker.

ASI, which is superintelligence: general AI that's smarter and/or more capable than any human.

So I think that's a pretty clean list for thinking about these concepts, and now I want to go into more detail on each.

AI: technology that does cognitive tasks or work that could previously only be done by humans. "Cognitive tasks or work": I might clean that up at some point and just make it "work or cognitive tasks". Anyway, there are lots of different ways to define AI, and this is going to be controversial. This is what I talked about: well yeah, of course AI can do this, but it still can't do that, and it probably never will be able to. And then the narrator voice says: yeah, that happened seven months later. That's why I like the definition; it moves over time.

Machine learning: a subset of AI that enables a system to learn from data alone, rather than needing to be explicitly programmed. This is super exciting to me as a definition. Actually, as a concept.
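A minimal sketch of that "learn from data alone" idea (my own toy example, not any particular library): the program below never changes, only the data it sees does, and its predictions get better as the data gets better.

```python
# Toy "machine learning" sketch (illustration only): the program never changes,
# only the data does. A nearest-centroid classifier learns per-class average
# feature vectors and predicts whichever class's centroid is closest.

def train(examples):
    """Learn a centroid (average feature vector) per label from (features, label) pairs."""
    sums, counts = {}, {}
    for features, label in examples:
        counts[label] = counts.get(label, 0) + 1
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
    return {label: [v / counts[label] for v in acc] for label, acc in sums.items()}

def predict(model, features):
    """Return the label whose centroid is nearest (squared Euclidean distance)."""
    def distance(centroid):
        return sum((a - b) ** 2 for a, b in zip(features, centroid))
    return min(model, key=lambda label: distance(model[label]))

# Feed the same program more data and the centroids (the "knowledge") improve.
model = train([
    ([1.0, 9.0], "cat"), ([2.0, 8.0], "cat"), ([1.5, 7.5], "cat"),
    ([9.0, 1.0], "dog"), ([8.0, 2.0], "dog"), ([7.5, 1.5], "dog"),
])
print(predict(model, [2.0, 7.0]))  # cat
```

The labels and feature values are made up; the point is that nothing in `train` or `predict` is specific to cats or dogs. The behavior comes entirely from the data.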
I can't believe the program is the same, but you change the data and the program gets smarter. That blows my mind, and it's like the most important concept. In fact, if you want to go tighter: learning from data alone, or a technology system that's able to learn from data alone. Lots of different ways you can skin this thing.

Prompt engineering: the art and science of using language, usually text, to get AI to do precisely what you want it to do. A lot of people think prompt engineering is this really special thing; others think it's not important at all. I think it's absolutely an art and a science, because it's more about clear thinking than about text itself, just like writing. Being the best prompt engineer is the same: it comes from deeply understanding the problem and being able to break out your instructions to the AI in a very methodical and clear way. The clearer you can think about problems and describe methodologies, the better you're going to be at prompt engineering. Don't think of it as an AI thing; think of it as a thinking thing and a writing thing.

Retrieval-augmented generation: RAG is the process of taking large quantities of data, which are either too large or too expensive to put in a prompt directly, and making that data usable as vectorized embeddings to the AI at runtime. It's important to understand that RAG is a hack that solves a specific problem, which is that people and companies have way too much data (gigabytes, terabytes, petabytes of it) that they want their AI to be aware of when performing tasks, but AI can only handle small amounts of that data per interaction. That's what RAG is for. So the solution we've come up with is to use embeddings and vector databases to encode relevant information and send it along with the prompts. And it's not really clear what the successor is going to be.
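Here's a toy sketch of that RAG flow. This is entirely my own illustration: real systems use a learned embedding model and a vector database, and the word-overlap scoring below just stands in for vector similarity.

```python
# Toy RAG sketch: "embed" each chunk, retrieve the chunks most relevant to the
# question at runtime, and prepend only those to the prompt. Real systems use
# learned embeddings and a vector DB; word overlap stands in for similarity.
import re

CORPUS = [
    "Acme's refund policy allows returns within 30 days of purchase.",
    "Acme support hours are 9am to 5pm Pacific, Monday through Friday.",
    "Acme was founded in 1999 and is headquartered in Portland.",
]

def embed(text):
    """Stand-in for an embedding model: the set of lowercase words in the text."""
    return set(re.findall(r"[a-z0-9']+", text.lower()))

def retrieve(question, corpus, k=1):
    """Return the k chunks whose 'embeddings' overlap the question the most."""
    q = embed(question)
    ranked = sorted(corpus, key=lambda chunk: len(q & embed(chunk)), reverse=True)
    return ranked[:k]

def build_prompt(question, corpus):
    """Send only the relevant slice of the data along with the prompt."""
    context = "\n".join(retrieve(question, corpus, k=1))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("What is the refund policy?", CORPUS))
```

The key point is the shape of the flow: instead of stuffing the whole corpus into every prompt, you retrieve the few chunks relevant to this question and send only those.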
But one option is to just have massive context windows and put the raw content in there. Then you have cost problems, though, so you would need that to be very cheap for it to work.

Agents: an AI agent is an AI component that interprets instructions and takes on more of the work in a total AI workflow than just the LLM response, for example executing functions, performing data lookups, etc., before passing on results. There are lots of different definitions for agent. I think the trick is to go back to the Latin, agens, from agere, which is to do or to act. I think eventually this is going to be an AI component that has its own mission and goals and can use its resources and capabilities to accomplish them in a self-directed way. That is the true nature of an agent, from the Latin. But for the current version, I think the best way to say it is that an agent is anything that acts on behalf of the mission and performs multiple steps toward the final goal. That's why I'm using this definition for agent.

Chain of thought: a way of interacting with AI in which you don't just say what you want, but you give the steps that you would take to accomplish the task. You're basically teaching it how you think, which is essentially teaching it how to think like a human, and then it uses that template every time it's going to solve the same type of problem. To me it's an example of what we talked about in prompt engineering: clear thinking. It's walking the AI through how you think when you're solving a problem yourself.

Prompt injection: a method of using language, usually text, to get AI to do something it's not supposed to do. A lot of people confuse prompt injection with jailbreaking, and I think the best way to think about this (thanks to Jason Haddix for talking through this with me, because he had a very similar definition) is to say prompt injection is a method. It's an attack avenue.
It's not a payload, whereas jailbreaking is a goal: you're trying to bypass the security on a system. Prompt injection is not bypassing the security on a system; well, it can be a method for doing that. It's a way of tricking the AI into doing something it's not supposed to do. But that could be telling a joke it's not supposed to tell, or showing a picture it's not supposed to show. It doesn't necessarily mean bypassing the security on a system so you can execute commands. It could be data exfil, it could be code execution, it could be image generation. It could be anything. So the easiest, cleanest way, at least in my mind, to think about this: prompt injection is a method; jailbreaking is a goal. All right, we already talked about jailbreaking, so I'm going to pass on that one.

Artificial general intelligence: the ability for an AI system (whether a model, a product, or a larger system) to fully do the job of an average US-based knowledge worker. In other words, it's not just competent at doing a specific thing, which is often called narrow AI, but at many different things, which is why it's called general. So basically it's some combination of sufficiently general and sufficiently competent, and the amounts of each are going to be debated. Generally competent at many cognitive tasks; sample efficient, so it can learn quickly from one example or a low number of examples; and the system should be able to apply new information, concepts, and skills that it learns to additional new problems. So it's not just generally intelligent from its training, but when it learns new things, it's generally intelligent at applying those new things to everything else in the future.
And I like this definition focusing on replacing a worker, because I think this is arguably what matters most to humans in the AI discussion, which is the future of humans in a world of AI. Regular people don't care about processing speed or model size or model weights or any of that crap. What they care about is if and when any of this is going to actually affect them, and that mostly means job replacement, or at least that's the most important thing.

So these are the levels that I see within that job replacement of this average white-collar worker.

The first level is better, but with significant drawbacks. I'm not going to go through all of these, but it's the interface, the language, the confusion, the errors, the flexibility. Basically, real-world work environments are kludgy and lame and they change all the time; it's basically a giant mess. Level one is basically saying: yes, we brought in an AI worker to replace that person, or we didn't hire a person and used an AI worker instead (for growth, if you want to be nice). So that's what happened: we brought in this level-one AGI. And here's the problem: it's kludgy. It's doing the work, but you have to clean up after it. The interface is proprietary and cumbersome. The language you have to use is still kind of AI language; it's almost like you're prompting, not just normally talking to it. It gets confused a decent amount of the time. There are frequent mistakes which need to be fixed by humans. And if you want to change what this thing is doing, it needs to be largely retooled. Not in a major way, but you've got to have a conversation with it, change some documentation, do some stuff. So, going back to the top-level description of this one: better, but with significant drawbacks. It is right at the line of being too much work and not good enough to replace a human, right?
That's the trick: it's just barely better. Barely. And sometimes you can't tell if it's better or not; it might be too much work.

The next level removes a lot of those problems. It's in the middle: decent, competent, but imperfect. Mostly normal workflows, mostly human language with exceptions. It doesn't get confused very often. Fewer mistakes, and the mistakes aren't as bad, and it adjusts decently well to new direction from leaders. So that's the middle level.

And then AGI 3 is full worker replacement. This can fully replace that average $80K (in 2022) white-collar worker in the US. You talk to it just like any other employee: you can text it, you can voice it, you can do video with it. It's just like an employee. It gets confused the same amount or less than the $80,000 employee. And keep in mind, humans have all these problems too, right? Humans also get confused and need to be retasked or whatever, so it's not like we're competing with perfection here. Same or fewer mistakes than an $80,000 employee; employees also make mistakes. And it's just as flexible or more than an $80,000 employee; employees also have trouble completely being retasked. It's like, I thought you were working on that, what are you doing? So this is basically saying it's just as good as them or better. This is the top level: complete worker replacement. I think this is going to take a decent amount of time.

And then the next question is: okay, but what if it's a full replacement for that worker (so it's AGI 3), but with an IQ of 150? Well, that's really powerful. But I'm not working on that axis; I'm only working on how good of a worker it is. And that's the three levels for AGI.

Okay, let's do ASI: a level of general AI that's smarter and more capable than any human that's ever lived. This is interesting for a number of reasons.
One, it's a threshold that sits above AGI, and people don't really agree on that definition. Second, at least as I'm defining it, it has a massive range. And third, it blends with AGI, because AGI really just means general and competent, and ASI is also general and competent. So really, ASI is also AGI; it's just at a level beyond any human. AGI is a replacement for a human worker making $80K in the US in 2022, which is pretty precise, and ASI is smarter and more competent than any human. The model I'm using there is John von Neumann, because unlike Einstein and Newton, he was very broad. He was a generalist, right? He did game theory, physics, computing, lots of other stuff.

But I don't think being smarter is the only thing that matters, because I think it comes down to multiple things. If I look at all the different things I'm putting into ASI, it's the ability to model abstractions of reality at different levels; the ability to take action, to perceive, understand, improve, solve, create, destroy. And then it's: okay, what fields are we innovating in, what problems are we able to solve, and at what scale are we actually working? (I guess I should add "universe" or whatever, but the scale runs all the way from quark to galaxy.)

What's really cool about this is you can now turn these into functional phrases. You could say: an AI capable of curing aging by creating new chemicals that affect DNA. Okay, so we have a problem here, creating new chemicals; that's a net new thing, right? So you're seeing these verbs and nouns here. Maintaining a full city by monitoring and adjusting all public resources in real time. An AI capable of taking over a country by manufacturing a drone army and a new energy-based weapon, so that's new science there. And an AI capable of faster-than-light travel by discovering completely new physics.
So that's a level above, right? Let's look at the different levels here.

ASI 1, Superior: an AI capable of making incremental improvements in multiple fields and managing up to city-size entities on its own. Here's what it has: smarter and more capable than any human on many topics; able to move progress forward in multiple scientific fields; able to recommend novel solutions to many of our main problems; able to copy and surpass the creativity of many top artists; able to fully manage a large company or a city by itself. And keep in mind, "copy and surpass some of the best artists" really means in a superhuman way, right? It's not just better in one area. It's like, okay, it's Eminem, but way better than Eminem. It's Kendrick Lamar, but five times better. Same with the other items: novel solutions, moving progress forward, which obviously no human has done, so it's doing it better. But here's what it doesn't have: it can't create net new physics or net new materials science, and it's unable to fundamentally improve itself by orders of magnitude.

Next level, ASI 2, Dominant. The first one was Superior; this one is Dominant. It has everything in ASI 1, but it's also able to completely change how we see multiple fields, able to completely solve most of our current problems, and able to fully manage a country by itself. So this is a huge jump over the first one. It's able to fundamentally improve itself by orders of magnitude, but it still can't create net new physics or run an entire planet.

And then you have ASI 3, another massive jump: an AI capable of creating net new physics, completely new materials, manipulation of fundamental or near-fundamental reality (if it's possible to do that), and it can run an entire planet. And keep in mind, I might add an ASI 4 if I get a clear picture of what that looks like. My buddy Rishi mentioned, what about someone?
An AI that's smarter than all of humanity put together? And I was like, well, isn't that confusing knowledge with intelligence and reasoning? But I haven't heard back from him yet. Other things to consider here: this level adds the ability to modify reality at a fundamental or near-fundamental level, and the ability to manage an entire planet simultaneously. And maybe its primary concerns become things like sun expansion (which is eventually going to eat up the Earth), populating the galaxy and beyond, or the heat death of the universe, which is about escaping this reality. Or breaking out of the simulation; I should add that.

So as a summary: AI terms are confusing, and it's nice to have simple, practical versions. It's useful to crystallize your own definitions on paper, both for your own reference and to see if your definitions are consistent with each other. And I think these AI definitions work best when they're human-focused and practically worded, because it's very easy to try to be technical with these definitions, and you instantly open up a giant mess the moment you go a level down in technicality. What you'll find is that if you get 20 AI experts and a bunch of ML experts who've been doing this 30 years in a room and ask their opinions, they don't agree. The textbooks don't agree, the experts don't agree. The moment you go a layer down, it's a giant mess. And that giant mess stops conversations from happening, because people will get two hours into a podcast and be like, oh, that's what you meant by AI? That's not what I mean by AI. Let's stop wasting two hours in a conversation because we're not talking about the same thing; let's level up to an abstraction level where it's useful in a conversation. That's why it's called Real-World AI Definitions. All right, so that's that one.

All right, stories. Cloudflare has a new free tool to stop bots from scraping websites.
Remember I told you about Cloudflare, how they just do the coolest stuff? Cloudflare is a beast, I'm telling you. They see a problem that's annoying everybody, and they go pour a little bit of cement in that seam or crack or hole, and they're like, we're going to solve this; if no one else does it, we'll be the solution. They're doing this with CAPTCHA, and they're doing it now with AI scraping. Somebody over there is going, this is going to be a problem in the future, or this is already becoming a problem, let's get there first. And I massively respect it. Before too long, the whole internet's going to be Cloudflare, because over the course of time they've made like 500 of these tools, and all those random little tools become the structure of the internet.

Twilio says somebody got 33 million phone numbers because of an unsecured Authy API endpoint. Russian experts say they fully analyzed the structure of the American ATACMS missile, and they believe that's going to help them fight Ukrainian missile strikes, which is not good. Thanks to Tines for sponsoring. North Korea has switched its state TV broadcast from a Chinese satellite to a Russian one, and now fewer people can watch. Hezbollah launched over 200 rockets and drones at Israel after Israel killed a senior Hezbollah official. Thanks to Nudge Security for sponsoring. The US intelligence community is diving into generative AI to enhance intelligence operations, using it for search, discovery, and counterargument generation. I'm going to make so much intel stuff; I cannot wait to build that product in Substrate.

There's some analysis saying AI cannot be funny, from Anya and Jeremy Greenwald, and I think I disagree. Here's an example my buddy Joseph posted: it's a Glif app and it makes memes.
Look at this: "bug bounty, bro." I legitimately laughed at some of these. "Show me show you my leet skills. Use the inspect element." "I only hack ethically. DoS'd school website in eighth grade." I mean, first of all, these are real. They're good in the sense that they actually apply to the field they're making fun of. And: "My POC is flawless. It's a Rickroll link." I generated that one. Okay.

I've been saying for a while that being funny is a major milestone in AI becoming real. And just like a lot of other things in AI, there are so many people who are like, oh yeah, it can tell a cat from a dog, but it's never going to make me laugh. Whatever. This stuff's been out for like 18 minutes; this just happened at the end of 2022. This is still day one. I'm telling you right now, there will be funny AI very soon, and this is the first version of it. I mean, this is already funny AI, technically. The real standard is going to be a stand-up routine by a deepfake. Okay: I am watching a stage, and there's a deepfake person standing there. It doesn't even have to convince me; it could be uncanny valley. But if there's a deepfake person standing there doing a stand-up routine, and it makes me laugh the way other stand-ups make me laugh, that is a major milestone. That is extraordinary. I don't know how long that's going to take. Oh, I should do that on a prediction market; yeah, I'm curious about that. I'm guessing a year and a half to two years. I think it's going to be a little while, because that one's really, really difficult. But honestly, it could be Opus 3.5 or GPT-5; it could happen in the next iteration, which is going to be around the end of this year or beginning of next year.

Okay. Nvidia is set to make $12 billion by selling over a million H20 GPUs. That sounded impressive; I thought, how are you going to sell those?
I thought we had export controls. But these are within the export controls, so they must be pretty nerfed. Apple might soon announce a deal to bring Google Gemini to iOS 18, so that means they're working with all three: they already announced OpenAI, they mentioned Meta a while back, and now Google. And by the way, I'm bothered with Apple and also with OpenAI, because they both did demos of things that aren't in the betas. Some reporting says this Siri stuff and all the AI stuff won't even land until next spring, and I'm like, stop showing me demos of stuff that won't be out for months, if ever. But that's just me being annoyed. Greece is moving to a six-day working week. Weird.

Oh, this next one was actually attached to a different story; that's a mistake here. This was about Apple putting cameras on AirPods, which I was hoping was going to be actual cameras. But of course, that would require more headgear (like a collar or a neckpiece or a heavy headphone) because you need to power a camera that's running all the time, so you're going to have battery issues, plus lenses and stuff like that. What I really want is AirPods or a headset or whatever, as long as it's something I can wear every day, and I want it to see behind me. That's super critical. If you look at my big AI piece: I want it to see in front of me and behind me. It'd be nice if it saw to the sides and could connect them for 360, but most importantly, in front and behind. And then all of that being processed: hey, there's a dangerous thing in front of you; watch out, this car is coming in your direction; someone's sneaking up behind you. It'd be great for women walking at night: this person is following you. But no, the thing that's actually getting shipped is for gesture controls.
So you can, you know, gesture like this and do things with your AirPods. That's the rumor.

David Brooks sat down with Steve Bannon to understand his vision for the global populist movement. The number of US high school graduates is expected to peak in 2025 and then decline for years. I don't understand this; schools and colleges are closing and faculty members are being laid off, and I don't understand why there will be fewer people graduating. Are fewer people going into the system? Is the population declining? I'm not sure exactly what's going on here. The US dollar just hit a 38-year peak against the yen. People whose eyes dilated more performed better on tests of working memory, suggesting that pupil size is linked to how well we can process and remember information; this is from Scientific American. The FDA approved Eli Lilly's Alzheimer's drug Kisunla. Why was that hard to pronounce? Kisunla. Donanemab is the generic name. It slows disease progression by about a third. These drug companies are killing it, with Ozempic and now Alzheimer's.

High work-in-progress is killing your business, innovation, and morale: the more tasks you juggle, the slower everything moves. Yeah, tell me about it; I'm struggling with this right now.

Ideas. The AI Class Gap. There's a separation that is really freaking me out about AI right now, which is dramatically increased separation between social groups, or socioeconomic classes, because of AI. Basically, one small group at the top (call it 1%, or 5%, however you want to count it) will use AI to have a staff of 10,000 executive assistants, tutors, and analysts working constantly to run their businesses and optimize their entire lives: accountants doing their paperwork, filing their taxes, optimizing everything. Just think about that.
A lot of people in this top 1% are going to have a team of 10 or 50 or 100 or 10,000 executive assistants, which are basically independent AIs, all working toward the same goals, optimizing everything in their lives. That's the top 1%. And a lot of other people either won't be using AI at all, or they'll only be using it for gaming, watching media, porn, entertainment, watching Netflix: just wasting time slightly more efficiently using AI. This is largely the same split as between voracious readers and everyone else today, but it's going to be way worse with AI, because AI is going to lift the top 1% way more, even more than being a heavy reader does.

Discovery. Self-publishing a tech book. MMA AI: this guy is predicting MMA fight outcomes with very high accuracy just using AI, and all he's doing is looking at the people they've fought in the past. Sam Parr launched Sam's List, a database of CPAs, accountants, and tax strategists, and he's already made $32,000 in just two months. The Illustrated Transformer: a visual, intuitive guide to understanding how transformers work. ElevenLabs has a new voice isolator; this thing's scary. My buddy Rachel Tobac talked about this. The Tao Te Ching: I forgot how to pronounce this, and I feel like I should know how to pronounce it. Anyway, Ursula K. Le Guin did a translation of it, which I've not read yet; I really need to read this thing in full.

And the recommendation of the week is Ground News. This is like the best website right now for news, especially going into this political election situation in the US. It shows you bias in how stories are covered. In fact, let me just pull this up; that's the whole purpose of being on video. Look at this thing. Okay, I'm logging in, maybe doxing myself while I log in; that's cool. Okay, look at these. You see these outlines?
It's got an outline for the type of coverage each story is mostly getting. Look at this bar: left 17%, center 16%, right 67%. And you have these bars all over the place. What's really cool is the blind spot feature. Look at this blind spot thing for the left: it exposes you to things the right is talking about that the left is not. And same over here for the right: it exposes you to things the left is talking about that the right is not. So this helps you balance yourself, or at least gives you more of an option of balancing yourself. Highly recommended, especially if you know somebody who claims to want to be less biased and is like, well, I'm just looking for a good source or whatever. Send them this.

Okay, and the aphorism of the week: "Clear writing and clear speaking are simply outputs of clear thinking." Naval. Clear writing and clear speaking are simply outputs of clear thinking. Naval.

Unsupervised Learning is produced and edited by Daniel Miessler on a Neumann U87 Ai microphone using Hindenburg. Intro and outro music is by Zomby (zombie with a Y), and to get the text and links from this episode, sign up for the newsletter version of the show at danielmiessler.com/newsletter. We'll see you next time.