UL NO. 449: China Hits US ISPs, NIST CSF 2.0, Russian Intel Attacks, Stagnant Companies...

Published Sep 16, 2024, 2:51 PM

Life changing books, defining your core problems, the Apple updates, and much more...

Subscribe to the newsletter at: 
https://danielmiessler.com/subscribe

Join the UL community at:
https://danielmiessler.com/upgrade

Follow on X:
https://twitter.com/danielmiessler

Follow on LinkedIn:
https://www.linkedin.com/in/danielmiessler

See you in the next one!

Welcome to Unsupervised Learning, a security, AI, and meaning focused podcast that looks at how best to thrive as humans in a post-AI world. It combines original ideas, analysis, and mental models to bring not just the news, but why it matters and how to respond. All right, welcome to Unsupervised Learning. This is Daniel. Okay, I'm going to start off with something that just happened. So Strawberry just launched. It's being called o1, and I assume the "o" might mean Orion, because people were saying it might be called Orion. This is the new model from OpenAI, and I've been messing with it for a couple hours already. First thing: I gave it the task of building a business plan for something I'm working on, and it produced output that was far and above better than GPT-4o or Sonnet 3.5. It was really quite good. Very detailed. It took quite a while. There's no streaming in the API, so it feels a little rough compared to the current models, but whatever, that will come with time. It's also quite expensive. I did a couple of conversation analyses by passing in conversations, like transcripts from podcasts. I think I did two or three of those and it was almost a dollar. There's also a mini version which is way less expensive, but I'm trying to test the capabilities, so I'm using the full model. But yeah, a few requests for a dollar, whereas normally many dozen or a couple hundred requests are like a few dollars. So it's many factors more expensive; something to consider. As with most models, you don't need the biggest, best, or latest. This is a tweet I just put out, so I'm going through it. This model does one particular thing well, better than anything else: pausing to think and actually going step by step. That's the magic sauce here, the chain-of-thought reasoning.
So if you don't need that for what you're trying to do, you definitely shouldn't use this, because it's more expensive and takes longer to run. This type of model, and similar ones going forward, are going to massively benefit from high-quality prompting, things like we use with fabric, which is open source on GitHub if you're not familiar, but you probably are if you're listening to this. Essentially, the more you know what you want and the better you can articulate that, the better this is going to perform, because it's a chain-of-thought sort of concept. The more you give it to help with that, the better. Okay, sorry about that. I was just checking to make sure I wasn't doxxing anyone by showing my messages, but I was not, so I don't have to re-record. Okay, continuing on here, and going to expand this window fully. So yeah, the better you can articulate all of this, and by the way, I want to do an edit there for the team: the better you can articulate exactly what you want, the better things are. That's the bottom line. Now, a lot of people are going to question: is this AGI or not? Sam Altman already responded, and he's like, no, this absolutely is not. That should end it, in terms of the actual creator of the thing saying it's not. I don't think it is either, for whatever that matters. But bottom line, for anyone making the claim that this is or isn't AGI, here's my request to the internet: anyone claiming something is or is not AGI should also provide a concise and achievable definition of what that means. And I have one, of course, which I've talked about before: the ability of an AI, whether a model or a product or a system, to perform the work of an average US-based knowledge worker in 2022. And I say 2022 because that's pre-GPT-4, so basically pre-AI in these terms anyway.
So yeah, anyone who's talking about AGI, make sure they have a definition. Otherwise you're just wasting your time, because the entire conversation will be about definitions, and you might not even figure that out until fucking two hours later. Sorry for the cussing. All right. One of the most important changes to me with this model, and this is massive, is that it's the first model of its kind to actually spend tokens to think. Before, you had input and you had output, and you were charged based on the number of tokens coming in and the number of tokens coming out, and that was the extent of it. What's happening now is you have tokens coming in and tokens coming out, but there are also tokens being spent while it's thinking. It's actually thinking and reasoning through how to solve the problem. And what's really fascinating is that you now have multiple factors. You can do better prompting; that's the next piece here, number seven. You can use a smarter model. Or you can have the model think harder on the problem. These are all going to be levers and knobs we have for getting better results from AI, and this is the first time we have this third lever of actually having it think. At inference time, more effort being spent. And they actually say in the blog post, hey look, right now it takes a few seconds to think or whatever, and it gets back great results. But what if it thinks for minutes? What if it thinks for hours? What if it thinks for days or weeks? And not only that, what if we give it more compute power to think? The example they gave, and I think this was an OpenAI post, was: how much do you want to solve cancer? What if you could build a data center for it?
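To make that token economics point concrete, here's a rough sketch of how billing changes when a model spends hidden reasoning tokens. The prices and token counts below are placeholders I picked for illustration, not OpenAI's actual published rates:

```python
# Hypothetical pricing sketch: with o1-style models you pay for input,
# output, AND hidden reasoning tokens. Rates and counts are made-up
# placeholders for illustration, not real pricing.

def request_cost(input_tokens, reasoning_tokens, output_tokens,
                 price_in_per_m=15.0, price_out_per_m=60.0):
    # Reasoning tokens are billed like output tokens, even though
    # you never see them in the response.
    billed_output = reasoning_tokens + output_tokens
    return (input_tokens / 1_000_000) * price_in_per_m \
         + (billed_output / 1_000_000) * price_out_per_m

# A long transcript analysis: the model may "think" with more tokens
# than it actually returns.
cost = request_cost(input_tokens=8_000, reasoning_tokens=4_000,
                    output_tokens=1_000)
print(f"${cost:.2f}")  # → $0.42
```

The practical takeaway is the same as above: a request that looks small from the outside can cost several times more than a normal chat call, because the thinking is billed even though it's invisible.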
What if you had one data center just for working on cancer, and one just for working on aging, and so on? You basically have models that scale with the inference difficulty, based on how much thinking the problem requires. And then of course you have a smart model and a good neural net, the scalability of the neural net, so maybe that's GPT-5, GPT-6, whatever. Combine that with good prompting, combine it with this thinking capability, and combine it with having that giant infrastructure to run it, and that's insane. And it scales all the way down to the smallest, stupid problem, where it's just like, whatever, GPT-3, and you get back the answer almost instantaneously. In fact, forget GPT-3; it's some local model that only does one thing well. You're spending almost no resources whatsoever. It just goes to your phone, bounces back immediately, doesn't go anywhere, barely costs any cycles of a GPU or CPU, because you don't need those resources for something that's easy to answer. So now we're talking about AI that scales with the difficulty of the problem. Cancer, aging, getting out of the solar system, escaping the sun, and ultimately the heat death of the universe. That's a big one, because entropy kills everything, so ultimately we're going to need a way out of here at some point, assuming we survive that long. Not happening anytime soon; I wouldn't worry about that. But these are the types of things that are really exciting: the size of the problem being a factor in how much AI you point at it, with lots and lots of different knobs and levers controlling that decision. Another important thing to mention is that this innovation seems independent of what we were waiting for with GPT-5.
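That knobs-and-levers idea, routing a problem to a tiny local model or a frontier model with a huge thinking budget depending on how hard it is, can be sketched as a toy router. Everything here, the model names, the tiers, the budgets, and the 1-to-10 difficulty score, is invented for illustration; it's the shape of the idea, not a real system:

```python
# Toy sketch of "AI that scales with the difficulty of the problem."
# All names and numbers are hypothetical.

TIERS = [
    # (max_difficulty, model, thinking_budget_tokens)
    (2, "tiny-local-model", 0),         # trivial: answer instantly on-device
    (5, "mid-size-model", 1_000),       # moderate: a little chain-of-thought
    (8, "frontier-model", 50_000),      # hard: think for a long time
    (10, "frontier-model+datacenter", 10_000_000),  # "solve cancer" tier
]

def route(difficulty: int):
    """Pick a model and a thinking budget for a 1-10 difficulty score."""
    for max_d, model, budget in TIERS:
        if difficulty <= max_d:
            return model, budget
    raise ValueError("difficulty must be between 1 and 10")

print(route(1))   # → ('tiny-local-model', 0)
print(route(9))   # → ('frontier-model+datacenter', 10000000)
```

The interesting open question is who sets the difficulty score; in practice you'd probably want a cheap model to estimate it before the router runs.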
Based on all I've read, all the releases from OpenAI, and all the rumors, and I've talked to a bunch of people who've been speculating about this, this seems completely independent of whether it's GPT-4o, or GPT-5, or an early version of 5. Doesn't really matter. It's a separate axis. This is a thinking capability, which is on a separate axis from how big or smart the neural net, or the model, is. So it's really cool to think about those as two separate things, because now we can start thinking: okay, if GPT-5 is still going to come out later this year, or at the beginning of next year, or whenever, and whatever they're going to call it, imagine GPT-5 with this thinking capability. That's cool. Presumably this is just a feature you can add onto any model, which is what we were just talking about. And this is really, really crucial. I've been talking for a long time about slack in the rope, the tricks we're going to use to jump ahead in the advancement of AI. So check this out. A lot of people are like, oh, we're running into a data wall; neural nets can only get so good; we've already hit a ceiling. So many people are saying things like this, and it sounds absolutely ridiculous to me. First of all, they were the ones saying we wouldn't be here. And now we are here, everyone's surprised, and they're like, well, here's what we know for sure: we're not going to get any better. How can I believe you if you didn't predict any of this, and you were absolutely certain back then, and now you're absolutely certain it's not going to jump ahead again? Leopold talks about this in his paper. There are lots of different ways to get better. There's the architecture of the model. There's the size of the model.
I forget all the levers he had, but it's the architecture of the model, the size of the model, and I think unhobbling was the other one, which is what I called, like a year ago, slack in the rope, the tricks we're going to find. This is what I told a friend of mine who's really smart in this stuff. I said, watch this: we're going to find multiple tricks where we've been messing around in percentage points, and then we find a thing and it jumps us 2 or 3 or 5 or 10x or 100x ahead. And I actually learned this from him. He was like, hey, you know, there are things that jump you ahead, and I think he gave me an example from some public paper, an example of a big jump. And my natural intuition was: there are going to be a lot more of those. And they're not coming from pushing along the main axis, which is difficult; they're just hanging off to the side. It's like, oh, did you know if you just changed the color of this? Did you know if you just orient the data backward instead of forward? Did you know if you just prune the data in this way, or add this particular data set? I'm making up these examples, but simple things that you wouldn't think would work. And this is why Leopold talks about automating an AI engineer, or an AI researcher as he called it. That's when it gets completely silly, because they'd have the ability to go try a whole bunch of these things, including these tricks. All this to say that the slack in the rope, this series of tricks, is going to keep multiplying our advances. And that's at the same time that we're working on the algorithms; that was the other factor, algorithms. At the same time we're working on the algorithms to make those better, we're also working on the size of the neural net and the quality and the structure.
And everything about the neural net is going to get bigger and more powerful, mostly just as a matter of size, the number of parameters. All those things are changing at the same time as we're finding all these tricks. So we're talking about something that has just begun. This is what people don't realize: this is just now starting. We're going to look back in two years and be like, what was that? That was silly. And so I really want to warn people against thinking we're hitting some kind of wall. Think of it this way. We just found alien technology. We have no idea how it works. We're poking it with a stick, and it's already spitting out amazing things. So think about that. We've got a glowy ball. We don't know how it floats. We don't know how it's doing anti-gravity. We don't know how it's reflecting its surface. We don't know how it's coming up with these answers. We don't know how it got here from another solar system. We don't know anything about it. You poke it with a stick and it tells you this magic stuff, and we're like, holy crap, that's amazing. Somebody walks up, sees you poke it with a stick, and goes, yeah, that's all it's ever going to be able to do. I've seen you poke it with a stick twice, and it gave you kind of a similar answer, which means that's all we can learn from this alien ball. That's their conclusion: I am certain, since you poked it with a stick three times while I was standing here and it gave you similar answers, that one, it must be stupid; two, it's not as smart as us; and three, this is as smart as it's ever going to be. This is the most it has to offer. That's the claim being made by these denialists, in my view. And that doesn't mean the current shiny ball is better than humans, or that it should replace humans, or that it can do everything we can do. This is not a competition.
Okay, here's a better way to think about this. This is not like a rock that we have animated. Think of it this way, because someone else was like, hey, this is not thinking, this is processing. And I'm like, come on. If an alien comes here, and let's assume we know how our own brain works, and we look at its brain, or it shows us its brain, and it looks different, and we're like, oh, you guys do neurons and synapses differently than us; who's going to walk over and say, well, since they do neurons and synapses differently than us, they're not thinking, only humans can think? And I'm like, they got here, didn't they? It's a little shiny ball, and it got here from whatever part of the galaxy or universe it came from. They're obviously doing something right. And AI is obviously doing something right too. So I think it's a little bit specious, is that the word? It's specious to just magically assume that we are the best, that only we are thinking, only we are special, instead of thinking we might have this nascent alien intelligence going on that is actually doing things very much analogous to us. It reminds me of the first time I clicked around inside of Linux. This was the late 90s, must have been '97 or '98 or something. I'm messing with Linux, clicking around, because I had started with Windows, and I'm like, oh, it opens windows, it opens things I can click and navigate. It's just like Windows Explorer. And this blew me away. It absolutely blew me away that this was just a different way of doing the same thing, and that underneath it there's a universal thing: you need to be able to browse files, you need to be able to open windows, you need to be able to close windows. And that clicked for me.
And I'm like, oh, I guess all operating systems are going to do this differently. It's the same with aliens. They might think differently, but whatever, they have to think. So why would we expect this synthetic intelligence that we've birthed to do it exactly the same way that we do? We should not expect that. We got here accidentally, stumbling through time due to evolution, and we've got this version that we have, and it's awesome, obviously. But that's way different from something we invented in 2017, six or seven years ago, and I know it goes further back than that, but you know what I'm saying: transformers. All right, so that's that, and this is becoming a long thing, but whatever, we'll go with it. Basically, we have no idea how early all of this is. We're likely to find 10 or 20 or 200 more of these holy-crap optimizations, like this thinking thing, before we start hitting any limits of neural network architectures, or the transformer and transformer-like architectures. Plus, we could just find something better than the transformer. Do you realize how lucky we were to find the transformer? The people who wrote that paper were like, hey, we think this is a cool way of doing something. They didn't know what they had. You should watch Karpathy talk about the transformer. He's like, this thing is a general-purpose computer. This thing is insanely good at learning. He talks about different ways it's better than humans at learning. Some people randomly found this thing and it shot us off. So check this out: this is another example of finding tricks, or slack in the rope, just lying on the ground. We stumbled through AI for decades and decades, and then someone's like, hey, this attention mechanism is kind of cool; hey, what do you think about this architecture for a neural net? Boom, now we have this takeoff.
There's nothing saying somebody isn't going to be like, I like what you did with that transformer architecture, but what if it looked like this instead? It might be 20 times better. It might be 2,000 times better. It might be 4% better. It doesn't matter. We have only just begun. I can absolutely guarantee you that, assuming we don't kill ourselves off as a result of this, because that would set things back. But I'm trying to get you to think about things this way, because it's insane what's about to happen. And I'm going to have more examples; I'm working on one right here on this other screen, a pretty cool thing I'm building with it. Okay, so that was that. All right, now back to the show. We are adding a whole bunch more content to the podcast. If you have not been listening to the podcast, I used to do this really dry thing, and sometimes I'm still dry, whatever, I'm an intellectual guy; sometimes I just talk about the idea, not overly excited like I was just now, and it can be a little boring. I mean, I started this thing back in 2015, and I was literally like, welcome to the Take 1 Security podcast, the podcast where I do everything in one take. That's all I do, I just read the news. Okay, first news item. Now, that was fire. The podcast blew up. Why? Because almost no one had a podcast. Well, there was Risky Biz, there was GRC with, what's his name, Steve Gibson. There were a couple, but very, very few. And mine was dry and to the point, so it was super exciting. Anyway, that's the old version, and it's no longer interesting. People got tired of it, and I lost interest because I just didn't want to talk about the news anymore; I wanted to move on to other things. So it is now what it is. The point is, the thing I just did about AI, the rant about the release of Strawberry, is now a piece which will be cut out by the team.
The team will put that on YouTube, and they'll put it out on the podcast. When I do little clips like that, I don't know that I'm in a clip in the moment, but what happens is they take it and put it into its own piece. They name it, they do a thumbnail maybe, I'm not sure about that, but they put it out there, and now that's a piece of content, and oftentimes it's way easier to listen to those. I would prefer to listen to those. Again, I'm building the thing that I believe needs to exist, because it's what I wish I was getting from other people. Alex Hormozi, Chris Williamson, all these different people: I love the fact that I can get a single discrete piece of content. And I don't mean a YouTube Short of 30 seconds or a minute; I don't care if it's 30 seconds or 3 or 4 minutes, I just want a discrete idea labeled as that discrete idea. So that's what I've been doing: taking content, either making it independently or, when it happens to come out like it just did, talking about Strawberry inside a big episode, which is then pulled out and turned into its own thing. The advantage is that the podcast is now going to be full of content. There are going to be three, four, five, six hopefully decent pieces of content inside the podcast feed. So if you're a subscriber to the podcast, which you should definitely go do on Apple or Spotify or whatever your client, you will not only have this big episode, which, when I mess around and start ranting like this, could end up being 30 minutes long, but in addition you're going to have these really clean little pieces of content, each a single idea. Bottom line: get back on the podcast, whether that's on YouTube or on audio. Definitely go subscribe to the audio version, so Apple Podcasts, Spotify, whatever, Android. Tune in, because you're probably going to like it a lot.
Okay, new keyboard I'm using. How do I type without messing anything up? Let's see here. Okay, I'm going to type fast. I don't know if you can hear this; I've got a Sennheiser here listening. This keyboard is called the AULA F75. I love this keyboard, and it's actually super cheap, like 70 bucks. I've purchased so many expensive keyboards that are sitting over there unused. I was using a different one for a while, but I heard this sound in a YouTube review and I was like, ooh, I like that sound, hope it sounds like that in person. And yeah, it does. Hopefully post-production will not take out that sound, but I love the sound of this thing. This particular one has no labels on the top of the keys; they're actually on the bottom of the keys, and they're backlit, so it's pretty cool. The whole keyboard looks kind of gray. It's really heavy. I like it, but mostly I like the sound and the feel. Combine that with a Neovim config and a text-based world, and I'm a happy person. Okay. I continue to be blown away by the idea of encapsulating what people think the biggest problem in the world is using extract_primary_problem. I love this thing. So: echo Viktor Frankl's work, pipe that into extract_primary_problem, and look what it brings back: "The lack of meaning in life leads to suffering and existential despair." That is beautiful. I'm taking someone's entire body of work and pulling out what they think the biggest problem in the world is. That's just astounding to me. Also, the keynote at SANS went really well; almost 30 minutes of questions afterwards. Really cool of them to have me, and I appreciate them. I'm experimenting with some micro art fiction, and Tim from the community was like, hey, what are you working on, are you working on a book? Not really. Kind of. That's all I'll say about that.
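For what it's worth, a fabric-style pattern pipe is conceptually simple: the CLI reads stdin and wraps it in the pattern's system prompt before sending it to a model. Here's a minimal Python sketch of that shape; the instruction text is my paraphrase of what such a pattern does, not the actual extract_primary_problem pattern from the fabric repo:

```python
# Sketch of what a pattern pipe like
#   echo "Viktor Frankl's work" | fabric --pattern extract_primary_problem
# does under the hood: stdin -> prompt template -> model.

def build_pattern_prompt(pattern_instructions: str, piped_input: str) -> list:
    """Wrap piped stdin text in a pattern's system prompt."""
    return [
        {"role": "system", "content": pattern_instructions},
        {"role": "user", "content": piped_input},
    ]

# Hypothetical paraphrase of the pattern's job, not the real prompt:
messages = build_pattern_prompt(
    "Identify the single core problem this person's body of work is "
    "trying to solve. Answer in one sentence.",
    "Viktor Frankl's work",
)
# These messages would then go to whatever model fabric is configured
# to use; the magic is in the curated pattern prompt, not the plumbing.
```

The point being: once a well-written pattern exists, swapping in a different thinker's body of work is just a different string on stdin.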
Not actively and fiercely, I will say that. But yeah, some ideas are out there, and I'm working on a bunch of flagship content right now. I'm mentioning Human 3.0 more and more, so I'm going to put out a description of that, the structure and what I'm working on there. Also a new piece on security asset management in AI, and one on how to write fiction using AI; that one's going to be insane. And a whole bunch of others. All right, let's get into security. Chinese government-backed hackers have been infiltrating US internet service providers to spy on users, according to a bunch of researchers. Halliburton confirmed a cyber attack where intruders accessed and exfiltrated data, and they blame the RansomHub group, who also claimed responsibility. Predator spyware is back with new features that make it even harder to track, and it looks like it's showing up in the Democratic Republic of Congo and Angola, with a lot more anonymity built into it. The trick with this kind of malware is that it's not just the people who make it, it's the people who sell it, and the fact that there's a market for governments who want to control their populations. That's what's especially scary about the CCP way of viewing the world. If you combine that with AI that can control things and see everything and determine who's breaking the rules and monitor and surveil, you have a recipe for controlling populations. The latest version of NIST CSF, which is 2.0, introduces governance as a new function and focuses on continuous improvement to adapt to emerging threats. They also released a continuous threat exposure management (CTEM) framework. Thanks to Hyperproof for sponsoring. Maltese security researchers have been charged after discovering a flaw in an application called FreeHour, and I think they are in jail.
Yeah, they're in jail awaiting trial, I believe, because there's nothing like public disclosure or good-citizen protections there. It's like: nope, you did a bad thing, you're in jail. The US Space Force is gearing up for potential conflicts in space with countries like China and Russia, trying to protect satellites and other space assets. And thanks to Dropzone for sponsoring as well. The US is offering a $10 million reward for information on the Russian hacking group called Cadet Blizzard, linked to the GRU's Unit 29155, which has been particularly focused on disrupting aid to Ukraine. NASA is launching a new podcast called No Such Podcast. Good name. That's a good name. Evidently lots of people use the I-forgot-my-password feature as a de facto login method. Human behavior. A Starlink satellite dish was used on a US Navy ship for an illicit Wi-Fi network called STINKY, which was used for streaming and civilian communication. The Navy demoted the senior enlisted leader responsible for being awesome. I wrote that last part, obviously. And honestly, I got some pushback from some military people and Navy people, like, hey, this is not a laughing matter. I know it's not a laughing matter. I know you shouldn't be allowed to set up satellite receivers on a warship. I'm ex-military too. I just thought it was funny; they were trying to give internet to the people. They're a hero. I know it's bad. Sorry, not sorry. All right, tech. Apple released their September updates yesterday and they were decent, so I'm going to jump ahead. I don't think I'm going to get the new AirPods; I saw a thing that basically said the AirPods Pro 2 are still better because they have the silicone tip, and I'm going to get the black 16 Pro, but not the Max.
I can't remember if there was something else. Oh, I might get the watch as well. I might either get a black Ultra 2 or the Series 10, because it's got a bigger screen than the Ultra. I don't know which one I'm going to get; I'm going to wait and see them in person, because I will be camping on the night of the 19th in Burlingame. So if anyone wants to come hang out in person, we can have an impromptu UL meetup. I've been camping there since, let's see, 2010. Holy crap, I've been camping at that store for 14 years, and I camped the three previous times in other places, so I've been camping for the iPhone since 2007, but I've been doing it at that store, first in line, for the last 14 years, and I will be there again on the 19th. It won't be an actual UL meetup; it'll just be us hanging out. There are coffee shops, there's a Philz right there, so we can hang out and talk. I usually walk a lot through the course of the night, maybe try to take a nap. But anyway, I'll be there. Nvidia's RTX 50 series GPUs, including the 5080 and 5090, are getting ready to be finalized and will potentially come out at the end of the year or the beginning of next year. They're saying these things are going to be massively power hungry. I've got two 4090s in the room next to me here, and they are beasts. I'm going to have some FOMO when the 5090 comes out, but I don't think I'm going to be upgrading my system, because that would be expensive and I'm not sure I need it, especially when a lot of the AI models are probably going to run better elsewhere. I'd rather wait and upgrade my main Apple desktop and see what the giant M4 or M5 systems have. When I have, let's say, half a terabyte of system-on-a-chip memory, and all of that can be used for GPU operations, I'm not sure I'm going to need a dedicated one.
Well, maybe I'll need one dedicated box to do AI, but I'm not sure I'm going to build another dedicated PC with Nvidia chips in it, which is weird because I've got a lot of Nvidia stock. Anyway, that's a separate talk show. Okay. Trump is launching a crypto project, but there are concerns because 70% of the tokens are being allocated to insiders. Ilya Sutskever has a new startup, SSI, and they raised $1 billion. A lot of companies have $1 billion valuations; Ilya did it differently, he has a $1 billion seed round. A $1 billion seed round. Insane. Visa is set to launch a new account-to-account (A2A) payment service in Europe, so you could make direct bank-to-bank transfers without using credit cards. I wonder if that's a competitor to SWIFT. I imagine it is, or maybe it rides on top of SWIFT, I don't know. Engineers from Cornell and the University of Florence have developed a biohybrid robot that uses electrical signals from a king trumpet mushroom to move and sense its environment. Reminds me of Neri Oxman doing her networking-of-nature type stuff. The 2024 Annual Work Trend Index from Microsoft and LinkedIn reveals a shift in employer preferences, with 71% of leaders favoring candidates with AI skills over those with more industry experience. I found that one kind of interesting. The Wall Street Journal is highlighting a trend where small startups are increasingly influencing the US economy. I've been thinking about this for a while and am getting ready to write a piece on it, but I want to state it more forcefully: I think people are about to realize that most medium-to-large companies have become ineffective. They lack vision and focus, they have too much bureaucracy, and they have giant workforces hired for a worker-bee mentality rather than to be exceptional, innovative, or challenging to the structure.
And I think this is another part of the essay I wrote called The End of Work, where much of the innovation in the world moves away from big companies and toward individuals and small, dynamic startups. This is also what Marc Andreessen talked about in his conversation with Huberman, which I thought just came out, but it turns out that was a year ago. A fantastic episode. And related to this, Paul Graham's latest piece, called Founder Mode, looks at how bigger companies make the mistakes talked about above. He didn't even define founder mode; he said it still needs to be defined, because it's really interesting. But basically he says it allows you to stay in a more innovation-focused mindset. Great read. All this stuff is a great read. I just wish he had locked the definition down; I understand it isn't locked down yet, but I thought he might have tried. Oakland police are using Tesla's Sentry Mode footage to aid crime investigations, and if they can't find the owner, they just tow the Tesla, because it has the evidence. Waymo is tackling the skepticism around its autonomous vehicles by launching a new safety hub with a whole bunch of data and charts showing that they're safer than human drivers. Oh, a related story that just came out: most of the accidents involving Waymo are from human drivers hitting the Waymo. Not surprising to me. Joshua Austin's A Manifesto for Radical Simplicity argues for a streamlined approach to software delivery, ditching subjective metrics like story points in favor of focusing on real dependencies and outcomes. Bluetooth 6.0 is here. Oh, by the way, the new iPhones have Wi-Fi 7. I am upgrading my whole house to Wi-Fi 7. They say it's about five times faster, so if I'm getting like 800 megabits down right now, five times 800 is 4 gigabits. It would be nice if I got 4 gigabits.
I bet I'm going to get like two gigabits streaming to my phone, and then to my laptop once laptops have Wi-Fi 7; I'm not sure if my current ones do. A guy's phone was snatched in London, and despite tracking it with Find My iPhone, he watched it basically travel around the city and end up in Shenzhen. Humans. President Xi Jinping has pledged to create over 1 million jobs in Africa. I cannot stand seeing Africa become an extension of China. I do not like it, but it's pretty hard for the West, and anyone in the West, to even call this out, given our history, which is not great. The question is, how long will we let that guilt be an obstacle to opposing China wherever they go and basically do colonialism? Which is bad, but it's hard to talk about how bad it is when you already did it and got scolded for it, and now you feel bad about it, and now you're pointing the finger at someone else doing it. It feels kind of weird, but at the same time, it also feels weird and cowardly to not do anything about it just because you feel guilty. A whole bunch of right-wing influencers received millions of dollars from Russia in return for promoting pro-Russian talking points, which is hilarious to me, since their whole narrative is: I don't believe anything, I'm skeptical, I'm discerning. Except when it comes to obvious Russian propaganda. Here's another way to think about it. I did some intel in the Army. I'm not heavily intel-trained, but I did have a decent amount of exposure to it and worked in the intel group. But think about this. Here are two probably unrelated phenomena. One: we know for absolute certain that Russia is trying to use its significant propaganda capabilities to influence the right wing in the United States to be pro-Russia and anti-Ukraine. That we know for absolute certain. Right. That's not even debatable.
They are really good at this, and they are spending at least tens of millions of dollars, probably hundreds of millions, probably not billions. So I'm guessing tens to hundreds of millions of dollars they are spending on this, and they're really good at it, because it's essentially the evolution of the KGB. Second point, which is probably unrelated: the right wing in the United States is now almost completely pro-Russia and anti-Ukraine. Some people reached out to me and said, that's not true. I should have gone and found some data and linked to it. I read the data all the time in polling: how much of the Republican Party, when polled, is actually pro-Putin? The numbers are ridiculous. I'm sorry, I don't have the link here. I should have put it in there, but it's pretty easy to Google and you'll find it yourself. It's actually better if you find it yourself than if I feed you the link. Just Google it; look for the stats, the polling. It's ridiculous. And a brief political aside. This is an example of a little breakout piece. I already know I'm going to get hate mail about the point above, because supposedly I'm a crazy liberal. But I also post a lot of stuff attacking the far left and their idiocy, and I get tons of comments from the left about being way too right-wing because I attack the left. And now that I've attacked the right, they're going to say, oh, you're too left-wing. So I ask you to consider another possibility, which is that I'm actively considering each position from first principles. Not perfectly; I could be wrong. But I put a lot of effort into having my own opinions that are not part of a tribe of pre-approved options. So perhaps the best way to sum me up right now is that I'm liberal in my goals and somewhat conservative in my approach to achieving them.
Meaning: I want a planet full of lots of different colors and ethnicities of people all thriving together. A secular society that encourages any religion but doesn't allow any of them to infringe on government or the ideals listed here. Gender identity and private sexual behavior between consenting adults are all personal choices and nobody's business. Basically, the freedom for everyone to strive to be the best versions of themselves that they can be, and a society that sees that as simultaneously a matter of personal responsibility but also helps people along that path. So: free speech, the ability to offend people with difficult ideas and have those debates and be controversial and offensive; the concept of meritocracy; the emphasis on personal responsibility. Okay, currently those are considered right-wing. But also the acknowledgement that some people and groups need help getting to the point where their personal responsibility can actually take root and help them thrive, and that it is society's responsibility to give that to them. And that's where it's liberal. In other words, if everyone had the same opportunity, I'd be fiercely all about meritocracy. But not everyone has the same opportunity, and that's the role of society and charity and kindness: to help people get to the place where their hard work can actually benefit them. So the problem right now, in my mind, is that the far right and the far left are both in opposition to that model I just laid out. The far right because they actually want the wrong things, and the far left because they are so confused about how the world works that they're causing more harm than good. So anyway, that's a short version of where I currently stand, and if you ever get confused about whether I'm crazy right or crazy left, just go read that. And here's a really important point.
I also recommend you make your own North Star paragraph, or set of bullet points, like I just did, and have that be your thing. Then realize that the left narrative, the right narrative, the centrist narrative, all these different narratives, don't really matter. What matters is what your North Star looks like, like I just laid out. And what you'll find is that all these different narratives and tribes might have Venn diagram overlaps with some part of your thing, but they shouldn't be your thing. If you have a perfect Venn diagram overlap with the extreme right or the extreme left, you don't have your own opinions; you are using someone else's opinions, and that's a problem. So North Star plus first principles is far better than picking a tribe and endorsing everything they say. All right. Sweden's health authority has issued new guidelines advising that children under two should have no screen time, while teenagers should be limited to three hours a day. A lot of people are starting to say, and this is supported by a bunch of different studies, that exercise could be the most potent medical intervention we know of. I've got a buddy who's a cardiologist named Jonathan, and he agrees with this. David Brooks discusses Ted Gioia's essay on the decline of American culture, where art is overshadowed by entertainment, and now even entertainment is being consumed by distraction from platforms like TikTok and Instagram. Got a photographer documenting the life and beauty of America's last old-growth forests. Got an article here that explores the belief that there's a place for everyone, suggesting that every person has a unique purpose and value. Marco Giancotti argues that of the millions of books available, only a select few, what he calls damn good books, are truly life-changing. These are the books that transform you, and you shouldn't get bogged down with every book out there.
And this goes along with the idea that you should be willing to put down bad books. Just put it down and go on to a better one. Phoenix just hit 100 consecutive days of 100-degree heat, which beats the record that was set in 1993. And a discovery: swsh, a bash wrapper around Python's mlx-whisper. It's basically a really fast Whisper for Mac. There's also a tool that lets you browse Hacker News from your terminal, and Dungeon Dash, a command-line RPG where you dive into dungeons, battle enemies, and collect loot to level up and become the ultimate hero. I want to play this a whole bunch more, and I actually want to make some of these. These just seem super fun, and I love the terminal, if you didn't already know that. NSA's National Cryptologic School television catalog from 1991 has surfaced, listing around 600 training videos on COMSEC and SIGINT. I love these treasure troves of old things. When I was in the Army, I got to work in a library for a certain amount of time, and I got all the Special Forces manuals and all the COMSEC and SIGINT material, and I would just sit and read these things voraciously. Okay, list of ideas: an extraordinarily simple template you could use to orient your life. I love this thing. "I believe one of the biggest issues in the world is this, which is why I'm looking to solve it using this, which is why I'm doing these projects, and I'm measuring my success using these metrics." This is how to introduce yourself at a party. This is potentially how to introduce yourself to a mate. This is how to explain yourself in an interview. It's your LinkedIn profile. Look at this: I believe this is the biggest problem, which I think we should solve by doing this, which is why I'm doing this work, which I am measuring using these metrics. That is purpose, that is direction, and it explains it to others. And it's done simply; you could turn this into one complex sentence.
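Since the template really is just four fill-in-the-blank slots joined into one sentence, here's a toy sketch of it as a function. All the names and the sample answers are hypothetical, purely to make the structure concrete:

```python
def purpose_statement(problem: str, approach: str, projects: str, metrics: str) -> str:
    """Render the orient-your-life template as one complex sentence."""
    return (
        f"I believe one of the biggest issues in the world is {problem}, "
        f"which is why I'm looking to solve it using {approach}, "
        f"which is why I'm doing {projects}, "
        f"and I'm measuring my success using {metrics}."
    )

# Hypothetical example answers, just to show the shape of the output.
print(purpose_statement(
    problem="poor security awareness",
    approach="clear, accessible writing",
    projects="a weekly newsletter and open-source tooling",
    metrics="readers helped and issues fixed",
))
```

The point of the exercise isn't the sentence itself; it's that you can't fill in the four slots without having done the hard thinking first.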
And if you can answer this, I would argue you're better off than most people in the workforce and most people on the planet. The more I think about it, the more I think a major career for creators going forward will be building entire realities for people to live inside of. Think post-AGI or post-ASI, and post-UBI, where games are extraordinarily immersive. I think there will be a huge market for creative people building the storylines and stat systems that look and feel like entire worlds that people will live inside of. Imagine how everyone loved Game of Thrones for all those years, except you're actually one of the characters. In fact, you're one of the heroes, and you're navigating through the difficulty, right? Or it could be a superhero-themed thing, or a romance-themed thing, or fantasy, or whatever the structure is. So imagine basically your favorite book or TV show or series or movie, and we build fully immersive realities based on those for people to spend their time inside of. And the problem is, it can't all be entertainment, because people need a purpose. They need some sort of drive. I think we're going to find ways to create that purpose inside these realities. I have a piece, I need to go dig it up, from like 2002, where I basically said that the games of the future will be people going in and doing the work and the jobs inside there. So you'll go in there and you'll be a cop, okay? And you'll see someone coming down the street with the music too loud, or beating somebody up, and you'll go and actually engage in a fight with this person to try to restrain them. Or maybe you're the bad guy and you're just going to go beat somebody up, and it's someone else's job to be the good guy, and they're actually paid or rewarded in some way.
So other people inside the game can give them something, or the system can reward them in a way that's good on the outside as well. Now, in most scenarios, your house and your game subscription and your game gear and everything is being paid for by UBI. In this world that I'm thinking of, most of the jobs are gone and people are being paid essentially to live inside these realities, which is massively dystopian, I want to be very clear about that, if that is the only option. What I'm saying is that when the jobs go away, there will be a lot of people who want to live these lives, because we are creative. We read these books, we see these movies, and we know that's on offer; that's the thing we could potentially have. So if games make it so we can actually have it, so we can actually be that hero, we will want to do it. But it will be empty and weak and shallow and lame and horribly destructive if it's nothing but violence and porn, and there definitely will be games that are nothing but that, right? But that's all entertainment. It's empty, it's hollow, it's nothing. The better version of this, which I hope happens, and definitely could happen if we're wielding ASI, or if ASI takes over and is doing this in a benign way, would build meaning into the system but wouldn't require that you take part. You could also take off the gear, put it aside, go outside, start a community, start a farm. Till the land. You tend to the land, you grow crops, you make food, people eat the food, you have families, you have kids. So imagine this: there are societies that are kind of like the Amish. They don't play the games. And maybe there are more advanced technological versions of the Amish, where it's like the 1950s or something, right?
In this ideal kind of world, except it's everywhere and everyone can participate. I would argue the actual reality is going to look like this: there are going to be some communities like that. Some communities will be super Amish, like Ted Kaczynski, tech-is-bad types, and they'll just do their own thing. Government should leave them alone. They are productive members of society. They have their own schools, their own religion, their own stuff. They're making their own food. Leave them alone; everything's fine. They don't have to play the games. And a bunch of people, maybe some of their kids, and that'll be bad, are going to say: are you kidding me? I want to be a superhero. I want to be a princess. I want to take over the world. I want to do all these things. Fine. Go find your meaning inside the game system. Importantly, there is going to be a period of time in which we're going to need something like this as a transition, because so many people are going to lose their meaning that they're going to have to go into this thing. I think there's going to be a requirement for an ASI, for a government, for something, to say: look, we need to give out money, and we need to give them something awesome. I think game companies, whoever it is, Ubisoft, Microsoft, Sony, all these people, are actually going to get hired by the government to create these giant-scale games that people can live inside of. There's going to be haptic gear. Think of it like the military push to invent the bomb, except this push is going to be to invent really immersive technology, because we need it. Because guess what? It is actually a military concern. It is actually a military-level requirement that we do not have hundreds of millions, or billions, of people with no meaning and no money, because they will in fact light everything on fire, and that will be bad.
So there will be a period where we actually need to invent something like this to get us through, something that kind of calms everyone down. But if you don't do it well, people will be satiated and they will be non-violent, but it will start to rot. It will rot the human soul. Because what are they doing for creativity? What are they making? What are they building? Are they making families? Who's making the kids? Right? I mean, the games didn't solve aging. So a bunch of people hook into the games, they don't meet anyone, and two generations later we don't have any people. Or actually, technically, I think everyone dies off in one generation if you're not reproducing. So that is only a temporary solution to not have the world blow up. Ultimately you have to solve the meaning problem, which either happens outside the game, which I think is going to happen and should happen, or inside the game. And I think what's most likely in a benign situation is that you'll have both. Recommendation of the week: I've been a bit obsessed with problem definition lately, like I talked about with the Fabric pattern. And here's my recommendation: get really good at articulating and prioritizing your problems. Write them out in vast detail. Make yourself an expert in your problems. It takes away their power, kind of like staring directly at anger when you're meditating. It takes the spin off the anger. If you look away, the anger hurts you again; if you look directly at the anger, you're like, mm, that's not so bad. The same happens with your problems. They become approachable. And this also happens to be the key to brilliant AI prompting, because it's an extension of knowing yourself so you can articulate yourself. And the aphorism of the week:
"When I have one week to solve a seemingly impossible problem, I spend six days defining it, and then the solution becomes obvious." Albert Einstein. Unsupervised Learning is produced and edited by Daniel Miessler on a Neumann U87 Ai microphone using Hindenburg. Intro and outro music is by Zombie with a Y. And to get the text and links from this episode, sign up for the newsletter version of the show at danielmiessler.com/newsletter.

We'll see you next time.