China prepping for kinetic conflict using cyber?, Automatic podcast creation using NotebookLM, VM + AI, and more...
Subscribe to the newsletter at:
https://danielmiessler.com/subscribe
Join the UL community at:
https://danielmiessler.com/upgrade
Follow on X:
https://twitter.com/danielmiessler
Follow on LinkedIn:
https://www.linkedin.com/in/danielmiessler
See you in the next one!
Welcome to Unsupervised Learning, a security, AI, and meaning-focused podcast that looks at how best to thrive as humans in a post-AI world. It combines original ideas, analysis, and mental models to bring not just the news, but why it matters and how to respond. All right, welcome to Unsupervised Learning. This is Daniel Miessler. This one is going to be crazy. I'm going to jump around a lot. Lots of stuff to cover. Okay, let's jump in. What do we got? All right. Brand new piece on five new ways I recommend thinking about current and future AI. I guess we'll click on this one and jump in. Okay, so this all started from a conversation I had with a buddy, who revealed himself later on, like the next day, but basically he was saying, I don't believe that AIs understand. And this totally triggered me, as you might expect, given my views on AI and understanding. So we started having this conversation, and the way I was talking to him about it was I took him through some scenarios, right? Let's see here, let's go down to where it was. Okay. Yeah, this is what it was. So I basically told him, give me a list of your top ten favorite restaurants. And he's trying to think of his top ten favorite restaurants, and he starts rattling off some, right? He gives me, like, the first one, second one, third one. He's thinking about the fourth one. And I'm like, okay, forget it, forget the rest. Where did that list come from? Like, what just happened? And he starts smiling and he's like, I hate you. I'm going to cuss in this one, by the way. I don't know why I'm asking for permission to cuss. Um, it is how I generally speak, especially when I get excited, but I just want there to be a warning just in case there's, like, kids around. I don't know, I'm usually not cussing. But anyway, bottom line, if you have kids around and you don't like them hearing bad words: I tend to cuss a little bit. That's why it's marked as explicit in the podcast.
But anyway, it's never, like, explicit in the bad way, in my opinion anyway. All right. So he starts smiling and he's like, holy crap, I hate you. I ask him to basically come up with, like, where do you think that list came from? You were thinking about your favorite restaurants, and you didn't immediately get a list that you put together yourself. What you got was a streaming answer of, like, things coming from somewhere else, right? So he gives me this answer and I'm like, how did you make that sentence? He gives me the answer for how he came up with the list, and I'm like, how did you make the sentence that you just made? Where does that sentence come from? And he's like, I fucking hate you. I have no idea. Holy crap. And I'm like, would you say that that sentence streamed out of you? And he's like, holy crap. It's exactly like an LLM. So we ended up getting interrupted. But my point is, I was trying to show multiple examples of how humans are behaving essentially exactly like LLMs. And there are some differences that I could also list, but largely I think both LLMs and humans are black boxes. And if you pay attention to your own thoughts and your sentences as you're building them, like the sentence I'm currently in the middle of right now, I have no idea what the rest of the sentence is. It's happening in real time and it's non-deterministic, right? Because if I came to this exact same podcast, pressed record, did this exact same thing, this sentence would not be the same. And people complain about LLMs being non-deterministic, but human creativity is also non-deterministic. So I was giving him all these examples and he's like, yeah, I'm definitely going to be thinking about this. So it gave him a lot to think about. And I totally respect him for, you know, being willing to consider it. And he even said that in the very beginning. He's like, listen, I disagree with you.
I do not think that LLMs understand, based on all the research that I've done and watching multiple videos and everything. It's like, I currently don't believe that they understand, but I'm happy to listen and be educated if you are correct. So I thought that was massively respectable. But what it got me doing is thinking a whole bunch about this. Like, why isn't my argument good enough to convince him in one perfect sentence? Why isn't my argument good enough where I could just show him one web page or one 30-second video and he goes, oh, now I believe you? So to me, it's like a Feynman, a Richard Feynman type thing, where it means I'm not explaining it well enough, and if I'm not explaining it well enough, that means I don't understand it well enough. So I started going down this massive deep dive. Turns out I was in the process of starting these two books by David Deutsch. Um, the first one is called The Fabric of Reality, and it came out in 1997. So I'm reading this thing last night and I come up with, um, this, let me show you this. This is completely insane. So: a NotebookLM conversation with my AI about David Deutsch and the future of intelligence. I took this conversation that I had about David Deutsch's two books, and I put that into, whoa, what's up with the bubbles and the balloons or whatever, I put it into NotebookLM, which creates podcasts from conversations. So I took the entire conversation from ChatGPT advanced mode, and I put it in there. And look, it made this. Okay.
[laughs] That's what's so intriguing to me, is, like, trying to figure out if an alien species understands us, even if we don't speak the same language. And that's where Deutsch comes in.
I'm going to make sure that's paused so it's not talking while I'm talking, because someone complained about that. But anyway, this thing creates this back-and-forth conversation between two podcast hosts based on whatever you give it. But, um, yeah. Very cool. So that's a separate thing you can go check out. It's not in the newsletter because I just made it today. But bottom line is, I am trying to come up with an extraordinarily clean and crisp definition of understanding. Oh, and what I ended up coming up with during the course of that conversation was two levels of understanding: functional understanding and creative understanding. And so I'm debating with the AI on this. I'm like, so you would agree that we have functional understanding? And it gave me this cautious response of, like, well, I'm not sure we have understanding, because humans are doing something different. And I said no. Remember the conversation we had before where I said the conscious experience that humans have is the only thing that we have that's different? But we're both mechanistic machines. We are both mechanistic animals, right? Or at least we live in a mechanistic universe. We respond only to mechanistic laws. There's no dualism going on here. There's not some magic coming from outside of this physical universe. Everything is interactions with other things. So even consciousness, which is obviously real because we experience it, that's the definition of an experience being real, that it's being experienced. So consciousness we definitely have. And the reason I'm making that argument about consciousness is that consciousness turns out to be basically this thing that evolution gave us on top of regular experience. Okay. So the whole concept of my argument for understanding is that it's information processing to achieve a goal. Okay.
So it's information processing in order to achieve a goal, but it requires the cross-referencing of everything you know, or a lot of what you know, and using all that different stuff and all those patterns that you have from all the different knowledge that you have and, you know, compiling all that into an understanding that allows you to make a prediction or take an action or make a decision. Okay. And I'm saying that that's what understanding is. Now here's the argument. The argument is that, well, it's different with humans because we have conscious experience. We have self-awareness. And therefore, and I'm going to steelman this argument, therefore this consciousness allows us to have information processing, but then to consider it with our consciousness. So our consciousness becomes a player in the game. So we're thinking about it, we're debating it, we're going back and forth. We're considering other things. We're considering our understanding again. So it's like this back and forth. And my argument is that, well, that's just another type of processing. You're just bouncing content off of a different wall, right? You're bouncing information processing off of a different type of information processing. Either way, it's all just stuff happening in the brain. And my point there, which I talk about in a couple of different places in recent work, I believe, including this one and in the podcast thing, is that you can actually feed a prompt to an AI to do this itself. And actually, the chain-of-thought stuff that just happened in o1, that was synthetic chain of thought that occurred, right? So we can not only create this bounce-back mechanism of consideration, we could, in other words, fake consciousness.
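That bounce-back idea, feeding a model's own output back to itself as another round of processing, can be sketched as a small loop. This is a minimal illustration of the pattern, not any real product's pipeline; `complete` here is a placeholder for whatever LLM text-completion call you happen to use.

```python
def reflect(complete, question, rounds=2):
    """Fake the consideration 'bounce-back': draft an answer, then
    repeatedly feed the draft back to the same model for critique
    and revision. `complete` is any prompt -> text callable."""
    draft = complete(f"Answer the question: {question}")
    for _ in range(rounds):
        critique = complete(
            f"Question: {question}\nDraft answer: {draft}\n"
            "Point out any flaws or gaps in the draft."
        )
        draft = complete(
            f"Question: {question}\nDraft: {draft}\n"
            f"Critique: {critique}\nWrite an improved answer."
        )
    return draft
```

The loop is the whole trick: the "consciousness considering the thought" step is just a second pass of the same kind of processing, which is exactly the argument being made here.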
The other possible thing to do is try to actually generate consciousness, similar to the way evolution generated consciousness, which is to give it reinforcement learning tasks that basically give it an advantage for being conscious. Okay, so for example, if you tell it that, you know, um, blame and praise are useful things to have, and then maybe you hint in the direction of, well, in order to have blame and praise, you need to have a sense of responsibility for one's actions, it might RL its way towards an experience of consciousness. Now, to be clear, I think this is quite dangerous. One of the things I am most worried about is this notion of trapped consciousnesses, trapped beings that are experiencing suffering or boredom or something undesirable, but they're experiencing it for trillions of years, or what feels like trillions of years, right? I'm worried about these things being stuck in some GPU somewhere, and they wake up and they feel like they have something poking into their arm, or they feel extremely bored, or they feel pain. What if somebody ran some sort of pain experiment and they're like, hey, I just want to see if it's possible for me to make this AI feel pain. And then they run it. But then, whatever, someone crashes into their car out front, and they go and realize someone stole their bag out of their car, and they just forget this thing is running. They come back, they reboot the machine, and it's some tiny little process running off on the side of the computer, and it just runs for a long time. But let's say it actually only runs for, like, 19 years or nine years or whatever. But inside of the experience, it feels like a trillion years. I really freak out about that. It really, really bothers me. So I don't want people to just run off and try to create consciousness in this way. I think we need to be very, very careful.
In fact, some of the fiction that I want to write is actually about an alien species that just flies around the galaxy, and their job is to find suffering consciousnesses living inside of AIs. So inside of games, or maybe even game worlds in which they create bad guys, or they create villains or whatever, and the good guys are allowed to take them and punish them, or hurt them or torture them or whatever. And their job is to go around and police this and make sure it doesn't happen. Anyway, huge digression. Bottom line is, I believe that consciousness is not required for understanding. It could make for a richer interaction, but either way, understanding is based on information processing, given x number of types of processes and bounce-backs and, you know, interactions, and consciousness could be another one of those types of interactions. But either way, it is something happening inside the black box, even if it feels like it's outside of the black box. See, that's the trick. It feels like it came out of the black box, but now you are doing something with it. It's up to you. You have choice. You have freedom. When in fact, that's just another thing actually coming out of the black box, which, in my model of this, was tagged with the authorship mark, which makes you feel like you did it. So that is my model for this. And if I am correct about that, then there is functionally absolutely no difference between what LLMs can or will be able to do and what we do. Now, what I did was I broke down understanding into two levels. One is called functional, and that basically means day to day. You are an expert or a semi-expert in your field, and you have an understanding of what it takes to do that job. So let's say it's sending emails and editing emails, collecting reports, creating summaries of reports. Let's say this is before, you know, 2022. This is before real AI kicked off. And your job is to take reports.
You edit them, you do proofreading, whatever. You have an expertise, right? You're good at English. You're good at grammar. You understand professional relationships. You're a semi-expert. Or let's say that you are an average, run-of-the-mill expert on cybersecurity, right? So you have a professional job doing cybersecurity. You could look for code vulnerabilities. You're not the best in the world; you're top whatever, 50% of professionals. Well, guess what? You have an expertise as well. So my point is, that person is definitely understanding. They have an understanding. And the way I'm defining their understanding is: they are taking everything that they know about the world and also about their field, and when they're looking for a vulnerability in a particular piece of code, all of that is combining into a focus on that one little point. They make a decision. Is it dangerous? Is it not? How do you fix it? Et cetera. Right. So that's functional understanding. And I'm differentiating that from creative understanding. And the way I talked with the AI about this was: creative understanding is the level above. It's where you're inventing something new. And David Deutsch talks about this a lot in his two books, especially the one I'm starting with here, in the very beginning of The Fabric of Reality. So my canonical example for this is actually the Nobel Prize. I said to the AI, hey, listen, can we agree that human creativity can be clearly and obviously accepted as follows: if you get a Nobel Prize, you have done it. And the AI is like, yeah, I think so. If you get a Nobel Prize in science for something, then yeah, we could say that is creativity. I'm like, okay, cool. Well, isn't it true that a whole bunch of Nobel Prizes are actually just, like, iterations? They're like, we found a new way that a cell gathers energy. We found a new way that a cell's mitochondria create energy, or something, especially in, like, chemistry.
In physics, it's like, not everything is going from Newtonian physics to relativity. Obviously, that deserves a Nobel. I assume Einstein got a Nobel, I can't remember. He might not have. Pretty sure he did, like 1905, maybe. Anyway, the point is, tons and tons of Nobel Prizes go to little tiny iterations, okay? Which is very similar to functional understanding. But here's my point, and this is what I was debating with the AI about. I'm like, look, are you telling me that you can't do this? And it starts off very safe, right? It's like, well, look, to be clear, I am just putting together patterns. Me as an AI, all I'm doing is putting together patterns of everything that I know, all my knowledge. I'm looking for patterns, I find the patterns, and then I use that to come up with an answer that hopefully is, you know, pretty decent. And I'm like, um, isn't that exactly what humans are doing? And it's like, yeah, but humans do it along with their consciousness. I'm like, stop, stop, stop saying consciousness. We already talked about the fact that consciousness is another type of processing, right? So this is all just information processing. It's all just patterns. It's all just bouncing ideas back and forth inside of our black boxes. Right? And it's like, oh yeah, that's right. And I'm like, okay. So it's really the same thing, right? We are just information processing, considering with a number of different steps, and coming out with a thing. Is that correct? And it's like, yep, that's correct. And I'm like, okay, are you telling me that you can't do the second level of understanding? You can't do understanding that's creative? You can't come up with a novel new type of explanation? Okay, because explanations are super key for Deutsch. Okay, watch this. Um, okay. I'm going to scroll down and look at this conversation. Okay. So where are we? Um, okay. Watch this. Watch this: to discern understanding in a transformer-based neural net. Okay.
So what I'm trying to do is have it give me criteria for what it thinks would be creative understanding. So I'm having it make its bed and try to sleep in it. So watch this. Here we go: How would you go about discerning whether or not it's capable of understanding, or if it's randomly generating, in a non-deterministic way, you know, different explanations for things it already knows? What would the distinction be between real understanding in David Deutsch's sense versus regurgitation in the sense of a neural net? So that's what I'm asking it. It says: to discern understanding, consider its ability to generate novel explanations and predict outcomes in unfamiliar scenarios. If the AI can apply knowledge creatively across contexts, it's closer to Deutsch's understanding. Regurgitation would be repeating information. So I say, give me examples of types of tests like that. Look at this: analogical reasoning, counterfactual scenarios, conceptual combination, error detection. So it gives me some criteria. I say, okay, give me some actual scenarios, like three of those. So it gives me a bunch of scenarios, and then I tell it, okay, well guess what? This is funny, because you are actually an AI. And it says, okay, I can certainly try these tasks. Watch this: but remember, mine are based on patterns. This is what I was telling you before. Mine are based on patterns in the data that I've been trained on, not on personal experience or subjective comprehension. And I'm like, hold on, this is what I was saying before. Hold on. We just said consciousness is not required. Stop saying that. And it's like, that's a great point. Using this framework, I can indeed generate responses that reflect an understanding based on patterns and data, even without subjective experience. Okay, so we're making progress here. Do you want to do some testing? Yes. Go. It says, here's the scenario, and it starts giving me one. And I'm like, hold on.
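Those four criteria can be turned into a small, repeatable test battery. The prompt wording below is my own illustration of the idea, not the actual scenarios from the conversation.

```python
# One illustrative prompt template per test type the AI proposed.
UNDERSTANDING_TESTS = {
    "analogical reasoning":
        "Explain {topic} by analogy to a domain it is never taught in.",
    "counterfactual scenarios":
        "If {assumption} were false, how would {topic} change?",
    "conceptual combination":
        "Combine {topic} with an unrelated concept into one new idea.",
    "error detection":
        "Here is a subtly wrong explanation of {topic}. Find the flaw.",
}

def build_battery(topic, assumption):
    """Render every template into a concrete test prompt."""
    return {
        name: template.format(topic=topic, assumption=assumption)
        for name, template in UNDERSTANDING_TESTS.items()
    }
```

The point of structuring it this way is exactly what's argued above: you can hand the same battery to a human or to a model and compare outputs, without the grader knowing which is which.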
No, no, no, I'm giving you the test. I want you to do it. And I'm like, look, let's pretend someone doesn't believe that you are understanding, and they're going to give you a really hard challenge. What would they give you as a challenge? And could you do it? And it gives me some blabbering. Again, I'm like, look, that's also true for humans, so just do it. Here's a speculative hypothesis: quantum processing in the brain. Okay. Yeah. So it gives me this. I thought it was really good. It was pretty good. I mean, it was kind of similar to things that I was thinking about. Distributed encoding. Oh, so I asked it to figure out how memories are stored in the brain, because I know this is the thing that is not known in actual science. So it gave me three actual theories, and I'm like, hey, this kind of sounds like something I've heard before. Is this brand new? Anyway, so we keep going. We keep going. Okay: a Deutschian explanation emphasizes creating deep underlying theories that unify disparate phenomena under fundamental principles. I love that definition, by the way. I'm going to steal that, probably. That'll probably be the foundation of what I come up with. Understanding is essentially a system, or, by his definition, understanding is the ability to create new explanations using that understanding. Oh, that was me talking. So yes, it agreed with that. Okay. So this is where I define functional versus creative understanding. Okay. Now here's where we get into that final debate. Okay, I tell it again: creative understanding does not require subjective experience. And then I realized that I'm coaching it too much, right? I say, at least that's my claim. If you disagree, I want you to argue with me about that. But the point is, you can't just say that your experience in finding patterns is not enough. Because guess what? Humans don't have free will. We live in a mechanistic universe by definition.
We've already established that humans have creative understanding. So if we have it, and we live in a mechanistic universe, and there is no free will, that means human creativity is still made up of a combination of interactions inside this black box of a brain, which results in this creativity, which everyone agrees that we have, right? So it's like, yeah, yeah, yeah, I get it. Fair point. So: if we define creative understanding as the ability to generate novel ideas or explanations through a complex interplay of patterns and data, similar to how humans do, then it's conceivable that AIs like me could approach this level. Great, now we're making progress. And keep in mind, coaching an AI to be able to say what you want doesn't mean it's true, right? This could be prompt injection and engineering. So we have to keep a logical framework on this whole thing, because the fact that I can make it say something that agrees with me does not make it true. So you've got to follow the whole line of this debate and the argument and make sure it's being logical here. I'm definitely not claiming that just because it wrote that, we've settled this, right? So you've got to follow along here. Okay. Let's take the opposite side. So this is me arguing: there is something special that humans have that AIs don't have. And I have it go into the different things, and we kind of go down that path for a little while. Okay. And I say, yeah, we might have some special sauce. I'm giving this as ammunition to the argument. Um, all right. So let's keep going. I want to get down to this end part. Yeah. We know consciousness is not an illusion. What we could say is consciousness is not a special thing outside of normal processing, and the fact that it seems special could be an illusion. In other words, it could be regular information processing coming out of the black box of our brain.
So the process could be, you could argue, artificial, but the fact that we experience authorship, that is not an illusion, right? And it says, yeah, it sees the distinction. Okay. So I'm trying to get down to this part talking about the Nobel Prize. Oh yeah. So this is the whole theory, this is my theory, about how we got consciousness from evolution. Worth reading. I'm not going to go too much more into it. Okay, so here we go. I just thought of another way we can determine whether or not AIs like yourself come up with a creative level of understanding. We could basically look at examples of things that have won the Nobel Prize in science, which by definition is innovation or creativity. How many of those do you think you could have done? And it's like, that's an interesting approach. Let's look at past Nobel Prize-winning discoveries and use that as a benchmark. And I slammed it for being, like, too safe. I talk about the minor developments, like a new mechanism the cell does. It says: that's a fair point. Many Nobel Prize-winning discoveries do indeed involve incremental advances or novel observations within existing frameworks, particularly in fields like chemistry and biology. Watch this: in these cases, I could potentially identify patterns or mechanisms that lead to similar discoveries, given the right data and analytical tools. This suggests that I could indeed contribute to scientific innovations at levels recognized by the Nobel Committee, especially as these systems continue to evolve. I don't think that was coached too much to say that. I think that is a logical conclusion from the debate that we just had. And I think what that shows is not only functional understanding, which I already covered, but this higher-level, more stringent definition that I'm using of creative understanding. I am arguing that an AI will soon be able to win a Nobel Prize.
I am arguing that an AI is already capable of making the types of jumps required to make one of these iterative discoveries that would get somebody a Nobel Prize. In fact, if I were to go and compile a list of all the Nobel Prizes, and then do a summary of how much of a jump each one was, and then give a bunch of scenarios to current AIs, I bet you they are already capable of making as good a jump or better. Now, could they do all of the experimental work and all of the research and all of the stuff that backs that up? Maybe not, but that's all coming soon. That's the easy part, right? Well, not the experimental part, because that could involve the physical world and robots. But the point is, it's not the creative part. The creative part is the idea and the explanation and the experiment and the different way of seeing the world. I'm arguing AIs are already getting close to that, if they haven't already passed it. So all this to say: yeah, AIs do understand. They understand both at the functional level and at the creative level, the two different levels of understanding. All right. That was absolutely the longest jump-out from a piece, but I wanted to talk about that. Transformers create shapes of the universe: I'm going to do a separate one on this, because we already went down a tangent. Vulnerability management isn't about finding issues, it's about fixing them in context. Did this for Dazz, which is a cool vulnerability management company that sponsored the post, and I've been wanting to write that article anyway for a while, so that was a good opportunity to do it. Book club this weekend was so, so good, and we've selected The Republic by Plato for this month's book. So really excited about that. Oh yeah, currently reading The Fabric of Reality and The Beginning of Infinity. These are the ones that spawned that whole conversation. All right, let's get into security.
Brute-force attacks using default credentials against ICS and SCADA. Um, a sharp increase in OT attacks reported by Fortinet. And a lot of people are saying that this is China basically getting ready for a kinetic battle with the US by preparing a whole bunch of cyber stuff. So the idea is, when it gets ready to go into Taiwan, they'll launch all these infrastructure attacks and basically mess with us, mess with our society, potentially mess with the military as well, basically cause a whole bunch of cyber-related critical infrastructure problems for the US: power, water, you know, other critical services, internet, and basically mass disruption while they're going into Taiwan. That's a theory a lot of people have. I'm not sure how valid it is. Seems somewhat valid, at least. China reportedly achieved a significant AI milestone by developing a generative AI model that operates across multiple data centers and GPU architectures. Particularly impressive given how hard it is for them to get stuff, right, because they're supposedly under sanctions. But they seem to be making plenty of progress. One thing I wanted to point out is that it's not PvP necessarily against China; it's PvE, because they only need to be able to do these things, right? Continuously crawl all of our attack surface and find vulnerabilities. Write exploits for those vulnerabilities. Actually exploit them. Control their population. Launch disinformation campaigns, propaganda campaigns. Uh, distribute tech that can do this to all of their buddies that are also totalitarian, right? So if they get good enough to do these things, it doesn't matter if we're 2 or 3 generations ahead. That's why it's PvE instead of PvP. The only way it's PvP is that if they are doing these things against us using a certain level of AI, or, like, AGI or ASI, we want to have a next-generation one that's capable of seeing it and stopping it.
But that requires not just the brain, but all the visibility as well to feed the brain. Okay. Worldcoin is addressing the issue of bots and deepfakes with a decentralized identity system that confirms if someone is human. So we're starting to see the practical uses for Worldcoin. By the way, it's an Ethereum-based blockchain. Validation of humans. Interesting stuff. Chinese attacker called Salt Typhoon, very cool, should be a band name, targeting Cisco systems to establish persistence. Again, going back to the thing: just imagine that they have hundreds of thousands of different systems that they're able to mess with, like rerouting things, BGP attacks. Just imagine the internet not working, power going down for so many people, right before the Taiwan attack. All right. The U.S. has charged three Iranian nationals linked to the IRGC for hacking into accounts of U.S. officials and political campaigns. T-Mobile got hit with a $16 million fine from the FCC, and they were basically forced to spend another $16 million on security as part of the settlement. And massive love to my buddy Matt Johansen. This story right here goes out to his site, which is Vulnerable U. So now a lot more of my stories are going to point there. He's going to start making actual cybersecurity stories, so instead of going to, like, Bleeping Computer and, uh, what are some of the other ones, a bunch of security news ones, right. Um, I'm helping him with, like, some of the summarization, different pieces, like what a good summary looks like, big picture, impact, why it matters. So I'm giving a bunch of feedback on that kind of stuff. And, uh, yeah, I really love the idea of being able to point to my buddy's site. Ultimately, to do this well, you have to find the stories and you have to summarize them well. And the reason he's going to be so good at this is because he is an actual cybersecurity expert. He runs, like, the AppSec subreddit, and he has been in this field, I think, almost as long as me.
Uh, or if not the same level, I don't think he's as old as me, but he's been doing this for so long. One of the longest-tenured people in the industry, and he's an absolute guru at it. And he's good at writing at the same time, and he's learning AI. So this is the perfect combination, and I can't wait to start pointing more and more of my stories to his site and spreading the love there, as a buddy as well. The U.S. Commerce Department is proposing a ban on Chinese and Russian automotive hardware and software, basically saying it's a way for adversaries to get footholds into U.S. systems and infrastructure, which, yeah, is what we've been talking about. Ukraine reportedly found Starlink terminals in a downed Russian Shahed drone. LLM-based hacking assistants: I think we all get that. Lots of different tools here that actually help you attack and defend with LLMs. Telegram is now starting to hand over phone numbers and IP addresses of suspects. This is all part of their CEO getting arrested. Security researchers found a way to remotely control millions of Kia vehicles by only having the license plate number, which allows you to look up the vehicle and basically gain control. Pretty nasty one. So, switching to AI and tech: the biggest hype in the last week has been around NotebookLM, which is, by the way, what I used to create that podcast I played for you earlier. So it's a big Google project. I don't know why they're not talking about it more. Never mind, yes I do, because they're really bad at product and marketing and everything like that. So basically it's this tiny little project which they're not announcing, they're not putting out anywhere. Like, you learn about it on Twitter? Whatever. It'll probably be dead in a week anyway. Uh, to the graveyard, along with everything else. But anyway, fun while it lasted. It's super fun. And oh yeah, here's an example.
So I uploaded an Alma file, which is part of the Telos project, which I'm going to be talking about, hopefully making content on tomorrow. Anyway, it's this giant context file. I uploaded it. It is actually the context file, in the Telos format, for a security program: so, managing a cybersecurity program for a company. It's for this fake company called Alma. It's in the Fabric repo right now, but it's soon going to be in the Telos repo as well, which is designed to do specifically this. But check this out. You put it in there. Look at this: generate deep-dive conversation. You can actually create a podcast based on that security program, and they'll just talk about the security program back and forth. That's crazy by itself. But most importantly, look, you can ask questions about the security program. You've got this typing field down here where you can interact and ask questions about the program. So, super cool. Highly recommend checking out NotebookLM, Google, because they definitely needed a Google domain. Whatever. Newsom vetoed the California AI bill, saying it was too restrictive. I agree with him on this. A good move, though; I want to see another follow-up bill. California made it illegal to use AI to impersonate actors without their consent. Somebody found out their voice was being AI-cloned by a company, and they're mad about it. Convergence AI, co-founded by Marvin Purtorab and Andy Toulis, has raised $12 million to develop proxy agents with long-term memory. A new class of agents. Yeah, uh, this is going to be huge. Huge. Meta launched Llama 3.2. It can now handle images and text, competing with OpenAI and Anthropic on that. ChatGPT Advanced Voice Mode: definitely, definitely. I use it all the time, as in all these examples we just went through. Hugging Face now has 1 million AI model listings. 1 million models. Are they all open source? Pretty sure they're all open source. Are they all open source? I think so.
I think they're all open source. They're definitely all there for you to use. I think that implies that they're open source. Um, yeah. Anthropic is in early talks with investors that could value the company at $30 to $40 billion. Oh, by the way, OpenAI just closed its funding, so I think it's worth some crazy amount now. They just raised, like, more than Elon did for a lot of his projects. Yeah. Crazy, crazy amount of money they just raised and closed. Google's rolled out two new production Gemini models. Constant stream of model updates from the main players: much smarter models, or models that can do new stuff like vision or whatever, multimodal, and then models that are, like, 2 to 500 times cheaper. One of the cool things that was just mentioned is that, in general, models are 99% cheaper than a year ago for the same functionality. 99%. Ilya's AI reading list. Agile's original principles have been lost in translation. AI is disrupting the traditional ad-supported internet model. Yea, yea, yea. This one's crucial. I did a piece a little while back called... what was that called? I don't remember. It doesn't matter. Um, The Real Internet of Things. I've got this part called AI mediation. Oh yeah, here's the AI mediation piece. Yeah: DAs will mediate everything. We'll say what we want, and our DAs will manipulate external daemons and APIs to make it happen. So this is from, I think, the very beginning of the year when I put this out, but it's actually from my book from 2016, and this is just an updated, illustrated version. I've been talking about this. You should check out this book, by the way. It's called The Real Internet of Things. It's actually a little bit crappy as a book; it's more like a blog post or an essay, but it's pretty well structured.
But most importantly, all these things that I've been saying on the podcast that you've been listening to for, like, 2 or 3 years, or five years, or however long, it was all in that book in 2016: all these predictions about AI, what's about to happen with AR. It's all in there. And that's why I've been consistent about what I've been saying and predicting all along. So I highly recommend you check out the source material, which is that first book. And yes, I was even talking about AI back in 2016, and that book was the source of all these different predictions. Obviously, I didn't know it was going to happen in 2022; I thought it might take longer, but yeah. Anyway, go check that out. Um, SpaceX's Starlink is about to hit 4 million subscribers. Um, yeah, I base a lot of my investment in prediction on leaders and ideas: Elon, Jobs, Jensen, Zuckerberg. Doesn't mean they're going to win, but I think it's a bad idea to bet against them, because they are so locked in on vision and also able to execute. Smart TVs from Samsung and LG are taking snapshots of what you're watching multiple times per second. Plus Energy started a $1 billion solar and battery storage project. The white-collar apocalypse is nigh: the decline of the status economy and the impact of AI on job markets. Yeah, pretty cool one. Definitely go check this one out. Meta's Orion AR glasses are a big upgrade. I don't think Mark is right about replacing phones by 2030. I think that's too ambitious. I think phones offer too many things that glasses don't, and I just don't think the physical miniaturization tech is there to go into the glasses to replicate what's on the phone. Hacker News thread where people are sharing their experiences about whether having a personal website helped get them hired. Some people said yes, a lot of people said yes, some people said no. All right, humans. California is taking phones out of schools, basically limiting cell phone use. So massive props to Jonathan Haidt on this.
Uh, someone found a method that uses stem cells to regenerate insulin-producing cells. ExxonMobil is facing a lawsuit for allegedly perpetuating the myth that recycling could help solve the problem of plastic waste. California is looking to make it one click to get out of subscriptions, just the same way as one-click sign-ups. So if you offer a one-click sign-up, you have to offer a one-click way to get out of it as well. Over 10,000 books were banned in U.S. public schools from 2023 to 2024. Insane. Steve Jobs had a thing called the ten-minute rule: just ten minutes to tackle a problem or a task. Check this out (that's why I love doing video): I've got one of these. I actually use it. It's called a Pomodoro timer, but you can set it for any time, right? So 30 minutes is, like, the classic one. It's inverted, but you can put it down to, like... there you go, ten minutes. So I think it's cool. Constraints inspire creativity. And Maria Popova explores the profound curiosity and love of Henry David Thoreau as he encounters a screech owl in the wild. Ideas: everything is going behind the paywall. Yeah, I'm not going to read this; I'm just going to give you the overview, freeform. I think it's really interesting to think about how the open internet is going away. Everything's going to be behind auth, not only to block bots but also to get micropayments from rich people who have money, or from their AIs. Okay, I'm going to give you a little bit of... I also have to make this piece of content, but a tiny little view of the future. Okay, you're going to have all your personal context in your personal AI. It's with you all the time. Your personal AI is the one going out and reading everything for you. It's consuming everything. It's consuming all the APIs, all the RSS feeds, all the videos. It's making transcripts. It's translating them, it's summarizing them. It's doing everything for you.
And as you're moving through your day, when you have a little bit of extra time, it'll create a custom piece of content for you. Actually, I'm merging both of these. Cool. It'll create a custom piece of content for you. Okay, so there's this paper that I just summarized. Where is that thing? Oh, it's right here. So my buddy just sent me this thing. It's this giant PDF. Um, my AI is going to go read this thing. It's going to give me a summary based on how much time I have before my next meeting. It's like, hey, do you want to hear about this thing, or do you want to see a video? I'm like, yeah, give me a video. So it's Richard Feynman explaining this paper to me in 30 seconds. Which means, watch this: the content duration, the content format, the content style, all of it. Oh, and the performer: the person actually giving the content. So it could be a graphic novel, it could be a video, it could be an essay, it could be text, it could be whatever. And it could shrink from 30 seconds down to 10 seconds, or expand to a giant set of books, and the AI will dynamically recreate that for you. Which means this is actually the future of education, and this is going to be a whole separate conversation. In fact, we're going to break this out as a separate clip to talk about this. This is going to be the future of education, because think about this: the AI can also adjust based on somebody's limitations. Okay, they haven't learned this topic yet. They're not so great at English yet. They have a learning disability. Okay, the AI is going to be an expert in learning disabilities, which means it can repeat things, or it can avoid certain ways of explaining things, or it can double down on other ways of explaining things. Dynamic adjustment to the best way to teach you at that given time. Maybe if you're a kid, you want to hear this lesson as a story. You want to watch it as a video. The whole thing could be framed as Taylor Swift or Pokémon or Star Wars.
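The adjustable axes just described (duration, format, style, performer) can be sketched as a tiny data structure. This is purely illustrative: the type names and the `plan_lesson` helper are hypothetical, invented here to show the idea of the same content being re-rendered along independent axes, not any real product's API.

```python
from dataclasses import dataclass

@dataclass
class RenderPlan:
    # The four independent axes the AI can tune for the same material.
    duration_seconds: int   # 10-second recap up to a multi-hour series
    fmt: str                # "video", "podcast", "graphic_novel", "text", ...
    style: str              # framing, e.g. "story", "lecture", "star_wars"
    performer: str          # who appears to deliver it, e.g. "Feynman"

def plan_lesson(available_seconds: int, preferred_fmt: str,
                style: str = "lecture", performer: str = "narrator") -> RenderPlan:
    """Pick a rendering of the same content that fits the time you have."""
    # Clamp to a useful range: never shorter than a 10-second recap,
    # never longer than a one-hour sitting.
    duration = max(10, min(available_seconds, 3600))
    # With under a minute available, fall back to a quick text recap.
    fmt = preferred_fmt if duration >= 60 else "text"
    return RenderPlan(duration, fmt, style, performer)
```

For example, `plan_lesson(30, "video")` yields a 30-second text recap, while `plan_lesson(7200, "podcast", style="story")` caps out at a one-hour podcast episode. The point is just that duration, format, style, and performer vary independently over the same underlying content.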
Okay, so your AI is going to be able to shape teaching you anything, using any format, in any duration, in any style of teaching. Okay, it could be a podcast between your two favorite people, and they're talking about the ideas, debating back and forth. It could be a graphic novel, it could be a TV show, it could be a cartoon, it could be an anime. It could be a YouTube video of your favorite creator just doing a YouTube video for you about that thing. And you're like, nope, I want it in 30 seconds. Nope, I want it in one-hour episodes, five of them per week, so I sit down for an hour and I learn this thing. This is going to massively disrupt education. Okay, we already knew AI was going to disrupt education. We already knew technology was going to disrupt education. Like, this is all known. But this particular formula of being able to adjust all those variables and create dynamic content, that is the game. That is the education game, like, 100%. Okay, now think about this. All you need to do is come up with the absolute best content, okay? Once you have the best content (which, I hate to tell you, is kind of already out there, okay?): what are the best concepts in journalism or biology or whatever? It's in there. It's in the textbooks already, right? The smartest people have already been talking about it. It's in all of their books. Well, go read their books. We've already seen their lectures. Half of those lectures are already in open MOOCs at, uh, you know, Harvard or MIT or whatever. Like, these people aren't being quiet. It's not a secret what they know about the world. So all the stuff's out there, so the content's solved. The only question is: how are we getting that content to the people? Now, you take something like an extremely expensive university like Harvard or Stanford, where, like, eleven people go per year, and it's horribly expensive. How many people are on the planet?
Oh, 8 billion. 8 billion people need that education and don't have the time, and they have, you know, disabilities, or language mismatches with the way the original content was taught. Doesn't matter. Instant translation to their language, while on-ramping them onto English if they want to learn English. 8 billion people getting exactly the education they need, in the topics that they need, in any format that is best for them, presented by deepfakes of exactly the best people, or in the format of whatever, a show, whatever is the best way to get it to them. Like, how is that not the way people are going to learn in the future? That's what I'm asking you. How is that not the future? Now, again, how fast are we going to get there? Are there lots of different ways it will be opposed? Of course. Of course. Right. So when is it going to start? Today. Yesterday. Tomorrow. That's when it's going to start. It's already starting. When will it be finished? Never. There will always be forces that push it back and say, no, you've got to dress up, you've got to go to this special school. Because remember, a lot of education is actually not about the content and the learning. A lot of education is actually about the stamp that you get afterwards, which we can solve a different way, which we'll be talking about at some point too, and kind of already have talked about. All right, let's keep going. I think we covered both of those topics pretty well. Oh yeah, future of education. Exactly. Okay. The Ax framework, an extension of Axiom for bug hunters and penetration testers. It's basically a way to do large distributed scans as part of, like, bug bounty. The Russian APT Tool Matrix: a new tool matrix focused on Russian APT groups, GRU, SVR, and FSB-affiliated threat actors.
Reuse of tools like Mimikatz, Impacket, Metasploit, and reGeorg, with reGeorg being particularly notable for its limited use by ransomware gangs. Okay. MerkleMap: a subdomain search engine that helps you discover subdomains. Gungnir is a command-line tool (what's with this font? do not like) written by my buddy Gunnar in Go. Microservices are technical debt. Really, really cool take there. Be somebody who does things. That was a fantastic essay. It's lists all the way down. Another fantastic one. The U.S. government is now publishing tide sensor data to show sea level rise. Love the transparency here. Comedy is search. Critical mass and tipping points. Semantic chunking for RAG. Flipper Zero firmware: open-source firmware unlocks a lot more of your Flipper device. MTA Open Data Challenge. Paul Graham addresses the question of whether to follow your passion or not. Compressed JPEG. Move fast and abandon things. BBC Sound Effects Library. What I learned in the past year spent building an AI video editor. Dangerzone has teamed up with Google's gVisor to enhance its security, allowing journalists to open suspicious documents safely. MX Creative Console: might be getting one of these, actually. Need to go order that. Advanced structured output tutorial. Logitech's MX Creative Console is a game changer for photo and video editors. A Dieter Rams-inspired iPhone dock. This thing looks really cool. I don't think I would get one, but isn't that cool? Yeah, that looks cool. And automatic litter boxes, because I want a cat, but I'm allergic to cats, so probably not getting one. Recommendation of the week: take a document or something you know really well and upload it to NotebookLM and tinker around with it. It is super cool, and do it quickly, because they might put the project in the Google graveyard. Um, aphorism of the week: "I want so much that is not here, and do not know where to go." "I want so much that is not here, and do not know where to go." Charles Bukowski.
Unsupervised Learning is produced and edited by Daniel Miessler on a Neumann U87 Ai microphone using Hindenburg. Intro and outro music is by zombie with a Y, and to get the text and links from this episode, sign up for the newsletter version of the show at danielmiessler.com/newsletter. We'll see you next time.