HackerCamp Approaches, Introducing Substrate, Kaspersky--, Exim/Gitlab Vulns, Personal/Business Branding, and more…
➡ Check out the Autonomous IT Podcast:
https://community.automox.com/autonomous-it-podcasts-144
Subscribe to the newsletter at:
https://danielmiessler.com/subscribe
Join the UL community at:
https://danielmiessler.com/upgrade
Follow on X:
https://twitter.com/danielmiessler
Follow on LinkedIn:
https://www.linkedin.com/in/danielmiessler
See you in the next one!
Discussed on this episode:
Intro (00:00:00)
AGI Definitions (00:01:29)
Pinnacle Human Employees (00:02:36)
Transition to ASI (00:03:18)
Dynamic Content Summaries (00:03:48)
Deepfakes in Education (00:04:43)
AI and Disinformation (00:09:04)
Manipulation and Inequality (00:11:01)
Internet Trust and Content Verification (00:11:01)
Concerns Over AI's Impact on Society (00:12:09)
OpenAI's AGI Levels (00:13:08)
AI Startups and Future Predictions (00:15:24)
Technological Innovations (00:16:24)
Literacy Crisis in the U.S. (00:17:29)
Public Reaction to Health Risks (00:18:35)
Search for Extraterrestrial Life (00:19:19)
Exoplanets and the Drake Equation (00:19:37)
VCs in Medical Practices (00:19:57)
Conspiracies and Failures (00:20:48)
Therapy and Rumination (00:20:57)
Discovery Fluff on Lambda (00:21:12)
Securing Workflows with GitHub Actions (00:21:22)
Employee Disposability (00:21:32)
Correlation of Smoking and Lung Cancer (00:21:48)
AI in Satellite Imagery (00:22:04)
Git Commits Insights (00:22:49)
Check on Friends (00:23:12)
Judgment as a Key Skill (00:23:22)
Are you prepared for whatever shitstorm may hit your desk during the workday? Automox has your back. Check out the brand new Autonomous IT podcast. Listen in as various IT experts discuss the latest Patch Tuesday releases, mitigation tips, and custom automations to help with CVE remediations, and make new work friends. Listen now to the Autonomous IT podcast on Spotify, Apple, or wherever you tune in to podcasts.

Welcome to Unsupervised Learning, a security, AI, and meaning-focused podcast that looks at how best to thrive as humans in a post-AI world. It combines original ideas, analysis, and mental models to bring not just the news, but why it matters and how to respond.

All right, welcome to Unsupervised Learning. This is Daniel Miessler. Episode 441. 441 of these. That's pretty insane.

Okay. Friend of UL Ray Ulnar is looking for a new position as a systems engineer. You should reach out to Ray directly; I've got his LinkedIn there. He's one of the best members of UL: super smart, super kind, and very competent, as you can see from his LinkedIn. He just came onto the market, so I would grab him before he goes.

I added a couple of new levels to the AGI definitions. If we click through and go down, here are the levels. AGI level one is better, but with significant drawbacks. Two is competent but imperfect. Three is a full $80,000-a-year white-collar-worker replacement. Four, okay, this is where it gets cool: four is a world-class employee such as Andrej Karpathy or Jeff Dean. So imagine the 1% of the 1% in vision, creativity, programming, and execution ability. This is top, top tier. Imagine there are only like 100 or 1,000 of these employees on the entire planet at any given time. Wherever you want to set that bar, it's something like that. Level five is a pinnacle human employee. This is somebody like the smartest person who's ever lived.
So a John von Neumann, an Isaac Newton, a Richard Feynman, or a Claude Shannon. What this offers over tier four is the ability to invent completely new things when they don't exist, or to see and explain the world in a completely new way. Newton didn't have calculus, so he invented it. That's always astounded me. Feynman was a supreme teacher; a lot of people consider him one of the best teachers ever. Von Neumann innovated in mathematics, physics, engineering, game theory, tons of stuff. And Claude Shannon gave us information theory, the foundations of cryptography, and a million other things. So at this level, you not only have the creativity and execution of a top employee, of which there are maybe only 100 or 1,000 in the entire world at any given moment, but you also have once-in-a-generation-level innovation capabilities.

And once you're at pinnacle human, the level after that is where I cross over into ASI, which is defined logically as a level of general AI that's smarter and more capable than any human that's ever lived. That's the reason I added these: so we have a smoother transition from the end of AGI right into the beginning of ASI one. So that was that.

I feel like Apple Notes is my actual operating system and macOS is just the window manager. Just stupid jokes. How many notes do I have in Apple Notes? I'm going to look right now to emphasize this point: 3,867. Apple Notes, my operating system. Not really, but you see what I'm saying.

All right, let's get into it. Substrate. I've been thinking about and working on this one for months now and finally announced it. There's actually a whole separate Substrate video coming out very soon, so go look at that when it comes out.
I'm not going to go into detail here because I just did in that other piece.

Got a new piece on dynamic content summaries and how I think they're going to be the way we view content in the future. Essentially, the idea is that rather than getting raw content from different places, AI will produce content for us dynamically. Imagine that we love to see Richard Feynman explain things to us. Well, anytime anything complex happens in the world, we're going to have a video, a YouTube video or something like one, and it's going to be Richard Feynman, fully deepfaked. It looks exactly like Richard Feynman. He's got a piece of chalk, he's in his Caltech classroom with a giant whiteboard, and he's dynamically whiteboarding out this concept to you and explaining it with his voice and his jokes and his humor. If it's science related or complexity related, something Feynman-ish, then use Feynman. But maybe you've got some other cute influencer you like to have explain modern cultural topics, and she is deepfaked to do that for you. And of course there are going to be all sorts of considerations around whether they have permission to do that. A lot of people will say, yeah, go ahead, as long as it falls within these criteria. Some people won't care; they'll just deepfake anybody. Other times it'll be the avatar of your actual DA. Your DA, your digital assistant, your personal AI, is going to have an avatar as well. So maybe you'd like that avatar to tell it to you, because maybe that one is a coach, or honestly, it's just going to become like one of your best friends. It knows you the closest; it's intimately connected with you. So maybe it's the one teaching you. Maybe it's you, in your own voice, teaching you. Okay.
It's going to be very easy to deepfake yourself, so maybe you're explaining it to yourself, like you created the content. That would be insane. The point is, if there's a video explaining some very complex topic, a new type of machine learning, a new type of AI, a new hacking technique, and it's an hour long, three hours long, five hours long, or it's a 20-video course, the AI will go and watch it, and then it'll be like: hey, I created a thing. I know you don't have much time this week, but I created a 14-minute summary of this 20-video series, and I'm going to show it to you. It's actually done by Richard Feynman. So now Richard Feynman explains that entire 20-video series to you in 14 minutes, perfectly taught by Richard Feynman.

This is the concept of dynamic content summaries: the raw version of a thing is usually not going to be ideal for a person. The ideal will be to have it pitched in the way they like to receive things. It won't always be a video. Sometimes it'll be an audiobook, just audio in your ear. Other times it'll be a visual story, with no avatar or person teaching at a whiteboard, just animation, and maybe people will learn better from that. Maybe people will learn better from a story, whatever they prefer. That content will be dynamically created by AI to best give you the content from the raw version.

And of course, I'm a huge fan of slow learning, and there's a whole bunch of stuff in fabric already for this. Sometimes you don't want that summary, and sometimes the AI should say, you know what?
I created you a 14-minute summary of this thing, but for this one you need to go watch the full thing. I've been using these summaries with fabric for almost two years now, and pretty much always I get something more from doing the slow version: from watching the four-hour version, from watching their facial expressions. I get value from the summary itself, but I get more value from watching it happen slowly and letting it seep in slowly. So maybe you do both, maybe you do one or the other, but don't give up the slow just because you could do the fast.

Exploring the idea of personal and business brands. Yeah, that was another short piece.

Okay. Kaspersky has shut down its US operations because of the ban. AT&T says nearly all cellular customers and some landline users have had their data stolen. But Kim Zetter at Wired reported that AT&T paid someone from the hacking team around $300,000 to delete what was supposedly the only copy of all the data. We'll see how that plays out, but good reporting from Kim Zetter.

Russia is using AI-enhanced software called Meliorator to create fake online personas for disinformation campaigns. Ooh, love this, because the better this gets, the more it's going to look like a real human. And if the AI starts acting like a human, clicking on things, telling the difference is going to get a whole lot harder. This is actually one of the things I'm most worried about from AI: disinformation bots, not just the sheer number of them, but how advanced they are, how sophisticated they are. Basically, the better AI gets, especially with agent frameworks, the more it's going to be almost exactly as if our enemies have millions of employees in their intelligence agencies. Okay, let me say that again; that was too convoluted.
One of the things I'm most worried about is that the better AI gets, the more it'll be like the intelligence agencies of our enemies, China, Russia, Iran, North Korea, going from having a few hundred elite people to having a few hundred elite people plus 40,000 near-elite people right below them. As the AI gets better, those move up into the elite, and then the elite humans become augmented with that AI and become super elite. The better the AI gets, it just keeps moving up that chain. So if you think it's bad to have 10, 50, or 100 elite people in these different countries, people who are Intel trained and manipulation trained but are also super hackers, good at all these different skills, and there are only so many of them in the world, well, now imagine a thousand of them, 10,000, 100,000, 100 million. And they're working consistently, with no sleep, going after all your targets. That's what I'm personally worried about right now.

And if they're producing content and the content is all manipulative, the internet is going to have to switch from a blocklist to an allowlist. Think about that. The internet currently is a blocklist: everything is available, and you have to specifically go in and say, I don't want to see this, I don't want to see this. Because of this exact problem, we're going to have to switch to the other method, which is to say my AI, my internet platform, my mobile OS, Android, macOS, iOS, won't allow anything in unless it has the green checkmark. What is the green checkmark? It's an agreement between the tech platform, the mobile platform, the OS, and trusted providers like a YouTube, which has approved sources from approved individuals and approved organizations.
So your news is going to have to come that way, because it's going to have to be vetted to get the green checkmark. It's going to happen anyway, but it might be something that's required within settings and within operating systems, because it's simply too hard to drown out everything. Because of this problem, the number of disinformation bots, pretty soon something like 50% of all content is going to be manipulation, then 75%, then 95%. So if you're naive and you just walk onto the internet, especially as a regular person, like a grandma, you're going to be screwed. You'll be pulled around; all your money will disappear because you had to save your kids from some horrible accident, and all of a sudden an asteroid hit and aliens came, and pretty soon you're in a bunker with no money, and none of it was real. Think about the amount and quality of the manipulation as well. The quality of the pitches and the manipulation and the deepfakes and the trickery is going to get so good it's going to trick experts, not just regular people.

There's a new Exim vulnerability with a CVSS of 9.1. Google is now offering passkeys for high-risk users who join Advanced Protection; previously they had to get a physical token. Foreign influence campaign analysis from US intelligence: basically, they say Russia is backing Trump, probably mostly because of Ukraine. Iran is acting as a chaos agent in its campaigns, focusing on exploiting US political and social tensions rather than backing one side or the other. And China is mostly staying out of the elections, just trying to collect data for future influence operations. Think OPM, Marriott. Yeah, don't get me started on those. Thanks to Automox for sponsoring.
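The blocklist-to-allowlist flip described above is easy to see in miniature. Here's a toy sketch (the domain names are made up for illustration): under a blocklist, a brand-new disinformation source is allowed by default; under an allowlist, it's denied by default.

```python
# Toy contrast of the two trust models; all source names are hypothetical.
BLOCKED = {"known-bad.example"}            # blocklist model: default allow
APPROVED = {"youtube.com", "apnews.com"}   # allowlist model: default deny

def blocklist_allows(source: str) -> bool:
    """Allow everything except explicitly blocked sources."""
    return source not in BLOCKED

def allowlist_allows(source: str) -> bool:
    """Deny everything except explicitly approved (green checkmark) sources."""
    return source in APPROVED

# A never-before-seen disinformation domain sails through the blocklist
# but is rejected by default under the allowlist.
print(blocklist_allows("fresh-disinfo.example"))   # True
print(allowlist_allows("fresh-disinfo.example"))   # False
```

The point of the sketch: a blocklist has to enumerate the bad, which is hopeless when bots can mint new sources faster than anyone can block them; an allowlist only has to enumerate the trusted.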
GitLab has a critical flaw in its CI/CD pipelines. Thanks to Project Discovery for sponsoring.

OpenAI's AGI levels: OpenAI has published their five-tier ladder for AI progress. I'm honestly not a fan, other than level five; I find level five really interesting. The problem is they're going from chatbots to human-level reasoners, then to AI agents, then to innovators that can aid in invention. Levels two and three are way too close: human-level reasoners to agents is not a big enough jump, and both are already kind of possible; that's the other problem. Then you have this really interesting jump at level five to something that can do the work of an organization. So for OpenAI, level-five AGI is something that could do the work of an entire organization. But that's not super helpful, because there are lots of very dumb organizations. An organization could be four people. An organization could be Apple. An organization could be OpenAI in the year 2027. So that's not super descriptive, in my opinion. Even worse, they're mixing criteria. Reasoners is about thinking quality. Agents is just an attribute, meaning the thing can take an action; it says nothing about the quality of the action. So it doesn't really belong above or below the other levels; it doesn't fit anywhere, because it's apples to oranges. Innovators is just a descriptive output: it aids in invention. Here's the problem with that: chatbots already aid with invention, and chatbots are their level one. Chatbots already help inventors invent better things. Then you have level five, which is actually about scale more than quality. And like I said, scale isn't super important, because you don't know if you're scaling like an ant colony.
They might just be doing the same thing at scale, or the scale itself might lead to smarter things. So I'm just not overall happy with it. I'm happy they put it out, and I'm sure someone put real effort into it, so I don't want them to feel bad; hopefully they take this feedback into the next one.

AI startups raising $100 million in 2024: got a full list there. Anthropic has more new features to help automate prompt engineering. I see a lot of my friends switching to Claude over ChatGPT right now, especially Sonnet 3.5. I'm actually using Sonnet 3.5 as my default within fabric right now, and it's a leapfrog game, because soon we're about to have Opus 3.5. Cannot wait for that, for Llama 3 405B, and eventually GPT-5. I'm guessing somewhere from fall to right at the beginning of the year: I'm going to say between October and February there's a 90% chance of GPT-5, or the next GPT, coming out, whether they call it 4.5 or 5 or some new name. And yeah, 2025 is going to be nuts. There's going to be an election and an outcome, so that's going to be crazy by itself. Hopefully not the bad crazy, but there's just going to be a whole lot going on in 2025, not the least of which is all these new models.

A new fiber-optic network transmits data at above 400 terabits per second, and importantly, this is using existing fiber, not some adamantium special stuff. YouTube Music is testing a new feature that lets you use AI to generate a playlist by just describing what you want to hear. I would use that a lot. There's a surge in delivery startups like Hailify, focusing on Shein and Temu, the Chinese companies competing with Amazon.
Why women are disappearing from tech. Houston is on a path to an all-out power crisis. Tour de France riders are inhaling carbon monoxide, which is a poisonous gas; it's the one that kills you in your car in a garage if you don't have open ventilation. How is a toxic gas improving performance? Okay. Yeah.

130 million US adults have low literacy skills; over half of Americans aged 16 to 74 read below a sixth-grade level. I'm going to say that again for the people in the back: over half of Americans aged 16 to 74 read below a sixth-grade level. Moving on.

Colorado poultry workers test positive for bird flu. But we're not talking about that, because we're tired of talking about viruses. We're tired of talking about Covid. We don't want to talk about bird flu; that's just another Covid. We're exhausted. You know what, I'll give people a break. People are exhausted. They don't want to talk about negative stuff. They're just going to ignore this with blinders on until it literally comes to their front door and makes their friends sick; until then they'll be like, yeah, I'll know about it if it gets too bad. Meanwhile, all the experts are like, guys, this is kind of bad. It's transferring to humans. It's killed 6 million birds and is now infecting dairy cattle across the state, which means it's going into milk, and dairy workers are getting it, and now poultry workers are getting it. I kind of feel it along with everyone else; I'm like, I guess if my friends start getting it, I'll pay attention. Very strange times we live in; we're so, so exhausted by certain types of negativity.

Four in 10,000 galaxies may host intelligent aliens. This is a new study basically trying to explain the Drake equation, like why we're not seeing aliens. But four out of 10,000 galaxies; I believe there are 200 billion galaxies.
Unless I'm mixing up my numbers. Is it 200 billion galaxies? I think it's 200 billion. So 4 in 10,000 of 200 billion is about 80 million galaxies. That's still a lot of galaxies. And this is just one study. It's saying 0.003% to 0.2% of exoplanets, but there are a lot of exoplanets, so this is not super depressing to me. But I guess it's interesting if it explains the Drake equation.

Okay, ideas. VCs are buying medical practices. I've got a buddy who talks about how VCs are coming into these practices, moving in, and saying: you've got to sell all this super gross stuff, order tests patients don't need, sell them miracle drugs that don't actually work, you've got to do this to hype up the user base. They're basically turning it into sexy sales and getting practices to do things other than fundamental, high-quality, proactive health like diet and exercise. It's going the opposite direction, towards more tests, more assessments, pills, and things that don't actually work. Why? Because they make more money, which is good for VCs. So really, really bad. Thanks to my buddy for calling that out.

Most conspiracies come from not realizing how often things fail: a tweet thread I put out recently on X. Therapy, rumination, and untying knots: this is about therapy; I recommend you click on it, but we're already going long, so I'm not going to talk about it here. Discovery Fluff on Lambda: really cool project by this guy here; definitely check it out. Bullfrog: a GitHub Action that secures your workflows by controlling outbound network connections. Everything you see is a computational process if you know how to look. Love it. Shaan Puri. Emotional Elicitor: this is a prompt by Moritz Kremb, who's doing some really cool prompt stuff; got to check him out. WTF happened to blogs? As an employee, you are disposable.
You never control the arc of your career. Smoking versus lung-cancer deaths: I'm going to show this one; this is insane. Look at this thing. Hmm, wonder if there's a correlation. Well, we see a correlation; I wonder if there's causation. That's compelling. Look at this: taxes increasing, ad bans, and deaths going down. The Great Depression, the World War II boom, the US banning cigarette ads on the radio, and the Surgeon General's report linking smoking to death from cancer: that's the peak, and deaths start going down from there with all these other measures. Really interesting.

Learning multiple concepts from a single image. Change detection in satellite imagery: this is a hobby I'd be doing a lot more of; I'm actually waiting for the AI to get really good at it. I want to create a bot that watches the latest satellite images from choice locations, uses AI in a pipeline, and sends me a constant flow of updates on what changed. Like: it looks like they did something underground; a whole bunch of trucks moved in; they appear to be this kind of truck; here are the possible explanations for that. Again, going back to the Intel thing I'm building, I want continuous flows of information coming to me based on capturing artifacts, in this case images, and doing AI analysis on them.

89 Things I Know About Git Commits, by Jamie Tanna: I got the most comments about this one and Substrate this week, so definitely check it out.

And the recommendation of the week: check on your friends. If you haven't heard from them in a while, send them a text. It's free, and they'll appreciate being thought of. And be consistent about it; push it. They will appreciate that.

"In the age of infinite leverage, judgment is the most important skill." Naval has been crushing it lately. Absolutely crushing it. I'm going to say this one again: in the age of infinite leverage, judgment is the most important skill. Naval.
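The satellite change-detection bot described above could start from something as simple as a pixel diff. Here's a minimal sketch; the function name, thresholds, and toy images are all my own assumptions, and a real pipeline would need image alignment/georegistration first and an AI model afterward to explain the flagged changes:

```python
import numpy as np

def detect_change(before: np.ndarray, after: np.ndarray, threshold: float = 30.0) -> float:
    """Return the fraction of pixels whose brightness changed by more than `threshold`.

    `before` and `after` are grayscale captures as 2-D uint8 arrays of the same
    shape. This is only the raw diff step; flagged scenes would be handed to an
    AI model to describe and explain what changed.
    """
    diff = np.abs(before.astype(np.int16) - after.astype(np.int16))
    return float((diff > threshold).mean())

# Toy example: a bright object (a "truck") appears in the second capture.
before = np.zeros((100, 100), dtype=np.uint8)
after = before.copy()
after[40:50, 40:60] = 255  # 10x20 block of new bright pixels

changed = detect_change(before, after)
if changed > 0.01:  # flag scenes with more than 1% changed pixels
    print(f"change detected: {changed:.1%} of pixels")
```

The interesting work is everything around this diff: registering images taken from slightly different angles, filtering out clouds and shadows, and having a model turn "2% of pixels changed near the north fence" into "a whole bunch of trucks moved in."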
Unsupervised Learning is produced and edited by Daniel Miessler on a Neumann U87 AI microphone using Hindenburg. Intro and outro music is by Zomby, with a Y, and to get the text and links from this episode, sign up for the newsletter version of the show at danielmiessler.com/newsletter. We'll see you next time.