A conversation with Rob Allen from ThreatLocker, UL's Black Friday sale, Finland's internet disrupted, and more...
Subscribe to the newsletter at:
https://danielmiessler.com/subscribe
Join the UL community at:
https://danielmiessler.com/upgrade
Follow on X:
https://twitter.com/danielmiessler
Follow on LinkedIn:
https://www.linkedin.com/in/danielmiessler
See you in the next one!
Welcome to Unsupervised Learning, a security, AI, and meaning-focused podcast that looks at how best to thrive as humans in a post-AI world. It combines original ideas, analysis, and mental models to bring not just the news, but why it matters and how to respond.
All right, welcome to Unsupervised Learning. This is Daniel Miessler, and this is episode 459 of the podcast. So, I had a great conversation with Rob Allen from ThreatLocker about their approach to Zero Trust, and it turned out to be a really interesting one. I always like to dive into what somebody means when they say Zero Trust, because it could be anywhere from total horse crap to pretty decent, and I really liked Rob's approach and responses there, so definitely check that out. The UL membership Black Friday sale just started: 20% off the first year, so go check that out.
Before jumping into security: I did a huge upgrade to my Ubiquiti gear, now moving toward a ten-gigabit world, which I talked a bit about last week, and I'm going to Saudi Arabia next week to speak at Black Hat MEA. Going to be awesome.
All right, jumping into security. This one didn't get near enough coverage, I don't think. ChatGPT basically has a new feature that can read code from macOS apps like VS Code, Xcode, and Terminal. It can basically read the app, understand what it's saying and doing, and then give you analysis based on that. And I think this is getting closer to what a whole bunch of people in AI are actually working on, which is what I believe to be an inevitable future: it can see all the screens, right? I'm looking at three screens right now. Ideally my AI would be looking from inside the computer, because it would have agent access like we already talked about. But I also want it to be able to see my surroundings. I want it to be able to see my cameras. I want it to be able to see around me, behind me, especially behind me if you're out in the city walking around. And also microphones, right, and other types of sensors, but mostly visual and audio. That way it can be consuming the environment all the time, constantly parsing what's going on, constantly doing analysis, and giving you views inside a heads-up display, or glasses, or lenses, or however that interface is going to be, depending on how far along the tech is. But the most important thing is it has to be seeing what you're seeing, and even more than what you're seeing.
The big problem right now is the on-ramp to using AI. It's not a problem of the models themselves; the models are getting really smart. The problem is: how do you get stuff into the model, and how do you get it out? So this type of thing is a really huge advance, because it's able to see inside your apps, and I'm really excited about this future. I've talked about it a lot; I've talked about it in The Predictable Path of AI. It's just that I'm not really going to trust this. There are already a couple of companies that will watch your entire screen, upload that stuff, do the parsing, and send you back some really cool stuff. There's one particular company, I think they changed their name, called Rewind, or maybe Rewind is the new name. But anyway, I signed up for their new gadget, which should be coming soon.
I did not give them access to my whole computer, because I could have the calendar open, or a message or something, and you could be doxing your friends. You don't know what's going to happen. Most importantly, you don't know how secure that startup is, and having consulted for startups specifically on security for decades, I'm not sending my data to those startups. I basically only trust one company that much, both on the security side and on the privacy side, and in how seriously they take this stuff, combined with a lack of conflicts of interest: and that's Apple. I would say Google is just as competent, maybe even more competent, on the security side. But the problem is they also make their money, or a lot of their money, off of ads. So I just don't like the idea of them having all that data and then being able to sell it, or sell things to me. I don't like the conflict of interest there, even though I like the security. So that's why I mostly only trust two companies, and mostly only Apple. But this is where this is going: continuous monitoring all the time, with as many sensors as possible. Front, back, inside your apps, inside your phone. That's where everyone is rushing to, even if they don't know that they're rushing to that location. And that's why I think Google and Apple have a massive advantage: they have the device, right? I'm on Apple for iPhone, I'm on Apple for Mac desktop and laptop. Because they are the OS, they have a massive advantage here.
Okay. Palo Alto Networks has released indicators of compromise (IOCs) for a new zero-day vulnerability affecting firewalls. VMware confirmed that threat actors are exploiting two vCenter Server vulnerabilities; one is like a 9.8 and the other is like a seven-something.
In AI and tech: Anthropic has a new prompt improver that takes a given prompt and writes a better one. Really cool to see people getting in on this; this is part of the overall ecosystem, right? And we've got another one here: OpenAI might launch an AI agent tool called Operator in January, and it will compete with Anthropic's computer use. I think agents are going to be the biggest thing that happens in 2025 for AI, and this is part of a bigger trend that I've been talking about, where it's more about the ecosystem. If I actually click and open this thing here: it's not just about the models. The big pieces that I think make up the four pillars of an AI ecosystem are the model itself, the post-training of the model, internal tooling, and agent functionality. The model: self-explanatory. Post-training: a set of highly proprietary tricks that magnify the overall quality of the raw model; a way to think of this is to say that it's a way to connect model weights to human problems. Internal tooling: look at this list. High-quality APIs, larger context sizes, simple fine-tuning, haystack performance, strict output control, external tooling like function calling, trust and safety features, mobile apps, prompt testing, voice mode and apps, OS integration, integrations with things like Make and Zapier, end-to-end, and things like caching mode. These are all the internal tooling pieces that just make it easier. Think of it this way: the problem isn't the models. The problem is the on-ramps onto the models, and the output out of them back into your life. We are humans. We have human problems, business problems, personal problems, whatever.
We need to get that content, the content of that problem, into an AI and then back out into our lives, into our actual brains, into the real world. That's what this internal tooling piece is, right? Because if you can't do this well, then it doesn't matter if your model is 13% better on some random benchmark. So the models have to get better, but that's not as important as actually getting the interfaces to the models better.
Oh yeah, and the next one, relevant to the next story, is agents: an AI component that interprets instructions and takes on more of the work in total AI workflows than just the LLM response, for example, executing functions, performing data lookups, etc., before passing on results. I actually have an improved version of that definition, which I won't go find right now, but it's updated in the agent definition inside of RAID, the Real-World AI Definitions, if you want to go check that out.
Okay, so Sam Altman and Arianna Huffington have a Thrive AI Health company, looking at doing personalized advice on sleep, food, fitness, and more. Google.org is putting $20 million in cash and $2 million in cloud credits into a new initiative to help researchers use AI for scientific breakthroughs. One of the most important things that I think AI could possibly do is actively go out and just invent new things, make new research, discover new things. You take the smartest people in the world who are capable of doing this, and there are very few of them, so if we can actually scale that, that's where the real takeoff starts to happen.
Apple's M4 Max CPU transcribes audio twice as fast as Nvidia's RTX A5000 GPU while using significantly less power. I really want to get one of these clusters; an M4 Mac mini cluster would be super cool. A lot of people have been asking me: should I do the cluster thing with smaller boxes, or should I just get one big rig? I feel like for the current generation, and probably for another year or so, it's still going to be better to have one big box. It really depends on what you need it for, but I would say we're not quite there yet with the cluster of smaller boxes; Exo Labs and things like that are a little bit experimental. The best thing to do is still to buy a box. I bought a premade one and it's fantastic. It's got two 4090s in it, it's got tons of memory, it's very, very fast, and it's just plug and play, easy to do. If this clustering technology gets better and better, and there are more and more devices like Mac minis that you could piece together, then it starts to be a compelling alternative to actually buying a giant AI box. But until then, I think the giant box is probably still going to be better.
Okay, iOS 18.2's music recognition feature now logs where you were when you actually heard the song, so that's part of the metadata now.
Pharma stocks have crashed. This is under the humans label. Pharma stocks have crashed after RFK Jr. was announced to be taking over Health and Human Services. Moderna is down like 40%. I don't know what it's currently at; down 5.62, but that's for the day. If we go to three months: down 57% in the last three months for Moderna. They're one of the companies that came out with the best vaccine for Covid, down like 57% because of RFK. I almost feel like this is a buy opportunity. How could this not be a buy opportunity?
It's not like they suddenly stopped being able to make things, and I just don't believe that people are going to let RFK just destroy these companies. I just don't see that happening. Hopefully they're going to be able to figure it out, because RFK is going to do some cool stuff. He's right about a lot of stuff; that's what's most scary about a lot of these people. They're so right about so many things, and the things they're wrong about are mixed in with it. And that's not just RFK, that's a lot of people, including probably myself. So yeah, I don't see how it doesn't bounce back from a 60% cut over three months. Personally, I'm not investing, but I think it would probably be smart, at least in the long term.
Netflix had a record 65 million concurrent streams during the Mike Tyson versus Jake Paul fight. It did have a bunch of connection problems, though. Everyone I know at Netflix got massively spammed with all their friends texting, like, hey, what's going on? It's like, yeah, you think I'm in charge of actual throughput during a Tyson fight? Stop texting me.
A new study shows that treating bullying as a collective issue rather than an individual one could significantly reduce its occurrence in primary schools. I love the concept. It's kind of like how the johns get in trouble in some European countries for prostitution instead of the prostitutes, because it's the ecosystem that's the actual problem, right? So with bullying, I like the idea of shaming the people around who didn't say anything or do anything. And of course, we have to be careful about shaming; we're talking about kids here in a lot of cases, or in most cases. But I feel like all the marketing needs to be heading in the direction of: if you were one of these people watching and not saying anything, not reporting it, not intervening, you don't want to be unsafe or whatever, but you could go report it. You could do something to prevent this from happening again. And if you don't, you are the bully, or you are enabling this, and this is really bad. I know this is already an aspect of a lot of these programs, but I think it could be significantly magnified.
Ideas: Reboot AI. Oh, I absolutely love this one. Absolutely love this one. I can't remember where I got this idea; I think it's hit me from multiple places, but I want to build a local AI that can run offline. Oh, I know where I first heard this idea. It was actually from Joseph Thacker, like a year and a half ago. He was like, oh, I just want a thing that I can use offline. But more recently I got it from somewhere else; I can't remember, was it X or Instagram? I don't know, but the idea is: let's say all the power is out, or let's say the internet is out. Oh, I know what it was. It was the initial conversation I was having with Joseph, and he actually brought it to me; it was his idea. He was like, what if you could go back in time? What could you bring with you to actually move society forward, or something like that? And I think about this a lot, way more than the Roman Empire. I think a lot about: could I actually move science forward if I was put 200 years in the past, or 2,000 years in the past? What could I actually offer to them? I think someone makes a joke about this in current stand-up comedy. It's, you know, Nate Bargatze, whatever his name is.
It's just like: yeah, there are going to be phones, and they're going to have satellite technology. Oh, really? What's a satellite? Well, it's this thing that goes around. How do you make one? What does it do? You're saying there are thousands of satellites in the air and the Earth is actually round? Can you prove any of this? No, I can't. So we don't know how any of this is working, right? And we can't describe it to anyone.
And more importantly, if you lose the internet, say a meteor hits or whatever, and I'm not going to go into negativity right now, it's too early in the show for that. But let's say something bad happens and you are stuck in your house and there's no internet. Let's say you're not dying of hunger or thirst, but you don't have the internet. Okay, so watch this: tourniquets, sterilizing water, building shelters, identifying edible plants. Which mushroom will actually kill you, and which mushroom can you put on a salad? These are important distinctions to make.
So check this out. What if you had an AI that just ran? We've got to assume solar power, right? Or maybe the grid actually works, but there's no internet. Whatever, just work with me. You can show it pictures. You could take pictures, or you could show it an actual live plant or whatever. All you have to do is show this AI the particular thing, or describe it: hey, I need a shelter that does this. I've got this much water; I got it from this kind of creek; I live in this kind of area. What kind of toxins are likely to be in it? How can I get those toxins out? What kind of filter can I build? Will these iodine tablets actually work? I have these symptoms; which drug should I take? You have a local model running. All it needs is power. It doesn't need the internet. It doesn't need OpenAI. Think of how much knowledge is inside a Llama 2 or a Llama 3 or a Llama 4 or whatever local model. As long as you have power to run it, and it can actually see, and you can type to it, it can answer tons of stuff that can actually keep you alive.
Even better, let's go a little sci-fi. Okay, it's just you and 10,000 other people, and let's say the rest of the planet got hit by a meteor or whatever. You have to rebuild all of society. Okay: irrigation. How does a stoplight work? How does a combustion engine work? What kind of metal do you need in order to make a combustion engine? What are all these different alloys? You can bootstrap a society with one box. Isn't that crazy? You can bootstrap a society with one box that you can have sitting next to your MREs over here. You just need power. Well, I guess you need the peripherals to be able to talk to it and everything, but you don't need that much. It's way more impressive than trying to collect all of Wikipedia. I mean, that was the other model, right? You just download Wikipedia and you have a bunch of things you can look through. Not nearly as good as a chatbot that you can ask questions. And if it's visually oriented, these new models are getting amazing, right? It's like: show me the design, show me a picture of it, draw me a picture, design me a compound that I could defend my plants with, because people are probably going to come and get us, whatever. Yeah, I don't have any guns; how do I defend myself from these roving people who are going to come try to get our stuff? You could ask it anything.
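To make that a little more concrete, here's a minimal sketch of what the core of an offline oracle could look like. This is just an illustration, not anything that exists yet: it assumes something like llama-cpp-python and a quantized GGUF model file downloaded to disk ahead of time, and the model path and the question are placeholders.
```python
# A minimal sketch of the "offline oracle" idea: a local Llama-family model
# answering a survival question with no internet connection at all.
# Assumes llama-cpp-python is installed and a quantized GGUF model file was
# downloaded in advance; the path and the question below are placeholders.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/llama-3-8b-instruct.Q4_K_M.gguf",  # hypothetical local file
    n_ctx=4096,  # context window: room for a detailed question and a long answer
)

question = (
    "I collected water from a slow-moving creek near farmland. "
    "What contaminants are likely, and how do I make it safe to drink "
    "using only iodine tablets, cloth, and a pot over a fire?"
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are an offline emergency assistant. Be practical and specific."},
        {"role": "user", "content": question},
    ],
    max_tokens=512,
)

# Everything above runs entirely on local hardware; power is the only dependency.
print(response["choices"][0]["message"]["content"])
```
A real build would obviously want vision, voice, and a solar or battery story wrapped around it, but the core question-and-answer loop really is that small.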
Ideally, the model would be uncensored for exactly that reason: you want to be able to ask security and self-defense questions. So all that to say, I'm going to do a project called Reboot AI, with "reboot" being like reboot society or whatever: an offline oracle for emergencies. Now, it's such a cool idea, I'm sure a million people have already had it, so they're probably already working on it. So if anyone is hearing this or reading this, send me a link and I'll just go buy one, or procure one. If not, and someone wants to help build one, I'm going to go build this. I want this running in my house. I've already got solar, I've got lots of ways to gather energy and store it in batteries, and it would be super nice, without internet, to be able to ask all those sorts of questions. So cool.
Discovery: Cloudflare's robots.txt file is a mix of ASCII art and directives for web crawlers. It allows Twitterbot and on-demand website previews to access specific pages and blocks many others. Actually, you know what I'm going to do? Let me just, oh bam, look at that. This is why you do a video podcast: you can just zoom in on stuff. Look at this. That's a lot of bandwidth, depending on how many people are pulling it. It's a lot of bandwidth. Our tree is a redwood. Cool. This is cool though. Look at all these allows. I'm looking at the scroll bar; how long is this thing? Okay, it's a lot of disallows. Are those languages? No, those are directories. Yeah, these are all directories. Okay, why do they have these? Why do we need to say that? Okay. And then we've got our sitemaps, and more ASCII art. Yeah, cool, I like it. Interesting.
Managing High Performers: a guide on how to effectively manage high-performing employees.
And Ian's Secure Shoelace Knot is the best shoelace knot that I know of. And no, there's no sponsorship, because that would be silly, because no such thing exists. It's a shoelace knot. I actually tie this on my sneakers, actually the ones I'm wearing right now. Wish I could put my foot up there; that would hurt. And I mostly leave them that way. These are Common Projects; I literally just slide my foot into there. Sometimes I use a shoehorn to do that, and sometimes I just use my finger. But I tied these with this knot, and it's the coolest-looking knot, it's the most secure knot, it's awesome. And because we're on video, I'm going to show you this knot. Look at this thing. So you make two thingies, two separate thingies. Then you cross the right one over the left one, and you do this twice; you do over and under for both of them, and then you pull the thing and you end up with that, this thing right here. And it's super symmetrical and flat. It's not twisted and trying to go up, and it's got this really cool-looking box knot in the middle. And that's probably the most you've ever heard about a shoelace knot on a podcast.
All right, recommendation of the week: check out the aphorism of the week below, so we'll jump there. If you hit a wrong note, it's the next note you play that determines if it's good or bad. If you hit a wrong note, it's the next note you play that determines if it's good or bad. Okay, so that was the aphorism of the week. Now, focus your efforts on being flexible after wrong notes, as opposed to being able to play perfect notes all the time. That's my recommendation of the week.
2025 and the next few years are likely to be so crazy that we won't be able to plan or play the right notes, so we just have to be good at playing the next note afterwards. And I will read the aphorism of the week again, because that's what we do. If you hit a wrong note, it's the next note you play that determines if it's good or bad. If you hit a wrong note, it's the next note you play that determines if it's good or bad. Miles Davis.
Unsupervised Learning is produced and edited by Daniel Miessler on a Neumann U87 Ai microphone using Hindenburg. Intro and outro music is by zombie with a y. And to get the text and links from this episode, sign up for the newsletter version of the show at danielmiessler.com/newsletter. We'll see you next time.