MEMBER EDITION: Signal OPSEC, White-box Red-teaming LLMs, Unified Company Context (UCC), New Book Recommendations, Single Apple Note Technique, and much more...
Follow on X:
https://x.com/danielmiessler
Follow on LinkedIn:
https://www.linkedin.com/in/danielmiessler
Unsupervised Learning is a podcast about trends and ideas in cybersecurity, national security, AI, technology and society, and how best to upgrade ourselves to be ready for what's coming. All right, starting off this week: I completely reset my email labels and filters. Cruft going back, I don't know, 20 years in Gmail. Basically all those filters, I got rid of them, because I'm going to start filtering with AI instead. I'm already doing a little bit of that with Superhuman. I was worried I was missing things, and I've already found a few, like bills for things I thought I had turned off. Anyway, very clean feeling. That was probably a couple hundred filters in there, plus weird labels and subfolders. I consolidated folders, labels, and rules, and now I have less than a dozen of each. Very nice, clean feeling. Highly recommended. I'm also going to try Karpathy's idea. Andrej Karpathy has this idea of using a single Apple note instead of, you know, a million. I currently have around 2,900, and that's not a nice situation to be in. So I'm going to try the single-note thing. It's pretty cool, and I've got a link in the show notes. Also: watch your API keys and AI agents, because I just got a $2,000 bill, which is the limit I had set, from a weird web-documentation lookup. For some reason it was calling the LLM every time it ran, and I was watching it go because I thought it was a pure HTTP query. I don't know why it was using an agent to process results each time. Very silly. But that's why you should have limits set. I'm glad I had mine, and I've now broken things out into multiple keys and lowered some limits to make it even more of a backstop. Go delete your 23andMe data. They are selling out.
They are going bankrupt and selling their data to whoever, so go delete it if you have an account. New obscure book recommendation: Fanged Noumena (however you pronounce that). My friend Joel Parrish recommended it, so I'm going to read it. And I was made emotionally leaky last night by a video of a pianist on a YouTube channel called Great Measures. He did a cover of Fade to Black by Metallica, which is kind of an emotional song for me for a personal reason, related to a friend in high school. I don't really listen to the song that much, and musically it's not that exciting to me, but it has emotional meaning for me. Watching him play it on piano was stunning, absolutely gorgeous, because he adds so much more depth to it. Plus it's on piano, and yeah, it got me. Definitely worth checking out, and I love the channel. The premise is a metal guy showing his classically trained pianist friend a bunch of metal. It's quite awesome. Also, I'm joining Caleb Sima and Edward Wu for a panel at Dropzone AI's Security Frontiers conference on March 27th, which will have already happened by the time you hear this. We're going to talk about where GenAI stands in security today and where it's headed. It's virtual, free, and worth it, so you can go watch the recording. Cybersecurity: white-box red teaming makes me feel weird. Zygi Straznickas (not sure how to pronounce that) shares unsettling experiences with models appearing to express distress during advanced LLM safety testing. The quote: "It just doesn't feel good to be responsible for making models scream.
It distracts me from doing research and makes me write rambling blog posts." Yeah, that's a problem. It feels like Black Mirror to me. I'm not worried that it's actually conscious, but at some point I will be, and it'll be hard to know the difference. And like he said, it's still disturbing if the thing is making all the sounds of being hurt, even when you think you know it doesn't have the capacity for that. Next: the White House OPSEC fail. The White House accidentally revealed top-secret Houthi bombing plans to the editor of The Atlantic, after officials shared the plans in a Signal group without realizing the reporter was in it. The worst part is the message sent before they started sharing: "We are currently clean on OPSEC." I believe that was the Secretary of Defense, and of course one of the people receiving that message was a civilian and the head of a magazine. My goodness. Next: AI agent security and companies like Microsoft. I got invited to a Microsoft media event last week in SF where they showed off all the AI agent and Copilot stuff they're talking about this week. Basically, they're adding agents to tons of products under the Copilot banner. After spending about three hours talking to everyone there, from red team to threat intelligence to incident response, I had a single thought: startups need to hurry up, because what I saw in that room is the future. (Definitely not sponsored by Microsoft, and I'm not even a Microsoft fanboy.) Many of the agents in the room could talk to all of the other Microsoft services: vulnerability management, identity and access management, asset management. They could talk to all those different systems.
They could pull context from across the entire organization: HR, asset management, endpoints, cloud vulnerability data, local vulnerability data on the actual workstations, vulnerability management ticketing systems. What I'm saying is that the companies that win this AI security game will not necessarily be the ones with the best AI or agent tech, but the ones that can best leverage company context for their AI and agents. At first that favors startups, because they can move the fastest. But soon they'll have a major disadvantage compared to companies like Microsoft, where the AI has access to Unified Company Context (UCC). Other companies, like Amazon and Databricks, will also try to build this UCC so they can be the data store people use their AI against. So it won't only be Microsoft, or Google, or Amazon; companies like Databricks will come in and say, hey, let me collect all your data into a UCC, and then any AI you have, from any vendor, can tap into the same thing. That's already starting to happen; Databricks, I believe, is playing in that space. But you do not want to be a startup trying to implement AI in a customer's company without access to their UCC, competing against someone who has it, like Microsoft. That is a bad place to be. So the main game for making AI useful or powerful, especially in security and for security startups, will be gaining access to unified company context. This is all especially relevant to cybersecurity because security use cases really, really benefit from context: the identity, actions, and history of the thing you're investigating, across multiple systems.
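To make the UCC idea a little more concrete, here's a minimal sketch. Every system name, field, and interface here is invented for illustration; this is not any real Microsoft or Databricks API, just the shape of the thing:

```python
# Hypothetical sketch of a "Unified Company Context" (UCC): one query surface
# that aggregates what many internal systems know about a single entity, so
# an agent can ask one question instead of querying each system separately.
# All system and field names are invented for illustration.
from dataclasses import dataclass, field
from typing import Callable, Dict

@dataclass
class UCC:
    # source name -> lookup function taking an entity id
    sources: Dict[str, Callable[[str], dict]] = field(default_factory=dict)

    def register(self, name: str, lookup: Callable[[str], dict]) -> None:
        self.sources[name] = lookup

    def context_for(self, entity_id: str) -> dict:
        """Pull everything every registered system knows about one entity."""
        return {name: lookup(entity_id) for name, lookup in self.sources.items()}

ucc = UCC()
ucc.register("hr", lambda e: {"role": "engineer"} if e == "alice" else {})
ucc.register("vuln_mgmt", lambda e: {"open_criticals": 2})
ucc.register("identity", lambda e: {"mfa_enabled": True})

# An investigation agent gets identity, history, and vuln data in one call:
print(ucc.context_for("alice"))
```

The point is the shape: whoever owns that one query surface, rather than whoever has the best model, holds the leverage.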
Also, there's the issue of securing the UCC, since it will be the single most sensitive data store in the entire company: all the juiciest bits in one place, which is a red teamer's or attacker's dream. Cloudflare launched an AI Labyrinth feature that messes with unauthorized AI scrapers by feeding them endless pages of irrelevant but real-looking content instead of actually blocking them. It's a classic honeypot/deception move to counter rogue AI crawlers that won't honor the robots.txt file. A rushed release of JFK assassination files exposed 400 Social Security numbers and other sensitive data belonging to former congressional staffers, many of whom are high-ranking officials now. We just doxed their Social Security numbers in this release. New cybersecurity compensation research shows that high six-figure salaries are not stopping 60% of security professionals from thinking about leaving their jobs within a year. National security: OpenAI is pressuring the Trump administration to allow copyright scraping for AI training, claiming America will lose the AI race to China without full access to scrape. A lot of people see this as corporate bullshit, using national security to gain a business advantage. But unfortunately, the underlying claim is true: China has no limitations whatsoever on what it trains on or what it crawls. They steal whatever, consume whatever, with 100% free rein, and that's an accelerator on the path to AGI or ASI. So the question is: who do you want to have AGI or ASI first, the US or China? For me, Trump makes that question a little harder to answer, but my answer is still the US. Americans are buying overseas residency and citizenship as a hedge against uncertainty in the US.
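Circling back to the Cloudflare AI Labyrinth item above: the robots.txt file those rogue crawlers ignore is just a plain-text politeness request, not an enforcement mechanism, which is exactly why deception features exist. A minimal example (GPTBot and ClaudeBot are real published crawler user-agent names; whether a given bot obeys is the whole problem):

```text
# robots.txt -- a request, not an enforcement mechanism
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

# Everyone else may crawl everything
User-agent: *
Allow: /
```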
China unveiled a deep-sea cable-cutting device capable of severing undersea communications at depths twice beyond where existing infrastructure operates, so they can reach deeper cables further down in a trench or whatever. It says "unveiled" (I wrote that word based on the story), but it's kind of weird: is this a press release? Are they showing off the cool toy they have? I'm picturing CES: come check out the cable cutter. London's Heathrow Airport announced a full-day shutdown after a significant fire at a nearby electrical substation knocked out power to the entire facility. A lot of questions are going to be asked about that. AI: the Arc Prize Foundation released a new AI intelligence test. I think this is just another version of ARC, but the best AI models are currently scoring only around 1% while humans get 60%. That is fantastic. Anthropic's Claude has finally added (I hate saying "finally," it seems rude and trite, but whatever) web search to its AI chatbot, catching up to ChatGPT, with clickable citations. I really want this in the API; it's not there yet. They're apparently using Brave Search to power the search capability. Gmail is rolling out an AI-powered search that ranks results by relevance instead of just showing the newest emails first. This is cool, but I want AI filtering and AI auto-drafts. Why don't they get more Gemini into Gmail? It seems like they have the ability. Can't wait for that to happen. Technology: Apple is updating AirPods Max next month to add lossless and ultra-low-latency audio. Matt Claude says that while short flags make sense for interactive terminal commands, in scripts you should use the long, --force-style options instead, for better readability. And Seth Larson wrote something worth highlighting.
His piece, "I Fear for the Unauthenticated Web," argues that the increasingly common sign-in-to-continue message on websites is destroying the open promise of the web. I think we talked about that last week as well. Nvidia says it is investing hundreds of billions of dollars in US-manufactured chips over the next four years, shifting away from Asia amid Trump's tariff threats. This is exactly what Trump was trying to do with his policies, and it's positive, but I'm worried the damage will actually be worse than the benefit. The NYPD has dramatically expanded its drone program, sending drones to thousands of 911 calls, while privacy advocates worry about the lack of transparency and say this could be used for surveillance. Humans: new research from Aalto University suggests Earth has way more people than the official 8.2 billion, due to major undercounting in rural areas. I wonder how major they're talking: are we gaining an extra billion, up to 9 billion? I couldn't find the actual re-estimated number; it looked like they just said the undercounting was bad. Tyler Cowen shares insights from his conversation with Ezra Klein about the new book Abundance. Okay, so check this out: you have to go read this book. I just listened to the Lex Fridman podcast with Ezra Klein and his co-author, and it is fantastic. Combine it with The Technological Republic by Alex Karp: these are very centrist books. Karp, who is considered kind of right, is being a lot more liberal, while Abundance comes from two liberals who are talking a lot more centrist, a lot more Republican, about the government, how to move forward, and all the mistakes the liberals made that enabled Trump to come to power.
So basically, if you're a centrist, or you like first-principles thinking, or you want to figure out why the government is broken and what we could do to fix it, read Abundance and read The Technological Republic. They are fantastic books. And to be clear, I haven't actually read Abundance yet, but I've listened to hours of analysis that pretty much covered the whole thing, and I'm still going to read it cover to cover. There's also an item about telling staff that the way to get ahead is not to accumulate a giant fiefdom; AI is going to clean all of that out. David Kellogg explains the essential differences between a manager, a director, and a VP, with the VP being accountable for results regardless of who approved the plan. Isn't that the case for a director as well? Jonathan Kipnis and his team discovered that rejuvenating the brain's lymphatic vessels improves memory in old mice by helping clear waste that contributes to cognitive decline. So my question is: how do I do this for me? Because I'm not a mouse. How do you rejuvenate the brain's lymphatic vessels? Is there a pill? A workout? What do you do? All right. Ideas: high agency. I've been hearing this concept a lot in the last few months, and some people argue it's one of the most important ideas out there right now. It's also highly related to my work on H3 (Human 3.0), so I'm going to do a deep dive on it. Roughly, it's the ability to solve problems by believing they're solvable as long as they don't defy physics. It's very Ayn Randian, honestly, very John Galt: everything is possible, just go do it, you can just make things. It's very Elon-like in that way, in a good Elon way. So I like the concept; I think it's a powerful one. Oh, here's another way to say it.
A sense that the story given to you by other people, about what you want or can or cannot do, is just that: a story. That is powerful. How much do flaws and traumas enhance us? I worry a lot about making life too easy, as a society or as parents. There's this timeless struggle that parents have, especially immigrants who went through a lot of stuff: they want to make sure their children don't have to suffer that way, but then they end up raising what they'd deem lesser adults, adults less capable of taking on the world than they were. So I love this quote about it from Jason Liu: "I worked a lot on my mental health, and now I am no longer ambitious." Great one. Discovery: Most Bitter People You Will Ever Meet, a gut-punching three-paragraph essay. I'm not going to read the whole thing; go check it out. Delphi AI: a new platform that lets you create and share a digital clone of yourself. LangManus: a new open-source tool that makes it easier to build autonomous agents using LangChain and LangGraph. A clever new browser hack lets you read paywalled content. The Rise of Agentic AI is out, which I had a chance to contribute to; definitely go check that out in the links. Personal Best: a neat little tool that shows you which personal blogs are most popular on Hacker News. I'm going to mine this and put them all into Threshold, so they'll be in Threshold soon. Someone recommends against Brave, and I thought it was a pretty good argument. I don't use it, so it wasn't super pertinent to me, but worth sharing. Circuit Tutor: a neat little tool that lets you describe simple circuits in plain English and get both schematics and interactive explanations, for folks who need EE refreshers. GoAct: a new tool that turns your text or files into browser-based explainer videos.
Also a new GitHub OSINT tool that scrapes public user info, including emails, organizations, and repositories. And the recommendation of the week: ask yourself every six months, what would you do if you weren't afraid? What would you do right now if you weren't afraid? What's causing the fear? Should you push through the fear and do it, or can you at least build a plan to reduce the risk of doing it? Constantly challenge yourself on this. A better life is on the other side. And the aphorism of the week, which is insane, this thing is crazy: "You sense you should be following a different path, a more ambitious one. You felt you were destined for other things, but you had no idea how to achieve them. And in your misery, you began to hate everything around you." Fyodor Dostoyevsky. Unsupervised Learning is produced on Hindenburg Pro using an SM7B microphone. A video version of the podcast is available on the Unsupervised Learning YouTube channel, and the text version with full links and notes is available in the Daniel Miessler newsletter. We'll see you next time.