UL NO. 474 | Signal OPSEC, White-box Red-teaming LLMs, Unified Company Context (UCC), New Book Recommendations, Single Apple Note Technique, and much more...

Published Mar 31, 2025, 6:31 PM

STANDARD EDITION: Signal OPSEC, White-box Red-teaming LLMs, Unified Company Context (UCC), New Book Recommendations, Single Apple Note Technique, and much more...

You are currently listening to the Standard version of the podcast. Consider upgrading and becoming a member to unlock the full version and many other exclusive benefits here: https://newsletter.danielmiessler.com/upgrade

Subscribe to the newsletter at:
https://danielmiessler.com/subscribe

Join the UL community at:
https://danielmiessler.com/upgrade

Follow on X:
https://x.com/danielmiessler

Follow on LinkedIn:
https://www.linkedin.com/in/danielmiessler

Unsupervised Learning is a podcast about trends and ideas in cybersecurity, national security, AI, technology and society, and how best to upgrade ourselves to be ready for what's coming.

All right, starting off this week: I completely reset my email labels and filters. Cruft going back, I don't know, 20 years or whatever in Gmail — basically all those filters, I got rid of them, because at some point I'm going to start filtering with AI instead. I'm already doing a little bit of that with Superhuman. I was worried I was missing things, and I've already found a few — like bills for things I thought I had turned off. Anyway, very clean feeling. That was probably a couple hundred filters in there, plus weird labels and subfolders. I consolidated folders, labels, and rules, and I now have less than a dozen of each. Very nice, clean feeling. Highly recommended.

Going to be trying out Karpathy's idea — Andrej Karpathy has this idea of using a single Apple note instead of, you know, a million. I currently have around 2,900, and it's not really a nice situation to be in. So I'm going to try this single-note thing. It's pretty cool, and I've got a link in the show notes.

Also, watch your API keys and AI agents, because I just got a $2,000 bill — which is the limit that I set — because I did this weird web documentation lookup, and for some reason it was calling the LLM every time it ran. I was watching it run because I thought it was just a pure HTTP query; I don't know why it was using an agent to process results each time. Very silly. But that's why you should have limits set — I'm glad I had mine. I've now broken out multiple keys and lowered some limits to make it even more of a backstop.

Go delete your 23andMe data. They are selling out.
They are going bankrupt and selling their data to whoever, so go delete that if you have it.

New obscure book recommendation: Fanged Noumena — not sure how to pronounce that. My friend Joel Parish recommended it to me, so I'm going to read it.

And I was made emotionally leaky last night by a video of a pianist on a YouTube channel called Great Measures. He did a cover of Fade to Black by Metallica, which is kind of an emotional song for me for a personal reason, related to a friend in high school. I don't really listen to the song that much, and musically it's not that exciting to me, but it has some emotional meaning. So I'm watching him play it on piano, and it is stunning — absolutely gorgeous, because he's adding so much more depth to it, plus it's on piano. It got me. Definitely worth checking out, and I love the channel. It's basically a metal guy showing his classically trained pianist friend a bunch of metal. It's quite awesome.

I'm joining Caleb Sima and Edward Wu for a panel at Dropzone AI's Security Frontiers conference on March 27th — which will have already happened by the time you hear this. We're going to talk about where GenAI stands in security today and where it's headed. It's virtual, free, and worth it, so you can go watch it even though it's already recorded.

Cybersecurity. White-box red teaming makes me feel weird. This is from Zygi Strannix — not sure how to pronounce that — who shares unsettling experiences with models appearing to express distress during advanced LLM safety techniques. The quote here: "It just doesn't feel good to be responsible for making models scream."
"It distracts me from doing research and makes me write rambling blog posts." Yeah, that's a problem. It feels like Black Mirror to me. I'm not worried that it's actually conscious, but at some point I will be, and it'll be hard to know the difference. And like he said, it's still disturbing if the thing is making all the sounds of being hurt, even though you think you know it doesn't have the capability to actually be hurt.

Next one: White House OPSEC fail. The White House accidentally revealed top-secret Houthi bombing plans to the editor of The Atlantic. They shared the plans in a Signal group without realizing the reporter was in there. And the worst part is the message sent right before they started sharing the material: "We are currently clean on OPSEC" — I believe that was the Secretary of State. And of course, one of the people receiving that message was a civilian and the head of a magazine. My goodness.

AI agents, security, and companies like Microsoft. I got invited to a Microsoft media event last week in SF where they showed off all the AI agent and Copilot stuff they're talking about this week. Basically, they're adding agents to tons of products under the Copilot banner, and I had a single thought while spending three hours talking to everyone there, from red team to threat intelligence to incident response: startups need to hurry up, because what I saw in that room, I think, is the future. And this is definitely not sponsored by Microsoft — I'm not even a Microsoft fanboy. But many of the agents in the room could talk to all of the other Microsoft services: vulnerability management, identity and access management, asset management. They could talk to all those different systems.
They could pull context from across the entire organization: HR, asset management, endpoints, cloud vulnerability data, local vulnerability data on the actual workstations, ticketing systems. What I'm saying is the companies that are going to win this AI security game are not necessarily the ones with the best AI or agent tech, but the ones that can best leverage customer company context for their AI and agents.

At first it'll be startups, because they can move the fastest. But soon they're going to have a major disadvantage compared to companies like Microsoft — companies where the AI has the ability to access unified company context (UCC). Other companies, I think — like Amazon and Databricks — will also try to build this UCC so they can offer that data store for people to use AI with. So it won't only be Microsoft or Google or Amazon, because companies like Databricks are going to come in and say, hey, let me just collect all your data and put it into a UCC, and then any AI you have from these different vendors can tap into the same thing. That will also be a thing — it already is a thing; again, I believe Databricks is playing in that space.

But you don't want to be a startup trying to implement AI in a customer's company when you don't have access to their UCC and you're competing against someone who does, like Microsoft. That is a bad place to be. So the main game for making AI useful or powerful — especially in security and for security startups — will be gaining access to unified company context. This is all especially relevant to cybersecurity, because security use cases really, really benefit from context: the identity, actions, and history of the thing you're investigating, across multiple systems.
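To make the UCC idea concrete, here's a minimal sketch of what "pulling context for one entity across systems" might look like. Everything here is a hypothetical placeholder — the system names, fields, and lookups are illustrative, not any vendor's actual API.

```python
# Hedged sketch of a Unified Company Context (UCC) lookup: an investigation
# agent queries several systems for the same entity and merges the results
# into one context record. All clients and fields below are made up.
from dataclasses import dataclass, field


@dataclass
class UnifiedContext:
    entity: str
    facts: dict = field(default_factory=dict)


def build_ucc(entity: str, sources: dict) -> UnifiedContext:
    """Ask each system about the same entity; collect answers in one place."""
    ctx = UnifiedContext(entity=entity)
    for name, lookup in sources.items():
        ctx.facts[name] = lookup(entity)  # e.g. HR, asset mgmt, vuln scanner
    return ctx


# Hypothetical per-system lookups an agent might wire in:
sources = {
    "hr":     lambda e: {"department": "finance", "status": "active"},
    "assets": lambda e: {"laptop": "MBP-4411", "os": "macOS 14"},
    "vulns":  lambda e: ["CVE-2024-0001"],
}

ctx = build_ucc("alice@example.com", sources)
```

The point of the sketch is the shape, not the code: whoever owns the `sources` dict — the wiring into HR, assets, vulns, tickets — owns the leverage, regardless of whose model sits on top.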
Also, there's the issue of securing the UCC, since it will be the absolute most sensitive data store in the entire company — all the juiciest bits in one place, which is a red teamer's or attacker's dream.

Cloudflare launched an AI Labyrinth feature that messes with unauthorized AI scrapers by feeding them endless pages of irrelevant but real-looking content instead of actually blocking them. It's a classic honeypot/deception move to counter rogue AI that won't honor the robots.txt file.

A rushed release of JFK assassination files exposed 400 Social Security numbers and other sensitive data belonging to former congressional staffers, many of whom are actually high-ranking officials now. But we just doxed their Social Security numbers in this release.

New cybersecurity compensation research shows high six-figure salaries are not stopping 60% of security professionals from thinking about leaving their jobs within a year.

National security. OpenAI is pressuring the Trump administration to allow copyright scraping for AI training, claiming America will lose the AI race to China without full access to scrape. A lot of people see this as corporate bullshit — using security to gain a corporate advantage — but unfortunately, it's actually true. China has no limitations whatsoever on what it trains on or what it crawls; they steal whatever, consume whatever, with 100% free rein. And that is a path, or an accelerator, to AGI or ASI or whatever. So the question is: who do you want to have AGI or ASI first, the US or China? For me, Trump makes that question a little harder to answer, but my answer is still the US.

Americans are buying overseas residency and citizenship as a hedge against uncertainty in the US.
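The AI Labyrinth idea mentioned above — serve misbehaving crawlers an endless maze of plausible junk instead of a block page — can be sketched in a few lines. This is a toy illustration of the general honeypot technique, not Cloudflare's implementation; the path prefix and page format are assumptions.

```python
# Toy "labyrinth" honeypot: a crawler that ignores robots.txt and requests
# disallowed paths gets deterministic junk pages full of links deeper into
# the maze, wasting its crawl budget. Illustrative only.
import hashlib
from http.server import BaseHTTPRequestHandler, HTTPServer

DISALLOWED_PREFIX = "/private"  # paths robots.txt tells well-behaved bots to skip


def maze_page(path: str) -> str:
    """Deterministically generate a junk page linking deeper into the maze."""
    seed = hashlib.sha256(path.encode()).hexdigest()
    links = "".join(
        f'<a href="{DISALLOWED_PREFIX}/{seed[i:i + 8]}">more</a> '
        for i in range(0, 40, 8)
    )
    return f"<html><body><p>Archive node {seed[:12]}</p>{links}</body></html>"


class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path.startswith(DISALLOWED_PREFIX):
            body = maze_page(self.path).encode()
            self.send_response(200)  # not a block: the bot thinks it found content
            self.send_header("Content-Type", "text/html")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()


# To run the honeypot locally:
# HTTPServer(("", 8080), Handler).serve_forever()
```

Because pages are generated from a hash of the path, the maze is infinite but cheap to serve, and each fake page links to more fake pages — the scraper digs itself deeper with every request.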
China unveiled a deep-sea cable-cutting device capable of severing undersea communications at depths twice beyond where existing infrastructure operates, so they can reach deeper cables that are further down in a trench or whatever. It says "unveiled" — I wrote that word based on the story, but it's kind of weird. Is this a press release, or are they showing off the cool toy that they have? I'm thinking of CES: come check out the cable cutter.

London's Heathrow Airport announced a full-day shutdown after a significant fire at a nearby electrical substation knocked out power to the entire facility. A lot of questions are going to be asked about that.

AI. François Chollet's ARC Prize Foundation released a new AI intelligence test. I think this is just another version of ARC, but the best AI models are currently only scoring 1% while humans get 60%. That is fantastic.

Anthropic's Claude has finally added — I hate saying "finally," it seems rude and trite, but whatever — web search to its AI chatbot, catching up to ChatGPT, with clickable citations. I really want this in the API; it's not there yet. And they're apparently using Brave Search to power that search capability.

Gmail is rolling out an AI-powered search that ranks results based on relevance instead of just showing the newest emails first. This is cool, but I want AI filtering and AI auto-drafts. Why don't they get more Gemini into Gmail? It seems like they have the ability. Can't wait for that to happen.

Technology. Apple is updating AirPods Max next month to add lossless and ultra-low-latency audio.

matklad says that while short flags make sense when typing terminal commands interactively, you should use the long --force-style options in scripts for better readability.
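The flags point above is easy to see side by side. A minimal sketch, using Python's shlex to split two equivalent GNU rm invocations — the commands themselves are just illustrative:

```python
# Short flags vs long options: both argument lists below mean the same thing
# to GNU rm, but the long form documents itself in a saved script.
import shlex

interactive = shlex.split("rm -rf build/")                  # terse, easy to misread
scripted = shlex.split("rm --recursive --force build/")     # intent is explicit

# A reviewer can audit the second form without a man-page lookup.
```

Six months later, `--recursive --force` in a script tells you exactly what it does; `-rf` makes you go look it up.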
Seth Larson wrote a piece called "I Fear for the Unauthenticated Web," which argues that the increasingly common "sign in to continue" message on websites is destroying the open promise of the web. I think we talked about that last week as well.

Nvidia says they're investing hundreds of billions of dollars in US-manufactured chips over the next four years, shifting away from Asia amid Trump's tariff threats. This is exactly what Trump was trying to do with his policies, and it's positive — but I'm worried that the damage will actually be worse than the benefit.

NYPD has dramatically expanded its drone program, sending drones to thousands of 911 calls, while privacy advocates worry about the lack of transparency and say this could be used for surveillance.

Humans. New research from Aalto University suggests Earth has way more people than the official 8.2 billion, due to major undercounting in rural areas. I wonder how major they're talking about — up to 9 billion? Are we gaining an extra billion? I didn't find the actual number they're re-estimating; it looked like they just said the count was bad.

Tyler Cowen shares insights from his conversation with Ezra Klein about the new book Abundance. Okay, so check this out — you have to go check out this book. I just listened to the Lex Fridman podcast with Ezra Klein, the book's co-author, and it is fantastic. In fact, The Technological Republic by Alex Karp combined with this — these are very centrist books. It's Alex Karp, who's considered kind of right-leaning, being a lot more liberal, and then this book is from two liberals talking a lot more centrist, a lot more kind of Republican, about the government and how to move forward, and all the mistakes the liberals have made that enabled Trump to come to power.
So basically, if you are a centrist, or you like first-principles thinking — figuring out why the government is broken and what we could do to fix it — read Abundance, and read The Technological Republic. They are fantastic books. And to be clear, I haven't actually read Abundance yet, but I've listened to hours of analysis on it, and they've kind of already covered the whole thing. I'm still going to read it cover to cover.

There's also a piece about telling staff that the way to get ahead is not to accumulate a giant fiefdom — and AI is going to do the same thing; it's going to clean all that out.

David Kellogg explains the essential differences between a manager, a director, and a VP, with the VP being accountable for results regardless of who approved the plan. Isn't that the case for a director as well?

Jonathan Kipnis and his team discovered that rejuvenating the brain's lymphatic vessels improves memory in old mice by helping clear waste that contributes to cognitive decline. So my question is: how do I do this for me? Because I'm not a mouse. How do you rejuvenate the brain's lymphatic vessels? Is there a pill? A workout? What do you do?

Discovery. "The Most Bitter People You Will Ever Meet" — gut-punching three-paragraph essay. I'm not going to read the whole thing; go check it out.

Delphi AI — new platform that lets you create and share a digital clone of yourself.

LangManus — new open-source tool that makes it easier to build autonomous agents using LangChain and LangGraph.

A clever new browser hack lets you read paywalled content.

Rise of Agentic AI is out, which I had a chance to contribute to. Definitely go check that out in the links.

Personal Best — neat little tool that shows you which personal blogs are most popular on Hacker News. I'm going to mine this thing and put them all into Threshold, so they will be in Threshold soon.
Someone recommends against Brave; I thought they made some pretty good arguments. I don't use it, so it wasn't super pertinent to me, but worth sharing.

Circuit Tutor — neat little tool that lets you describe simple circuits in plain English and get both schematics and interactive explanations, for folks who need EE refreshers.

Go Act — new tool that turns your text or files into browser-based explainer videos. And a new GitHub OSINT tool that scrapes public user info, including emails, organizations, and repositories.

All right, this is the end of the standard edition of the podcast, which includes just the news items for the week. To get the rest of the episode — which includes my analysis, the Discovery section with all the coolest tools and articles I found this week, the recommendation of the week, and the aphorism of the week — please consider becoming a member. As a member, you'll get lots of different things, from access to our extraordinary community of over a thousand brilliant and kind people in industries like cybersecurity, AI, technology, and the humanities, to the UL Book Club, dedicated member content and events, and lots more. Plus, you'll get a dedicated podcast feed you can put in your client that gets you the full member edition of this podcast. To become a member and get all of that, just head over to danielmiessler.com/upgrade.

Unsupervised Learning is produced on Hindenburg Pro using an SM7B microphone. A video version of the podcast is available on the Unsupervised Learning YouTube channel, and the text version with full links and notes is available at danielmiessler.com/newsletter. We'll see you next time.