➡ Secure what your business is made of with Material Security:
https://material.security/
In this episode, I speak with Patrick Duffy from Material Security about modern approaches to email and cloud workspace security—especially how to prevent and contain attacks across platforms like Google Workspace and Microsoft 365.
We talk about:
• Proactive Security for Email and Cloud Platforms
How Material goes beyond traditional detection by locking down high-risk documents and inboxes preemptively—using signals like time, access patterns, content sensitivity, and anomalous user behavior.
• Real-World Threats and Lateral Movement
What the team is seeing in the wild—from phishing and brute-force attacks to internal data oversharing—and how attackers are increasingly moving laterally through cloud ecosystems using a single set of compromised credentials.
• Customizable, Context-Aware Response Workflows
How Material helps teams right-size their responses based on risk appetite, enabling fine-grained actions like MFA prompts, access revocation, or full session shutdowns—triggered by dynamic, multi-signal rule sets.
Subscribe to the newsletter at:
https://danielmiessler.com/subscribe
Join the UL community at:
https://danielmiessler.com/upgrade
Follow on X:
https://x.com/danielmiessler
Follow on LinkedIn:
https://www.linkedin.com/in/danielmiessler
Chapters:
00:00 - Welcome & High-Level Overview of Material Security
02:04 - Common Threats: Phishing and Lateral Movement in Cloud Office
05:30 - Access Control in Collaborative Workspaces (2FA, Just-in-Time, Aging Content)
08:43 - Connecting Signals: From Login to Exfiltration via Rule Automation
12:25 - Real-World Scenario: Suspicious Login and Automated Response
15:08 - Rules, Templates, and Customer Customization at Onboarding
18:46 - Accidental Risk: Sensitive Document Sharing and Exposure
21:04 - Security Misconfigurations and Internal Abuse Cases
23:43 - Full Control Points: IP, Behavior, Classification, Sharing Patterns
27:50 - Integrations, Notifications, and Real-Time Security Team Coordination
31:13 - Lateral Movement: How Attacks Spread Across the Workspace
34:25 - Use Cases Involving Google Gemini and AI Exposure Risks
36:36 - Upcoming Features: Deeper Remediation and Contextual Integration
39:30 - Closing Thoughts and Where to Learn More
Unsupervised Learning is a podcast about trends and ideas in cybersecurity, national security, AI, technology and society, and how best to upgrade ourselves to be ready for what's coming.
All right, Patrick, welcome to Unsupervised Learning.
Thanks, Daniel. Pleasure to be here.
Yeah. So, last time I chatted with Material, I spoke with Abhishek. We had a really interesting conversation, and what I really took away from it, what I found so interesting, was the focus on... because I asked about detective controls, and he said, yeah, it's not so much about detection. It's more about putting on the seatbelts, the preventative controls, what happens after a breach, how do you limit the blast radius. I thought that was a really interesting characterization. Is that the way you think about it as well?
Yeah, that's certainly a large part of it. When I think about what we're doing here at Material, it's actually the seatbelts, but also the brakes, and picking up on the accidents before they happen. That's where we're heading as a company: not only doing the threat detection, but making sure that if and when something does go sideways, we can stop it and prevent as much impact as we can.
Yeah, that's fantastic. So before we jump into the product more deeply, what types of stuff are you seeing out there? What types of threats, what attacks? What are attackers doing currently?
Yeah. So one of the things we're seeing, and it's not going to be surprising to your audience, is a lot of inbound phishing threats hitting organizations. We know that's a pretty popular entry point into a lot of teams' infrastructure, so we see it pretty frequently. But one of the things that I think is overlooked when it comes to the cloud office is lateral movement across the cloud office. If you think about the credentials for Google Workspace or M365, that's a pretty valuable piece of information for the attacker, because for any employee at an organization, it's the first thing they get when they onboard and the last thing they have right before they offboard. It's usually the keys to the kingdom, not only for other tools but also within the organization. You can move pretty freely once you have access to somebody's email credentials, across the shared drives and all that sensitive information, which can be pretty damaging, as we've seen with past breaches.
Yeah, that makes sense. So it's not only access to email, but like you said, it's Google Docs, it's drives. I mean, that's the power of the ecosystem, that you can move around, right? So that same advantage goes to the attacker.
Yeah. And it's also... sorry, I just want to jump in as well. It's interesting, right? Because it's such a collaboration tool, it's also a challenge for security teams. One of the challenges we're seeing is trying not to be just the Department of No, but to facilitate collaboration across security, IT, and their other colleagues. You can't just shut down email, you can't just shut down access to Drive. You have to rightsize who gets access to what and when, and we have a lot of tools to help support that for our customers.
Okay. So what does that look like? Is that a specific product for the Google space, or what is that?
Yeah. So it's a capability that comes with our product out of the box, where we're able to enable just-in-time access and two-factor controls for sensitive documents and documents that have aged out of a grace period. So you can say, within two weeks, let's put a two-factor block on anything that has financial information across the organization, or, for these subsets of users, let's put two-factor behind all historical emails in their inbox. So if a hacker does get access to their credentials, they can't just run wild and export sensitive or proprietary information, things that could lead to real, substantial harm to an organization, either reputationally or from a business impact perspective.
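To make that aging idea concrete, here is a minimal sketch of a time-based lockdown rule, assuming a hypothetical Document model, a made-up "financial" label, and a two-week grace period; it is an illustration, not Material's actual rule engine or API.

```python
# Hypothetical sketch only: a time-based "age out" rule that puts two-factor
# in front of financial documents that have sat untouched past a grace period.
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

GRACE_PERIOD = timedelta(weeks=2)

@dataclass
class Document:
    name: str
    labels: set = field(default_factory=set)   # e.g. {"financial"}
    last_accessed: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    two_factor_required: bool = False

def age_out_sensitive_docs(docs):
    """Require step-up auth on labeled docs that aged out of the grace period."""
    now = datetime.now(timezone.utc)
    for doc in docs:
        if "financial" in doc.labels and now - doc.last_accessed > GRACE_PERIOD:
            doc.two_factor_required = True  # stolen credentials alone no longer suffice
    return docs

stale = Document("Q3-forecast.xlsx", {"financial"},
                 datetime.now(timezone.utc) - timedelta(days=30))
for doc in age_out_sensitive_docs([stale]):
    print(doc.name, "-> two-factor required:", doc.two_factor_required)
```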
I find this whole concept really cool. So again, it's not about a sensor, it's not about, oh, I detected this, let me make this change. It's like, look, we have this giant lake or ecosystem of sensitive content and data, and there are things we could be doing right now, like you said, time-based things that are just tweaking the knobs for settings, lockdown, and configurations. So if I'm thinking about this from a fundamental standpoint, there are things you have to lock down, there's identity you could use, there are granular permissions. And the product seems to be about deciding what ideal might look like and then going in and making those tweaks on a continuous basis. Is that right?
Yeah, it's on a continuous basis, and it's also contextually aware. You have to be able to understand where your employees are logging in from, at an individual level, on a regular basis. Because if I'm logging in from the East Coast of the United States regularly and then you see a login from somewhere in Western Europe, or somewhere around the globe where I'm not usually, that should raise some alarms, right? You might have some tools that will flag that, but in isolation. Same thing with your DLP tool: oh, there are some sensitive searches happening, but that also happens in isolation. You really need a tool that helps connect the dots: we noticed a login, then we noticed some suspicious activity, then we noticed some data exfiltration happening. For us, the ideal state of what we're building toward is that whole throughput. If there's a novel attack happening via the inbox, say a user clicks through to a login page that has a credential harvester, we know they got a suspicious email, then they clicked through, then we saw the suspicious login and the anomalous activity, and we can connect those dots together. What I've seen in my experience is that point tools do a pretty good job of picking up those individual data points, but taking a step back and seeing the full mosaic, so to speak, and having a clear understanding, that's where teams are still having a lot of trouble and doing a lot of work themselves that they don't necessarily need to.
Okay, so let me rethink this then. You are doing some current context analysis of what's going on right now. So what are the sources for that? What are you able to see? Is that all within, say, Google Workspace, the logs that you're using, or do you have other telemetry, other signal from other systems?
Yeah. So Google Workspace or Microsoft 365 is certainly a pretty big source of data for us. We also incorporate other third-party intelligence tools that you would expect for a security product. That really allows us to say, once we see something suspicious happening, let's put things on lockdown, let's make sure access is rightsized, or we can revoke access if we notice an issue has been raised, say a user with a suspicious login and a file share to a third party that is basically unsanctioned.
Okay, so walk me through a scenario here. I think you were giving me an example earlier. Is it a strange time of night, or a strange geo that the person logs in from? What are the various triggers that can get this going?
Yeah. It could be noticing a pattern of successive failed logins. So if somebody's trying to brute-force a password and then they finally get in, we might pop a notification that says the user has a login after a suspicious or brute-force attempt, and then we saw anomalous search activity on the drive. The administrators can, within our product, have already set up automation that says: revoke external access to the files being shared after that suspicious search. So you can automatically say, once you've seen one and two, go do the third thing automatically and revoke the access, and therefore the data shouldn't be leaving the building for anybody outside of the registered domains.
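A rough sketch of that "once you've seen one and two, go do the third thing" chain, assuming hypothetical event names and a stand-in revoke_external_access() response hook rather than anything from Material's real product:

```python
# Hypothetical sketch: brute-force login pattern, then anomalous drive search,
# then automatically revoke external file shares.
FAILED_LOGIN_THRESHOLD = 5

def should_revoke_external_access(events):
    """Return True once a successful login follows a burst of failures
    and is itself followed by anomalous drive search activity."""
    failures = 0
    brute_forced_login = False
    for event in events:                      # events ordered by time
        if event["type"] == "login_failed":
            failures += 1
        elif event["type"] == "login_success":
            brute_forced_login = failures >= FAILED_LOGIN_THRESHOLD
            failures = 0
        elif event["type"] == "drive_search" and event.get("anomalous"):
            if brute_forced_login:
                return True
    return False

def revoke_external_access(user):
    # Placeholder response action: drop any share outside registered domains.
    print(f"revoking external shares for {user}")

events = [{"type": "login_failed"}] * 6 + [
    {"type": "login_success"},
    {"type": "drive_search", "anomalous": True},
]
if should_revoke_external_access(events):
    revoke_external_access("patrick@example.com")
```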
Okay. So those are pre-set up, so those are preventative. Yep. Is there anything that's happening dynamically, like dynamically writing some of those rules?
So our team, we have a team of three researchers, and we also use AI, as a lot of companies do today, to analyze our rule sets and make sure we're being really precise with the rules that we're writing. But our customers can also come in and write custom rules based off very specific threats they're thinking about, or to be proactive around data sets that aren't necessarily super sensitive but are important to their organization. An example we've seen a couple of times is notes for board meetings, where there might not be sensitive information in there per se, but you don't want those getting out. So customers can set up rules saying, if it goes to XYZ domain or XYZ email address, let's put any access to that document behind two-factor.
Yeah, that makes sense. And are there different sets of templates? So if a particular customer comes on, is there a threat model for that particular kind of customer, where they would want to install a whole bunch of these preventative things, all as a group or a cluster?
Not today. One of the things we're thinking about, and engaging in conversations with our customers about, is really trying to rightsize what that looks like based off the threat plane. We work with financial services companies and we're seeing certain things there that are also fairly similar to healthcare, so we're starting to group things like that. If it's an up-and-coming SaaS startup with a small security team, that's a different set of profiling they want to do. So we're starting to work towards that, and that's definitely an evolution of our product that's coming.
Okay. But when somebody onboards, what all gets turned on automatically? Are a number of these preventative controls already there by default?
Yeah. So we have hundreds of rules at this point across our email and cloud office threat suite that are on by default when somebody signs up for Material. That includes things like all of our inbound email threat detection rules, but also suspicious file shares and alerting on risky configurations inside the workspace. The way we're thinking about this, in terms of an analogy, is something like EDR for the cloud office, or, to use the example that's been in the headlines over the last week, for Google Workspace. So we're thinking about how we give that full visibility and rule set to customers and their teams, so they can really identify where there's the most risk and then remediate it, out of the box.
Yeah, that makes sense. So it sounds like you're very much focused on this type of functionality. But what are some of the attacks you're seeing in that space?
Yeah. Some of the malicious actors are still phishing, trying to get the credentials, but there's also inadvertent sharing. It's not necessarily an attack, but it's risk to the organization: a lot of employees sharing sensitive documents set to "anyone with a link can see it," and just public shares, where data leakage like that can be pretty damaging for organizations. That's an area where we've seen a surprisingly high number of issues raised by our product so far, of sensitive documents being shared with the entire world.
Yeah, I love that. I love the fact that it doesn't have to be a front-page-news, sexy attacker doing the damage. It could be the benign user who just made a mistake. And because both of those are risks, both of those are handled via rules, right?
Yep. Exactly.
Yeah, awesome. What are some other abuse cases? Not so much attacker-based, but what are some of the other pieces that the rules cover?
Yeah. So it's things like best-practice configuration for the cloud office. One example I have is organizations that might set group moderation settings so the group can be interacted with externally. If you think about that, if you have a group of VIPs and that group is externally visible, somebody can just email-bomb their VIP team or their C-suite. That can be hugely disruptive to businesses, if there's all of a sudden a DDoS campaign on the inbox because a group of malicious actors is just spamming it and filling it up. That's another thing we've actually seen quite a few times, where some moderation settings weren't quite optimized, and then their executives' inboxes were totally filled up because of a harassment campaign.
Okay, so humor me on this. I'm trying to think of all the different granular controls you could possibly do. So what are some of the control points? You could prompt for MFA if you see something suspicious. You could remove access via an ACL-type control. You could have a time-based control if something is outside of a certain time window. Yep. What's another signal? A geo-based signal?
Yeah. So based off of IP address, where we're seeing the logins. Also who they usually share with versus who they're sharing with now, and the frequency of shares, is something we're looking at. So we look not only for anomalous search and activity within the drive, but also, are you starting to email folks within your organization that you don't normally email? Those sorts of patterns.
There you go.
Outside of the normal, base-level data points, it's about picking up real anomalies, where this is unusual for a user based off of what we usually see.
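As a small illustration of that "who they usually share with" signal, here is a sketch that flags recipients who fall outside a user's historical baseline; the baseline data and threshold are invented for the example, not taken from Material's product.

```python
# Hypothetical sketch: compare today's share recipients against a per-user
# historical baseline and flag the unusual ones.
from collections import Counter

def unusual_recipients(history, todays_recipients, min_prior_shares=2):
    """Flag recipients this user has rarely or never shared with before."""
    baseline = Counter(history)
    return [r for r in todays_recipients if baseline[r] < min_prior_shares]

history = ["alice@corp.com"] * 40 + ["bob@corp.com"] * 15
today = ["alice@corp.com", "random@unknown-domain.io"]
print(unusual_recipients(history, today))   # ['random@unknown-domain.io']
```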
And how about a classification or a content type? Are you able to see anything like that?
Yeah. So we classify everything in the drive based off the type of information that's there. We pick up on things like Social Security numbers and financial information. Customers can also add custom tags for things like proprietary information, and that integrates directly with Google Workspace, where if somebody puts that tag on a document, we'll pick it up, and the security team can then remediate and protect it as they want.
Okay. And you mentioned particular drives internally. What about threat intelligence, or at least basic threat intelligence, like, oh, this very sensitive thing is being put on Pastebin, which everyone knows is a dump site. Is that the type of thing you can get signal from?
Yeah, yeah, we're pulling in things like that, Have I Been Pwned sorts of websites, being able to pull that intel for accounts and other sorts of information. We're also looking out for impersonation campaigns on login sites, so taking the IP and other information from a customer's organization profile, seeing how malicious actors are using it, and trying to protect against that.
Yeah, that's really cool. Okay, so you said you have hundreds of rules, that's the basic magnitude. And those would be the various combinations of these different signals combined with the different control points, right?
Yep. Exactly.
Interesting. Any other thoughts on the functionality here, or the types of attacks it's detecting?
Really, the thing I'm excited about that we're picking up on and helping protect against is the lateral movement across the workspace, while not getting in the way of productivity for organizations. I've been in security for a few years now, and one of the things that's really interesting to see is how passionate security professionals are about trying to stop the threat, but also how afraid they are of getting in the way of business operations. I've seen a bit of a gun-shy mentality at times: hey, should I go do this remediation? Should I cut off this box from the internet? Well, I know the attacker is there, so yes, you should, but I also know this is a pretty important laptop, right? So being able to connect the dots of this lateral movement, now that the office where everyone is working is in the cloud, it's pretty cool to be able to say: hey, we are actually, with pretty high fidelity, picking up the suspicious login and the suspicious drive activity, and then shutting it down in a way where the user might not even realize how they're being protected, because we're not kicking them out of their session immediately. We're protecting the data first, and then allowing the security team to follow up with the user and say, hey, just so you know, we've been compromised, we're going to revoke access to your account for a second, we'll do a password reset, which we can also support. That's all within the realm of the security team being able to work collaboratively with colleagues, versus just coming in and being a disruptive force for the organization.
Yeah, interesting. So what does that interaction look like? How do you let them know, or how does that interaction happen, when you feel like you're dealing with something that's live and has to be dealt with right now? Is it a text platform, is it an email, how are they getting that?
Yeah. So our customers can integrate us into their entire workflow. We integrate with tools like PagerDuty and Slack, and we integrate with tools like Tines, so any automation workflow that will take that signal from us and run with it automatically is something we support. Security teams can sign up for a Slack notification when they get a high or critical alert from us, and that'll drop into the security team channel they may have configured for Material inside their Slack workspace. From there they can decide to action it, they can let their automation tool take care of it, or they can reach out out-of-band to their colleagues and let them know: hey, we know we've protected the document, that's cool, we now need to reset the password, and we'll make sure you do that in a timely manner.
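For the Slack notification flow described above, a hedged sketch of severity-based routing to an incoming webhook might look like this; the webhook URL, alert shape, and severity names are placeholders, and how Material actually delivers alerts is configured in-product.

```python
# Hypothetical sketch: route only high/critical alerts into a Slack channel via
# an incoming webhook. Everything here is a stand-in, not a real integration.
import json
import urllib.request

def notify_security_channel(alert, webhook_url=None):
    """Forward high/critical alerts; lower severities stay in the normal queue."""
    if alert["severity"] not in ("high", "critical"):
        return
    payload = {"text": f"[{alert['severity'].upper()}] {alert['summary']}"}
    if webhook_url:  # only post when a real webhook is configured
        req = urllib.request.Request(
            webhook_url,
            data=json.dumps(payload).encode(),
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(req)
    else:
        print("would send:", payload["text"])

notify_security_channel(
    {"severity": "critical",
     "summary": "Suspicious login followed by anomalous drive search"}
)
```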
That makes sense. And when you talk about the lateral movement piece, give me an example. Are you talking about an active attacker doing that, or... exactly what do you mean by the lateral movement?
Yeah, so it is an active attacker. Once credentials have been stolen and somebody is able to log into an organization that might not have MFA, or they've bypassed MFA, once we start seeing that anomalous login or the anomalous search activity, we'll be able to shut down the attack by putting everything behind two-factor for that customer at that moment. Or, if that's already pre-configured as a general protection, they don't have to worry about it.
Okay, yeah, that's interesting. So normally they would have been able to just pull up files from a given share, but because they did this sensitive activity that looks anomalous, it doesn't stop them, it just puts up the two-factor prompt to guarantee they're actually who they say they are.
Yeah. And customers can also dial this in to the severity they wish. We have some customers that will say, you know what, as soon as we see this suspicious login, revoke the session, reset the password, no matter what's going on. But other organizations will say, you know what, what I really care about is the sensitive documents and not being disruptive to my colleagues. They have that configurability within the product, where they can set their threshold for how much risk they want to take on or how much disruption they want to bring to the organization. In past lives, I've seen instances where the security team found ransomware on a machine, and it happened to belong to an executive who was about to give a board presentation. It's one of those situations where you really want to think about how your organization works and rightsize the response. We're trying to enable that by picking up the signal all the way across the spectrum and allowing security teams to choose when is the right time to respond to specific sorts of threats within the product.
Yeah. And like you said, you adjust the control set according to the risk appetite of that organization. I love the fact that you can switch right into an ATO response if the security team wants to, but if there's a culture of, no, that's too extreme, that would make the security team look bad, we need a more gentle approach, you can just prompt for MFA.
Yeah, exactly. And one of the up-and-coming use cases I've had some conversations with customers about, which I think is also pretty interesting, is around sharing documents internally and starting to enable tools like Google's Gemini inside the workspace. If there is sensitive information inside a document that is unknowingly shared with everyone at the organization, that also means Gemini can pick up on it. That could include things like compensation and other bits of sensitive information that you might not want somebody to be able to easily query inside Gemini. It may have been harder to find in the past, but it becomes a lot easier to find with gen AI entering the workspace a little more proactively.
Okay. And what is the signal pickup there? How are we finding out what they're doing?
Yeah. So with that, we have a rule for: a document has sensitive information and it is shared with the entire organization.
Okay.
And even with something like that, a customer can then say, okay, email the owner letting them know they have this pretty over-permissive sharing enabled for this file, and give them six hours to remediate it. Otherwise, revoke access and set it to private.
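A minimal sketch of that grace-period remediation, assuming a hypothetical file record and stand-in notify/lockdown steps rather than the real Google Workspace or Material APIs:

```python
# Hypothetical sketch: warn the file owner about an over-permissive share,
# then revoke org-wide access if it is still in place after six hours.
from datetime import datetime, timedelta, timezone

GRACE = timedelta(hours=6)

def remediate_overshared(file, now=None):
    now = now or datetime.now(timezone.utc)
    if not (file["sensitive"] and file["shared_with"] == "entire_org"):
        return "no action"
    if "warned_at" not in file:
        file["warned_at"] = now                 # first pass: notify the owner
        return f"emailed {file['owner']}: fix sharing within 6 hours"
    if now - file["warned_at"] >= GRACE:
        file["shared_with"] = "private"         # grace period expired: lock it down
        return "access revoked, file set to private"
    return "waiting for owner"

f = {"owner": "sam@example.com", "sensitive": True, "shared_with": "entire_org"}
print(remediate_overshared(f))
print(remediate_overshared(f, datetime.now(timezone.utc) + timedelta(hours=7)))
```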
Okay, yeah, that makes sense. Anything new coming out soon? Any new functionality you're excited about?
Yeah. One of the things we'll be unveiling pretty soon is more connective tissue, continuing to evolve the threat detection and response capabilities across the attack lifecycle. As we build out and mature the product, we're focusing on being able to connect the dots with even higher fidelity across more types of use cases, from broader DLP to inbound email threats, being able to say, hey, this really does look like a problem, and we're going to help you remediate it. So you're not just adding tickets to a queue; really helping folks remediate is something we're going to be spending a lot of time on.
Oh, nice. Remediation. So what will that flow look like? Like, how are you tying deeper into the remediation flow?
Yeah. So that will include things like individual Slack notifications or out-of-band notifications for end users at an organization, and things beyond just revoking access: being able to pull in integrations with your IdP and start disabling access not only within the workspace but beyond it. So knowing that there has been some compromise of the identity, and we're seeing a firsthand account of somebody trying to move across it, being able to validate that. If your IdP doesn't know about it yet, we will help enable the remediation there.
Okay. Yeah, I'm really excited about that. So it's basically getting additional context from other places in the organization to be able to do the stuff you're already doing.
Yeah, because context is everything when it comes to security, right? Being able to know not only what's weird and what's not weird, but when you should go do something and when pausing for a moment might be the right call. I've worked with security teams where they've really been trying to be not only good colleagues but champions for their security organization and get that buy-in. One misstep in a response action, where you make the wrong call or act at the wrong time, really sets the security team back. So we're trying to partner with them to make sure they're rightsizing the remediation, automatically, with as much context as possible.
Nice. Well, awesome. Where can we learn more about the products?
Yeah. You can find us at material.security. You'll see everything we have to offer there, across the protection for the whole workspace.
Sounds good. Hey, David, are you there?
Yeah. Can you hear me?
Yeah, yeah. Any modules? Any other pieces of functionality we should ask about?
No, I don't think so. I think that pretty much touches on the basic stuff. Like I mentioned, the cloud workspace in general, that's our general vision, which we're releasing in, I guess, two days. So I think he touched on it. It's not really a new release, quote unquote; a lot of our real releases have already happened. This is more of an announcement that brings all these elements together. So I don't think there's anything on our end from a feature perspective that hasn't been called out that should be called out.
Okay. Sounds good. All right, Patrick, I enjoyed the conversation and thanks for the time.
Yeah, thanks for having me.
All right. Take care.
Bye.
Unsupervised Learning is produced on Hindenburg Pro using an SM7B microphone. A video version of the podcast is available on the Unsupervised Learning YouTube channel, and the text version with full links and notes is available at danielmiessler.com/newsletter. We'll see you next time.