What’s Hot in AI Security at RSA Conference 2025?
Audio-only version also available on your favorite podcast streaming service including Apple Podcasts, Spotify, and iHeart Podcasts.
Episode Summary:
In this fast‑paced episode, Madi Vorbrich and Protect AI Co‑Founder Daryan “D” Dehghanpisheh walk you through everything you need to know before heading to RSA Conference 2025: top sessions (NIST, CSA, OWASP), live booth chats with SAP, Microsoft, Trail of Bits, and more. Whether you’re chasing GenAI agents, zero‑trust strategies, or real‑world threat intel, consider this your RSAC survival guide.
Transcript:
[Intro]
Madi Vorbrich (00:07):
Welcome to the MLSecOps Podcast. I'm your host and one of your MLSecOps Community Leaders, Madi Vorbrich. And with me, I have the pleasure of introducing D, who is the Co-Founder of Protect AI. D, thank you.
D (00:22):
Hey, Madi, great to see you. And I'm so glad to be on this podcast with you. This is the first time that we've done this together.
Madi Vorbrich (00:28):
I know!
D (00:29):
So, this is great.
Madi Vorbrich (00:29):
I know!
D (00:30):
This is going to be fun!
Madi Vorbrich (00:30):
I'm super excited. This will be a very chill talk. I thought it'd be really fitting since we're, you know, about a week out from RSA to talk about kind of what's hot at RSA, what should people attend, what should we listen in on, what should we double click on?
So, for those who are listening, RSA is one of the biggest cybersecurity conferences in the US where people from all over the world, I mean, cybersecurity professionals, industry leaders, converge in San Francisco for a week and just discuss current and future concerns within that space.
So, D, with that, I know that Protect AI will be having a booth at RSAC this year. We're going to have a lot of fireside chats, a lot of demos, a lot of cool stuff happening. So, for our listeners, can you kind of just break down what those chats are going to look like and why they're important from like, an educational standpoint and what they can get out of it?
D (01:28):
Yeah. Well, I think it's worth noting that, you know, this is our third RSA as a company, Protect AI, and it's interesting because every year a theme emerges, right? The first year we went, it was just the founders of the company, and it was basically an empty room. It was this massive room, and it was us, some people from MITRE, and a couple of other people who were thinking about starting a company. That first year, it was really about AI security, and we didn't know if it would even be a thing. We were like, oh, maybe this is a mistake. It turns out there weren't even a hundred people in a room built for thousands, this big, empty, cavernous area, and there wasn't really a theme of AI.
D (02:21):
Fast forward one year, and all of a sudden it was all AI, AI, AI. Well, what changed? GenAI hit, right? And so the second show, our second year in existence, coincidentally, was all about generative AI. That was the theme. And we capitalized on that pretty well. We did some great events. We had a lot of talks on the floor. We had fireside chats. It was really cool stuff. And you can go back and look at the podcast and some of the things that we did there, and also some of the material that Protect AI put out.
But this year, if I had to guess, I'm going to say it's an extension of the GenAI theme, and it's going to be about agents. And so as a result, we have a lot of fireside chats and talks and guests that are going to be talking about agents and agents vis-a-vis the GenAI hype cycle, which is real.
D (03:16):
You know, and that's why we have Ken coming on. We have Ron and John, and Keith Hoodlet. Ken Huang is just, you know, it's going to be a great series of talks, right? So we're going to be having conversations with Ken about GenAI and agent security. We've got Ron and John talking about the OWASP Agentic Security Initiative. Again, you can tell that this is the theme that's marching on.
And finally, Keith Hoodlet's going to talk about the technical risk landscape as it relates to AI and deployment pipelines. Think about prompt injections and a whole bunch more. But as you can tell, it's really going to be about the next phase of GenAI, which is agents, right? And as a result, we're going to have a lot of talks in those spaces.
Madi Vorbrich (04:02):
Awesome. So with that, how would you tie in Protect AI solutions with those fireside chats? Like where's that connective tissue there?
D (04:12):
Yeah, so that's a great question, right? MLSecOps is a practice, and it's not about a brand. It's not about a single company. And MLSecOps is the foundation for AI security. And when you think of it in that capacity, the question becomes, okay, how does Protect AI, how does the suite of products that the company has really help in this journey? And it really does it in three really great ways.
The first way is that we scan the AI code. There are a lot of SCA and SAST tools that people are familiar with in the security space, basically code scanners that look for vulnerabilities in that code. We do that targeted to the AI landscape by focusing on the security of model files, the files associated with models, and the code that is an integral part of that.
D (05:04):
And besides having a whole lot of unique capabilities in our code scanners that keep the code more secure and more safe, if you will, and therefore more trusted when it's deployed, we also have context that other companies just don't have. So, for example, if there are certain types of calls in the code, most scanners would say that's okay. But we know the difference, being AI natives over the course of our careers, as is everyone in the company. We know why certain things should never be in a model file, ever. Right? So the first thing we do is we create secure and trusted AI by scanning the code.
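For listeners who want to see what that looks like in practice, here is a minimal sketch of the idea behind model-file scanning, not Protect AI's actual scanner: walk the opcodes of a pickle-serialized model and flag anything that can import or call arbitrary Python objects at load time. The file name is a placeholder.

```python
# A minimal sketch of the model-file scanning idea, NOT Protect AI's actual
# implementation: walk the opcodes of a pickled model and flag anything
# that can execute code when the file is loaded.
import pickletools

# Opcodes that can import or call arbitrary Python objects. Note that
# legitimate frameworks (e.g., PyTorch) do emit some of these for their own
# classes, so real scanners allowlist known-safe globals rather than
# flagging every occurrence.
SUSPICIOUS_OPCODES = {
    "GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ", "NEWOBJ", "NEWOBJ_EX",
}

def scan_pickle(path: str) -> list[str]:
    """Return potentially dangerous opcodes found in a pickle file."""
    findings = []
    with open(path, "rb") as f:
        for opcode, arg, pos in pickletools.genops(f):
            if opcode.name in SUSPICIOUS_OPCODES:
                findings.append(f"{opcode.name} at byte {pos}: {arg!r}")
    return findings

if __name__ == "__main__":
    # "model.pkl" is a placeholder path for illustration.
    for finding in scan_pickle("model.pkl"):
        print("[!] possible code-execution primitive:", finding)
```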
The second thing we do is automate the testing of those GenAI apps and establish baselines for the models that power them. I think that's really important because what separates AI from every other class of software, and that's all AI is, a unique category of software, is that it's powered by a machine learning model.
D (06:07):
And so when we talk about automating the testing of that, we're really talking about autonomous red teaming, human-augmented red teaming and penetration testing, if you will, capabilities that dynamically scan, test, validate, and verify the behavior, the security posture, all of the things you'd want to know in a very rigorous way about those GenAI applications and the models that power them. Right?
So the first thing we do is scan the code. The second thing we do is automate the testing. And that automated testing goes throughout the lifecycle. It is dynamic. It can be invoked at any point in the CI flow. There are all kinds of great capabilities.
And then last but not least, once that application has been built with secure code and tested before it goes out, it is continually tested, not just every now and then on the calendar when there's a breach or when there's some compliance requirement.
D (07:06):
But the last component is that once I've put that into production, I'm firewalling and observing the full context of what happens around the usage pattern. And the reason I say usage pattern is that, today, it's about humans interacting with the GenAI apps. Tomorrow it's going to be about agents, but you still need to monitor, guardrail, and firewall that traffic. And you need to be able to observe what is happening throughout that application and throughout that usage pattern, right? So again, we scan the code to make sure it's built well. We dynamically and continuously test it to make sure that it's doing what it should. And we monitor, observe, and firewall the traffic to make sure the usage is safe and trusted, right?
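To make that "firewall and observe" layer concrete, here is a toy sketch of the pattern D describes: inspect each prompt before it reaches the model, log the exchange, and block obvious injection attempts. The regexes and names are invented for illustration; production guardrails use far richer detection than keyword matching.

```python
# A toy illustration of an LLM "firewall": screen each prompt, log the
# exchange, and block obvious injection attempts before the model sees them.
# Patterns here are simplistic placeholders, not a real detection engine.
import re
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("llm-firewall")

INJECTION_PATTERNS = [
    re.compile(r"ignore (all |any )?previous instructions", re.I),
    re.compile(r"reveal (your )?(system prompt|hidden instructions)", re.I),
]

def guarded_call(prompt: str, model_fn) -> str:
    """Wrap any LLM client callable with screening and observability."""
    for pattern in INJECTION_PATTERNS:
        if pattern.search(prompt):
            log.warning("blocked prompt matching %s", pattern.pattern)
            return "Request blocked by policy."
    response = model_fn(prompt)  # model_fn is whatever LLM client you use
    log.info("prompt=%r response_len=%d", prompt[:80], len(response))
    return response
```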
So that's how our products come about. We're very, very proud of it. And, you know, I encourage everybody who's there to come by the booth, see how that is contextualized. And, you know, post RSA, stay tuned to this podcast. Look at the videos about Protect AI, watch the material that the team is producing, and you'll get a better context for how our products really help you, you know, build, deploy, and manage secure and trusted AI.
Madi Vorbrich (08:23):
Yeah, I mean, it really just sounds like a one-stop shop at the booth. You can really get it all by going there and tuning into the fireside chats, doing demos, talking to you and the team. So I think that's going to be really exciting for those who are attending, and for our listeners who stop by.
D (08:39):
Yeah. It's important to remember that. Sure, Protect AI is the sponsor of MLSecOps, right? But MLSecOps is a practice. That's why competitors and customers, regulators and technical users, everybody from lawyers to students, is involved in this community, right? Because it's designed to be a place to learn, a place to understand. But obviously, you know, as the company behind MLSecOps and a leader in AI security, we know that we're not alone in this journey. The mission to build more secure, trusted, and safe AI is really going to be incumbent upon everybody. But we're taking the lead in that, and we're very proud of what we've done so far.
Madi Vorbrich (09:26):
Yeah, as you should, really. And for those who are listening, I do have a list of when you can come see and interact with people like Ken Huang. So, starting off strong on Tuesday, we're going to have him give the talk D mentioned earlier. We're also going to have Ron del Rosario and John Sotiropoulos, who'll also be at the booth, plus Robert Linger, Helen Oakley, Keith Hoodlet. I mean, the list really goes on and on. And sessions will run hourly across those two days, Tuesday and Wednesday, at RSAC. So it's going to be a lot of jam-packed sessions, which I'm excited to hear more about.
D (10:08):
And these are really talented practitioners, right? They don't just have really well-formed opinions and thought leadership on this. When you think about Ron as the VP and Head of AI Security at SAP, you know, he's a practitioner of this. SAP is one of the largest companies in the world. I mean, we're all talking about supply chains in the era of tariffs and what's going to happen. It's kind of neat to think that one of the largest supply-chain-oriented software companies in the world has this mission to build secure AI. John is Head of AI Security at Kainos, right? Going back to 2022, that job didn't exist. And now, in 2025, that job is super critical to every company, right?
D (11:00):
All of a sudden it's there. And at the core of AI security for John, his team, his role, is MLSecOps, right? Robert Linger, Vice President and Information Advantage Practice Lead at Leidos, a great partner of Protect AI. Critical for us in mission-critical, sensitive spaces such as the Department of Defense, the intelligence community, critical industries, healthcare, life sciences, manufacturing. Leidos is an incredible partner of ours, and I'm really excited about the talk Rob is going to be giving.
Helen Oakley, Senior Director of Secure Development at SAP, again, a great company. Keith Hoodlet, the Engineering Director at Trail of Bits. Everybody in the security space knows Trail of Bits, so the fact that we have an expert like him coming on to talk about security trends is fantastic. And Kevin Magee, Global Director of Cybersecurity Startups at Microsoft, right? If there's an ecosystem better than this for talking about AI security, I don't know of one. So I'm really, really excited about that.
Madi Vorbrich (12:03):
Listen, I've got a huge list of stuff. It is really jam-packed. I mean, this is a busy, busy week with some really good talks and some really great industry leaders to listen to. So I have a top-three list going, and then some additional talks that I think would be really valuable for our listeners and anyone in the MLSecOps Community or adjacent to it. Right? So I would say, first...
D (12:29):
Let me turn that question around then. What are your top three?
Madi Vorbrich (12:33):
Oh, okay! Well, I would definitely recommend the OWASP-hosted event on GenAI Security, 'cause again, that is a very common theme that we're seeing right now. They're also going to cover the OWASP Top 10 for LLMs and AI red teaming best practices. It's definitely a must-attend if you want a comprehensive view of the LLM vulnerability landscape. Outside of that, we also have Ken Huang, who you mentioned. He's going to have his own talk on Zero Trust.
D (13:03):
Super excited about a guest like that, man. He's so well respected in the AI security space.
Madi Vorbrich (13:07):
Oh yeah. He's a well of knowledge, so this is going to be top-notch. Again, another must-attend talk for people in this space. He's going to dive into how to apply zero trust principles to AI agents that handle sensitive data. So it'll be a deep look at how to prevent unauthorized model behavior, which is super important.
D (13:31):
Keeping agents from going rogue, man!
Madi Vorbrich (13:34):
Yeah, exactly, exactly! So those are two. I have another one that I would definitely recommend, which is the Cloud Security Alliance event, an AI-focused summit on Monday. So as soon as RSA starts, you know, it's going to be bright and early, 8:00 AM.
D (13:52):
Off to the races.
Madi Vorbrich (13:53):
Oh, yeah, yeah, yeah, yeah. They don't take any breaks over there. So they're going to have some top-notch speakers discussing how to build trustworthy AI. Perfect for folks juggling both cloud and ML security. Yeah, so that's going to be a really good event. And also, D, there are just so many relevant talks and sessions that relate to our community members and those adjacent to them. There's so much going on. I'd love to dive into this list if you'll let me, D, 'cause I think this is a really good lineup for those going.
D (14:29):
Yeah, why not? I mean, you know, you've shared the agenda with me and I thought it was great. So, you know, let the listeners and viewers have at it. What do you got? What's on deck?
Madi Vorbrich (14:41):
Alright, let's dive right in. And for those who are listening, we're going to have all of these in the show notes. So don't worry, you don't have to write this down. This will all be on the website. But starting off strong on Monday, we're going to have folks from NIST and MITRE dive into the cybersecurity framework and AI. They're going to introduce NIST's new Cyber AI Profile, and there are going to be discussions around securing AI implementations and developing structured risk management around AI threats.
D (15:10):
That's going to be big, man, 'cause everybody follows the NIST AI RMF, right? The risk management framework. That's huge. So this is kind of the what's-next of that.
Madi Vorbrich (15:18):
Yeah. And again, a strong start for Monday, when everyone's coming into RSAC. It's a must-attend for MLSecOps teams building risk and compliance strategies. And then as we continue into the day, I would say, and I may be a little biased here, but Diana Kelley from Protect AI will be...
D (15:42):
Diana's giving a talk? Gotta go to that one. And there's no shame in that game of being biased on this one.
Madi Vorbrich (15:48):
Oh yeah. And she's going to be with some folks from Google, from Denim Group, et cetera. They're going to talk about Shadow AI, specifically how to tackle, you know, the rise of...
D (16:03):
Unsanctioned AI, huh?
Madi Vorbrich (16:05):
Yeah, yeah, yeah. So that's going to be really great. They're going to explore tools like AI-BOMs, which I know Protect AI has talked about in depth. So that's going to be a really good panel discussion.
D (16:20):
Nice. And I think we have some people from our engineering team and our product team coming in as well, with more of a community angle. It's really about securing RAG pipelines and LLMs, if I recall right? Which is really focusing on vulnerabilities in vector databases, prompt injections, poisoned retrieval sources. You know, RAG is kind of the standard in GenAI right now. So I'm excited to hear from them in a non-product way; it's more of an educational component. I think it's going to be great. Those guys are awesome.
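As one concrete illustration of a defense in that space (our example, not the session's actual content): admit only retrieved chunks whose source is on an allowlist, so a poisoned document planted in a vector store never reaches the prompt. The field names ("source", "text") and source labels are assumptions; adapt them to your retriever's metadata schema.

```python
# A minimal sketch of retrieval provenance filtering for a RAG pipeline.
# Source names and chunk fields are invented for illustration.
TRUSTED_SOURCES = {"internal-wiki", "product-docs"}

def filter_retrieved(chunks: list[dict]) -> list[dict]:
    """Keep only chunks whose provenance is on the allowlist."""
    safe = [c for c in chunks if c.get("source") in TRUSTED_SOURCES]
    dropped = len(chunks) - len(safe)
    if dropped:
        print(f"[rag-guard] dropped {dropped} chunk(s) from untrusted sources")
    return safe

def build_context(chunks: list[dict]) -> str:
    # Delimit each chunk so retrieved text reads as data, not instructions.
    return "\n\n".join(
        f"<doc source={c['source']}>\n{c['text']}\n</doc>"
        for c in filter_retrieved(chunks)
    )
```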
Madi Vorbrich (16:52):
Yeah, it definitely seems like more of a technical session, which will be nice. Yeah, that's going to be a really good talk as well. And then we have other panel discussions. There's one with members from the FBI, DoD Cyber Command...
D (17:08):
Yeah, so this one is really interesting, from what I've heard about it: "A Year(ish) of Countering Malicious Actors' Use of AI: What Have We Learned?" You know, what I love about this is how many times we were asked last year in the community: are these attacks real? Have they ever happened? You're about to hear from the FBI, U.S. DoD Cyber Command, the Department of Justice, and Sherrod DeGrippo from Microsoft. This is real, people. We're not here to say the sky is falling, but we are here to say: get prepared. Be prepared. Boy Scout motto, right?
Madi Vorbrich (17:41):
Yeah, a reality check on AI threat intel. You know, this is going to be huge.
D (17:46):
That's really cool.
Madi Vorbrich (17:48):
Yeah. And then we're also going to have some other folks from Protect AI dive into their talk, "Unmasking Hidden Threats in the World's Largest AI Hub". So they'll go over the results of scanning over a million models from Hugging Face's public...
D (18:06):
A million models, but over 6 or 7 million files now. It's not just the models, it's all the associated files. That's a massive amount of threat awareness and expertise. So it'll be cool for people to really learn how our public scanning of that database detects and helps avoid risky models. We just put out a video on that. And I'm shocked, we've detected over 40,000 of them now. That's crazy.
Madi Vorbrich (18:34):
Yeah, I know!
D (18:35):
The numbers just keep growing, which means the attacks are real, which goes back to that first question, right?
Madi Vorbrich (18:40):
Right. Exactly. Exactly. And also, oh, go ahead.
D (18:45):
No, yeah, sorry. You know, these are fantastic. But if we step back for a moment, what are some of the major RSA Conference trends you're hearing that resonate with the community? You're the community leader here now. What are you hearing?
Madi Vorbrich (19:06):
Well, in terms of relevant themes that I'm hearing a lot of, one is what you mentioned earlier when it comes to GenAI and agentic AI. I mean, it dives into all the hot-button stuff, you know, generative AI, LLM security. We're also seeing a push forward when it comes to SBOM adoption extended to AI, which is really interesting, like building an AI-BOM.
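For context on what an AI-BOM entry might capture, here is an illustrative, deliberately simplified sketch: inventory each model like a software dependency, with enough provenance to audit it later. Real formats such as CycloneDX's machine-learning extensions carry much more detail; every field, path, and URL below is made up for illustration.

```python
# An illustrative (not standards-complete) AI-BOM entry. Treat each model
# artifact like a pinned software dependency with auditable provenance.
import json
import hashlib

def aibom_entry(name: str, version: str, model_path: str,
                source: str, license_id: str) -> dict:
    with open(model_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()  # sketch; stream large files in practice
    return {
        "name": name,
        "version": version,
        "source": source,      # where the artifact came from, e.g. a hub URL
        "license": license_id,
        "sha256": digest,      # pin the exact artifact you deployed
    }

if __name__ == "__main__":
    # All values are hypothetical placeholders.
    print(json.dumps(aibom_entry(
        "sentiment-clf", "1.2.0", "model.safetensors",
        "https://huggingface.co/example/sentiment-clf", "apache-2.0",
    ), indent=2))
```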
D (19:31):
Wow. And that's cool, 'cause we were pioneers in that field. You know, it's interesting to see that all of a sudden become a massive component here at the RSA Conference.
Madi Vorbrich (19:42):
Right, exactly. There's also a spotlight on zero trust AI, you know, how to treat AI systems, including autonomous agents, under a zero trust model. So that's also a trending theme.
D (19:58):
What does that mean, Madi? Give people a sense: when we hear things like zero trust AI, or an autonomous agent under a zero trust model, what does that mean?
Madi Vorbrich (20:08):
Yeah. So basically, in a nutshell, it means that you're verifying each model's behavior and access privileges, just like you would for a human user. Right?
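Here is a toy sketch of what that can look like for agents (agent names, actions, and policies are invented for illustration): give each agent a least-privilege policy and verify every tool call against it, defaulting to deny.

```python
# A toy zero-trust check for agents: each agent gets an explicit
# least-privilege policy, and every action is verified against it.
# All identifiers here are hypothetical.
AGENT_POLICIES = {
    "support-bot": {"read_tickets"},
    "billing-bot": {"read_tickets", "issue_refund"},
}

def authorize(agent_id: str, action: str) -> bool:
    """Default deny: unknown agents and unlisted actions are refused."""
    return action in AGENT_POLICIES.get(agent_id, set())

def invoke_tool(agent_id: str, action: str, tool_fn, *args):
    """Gate every tool invocation behind the per-agent policy."""
    if not authorize(agent_id, action):
        raise PermissionError(f"{agent_id} is not permitted to {action}")
    return tool_fn(*args)  # the audited, least-privilege call
```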
D (20:18):
There you go. Yep. Makes a lot of sense. That's why I say it's like, hey, every job in the future is going to be a manager. You're just going to be managing agents, right?
Madi Vorbrich (20:30):
Oh, right. Right. Exactly. Yeah. So honestly, I think this year there's going to be a bigger discussion in general, even outside of RSA. I mean, shit is moving fast, right? So a lot of doors are opening, a lot of discussions are being had, and it's going to be really cool to see this ramp up and see what's yet to come, you know?
D (20:54):
Yeah. And I think it's cool because now AI is everywhere, which wasn't the motto in 2022, I can tell you that. Back then we were like, is AI security a thing? Now it's the headline topic, the headline theme. And I think it proves our instinct on MLSecOps was right, because ML is what makes AI, AI. It's the machine learning model, and MLSecOps is about ensuring the trust and security of those models. That's no longer a niche, my friend. What you and Charlie have done in establishing that community is pretty darn cool.
And it's now relevant for everybody, whether you're an MLSecOps Engineer, which is actually a job category we've seen now, which is awesome, or a security professional who's new to AI. The fact that RSAC 2025 is full of content in this space, when there was nothing several years back, kind of validates what we do. You know what I mean? So, hey, as we head into the final stretch: for the MLSecOps Community, what are your closing thoughts or key takeaways as the leader here?
Madi Vorbrich (22:02):
Closing thoughts and key takeaways? I would just say soak up as much as you possibly can, really. Especially at RSAC, join as many talks as you can, really network, and get everyone's fresh take and opinion, because everyone has one, as you all know in this space. And don't be shy. Put yourself out there and, yeah, just have a good time. Get out there, go to conferences like this.
I mean, these are game changers, and they really open your eyes to a lot of shit that maybe you just didn't know about before or didn't have a good grasp on. So I think having this experience and going to conferences like this is really helpful in terms of growth and career and everything.
D (22:47):
Yeah. You got it. You know, my final thought here is it's only going to go faster. If it's everywhere in 2025, it's everywhere, all at once, in 2026. And I'm really excited about that. So that's about it from my side.
Madi Vorbrich (23:07):
Awesome. Well, hey guys, if you want to talk more, if you want to meet Charlie McCarthy, one of our MLSecOps Community Leaders, at RSAC, or if you want to go say hi to D, you can meet us at booth, what is it, S-1549, in the Moscone South building. We'll be over there. The whole team will be there. So feel free to connect. And D, thank you so much for joining me.
D (23:32):
Yeah, it's a pleasure to do this. We gotta do this more often. I enjoyed it with Charlie, I'm going to love it with Madi. It's going to be quite fun. So this is great. Thanks for having me on. And hopefully we'll see as many people as possible at RSAC, where AI is everywhere.
Madi Vorbrich (23:50):
And for our listeners, I hope this helps you plan for your conference coming up. And if you enjoyed this episode, please share it with your friends, colleagues, everyone, and we'll see you next time.
[Closing]
Additional tools and resources to check out:
Protect AI Guardian: Zero Trust for ML Models
Recon: Automated Red Teaming for GenAI
Protect AI’s ML Security-Focused Open Source Tools
LLM Guard: Open Source Security Toolkit for LLM Interactions
Huntr - The World's First AI/Machine Learning Bug Bounty Platform
Thanks for checking out the MLSecOps Podcast! Get involved with the MLSecOps Community and find more resources at https://community.mlsecops.com.