MLSecOps | Podcast

AI Security: Map It, Manage It, Master It

Written by Guest | Mar 13, 2025 7:31:37 PM

 

Audio-only version also available on Apple Podcasts, Spotify, iHeart Podcasts, and many more.

Episode Summary:

In this first part of our two-part episode, veteran security expert Brian Pendleton shares his journey from early hacker days to pioneering AI security. He explains why cataloging every AI touchpoint is crucial for uncovering hidden vulnerabilities and discusses the importance of aligning ML and security teams to safeguard enterprise systems. Tune in for insights into AI adoption and risk management that can help you protect your organization's evolving AI ecosystem.

Transcript:

[Intro]

Charlie McCarthy (00:08):

Hey, everybody. Welcome back to the MLSecOps Podcast. I'm one of your community leaders, Charlie McCarthy, and recently we had the opportunity to talk with Brian Pendleton, who has a deep career in the security world. Brian joined us to talk about some of the AI initiatives he's been involved in, like the AI Risk and Vulnerability Alliance, AI Village, and you might have heard of this one, it's pretty cool, it's been happening for at least the last couple of years, Hackers on the Hill.

We had so much to discuss with Brian that we actually have a special two-part episode here on the MLSecOps Podcast. So, today we're gonna be diving into the first part, where we talk about AI adoption in enterprises and the security considerations that teams need to be aware of when building, deploying, or adopting AI-enabled tech in their enterprises. And then we'll be releasing part two next week. Hope you stay tuned.

Charlie McCarthy (01:05):

Brian, welcome to the show.

Brian Pendleton (01:07):

Thank you for having me. I've been following [Protect AI] for a bit and I like to see what you guys have been doing. So very honored to be asked to be on the podcast.

Charlie McCarthy (01:19):

Thank you. Absolutely. It's a pleasure. Let's give the audience a bit of background about you first. So you've had an impressive career across kind of an array of roles. Can you give us a bit of that background and then maybe talk about whether or not there was a defining moment, Brian, that led you into AI security research over the past several years, and your work with like, AI Risk and Vulnerability Alliance, AI Village, those types of organizations?

Brian Pendleton (01:48):

Sure. So, I'll tell you, I've been in the hacker space since I got my first TRS-80 Model II and could connect it to a modem. So that, security, playing around with the computers, has been there since I was like eight years old. And got, well, actually, my dad got the computer. I just took it over right? And then when I was originally gonna go to school, I wanted to be a lawyer, of all things. But I also was gonna do engineering. So I wanted to do like patent law or something. And in college what sparked the interest in security, again, was seeing university computers and how they were networked with other universities and even other buildings within the colleges and realizing, "Wow, there's a lot you could just do on these systems and no one would ever know or be able to stop you if they did figure it out," you know.

Brian Pendleton (02:54):

And then as you said, I have a really varied career, and it's just because, I think, I'm a natural-born troubleshooter, and people have just always said, "Hey, come work for me and help me with this problem." And so I go and do it for a while. But hacking and dealing with computers is like my video games. So that's, that's how it's always been with me. And even though I have been like head of IT departments and stuff, I stayed very close to doing the security work, but I tried to stay away from it because I would go home and wanna play around. And that was like my way of de-stressing, you know?

Now, for AI, in 2017 I was considering doing a doctorate. And when I was talking to one of the professors, you know, they obviously asked, what do you want to take a look at? And, you know, AI was starting to be talked about a lot, and it finally looked like we were gonna start making some real breakthroughs to make it much more prevalent within organizations. And so I said AI security, because everything that I had looked at before then really talked about adversarial ML, which to me was very academic-type attacks, but in the real world you didn't see any of them. So I was trying to find that place of where do we really need to think about security as we're using these systems.

Charlie McCarthy (04:35):

Right.

Brian Pendleton (04:35):

And that's when I actually first got involved with the AI Village. You know, it started in 2017, the AI Village at DEF CON. And it was great to see others that were starting to go, "Hey, we need to think about this from a security perspective, not just that academic security perspective," if that makes sense.

Charlie McCarthy (04:59):

It does. Yeah. And I like the comment that you're making about the academic space. I mean, there's a ton of great research out there, but when we're talking about practicality and like real world attack scenarios, the AI security space occasionally starts to feel a little bit noisy. And even on this show in particular, we try to be really conscious of making sure that we're bringing kind of new educational elements to the audience and not just regurgitating the same themes show after show. 

That said, AI security is a hot and necessary topic because the pace of innovation is just, I mean, it's like a rocket ship right now. Are there particular narratives that you've been coming across or even folks who claim to have been working in AI security for a certain period of time that you feel like are kind of misleading or missing the point when it comes to AI security specifically?

Brian Pendleton (05:52):

So, I mean, obviously every time that there becomes a lot of money... Let me rephrase that. Every time that money starts pouring into an industry, or at least there's talk about, well, we need to do X or we need to do Y, people kind of pop up and start going, oh, we can do that. And you know, I think I've said before to a lot of people that I find people that are out there talking to different companies or on the Hill, when I go on the Hill, they go, oh, I've been doing AI security for 20 years. And I'm like, no, you haven't. There hasn't been AI security for 20 years. You might have been doing cybersecurity and helping a data science team, or maybe you were using a model in your product for spam, you know, for spam evaluation or something like that.

Brian Pendleton (07:02):

But you have not been doing AI security for 20 years. That's not a thing, right? I mean, really, I would say 2015, 2016, whenever the adversarial ML papers started coming out, people started going, oh, yeah, we do need to think about security. And really, to me, and I'm very biased on this, I really don't think people started doing it until, you know, 2017, 2018. So anyone that says they've been doing it for a long time, I kind of push back on them.

The other thing that I think we're seeing is, yes, it's necessary, but you have the machine learning engineers, data science teams, that think security is one thing. And then you have the cyber teams, which I really have to stop calling cyber, it's security and it doesn't matter what you're doing, it's security, they think of ML and AI as something else. And the biggest thing I've seen right now is just that need for there to be better cooperation between the two.

Charlie McCarthy (08:15):

Like knowledge sharing, kind of bridging the knowledge gaps between the two areas of expertise.

Brian Pendleton (08:20):

Yeah. And you know, one of the things that, to me, I mentioned it in my dissertation, but actually I'd been talking to someone that I consider, you know, a rockstar within the AI security world, Will Pearce. Will and I have been talking since 2017 about the need for there to be what we kind of call an ML security engineer. And it is somebody who is maybe in one team or the other, but they can speak to not just the other team in the terms necessary to get the points across from a security perspective, but they can also talk to the management team to get them to understand why security in a model or the whole ecosystem is important. And I think we're starting to see that, but it hasn't been super prevalent from what I've seen.

Charlie McCarthy (09:17):

Yeah, yeah, I agree with that. You almost need that linchpin of a persona. You know, it's all fine and well to have the AI developers, ML engineers, data scientist teams, and the security folks converging to help each other understand where these systems are at risk and how to mitigate that. But it doesn't necessarily do a whole lot of good if you can't communicate to your leadership, like, here's why these risks are bad, and here's possible consequences of, like, failure, you know?

Brian Pendleton (09:49):

Absolutely. And, you know, one of those things is, and I'm gonna badmouth the ML engineers here, they've never had to think about security. And for a lot of the people that do programming or building of these models, I'm also going to disparage 'em by saying they're failed PhDs. And what I actually mean by that is, you know, they went out and they got a PhD and they were like, man, I really wanted to be an astronomer. I really wanted to be a geologist. And then for some of those fields, it's tough and it's kind of boring. But they learned all this way of making models and manipulating data and stuff, and all of a sudden somebody said, you can make a lot more money if you do this.

Brian Pendleton (10:37):

So they're not in their field at all anymore, right? So they got this PhD in a field, but they're working building models. But a lot of those people didn't learn secure programming practices. You know, they weren't an undergraduate computer science major or a master's in computer science. So they didn't learn how to securely make things.

And Python, as much as everyone loves Python because it's so easy to prototype and to use, no one thought about, "Hey, let's go to a more secure language," or let's teach you how to do very secure Python programming. And then a lot of times in these organizations, the model builders send it off to another team who's gonna wrap it up and, you know, think about security. But were we thinking about security while we were building the model? And can we be sure that nothing within the model is insecure, that even though we may wrap it in something that's secure, is there still a vulnerability because of the way that we've built this model?

Charlie McCarthy (11:44):

Right. Are we paying close enough attention to the whole AI supply chain, if you will? Or even, I mean, the common example that we've been talking about a lot, you know, builders who are enthusiastic about building models and doing so quickly, downloading assets from an open source repository like maybe Hugging Face, and not being aware that there's the possibility of a model there that is a victim of like a name squatting situation where it's not the actual model that you want. 

And so depending on, you know, how you bring that model file into your ecosystem, you could be opening up your system to a virus, like with malicious code within that model. That just, things that we haven't really thought about before. And to your point, individuals who aren't trained on the security piece of it, how would you know, right?
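To make Charlie's point concrete, here's a minimal sketch of one precaution a team could take before loading a downloaded model artifact: inspecting a pickle-serialized file for imports that a model file has no business making, since unpickling untrusted files can execute arbitrary code. This is a simplified, stdlib-only illustration; the file name and the list of "suspicious" modules are assumptions, and dedicated scanners (for example, the open source model-scanning tools linked at the end of this post) cover far more cases.

```python
# A stdlib-only sketch: inspect a pickle file's opcodes for imports of modules
# that a model file has no business touching. Unpickling untrusted files can
# execute arbitrary code, so flag the file before you load it.
import pickletools

SUSPICIOUS_MODULES = {"os", "subprocess", "builtins", "socket", "posix", "nt"}

def scan_pickle(path):
    findings = []
    with open(path, "rb") as f:
        data = f.read()
    for opcode, arg, _pos in pickletools.genops(data):
        # GLOBAL opcodes are how a pickle pulls in callables like os.system.
        if opcode.name in ("GLOBAL", "STACK_GLOBAL") and arg:
            top_level = str(arg).split()[0].split(".")[0]
            if top_level in SUSPICIOUS_MODULES:
                findings.append(str(arg))
    return findings

hits = scan_pickle("downloaded_model.pkl")  # hypothetical artifact name
if hits:
    print("Refusing to load; suspicious imports found:", hits)
```

Formats such as safetensors sidestep this class of problem entirely by not embedding executable code in the artifact.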

Brian Pendleton (12:37):

Well, and think about it, a lot of, you know, when OpenAI opened up their marketplace, or whatever it is they call it now, I can't remember, but you know, whenever they opened it up and said, "Hey, everyone, start using OpenAI to build models," and you know, "We have a place where people can come find your product." A lot of those companies that were doing it were one-, two-, three-person companies, right? Not companies that have the resources to not only think about, I need to get my product out, but I need to secure it. I need to make sure my data that I'm using is clean. You know, all these other things. They're like, oh, we're gonna build on top of OpenAI and they've done everything that we need. But that's not the case.

You know, if you think about like software liability, what if OpenAI has, in some dataset, child pornography that they were never able to find, right? But your client or your users say the one phrase that exposes that. You know, that builder may go, well, that's OpenAI's fault. It's like, no, it's your fault for not checking it, for not understanding your product and doing good safety practices before releasing a product. Right?

Charlie McCarthy (14:05):

Right. And the tricky part right now also is we haven't reached the point where there's enough legal precedent for that kind of stuff to be like, yeah, no, I'm covered, it's the other person's fault. Like, well, maybe until a court decides otherwise. And then from there, you know, they'll start to figure that stuff out. But yeah, you can't depend on that. Okay.

This is maybe a good segue point, Brian, for us to transition into talking about more of the AI attack surface. I'm curious about insights you might be able to offer from your experience within the IT realm; you've spent a lot of years working with complex IT and network environments. How do you envision AI systems changing the traditional system security model?

Brian Pendleton (14:56):

Well, and here's the funny thing, I was just at the last ShmooCon a month ago, and I got into a large argument with somebody about this. To me, we have to remember, AI is just software. There is nothing magical about it. And going towards that, quite often I've seen the ML teams, data science teams, actually try to tell the cyber teams, yes, you know, there is something magical about it, to get them to kind of acquiesce to some of their requests, right?

Charlie McCarthy (15:34):

Oh, shoot. No no, don't want that.

Brian Pendleton (15:35):

But, you know, that is exactly the idea: we look at it as just software. And then from software, we look at it as a system. That's the most important thing to me, because we have a lot of people trying to say, no, it's something special. I always go back to: it's just software contained in a system, just like Excel or, you know, Google Chrome or anything else. Yes, it does something different, but every piece of software does something different.

So you have to start with the very basics, both while you are creating the model and then once you've put it out on a live service and it's serving, doing inference. You still have to be looking at the same types of security things that you would do for any live service. Right? Back to your basic API security, you know, your OWASP Top Ten for web-based attacks.

Brian Pendleton (16:46):

Those are all relevant for OpenAI. You know, for Claude, for ChatGPT, anything that's on the web, you know, it's not, oh, this is a model. Well, you're serving it through a web browser or you're serving it through an API. So everything that we've learned over 50 years of security, you need to be doing. Just because it's AI doesn't mean it's different. 

But the number of people that have pushed back on me and said, that's completely wrong... You know, I know it's an opinion, and everybody has their opinion on it, and the people that say that it's wrong have good points too. But we do have to realize that at some point, you have to start with basics. And I will always fall back on security's 50-plus years of experience rather than seven or eight years of AI security experience.
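As a concrete illustration of treating a served model like any other web service, here's a minimal sketch of an inference endpoint with the same basic hygiene you'd expect from any API: an authentication check and a bound on input size. FastAPI is used purely as an example framework, and the route, header name, and key store are illustrative assumptions, not a prescribed setup.

```python
# A sketch of a model inference endpoint with ordinary web-service hygiene:
# an API-key check and a bounded input size. All names here are illustrative.
from fastapi import FastAPI, Header, HTTPException
from pydantic import BaseModel, Field

app = FastAPI()
VALID_KEYS = {"example-key"}  # stand-in for a real secret store

class InferenceRequest(BaseModel):
    prompt: str = Field(..., max_length=4096)  # reject oversized inputs early

@app.post("/v1/infer")
def infer(req: InferenceRequest, x_api_key: str = Header(...)):
    if x_api_key not in VALID_KEYS:
        raise HTTPException(status_code=401, detail="invalid API key")
    # Real model inference would go here; the sketch just echoes the prompt.
    return {"output": req.prompt[:32]}
```

None of this is AI-specific, which is exactly the point being made above: the usual OWASP-style controls still apply once a model is served over the web.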

Charlie McCarthy (17:43):

Sure, that's fair. That brings up a question. I mean, we talk a lot on this show, and of course it's sponsored by Protect AI and we're focused on some specific aspects of AI security, and with a lot of the guests we talk about things like AI supply chain security and making sure that you're securing assets within your AI ecosystem. But as you're talking about the larger system, are there areas in the infrastructure surrounding your AI system, that you've heard about from other folks or noticed through research, that might be more susceptible to vulnerabilities?

Brian Pendleton (18:25):

So one of the things that I found when I started off my research, there's a person by the name of Shawn Riley, who in 2014 came up with this cyber terrain model. And then he updated it in 2021. And I will give you a link to it so that maybe you can post it, because I think that everyone should take a look at it. He did a very good job of pointing out every point that he could find that could have some type of cyber attack. And, you know, I will tell you, I am also what I would consider a failed academic because, you know, once I did my dissertation, I did it during covid, and by the time I got done with it, I just didn't want to have anything else to do with writing or anything for a while.

Brian Pendleton (19:23):

But my whole plan had been to take his cyber terrain, and I had talked to him about this, and find those points where AI specifically could be attacked, or at least the points you want to ensure you've looked at and made sure there's no vulnerability there as the systems change. So that's one of the things that I think we're also just now starting to do over like the last year, year and a half: understand the systems aspect of it.

But even going beyond just the computer system, you know, the technical part of it, and this is one of the reasons why I like Shawn’s cyber terrain, is it also talked about the GRC aspects of security. So I think that we're finally starting to see people also talking about GRC and making sure to bring in the management team from the day that somebody says, I think we should build a model.

Brian Pendleton (20:36):

Before, it was a lot of the data science teams just starting to build it. And then maybe when they're close to being done with it, maybe asking the security teams, "Hey, what can we do about this? And should we do this?" And the financial teams, the risk teams, the legal teams weren't talked to, you know, weren't brought into it at all until maybe near the end.

But I think one of the things that we're seeing now is good security practices and risk practices being brought in right from that very beginning. You know, I still kind of question, at least for a for-profit organization, do they bring in the finance people a little too late? Because we still have a huge number of projects that just fail, right? And so they're costing companies lots of money when you put all these resources together and then you don't get a product outta it.

Brian Pendleton (21:33):

But I think that that's also, at least in mid and smaller sized companies, I think the security teams tend to try to cover all of these different aspects, for better or worse. Because sometimes that can maybe put too much pressure on them to try to find every single risk, every single thing that could go wrong. And you're never gonna do that, right? And then one of the other things, going back to like supply chain, right? It can be so hard just...

Python, let's just take Python. You know, the versions move fairly quickly. And a lot of those version bumps, you know, happen because there's a security fix. But a team may have built a model, at least the prototype, and said, no, we need to use this version of Python, but then security figures out that, oh, that leaves a security vulnerability.

Brian Pendleton (22:37):

I need you to move up to this level of Python. And sometimes they'll go, no, I can't, I need this package and it's only in this version of Python. Or, you know, something to that effect. Or maybe they need a package that hasn't been updated in a while but has known vulnerabilities, right? So it becomes very hard for the teams to figure out what they should do.

And that always gets back to the main focus of the security team, right? It is to keep things secure, but not at the expense of the organization not being able to do its work, right? So whatever the data science team needs, or the finance team needs, or whoever, they should be able to state, I need to do this, and the security team should go, okay, but to make sure it's secure, we have to do X, Y, and Z.
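To ground the dependency problem Brian is describing, here's a minimal sketch of checking pinned Python packages against the public OSV vulnerability database, which is roughly what tools like pip-audit automate. The pinned versions are hypothetical, and a real workflow would run this kind of check in CI against the project's actual requirements.

```python
# A sketch of checking pinned dependencies against the OSV vulnerability
# database (https://osv.dev), roughly what pip-audit automates.
import json
import urllib.request

PINNED = {"numpy": "1.21.0", "pillow": "9.0.0"}  # hypothetical pins

def osv_vulns(package, version):
    payload = json.dumps({
        "package": {"name": package, "ecosystem": "PyPI"},
        "version": version,
    }).encode()
    req = urllib.request.Request(
        "https://api.osv.dev/v1/query",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("vulns", [])

for pkg, ver in PINNED.items():
    advisories = osv_vulns(pkg, ver)
    if advisories:
        print(f"{pkg}=={ver}: {len(advisories)} known advisories, "
              f"e.g. {advisories[0]['id']}")
```

The output of a check like this is what turns the "we need this exact version" conversation into a concrete risk decision rather than a standoff between teams.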

Brian Pendleton (23:36):

In the past with data science teams, I have found that, in general, they kind of get to override everything because of that whole "it's magic" thing, right? So the management team is like, no, we need them to do this. And it hasn't been until recently that the security teams have finally started to go, no, this is important. And I think that has come about more because of the reputational risk of LLMs. Once there was a risk that a management team could actually really see, it made it a lot easier for the security team to finally say, see, we've been telling you, you need to have these issues addressed.

Charlie McCarthy (24:21):

Yeah. And the impact of those risks too. Like if there is a failure, or what have you, are you gonna be facing financial consequences, reputational damage, you know, for your brand? That really, really catches management's attention.

Brian Pendleton (24:37):

And let me add one other thing. With AI, so many people confuse safety and security, right? And the security team is not necessarily in charge of ensuring that the model is going to be safe or that any other system is gonna be safe, right? That's not really security's role. It may be put on them in an organization, but one really hard thing is you may have a security team that doesn't understand the safety aspects of AI. You know, whether because it's so new or maybe they just don't have anybody that has that experience. Which gets back to having that ML security engineer that has kind of a knowledge of both fields so that they can talk to each other.

Charlie McCarthy (25:32):

Yeah. I like the overarching message, or the idea, that security is this piece under a larger GRC (governance, risk, compliance) umbrella, because that really puts the focus on a cross collaborative effort across many different teams within an organization. And that kind of needs to be what it is from the get-go.

Brian Pendleton (25:55):

Absolutely. I mean, I like saying security is a team sport, and I'm not the one who came up with that. I've heard that from the time I was in the military and I've seen it everywhere, right? And it is a team sport. And if one member or one group within the team doesn't want to play, your system is instantly less secure than it could be. 'Cause they're making it harder for the people that do want to make it secure, to secure that system. And to me, security starts with the governance. I mean, in any organization, the management team has to be telling us how much am I willing to risk, right? So they set the risk. How much am I willing to spend for my security? So they set the budgets. I mean, they set everything.

Brian Pendleton (26:47):

And right now it's kind of a wild west, right? Because the US has not passed any AI governance laws. And, you know, Biden's EO got rescinded, and the replacement EO didn't talk as much about securing AI as the other EO did. And I'm not gonna say that Biden's EO was like perfect about it. There were things in it that were very performative, not actually impactful, but it at least had a little bit more in there than President Trump's EO. But you know, without GRC, I mean, you're just somebody setting things up going, well, I'm gonna do this. But you don't know how it fits into the overall organization's plans, right? And that's not what the security team is supposed to do. The security team is supposed to take what the organizational goals are and fit security to those goals.

Charlie McCarthy (27:55):

So kind of along the lines of GRC, something several individuals that I've spoken with for the show and outside of the show have said, and I tend to agree with this, is that for AI security practices we can really augment our current risk assessment and GRC practices; it's not like we're building from the ground up. We should be taking what we're already doing in these spaces and just enhancing it for AI, not bolting on. So I guess I would be curious, Brian, if you were leading security for a company that is building, deploying, maybe adopting AI today, and it's a little bit newer for them, is there a first step you'd take to work toward making sure it's secure, or advice that you would give to other leaders?

Brian Pendleton (28:53):

It's funny you say that 'cause I'm applying for a position where they asked me that exact same thing.

Brian Pendleton (29:01):

So maybe it's make or break, but my very first thing that I would do going into an organization is catalog every instance of a model or AI touchpoint within the organization. Because today, if it is a midsize company or higher, maybe even like a hundred person company or higher, they probably in some way, shape or form are using AI. Whether they know it or not. 

And that's why it's super important, because, for instance, let's say they have Office 365. Well, did they know that by default Copilot was gonna be turned on? And if they did, did they allow it to be? You know, did they make a conscious effort, knowing that on this date it would be turned on, to say we are going to allow it, and why? You know, were they purposeful about why they have every piece of AI-related software in their systems?

Brian Pendleton (30:14):

Because from that starting point, you then know at least the initial surface that might be attacked. But if you just go in and go, oh, this company processes credit cards, so I'm just gonna go take a look at the model that is making that decision on who gets a card and who doesn't. Well, how do you know that there's not, you know, five other models that are also feeding that model, and there's a chatbot on the website where the person is applying for the card, and maybe the chatbot could somehow be manipulated to where the social security number isn't getting passed through right...

You know, all these different things, right? So you take that catalog first, and then go over it with both the technical team, the data science team, and management, and say, here's 25 points that I found. Did you know about all of these? And next, at least for all the ones they say yes to: why are they there?
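Here's a minimal sketch of what the catalog Brian describes might look like in practice: every model or AI-enabled touchpoint, who owns it, why it exists, what feeds it, and whether anyone consciously approved it. The field names and example entries are illustrative assumptions rather than any standard schema; in a real organization this would likely live in an asset inventory or GRC tool rather than code.

```python
# A sketch of an AI touchpoint catalog: what exists, who owns it, why it's
# there, what feeds it, and whether turning it on was a deliberate choice.
from dataclasses import dataclass, field

@dataclass
class AITouchpoint:
    name: str
    owner: str                 # team accountable for it
    purpose: str               # why it exists in the first place
    data_sources: list = field(default_factory=list)
    upstream_models: list = field(default_factory=list)  # models feeding this one
    internet_facing: bool = False
    approved: bool = False     # was this a conscious, documented decision?

inventory = [
    AITouchpoint("credit-decision-model", "risk-ml", "card approval scoring",
                 data_sources=["applications_db"],
                 upstream_models=["fraud-score"], approved=True),
    AITouchpoint("website-chatbot", "marketing", "applicant support",
                 internet_facing=True),  # never formally approved
]

unapproved = [t.name for t in inventory if not t.approved]
print("Touchpoints with no documented approval:", unapproved)
```

Even a list this simple surfaces the questions Brian raises: which touchpoints feed which, which are internet-facing, and which nobody consciously decided to turn on.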

Charlie McCarthy (31:29):

You're essentially making a case for an AI bill of materials. Well, that's how it's landing on me. Like catalog, yes. But to your point, like a lot of these things are connected within the system, and so you need to know what the versions of things are, and access controls, and who touches these models and when, and kind of all of that plays into it.

Brian Pendleton (31:56):

Oh, exactly. I mean, and the second thing that I would do is set up a data chain of custody. Data provenance is the next big thing. But all of these things... so you brought up an AI bill of materials. To me, we should be using SBOMs; we shouldn't have model cards. And I understand why we do, and I know some of the people that developed these model cards. And it's an example of the AI field not taking a look to see what the security field has already created and done.

Charlie McCarthy (32:37):

Can you double click on that? Just quickly, for some of our earlier learners on the show, when you say we shouldn't have model cards, will you expand just a little bit on that for them? Like some of your logic behind that, do you mind?

Brian Pendleton (32:49):

Well, so the reason why I say we shouldn't have model cards is I believe that we should be using SBOMs. Which is software. And since I say that AI is just software, that format that CISA has already created for everyone is what we should be using. But there are sections in there that could be specifically enhanced, or we could add an additional section if there is something very AI-specific that needs to be documented for that software. But in general, they're very similar. Right? So no matter what, we should have a document that says why we have this, who's supposed to touch this, you know, how often are we supposed to update it, what are we supposed to do when we're gonna retire it? All of these different things.

And then yes, there should be a section of, you know, just like if you're building anything, here's all the different things that went in here, with the versions, all of that. Here's all the different data sets, yada, yada, yada. And it should be updated. I just say, instead of model cards, I'm more on board with SBOMs because CISA has already given us a very good formula for that and has been pushing it for a long time. And if you take a look at it, it could, in my opinion, be very easily changed to fit every need for a model.
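As a rough illustration of Brian's preference, here's a minimal sketch of what an SBOM covering both a model and its software dependencies might look like, expressed as a Python dict in a CycloneDX-flavored shape (CycloneDX 1.5 does define a machine-learning-model component type). Every name, version, and property here is a hypothetical placeholder, not a complete or authoritative document.

```python
# A sketch of an SBOM entry covering a model and its dependencies, shaped
# loosely after CycloneDX. All values are placeholders.
import json

sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "components": [
        {
            "type": "machine-learning-model",
            "name": "credit-decision-model",     # hypothetical model
            "version": "2.3.1",
            "properties": [
                {"name": "owner", "value": "risk-ml"},
                {"name": "training-data", "value": "applications_2019_2024"},
                {"name": "retirement-plan", "value": "review quarterly"},
            ],
        },
        {"type": "library", "name": "numpy", "version": "1.26.4"},
        {"type": "library", "name": "scikit-learn", "version": "1.4.2"},
    ],
}

print(json.dumps(sbom, indent=2))
```

The point of the sketch is the structure: the model, its data lineage, and its software dependencies all live in one document that can be versioned and audited like any other SBOM.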

Charlie McCarthy (34:26):

Yep. That makes sense. Thank you. I know that was a little bit out of the scope of what we prepped for this episode, but that felt like a good learning point for some of our folks.

Brian Pendleton (34:34):

Yeah, and I mean, there's a huge contention on that as well. So there are lots of people that sometimes are like, what is a model? Why do we have model cards and why do we have SBOMs? Aren't they kind of the same? And they're like, they are kind of the same, but. And so it's good for people to know that there are two different things and it's a preference right now. Of course. You know?

Charlie McCarthy (35:02):

Yeah. Okay. Awesome. So kind of just recapping, I completely agree with what you said about like, good first step is just cataloging where you're using AI in your ecosystem because there's absolutely no way that you can identify risks and, you know, help work toward mitigation if you don't even know what you're working with.

Brian Pendleton (35:20):

And that also means trying to catalog shadow IT as well. So, you know, taking a look at your net flows. Are people on their cell phones? Can you tell that there's traffic going to Anthropic and OpenAI and stuff? So maybe they're not using it in their browsers or something, but are they using it in another way, you know? And then...
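Here's a minimal sketch of the kind of check Brian is pointing at: scanning DNS or flow logs for traffic to known AI providers that isn't coming through sanctioned channels. The log format, the domain list, and the sample entry are all assumptions for illustration; real environments would pull this from their own proxy, DNS, or NetFlow tooling.

```python
# A sketch of flagging possible shadow AI use from DNS or flow logs: look for
# traffic to known AI providers. Log format and domain list are assumptions.
AI_PROVIDER_DOMAINS = (
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
)

def find_shadow_ai(dns_log_lines):
    """Each line is assumed to look like 'timestamp,client_ip,queried_domain'."""
    hits = []
    for line in dns_log_lines:
        timestamp, client_ip, domain = line.strip().split(",")
        if domain.endswith(AI_PROVIDER_DOMAINS):
            hits.append((timestamp, client_ip, domain))
    return hits

sample_log = ["2025-03-01T10:02:11,10.0.4.17,api.openai.com"]
for ts, ip, dom in find_shadow_ai(sample_log):
    print(f"{ts}: {ip} -> {dom} (is this a sanctioned use?)")
```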

Charlie McCarthy (35:47):

Which, by the way, is not a bad thing. We just wanna be able to know and coach on it, you know, and have those safeguards, guardrails.

Brian Pendleton (35:56):

Absolutely. Because, you know, it could be that the organization says, I don't want anybody to use this. And so, you know, and maybe that's why they didn't allow it on your desktop. And so if someone's getting around it and then putting it in the work that they're doing, that they're submitting, you know, the company may not like that. Or the company may just want to be able to go, oh, people do want to use this. Great. Let's come up with some definite outlines of how we want to allow people to use this in their work. And then if they're using it on their phones, maybe we should bring it onto their desktop so that at least we, you know, after we set up these guidelines, we can ensure that people are doing that.

Charlie McCarthy (36:43):

Right. Kind of wrapping up this AI and enterprise security piece that we've been talking about, you mentioned earlier something that I'm hoping to get a little bit more of your take on, in depth: API security within AI environments. What's your take there, as far as whether it's going to be, you know, a larger concern in the coming years? Or is there stuff that people in organizations need to know about API security specifically? Because I keep hearing this come up more often, and you mentioned it earlier in the show, and so I just wanna give it a minute.

Brian Pendleton (37:23):

So, I'm gonna be honest with you, that's a topic that I am not an expert on. Again, it's one of those things where, you know, it's a touchpoint that you have to take a look at. In my opinion, it will be important because more and more services are being generated not actually on ChatGPT's website or Claude's website or anything, but through APIs, right? So, for instance, most of the services that I've seen on OpenAI's marketplace make their calls through APIs.

So if OpenAI hasn't appropriately secured their API, sure, there could be issues. And remember, one of the big things about security is it's not necessarily always an attack to get data or something like that. If somehow I could impersonate one of these services, what if I just hit OpenAI's API a million times and drive up a bill so huge for this small company that it puts 'em outta business, you know?

Brian Pendleton (38:57):

Or what if I am hitting OpenAI's API and doing a denial of service attack? Now, they understand how to defend against that. At the very beginning, though, their security may not have done that because it wasn't something that they might've thought of. But as people start building other models, if you take a look, let's forget the big guys, if you take a look at other companies, they've started building APIs to access their models. So again, we have to start thinking about it.

And I think the easiest attack I always go back to is just a DoS attack, right? If I wanna bring their service down, then I'm just gonna hit their API and make calls left, right, and center and either make it to where other people can't use them, or, I mean, you know, inference costs money. Maybe I'm just sending thousands and thousands of requests just to try to increase their compute costs.
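To make the defensive side of this concrete, here's a minimal sketch of one of the standard guardrails against exactly this kind of abuse: a per-caller sliding-window rate limit in front of an inference API, which is also the building block for spend caps. The window size, request limit, and key names are illustrative assumptions; production systems would typically enforce this at the gateway layer and pair it with per-account billing alerts.

```python
# A sketch of a per-caller sliding-window rate limit, the basic guardrail
# against someone hammering an inference API to run up compute costs or deny
# service to everyone else. Limits and key names are illustrative.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60
MAX_REQUESTS_PER_WINDOW = 100

_recent = defaultdict(deque)  # api_key -> timestamps of recent requests

def allow_request(api_key: str) -> bool:
    now = time.monotonic()
    window = _recent[api_key]
    # Drop timestamps that have aged out of the window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) >= MAX_REQUESTS_PER_WINDOW:
        return False  # over budget for this window; return 429 upstream
    window.append(now)
    return True

if not allow_request("caller-123"):
    print("429 Too Many Requests")
```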

Charlie McCarthy (40:11):

That's another point that people need to be conscious of and make sure that they're addressing.

Brian Pendleton (40:16):

Oh, absolutely. I mean, I don't know, maybe you've heard stories about this, about how when you misconfigure AWS, you know, several smaller companies, 'cause they didn't understand how to turn compute on and off, I mean, they've been hit with like $200,000, $300,000 bills and it's put them outta business 'cause they just didn't have the money to pay their compute costs. And I'm sure in some cases maybe Amazon has set those bills aside, but in a lot of places, you know, they're like, no, you burned up that compute, you owe us. And that right there could put a small company out of business very quickly.

[Closing]

Be sure to check out part two of this conversation for even more insights into securing your AI ecosystem!

 

Additional tools and resources to check out:

Protect AI Guardian: Zero Trust for ML Models

Recon: Automated Red Teaming for GenAI

Protect AI’s ML Security-Focused Open Source Tools

LLM Guard: Open Source Security Toolkit for LLM Interactions

Huntr - The World's First AI/Machine Learning Bug Bounty Platform

Thanks for checking out the MLSecOps Podcast! Get involved with the MLSecOps Community and find more resources at https://community.mlsecops.com.