<img height="1" width="1" style="display:none;" alt="" src="https://px.ads.linkedin.com/collect/?pid=4373740&amp;fmt=gif">

The MLSecOps Podcast

MLSecOps: Securing AIML Systems in the Age of Information Warfare

Mar 29, 2023 22 min read

Disesdi Susanna Cox discusses securing AI/ML systems in the age of information warfare on The MLSecOps Podcast

The MLSecOps Podcast does a deep dive with security researcher, AI/ML architect, and former political operative, Disesdi Susanna Cox, author of "Securing AIML Systems in the Age of Information Warfare" and founder of AnglesofAttack.io.

Episode Summary:

In this podcast episode, we interview Disesdi Susanna Cox about themes from her paper "Securing AIML Systems in the Age of Information Warfare." Cox explains that as AI becomes increasingly adopted in industry and government, it becomes more important to take security risks and threats seriously. Cox discusses the various applications of AI in offensive and defensive security practices, and emphasizes the need for organizations to view their machine learning systems as safety-critical systems.

We also talk about the lack of focus on defending against AI security risks, the need for comprehensive documentation, and the importance of adopting MLSecOps practices. Cox highlights the security risks of ML supply chain attacks and the need for increased sharing of information between security and AI professionals. Overall, Cox's work aims to raise awareness about the serious issue of securing AI/ML systems and provides a framework for organizations to use to help mitigate these risks.

Transcription

Introduction 0:08 

Welcome to the MLSecOps Podcast presented by Protect AI. Your hosts, D Dehghanpisheh, President and Co-Founder of Protect AI, and Charlie McCarthy, MLSecOps Community Leader, explore the world of machine learning security operations, aka MLSecOps. From preventing attacks to navigating new AI regulations, we'll dive into the latest developments, strategies, and best practices with industry leaders and AI experts. This is MLSecOps.

D 0:38 

Hey, everybody. Thanks for joining The MLSecOps Podcast. I'm your host, D. With me today is Charlie, and our guest is Susanna Cox.

Susanna, I cannot wait to have you talk to all of our listeners here. First of all, your website, anglesofattack.io; that's an interesting theme. Talk to us a little bit about that.

Susanna 1:01 

Thank you. I'm absolutely thrilled to be here with y’all today. So Angles of Attack kind of came out of my love of aviation. In addition to doing security work and so forth, I'm a big time AV geek.

My father was a pilot, so I like to joke that I grew up with about 18 years worth of ground school just hearing about it all the time. So aviation is kind of a side interest for me, and I also do some aviation security research on the side.

So the “angle of attack” for a fixed wing aircraft has to do with the angle that the wing is hitting the wind. And I thought that was kind of, like a really neat sort of way to refer to security, too, because we have angles of attack, and I'm a red teamer, so it just kind of fits in with my theme.

Also, there's a joke that pilots can't shut up about how they're pilots, and that's kind of how I talk about security engineering; super obnoxious, but I find a way to work it into literally every conversation.

D 2:10 

Awesome. Talk about security for a second. I read in the background something about you being involved in a manhunt. What was that about? 

Susanna 2:18 

Well, it's funny. I was actually accused of a crime, which I did not do. 

D 2:22

That's what they all say.

Susanna 2:28

Yeah, there's a happy ending in this story, but I was told that I should turn myself in and go to jail. And obviously I disagreed with that.

So I kind of went underground for a year. And I had constraints. I was working with lawyers and so forth to get my name cleared and be exonerated.

But I had to stay within the state of California; had to do that for about a year. And then in the last week, I had to drive from California to North Carolina without being detected. And they had tweeted my photo, my likeness, my car, my license, all kinds of things about me.

I was able to make it. And during that time, I was also doing machine learning research for the startup that I was working at. So it was a pretty intense time but, you know, it's one of my proudest accomplishments.

D 3:18 

I’ve never been involved in a manhunt, and I don't know that Charlie has either. 

Charlie 3:22 

No, that's wild. 

Susanna 3:25

Yeah, it's a little bit unusual, but I ended up being completely exonerated.

D 3:29 

Alright. And now you're here with us today. 

Charlie 3:32 

Great story. Yeah. 

Susanna 3:36 

Yeah, thank you.

Charlie 3:40 

Okay, so one of the themes you seem to be really into, Susanna, is related to AI in the age of information warfare. And for those who haven't read your paper yet, what do you mean by that exactly; information warfare? 

Susanna 3:45 

So, there are a couple of meanings. We think of information warfare traditionally as sort of trying to run influence operations and that sort of thing. And that's actually kind of where I got my start in policy back in the day, if you will.

I grew up learning rhetoric and that sort of thing. And then we saw these sort of influence operations being wielded across things like social media prior to the 2016 and 2020 elections and so forth.

So that is one meaning of it, but there is a secondary meaning, which is that information is data, and this is what AI/ML models run on. And so any ability to manipulate the data and manipulate the models I see as a form of information warfare. And I kind of wanted to raise awareness with organizations about what that meant and how serious of an issue it is and also provide some concrete steps that they could take to mitigate this sort of thing. 

D 4:52 

Talk a little bit more about that: AI in the age of information warfare. Where do you think this is going? Why do you think it's a big deal? Tell people a little bit about that and kind of how you think security plays into that. 

Susanna 5:07 

I think there are two important words that come to mind when I start thinking about AI in the modern era, and I'm talking about in the last few years, the explosion that we've seen; scale and scope. And these, I think, take on a few different meanings when we're talking about AI now because we have, obviously, these massive large language models and things like that that are just absolutely enormous in terms of their computational power and their parameters and so forth and so on.

But we are also seeing a huge scale of deployment and scope, too. The applications for AI have broadened; it used to be that the algorithm you saw in your feed was most people's interaction with it, but people are becoming increasingly aware that these are being used in safety-critical systems, for example, everywhere from industrial control systems to aerospace and defense, and on and on.

And so as AI becomes more adopted in industry and government and so forth, it becomes increasingly important for organizations to take the threats and the security seriously. 

D 6:16 

So how did AI change that threat landscape, though, right? Because what you're talking about in terms of taking out industrial control systems and things like that, that's happening today.

Just look at Ukraine and Russia right now. And that's been going on for a while. What is different about AI in that space that you think is not being addressed? 

Susanna 6:41 

Wow. Where to start? So there are many applications of AI in all of these spaces. You have offensive security and defensive security.

You have AI that's being used in security systems. You have AI that's being used to attack these systems.

Then you just have sort of regular, run-of-the-mill utilities that people are using that are also subject to security risks. And we also see in industry that there's not really the same focus on security that we have in more, I would term it, quote unquote, traditional DevOps applications and landscapes. And so I think there's perhaps a perception that the same rules for security don't apply in AI.

So we have multiple vectors here to think about. How are we building our defensive security? What are the AI attacks that can happen, or machine learning based adversarial attacks?

And then how are organizations vulnerable just with these models themselves that they've invested tremendous amounts of money and resources in developing? How secure really, are they?

We've seen in my work in particular how we can take over different models, poison them in different ways, and cause them to behave in ways that are unintended by the people who developed them. And I have not seen nearly as much focus as I would like to see on how we defend, and perhaps it's because we have so many vectors to look at right now. So it's a wide open landscape and something that I think is really unprecedented in software development. 

Charlie 8:19 

So, Susanna, part of building a defense is pen testing, right? I want to come back to some of the references you made in your paper, especially around the red/blue/purple team sections. You acknowledge that a lot of AI/ML practitioners may not be familiar with some of these security terms that we're using. Why do you think more red teamers and pen testers aren't going after the intellectual property of ML systems? 

Susanna 8:46 

That is a fascinating question. To me, it was sort of a natural place to start poking around, because one of the things I talk about in my paper is that, especially for systems that have feedback loops or some sort of continuous training on publicly available data, you have, as a user, access to the data. And when you have access to the data itself, you effectively have access to the model.

So for me, it sort of seems like a natural vector. But I wonder sometimes if there is sort of an intellectualized ideal of AI, as this, I don't know, special technology that maybe people aren't really thinking along those lines or they just assume it's secure or haven't really thought of the vulnerabilities as such.

I also wonder if there's maybe a lack of knowledge in general with security people who are maybe unfamiliar with how ML models work and so forth. I think I'm kind of unusual in that I've done both. People tend to specialize very heavily in one or the other, and so maybe it's not as deep on the AI/ML side, but just a basic core understanding of how these models work.

I've built them in production. I've overseen the entire software development lifecycle here. And so I have perhaps more familiarity with it than a lot of security people.

And there's also the fact that the Internet is kind of a trash fire anyway, so a lot of security professionals, they've got their hands full doing what they're doing. And this is a whole new world and a completely new landscape for people to dive in and learn.

There are some people who are working in this space, and I don't want to speak over them, but you're absolutely right. By and large, we aren't seeing nearly the interest in this that we are seeing in some other fields in security applications.
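To make the feedback-loop point above concrete, here is a minimal, hypothetical sketch of the attack class being described: if a model is continuously retrained on data users can influence, flipping the labels on even a modest slice of that data quietly degrades it. The dataset, model, and poison rate below are illustrative and are not taken from the paper or this episode.

```python
# Illustrative sketch only: poisoning a feedback loop that retrains on public data.
# Dataset, model, and the 20% poison rate are hypothetical.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# "Public" data the system continuously retrains on, plus a held-out test set.
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def retrain_and_score(labels):
    model = LogisticRegression(max_iter=1000).fit(X_train, labels)
    return model.score(X_test, y_test)

print("clean retrain accuracy:   ", retrain_and_score(y_train))

# An attacker who can submit data to the feedback loop flips 20% of the labels.
poisoned = y_train.copy()
idx = rng.choice(len(poisoned), size=int(0.2 * len(poisoned)), replace=False)
poisoned[idx] = 1 - poisoned[idx]

print("poisoned retrain accuracy:", retrain_and_score(poisoned))
```

In a real pipeline the malicious contributions would arrive gradually and be far harder to spot, which is why the data validation and monitoring discussed later in the episode matter.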

D 10:36 

But I also think it's kind of interesting that the security red teamers and pen testers don't go after it just generally, right? So you talked about the barrier to AI entry, and to me it's like, well, if a red teamer just got the rights and privileges and the roles of, say, what a data scientist would have, they could probably go in and knock over almost an entire model registry.

Once you assume that user's permissions, you traverse, you find the model registry, and you take all the intellectual property, not just one model, right? Why do you think that's not happening? There's an AI space that needs to think about security, but there's also kind of an educational space for red teamers and pen testers about the value that sits within this small little portion of a company's code base, I would assume. 

Susanna 11:30 

Yeah, absolutely. We're not seeing things like role-based access control or just basic security controls around these. A lot of the development, in my experience, particularly in industry, is sort of viewed as ad hoc by default, and that's just kind of how they do it.

You're talking about stealing a model from the model registry. That's assuming already a high degree of operationalization that organizations may not have. So it kind of makes sense that they're not really minding the front door as much with these models. There's just such a push to get things out and iterate very quickly.

It does surprise me that we aren't seeing more just kind of traditional digital breaking and entering and stealing these.

But I spoke to some really talented engineers who don't necessarily work in AI/ML spaces specifically, and they were really unaware that you could even steal a model. And it's like, yeah, this is something a company has invested huge amounts of resources in developing, when you think about the cost of having two or three data scientists work on something for a period of a few months, and the iterations that go into it.

I mean, everything from curating your data to producing a final finished model, it's a massive amount of money.

And so you would think that organizations would be more aware and more eager to have these things pen tested and make sure that they're secure. 
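As a rough illustration of the basic security controls Cox says are often missing, here is a hypothetical sketch of role-based access control and audit logging in front of a model registry. The roles, permissions, and registry layout are invented for the example; a real deployment would sit behind the organization's identity provider and write audit records somewhere durable.

```python
# Hypothetical sketch: role-based access control in front of a model registry.
# Roles, permissions, and the registry contents are invented for illustration.
from dataclasses import dataclass

PERMISSIONS = {
    "data_scientist": {"read_model", "register_model"},
    "ml_engineer":    {"read_model", "register_model", "promote_model"},
    "auditor":        {"read_model", "read_lineage"},
}

REGISTRY = {"fraud-detector": {"v3": "s3://models/fraud-detector/v3.onnx"}}

@dataclass
class User:
    name: str
    role: str

def fetch_model(user: User, name: str, version: str) -> str:
    """Return an artifact URI only if the caller's role allows it."""
    if "read_model" not in PERMISSIONS.get(user.role, set()):
        raise PermissionError(f"{user.name} ({user.role}) may not read models")
    # Every read leaves a trail, so model theft is at least visible.
    print(f"AUDIT: {user.name} read {name}:{version}")
    return REGISTRY[name][version]

print(fetch_model(User("alice", "data_scientist"), "fraud-detector", "v3"))
```

The mechanism matters less than the habit: if reading a model artifact requires a role and leaves an audit record, the digital breaking and entering described above at least becomes detectable.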

D 13:00 

You're bringing up the theme of this podcast, right? MLSecOps and the need for MLSecOps and to adopt some basic security capabilities, as well as some, let's be honest, advanced ones like Zero Trust Architecture approaches and things like that.

Inside of that, how do you think about the need to not only educate the red teams, but also educate the blue teams on how to start applying some of their security capabilities and processes into their ML systems? I mean, blue teams often, and CISOs will say, well, code is code, and we know what we're doing, you know; where do you think it's different? 

Susanna 13:43 

There are so many layers to that. In one sense, if we're talking about someone breaking in and stealing your model, then yeah, code is code, and you should really lock your doors, so to speak. When we start talking about attacking the models themselves, poisoning the models, poisoning the data, and other ways to get at even defensive security AI, it gets a little bit more complicated.

You do need to have an understanding of basically how these models not only work, but how they're operationalized in production. To me, this kind of seems fairly straightforward and simple once you understand the basic statistics of that. But I may be taking for granted seven years of AI development under my belt, and not everybody has.

But I've done this from the ground up, starting with a Jupyter notebook to actually putting models into production with an engineering team, and there are a number of considerations involved. Where are we sourcing our data?

How are we going to make sure that our data stays what we understand it to be? Right now, there's a lot of emphasis with data scientists on things like exploratory data analysis and things like that, but very little emphasis on what you do if your distribution shifts while the model is in production.

You have to maintain training and that sort of thing. And we see organizations, as you well know, struggling just to have a proper ML pipeline. 
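The distribution-shift problem raised here can be caught with fairly simple statistics. Below is an illustrative sketch that compares each feature's production distribution against a reference captured at training time using a two-sample Kolmogorov-Smirnov test; the threshold and the response are placeholders, not a recommendation from the paper.

```python
# Illustrative drift check: compare production feature distributions against a
# training-time reference. The threshold and alerting action are hypothetical.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
reference = rng.normal(0.0, 1.0, size=(10_000, 3))   # snapshot stored at training time
production = rng.normal(0.4, 1.0, size=(2_000, 3))   # incoming batch (shifted on purpose)

ALERT_P_VALUE = 0.01

for i in range(reference.shape[1]):
    stat, p = ks_2samp(reference[:, i], production[:, i])
    if p < ALERT_P_VALUE:
        print(f"feature {i}: drift suspected (KS={stat:.3f}, p={p:.2e}) -> hold retraining, review the data")
    else:
        print(f"feature {i}: no significant drift (p={p:.2f})")
```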

D 15:17

Yeah, true. 

Susanna 15:19

It's rough. It's rough out there. And that's one of the reasons so many businesses fail to actually bring models into production and get any value out of them: a failure to operationalize.

So my paper was kind of like, okay, I think you all kind of know, or you can get from more detailed sources, what the proper MLOps pipeline is. Here's how to integrate basic security controls and checkpoints into that pipeline as applied to AI.

These are things that your security engineers may not necessarily think of because they're not familiar with either the pipelines or how the models themselves work. 
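As one concrete example of the kind of checkpoint that can be bolted onto an existing MLOps pipeline, here is a hypothetical integrity gate: before a model is promoted, the pipeline verifies that the training data snapshot and model artifact still match the hashes recorded when they were approved. The manifest format and paths are invented for illustration and are not from the paper.

```python
# Hypothetical pipeline checkpoint: refuse to promote a model if any artifact
# no longer matches the hash recorded at approval time. Manifest layout is invented.
import hashlib
import json
from pathlib import Path

def sha256(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_artifacts(manifest_path: Path) -> None:
    """Fail the pipeline if any dataset or model artifact drifted from its approved hash."""
    manifest = json.loads(manifest_path.read_text())
    for name, entry in manifest["artifacts"].items():
        actual = sha256(Path(entry["path"]))
        if actual != entry["sha256"]:
            raise RuntimeError(f"checkpoint failed: {name} hash mismatch")
    print("checkpoint passed: all artifacts match their approved hashes")

# Example CI gate step (the manifest is written when artifacts are reviewed):
# verify_artifacts(Path("release_manifest.json"))
```

Run as a CI step, a failed check stops the deployment and points directly at the artifact that changed.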

Charlie 16:24 

So, Susanna, when we're talking about improving security for AI and machine learning, that kind of leads us to a conversation; it naturally gives way to the topic of regulation, or lack thereof, and compliance.

And you touched on this a little bit just now, but you noted in your writing that one of the aims of that paper was to provide a practical and actionable framework that helps these organizations with early compliance and with regulatory requirements specific to AI/ML systems.

D 16:33 

Part of operationalizing a system is to comply with some regulatory frameworks. What are those regulatory frameworks that you think are missing that need to be in AI? Because your paper addresses some of those. You provide a regulatory framework.

Susanna 16:49 

Yeah, so what I would like to see and what I think would benefit most organizations is comprehensive documentation from start to finish. And this takes on a completely different shape from other software applications when we're talking about AI/ML because, number one, the data plays such a tremendous role in this.

In my paper, I picked an auditing framework just because it's a well received one. It's from researchers at top organizations. But the goal was just to say, here is the framework. Here is the way to integrate a framework.

Because I went through the literature trying to figure out what is the root cause of this problem? Why are we not able to operationalize? Why are we not seeing security controls?

Because for me and my friends, people that I work with, it's bad. We can mess with these models and take them over kind of at will, depending on how the organization is structured and what it does. So I kind of wanted to unroll it a little bit and say, okay, in all this paperwork you say that you want to apply auditing.

You say that you want to have model cards. You say that you want to document your data, but you don't know how. And most of the papers that I found around this were either taxonomies of different auditing frameworks or people saying, help, we don't know how to do this, there are too many to choose from, and so forth.

So my thinking was, okay, let's pick a framework. Let's just pick one and then show you how to put this into a pipeline. And then you can pick your own frameworks and things like that from there. But they need to have sort of these minimum characteristics.

And for me, auditing is not complete without a failure mode and effects analysis, which also feeds into penetration testing. And penetration testing, when we're talking about models, involves more than just trying to see if someone can get access to a physical system or a network and so forth. It also involves seeing how the models break and what happens when they break.

This is informed by a failure mode & effects analysis, FMEA, which is something that is borrowed from aerospace and defense and a lot of safety critical systems. I would like for organizations to view their AI/ML systems in any application as a safety critical system and take the opportunity to document your data, document your models. We're coming back to the issue of having a model repository.

Do you have a model repository, or is it a repo full of Jupyter notebooks somewhere? Right? How are you operationalizing?

Do you have any system of model governance in place? Are you rechecking and performing data validation?

If you're doing some sort of continuous or triggered training, all of these things need to be in place. And organizations may feel like this is a lot of overhead, but to my way of thinking, it's actually fairly minimal, once you're properly building an MLOps pipeline, to add these controls and run them in parallel with your ML development process, and then you have them in place. Currently, the regulatory landscape is kind of up in the air.

We don't necessarily know where it's all going yet, but we do know that regulation is coming, and it's going to revolve around data and user privacy and also some of the end effects of these systems. So my goal was for businesses, government, whoever's being audited or regulated here, to be able to go back to their different repositories and pull out the model artifacts and say, okay, right here, here's where our data comes from.
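One small way to be able to pull out the model artifacts and say where the data comes from is a provenance record written at training time and stored next to the model. The sketch below is illustrative; the field names are hypothetical and not drawn from any particular model-card or auditing framework.

```python
# Illustrative "model card"-style provenance record, written at training time.
# Field names and values are hypothetical, not from a specific framework.
import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone

@dataclass
class ModelCard:
    model_name: str
    version: str
    trained_at: str
    training_trigger: str            # e.g. "scheduled", "drift-alert", "manual"
    data_sources: list[str]
    data_snapshot_sha256: str
    evaluation: dict[str, float]
    known_failure_modes: list[str] = field(default_factory=list)

card = ModelCard(
    model_name="fraud-detector",
    version="v3",
    trained_at=datetime.now(timezone.utc).isoformat(),
    training_trigger="drift-alert",
    data_sources=["s3://warehouse/transactions/2023-03"],
    data_snapshot_sha256="<hash of the exact training snapshot>",
    evaluation={"auc": 0.94, "false_positive_rate": 0.02},
    known_failure_modes=["degrades on merchant categories unseen in training"],
)

# Stored alongside the model artifact so lineage questions have a paper trail.
print(json.dumps(asdict(card), indent=2))
```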

D 20:37 

You’re really basically talking about like, lineage and provenance, right, of an entire system that you've got to ferret out, I would assume.

Susanna 20:44 

Yeah, literally. You have to be able to go back and trace back, too, especially if you have a pipeline where you're constantly retraining or something like that. You need to be able to go back and say where was our model? What triggered this, and so on.

D 20:58 

One of my friends told a story about the fact that his team wrote a model, and four years later the Chief Risk Officer came down because of a lawsuit and they had to explain everything. And so there were a lot of things missing in terms of their ability, or inability, to explain things.

And I think it highlights the point that you were just talking about, which is you need a framework to hold accountability, responsibility, so that you can do all the other things necessary to keep the system secure, right?

Susanna 21:33 

Yeah. And you have to think, too: do you, as an organization, want to be scrambling when, not if, there is an event? Because people need to start recognizing that it's going to happen.

That's the nature of security and software anyway. You need to plan for when the event happens and have processes in place.

You also need to plan for what happens if you, like you said, get sued, or if the regulations in your jurisdiction change. How are you going to be able to answer for that?

Do you want to suddenly be scrambling and diverting resources from other development processes and business processes that you need? Or do you want to be able to just simply pull something up and say, here's the information that you need? We've been keeping track already. 

D 22:19 

Yeah. So you were talking about frameworks, right? And there is this framework out there, a taxonomy and terminology of attacks and mitigations, the adversarial ML framework from NIST that's open for comments.

Have you looked at that? Do you have any thoughts on it? 

Susanna 22:37 

Yeah, I've also looked at their set of standards for AI and so forth, and I think I'm eager to see what comes out of this call for comments because the part of it that interests me the most is where they talk about sustainable mitigations. And for me, it's the documentation, it's the lineage analysis, it's the proper operationalization, it's checking security artifacts in and having those at the ready should you need to produce them for legal or regulatory requirements, and so forth and so on.

I think all of that is super critical to the development of future AI/ML. I really like the standards that have come out talking about how your AI should be transparent and it should be fair and that sort of thing. But also, to the extent that my paper was sort of platform agnostic, in the sense that you can hopefully build this out on any platform, I think that any set of standards has to be even more general.

They don't really have the room to give direct guidance to businesses because the technologies are evolving so quickly and changing so quickly. I think we've seen just a bunch of platforms crop up for generative AI alone. So, yeah, the tech is just constantly changing in this field and it's moving very quickly. So what we desperately need is a way to look in and see and understand the problem and analyze it before any mitigations can really start to take place.

D 24:19 

So one of the things of being able to peer into, look into things, see things, as you just mentioned, that means you have to have the ability to audit it. And you mentioned that in your paper. You say audits are the first line of defense, if you will, for AI/ML systems.

How do you think about an ML audit beyond just kind of the things that you were listing? You have some concepts in there about adversarial testing, looking at a couple of other elements about data sheets and model cards; contextualize that for anybody who's listening, the need for these audits. 

Susanna 24:57 

So it's really hard to contextualize that for me without first laying the groundwork of saying: this is not being done at all right now. In many cases, as a new data scientist and all the way up to being a senior and a lead, you're sort of tasked with, okay, here's this business problem that we need to analyze. Here are the tools.

Get cracking. And that's really the guidance that you're given.

You're not given a lot more than that in many instances. And so I think it's super critical for organizations to take a look at their development process, to standardize that and to really understand what is going on with their models beyond just, this is my Jupyter notebook and here are all the cells, and here's the history if anyone wants to read it.

Wow. My model is getting such great results. Let’s ship it.

D 25:54 

So, Susanna, you've talked a lot about the threats out there and the lack of defenses, the lack of security, the way we're not doing things, which is leaving all of us exposed in some way, I would imagine. Which begs the question, like why do you think we either, A, haven't heard of a major AI security breach, or B, when do you think it's coming up?

I mean, it feels to me like I'm not trying to say the sky is falling, right? But if we're leaving all these holes and leaving all these gaps in systems, it's not going to be long before they're exploited, or maybe they already are and we just aren't aware of it. 

Susanna 26:35 

Yeah, that's a great question. Well, my thing is that it's already happening, and I happen to know for a fact that it has happened. Also, because of the explosion in ecosystems and technology applications for different AI models and text and so forth and so on, we're seeing a massive increase in the supply chain attack surface.

Last year, I think September or so, Symantec put out a report showing that they did an analysis of apps and found AWS credentials hard-coded in an absolutely massive number of them. And one of the apps that was most interesting to me was from an AI company that did biometric logins. And many banks, I think it was five, had contracted with them to do biometric logins.

And in the SDK that they put out, they had hard coded their AWS credentials, which exposed not just the company data and so forth, but literally their users’ biometrics and personally identifiable information. And luckily, luckily, this was caught by Symantec researchers who put out the report, I assume after this had been repaired and looked at.

But you're talking about users’ biometrics; you can change your password if it gets leaked, but you cannot change your fingerprints. And so, again, I think even in some of the most mundane applications, we need to be considering these as safety critical.
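Catching the kind of hard-coded credentials described in that report is largely a tooling and process problem. As a toy illustration, the sketch below scans a source tree for strings matching the well-known AWS access key ID format; real teams would rely on dedicated secret scanners (for example gitleaks or trufflehog) and pre-commit hooks rather than a script like this.

```python
# Toy secrets scan: flag strings that look like hard-coded AWS credentials.
# Patterns cover the well-known access key ID format; this is not a complete scanner.
import re
import sys
from pathlib import Path

PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "aws_secret_hint":   re.compile(r"aws_secret_access_key\s*[:=]", re.IGNORECASE),
}

def scan(root: str) -> int:
    findings = 0
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for name, pattern in PATTERNS.items():
            if pattern.search(text):
                print(f"{path}: possible {name}")
                findings += 1
    return findings

if __name__ == "__main__":
    sys.exit(1 if scan(sys.argv[1] if len(sys.argv) > 1 else ".") else 0)
```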

As to why we haven't heard about more of this, I think there's a significant incentive for organizations to keep this quiet, especially if it doesn't leak out to reporters. And I also think that maybe it gets a little bit buried in the AI hype cycle, because we see so much about the new large generative models and that sort of thing, and everybody is pretty rightfully excited about where this tech is going, and maybe attention is not necessarily going to the security effects of it.

So definitely security breaches are happening. Whether or not we're going to see that, well, let me walk that back. We're definitely going to hear about one in the future. When that's going to be, I couldn't say, but my money would be on sooner rather than later.

D 29:00 

I don't want to wait for it, but I guess I will. 

Susanna 29:03 

You hate to be right about this kind of thing, but I'm afraid it's going to happen, especially if people don't start taking it more seriously.

Charlie 29:16 

Okay, I think we're running out of time, but as we wrap this up, Susanna, if there's one thing that you want our listeners to walk away with from this talk, what would that highlight be?

Susanna 29:22 

I'm going to say two things. I would love to see the sharing of information between security professionals and AI professionals increase. And I would love to see organizations take MLOps, and MLSecOps specifically, seriously. I think between those two items, we could see a massive improvement in the state of the art.

I think more organizations will be able to get more models into production. They'll break less often, and it's going to benefit everyone from the businesses down to the consumers and users and so forth. 

D 30:02 

Susanna Cox, our guest, with the paper “Securing AIML Systems in the Age of Information Warfare.” Make sure you check out her website. It is hosted at anglesofattack.io.

And the one thing I'm walking away with here after this is going to be: when's the next attack coming? When's it going to be here? And what could we have done differently?

Hey, thank you so much for your time, Susanna. We really appreciate it. And thanks to all of you for tuning in. 

Closing 30:31 

Thanks for listening to the MLSecOps podcast brought to you by Protect AI. Be sure to subscribe to get the latest episodes and visit MLSecOps.com to join the conversation, ask questions, or suggest future topics.

We're excited to bring you more in depth MLSecOps discussions. Until next time, thanks for joining.

Additional tools and resources to check out:

Protect AI Radar

Protect AI’s ML Security-Focused Open Source Tools

LLM Guard - The Security Toolkit for LLM Interactions

Huntr - The World's First AI/Machine Learning Bug Bounty Platform

Thanks for listening! Find more episodes and transcripts at https://mlsecops.com/podcast.

Supported by Protect AI, and leading the way to MLSecOps and greater AI security.