
Cybersecurity of Tomorrow: Exploring the Future of Security and Governance for AI Systems

 

Audio-only version also available on Apple Podcasts, Google Podcasts, Spotify, iHeart Podcasts, and many more.

Episode Summary:

In this episode, we are joined by Strategic Technology Branch Chief Martin Stanley, CISSP, from the Cybersecurity and Infrastructure Security Agency (CISA), to celebrate 20 years of Cybersecurity Awareness Month and to hear his expert and thoughtful insights about CISA initiatives, partnering with the National Institute of Standards and Technology (NIST) to promote the adoption of their AI Risk Management Framework, AI security and governance, and much more. We are so grateful to Martin for joining us for this enlightening talk!

Transcription:

[Intro] 00:00

D Dehghanpisheh 00:21

Welcome back, everybody! Thanks for joining. It’s been a while since we’ve all been here together, as we kick off Season 2.

I’m here to welcome…

Martin Stanley, CISSP 00:28

Martin Stanley, Strategic Technology Branch Chief at the Cybersecurity and Infrastructure Security Agency (CISA). Prior to that role, I ran our Cybersecurity Assurance branch. And then my big cybersecurity job before that was running the Enterprise Cybersecurity Program at the Food and Drug Administration.

Before that, I had a number of entrepreneurial roles at companies like UUNET Technologies and Vonage. 

D Dehghanpisheh 00:52

Martin, let's start with what is the mission of CISA and what do you do there? 

Martin Stanley, CISSP 00:59

Great question. Thanks for having me and for celebrating the 20th anniversary of Cybersecurity Awareness Month with us. 

So, CISA is America's Cyber Defense Agency and the national coordinator for critical infrastructure security and resilience.

And in that role, we lead the national effort to understand, manage and reduce risks to the national cyber and physical infrastructure. And as you can tell, that's canned language. But that's what we do. And within that effort, we have this really broad view across, not only all the critical infrastructure sectors, but all the federal civilian agencies and really any partner that works with us on a voluntary basis.

And I think a good qualifier, you know, for all of the statements here today is that we're not a regulatory agency. We assist agencies and we assist critical infrastructure partners in meeting these missions. And similarly, as we talk about the work with NIST, NIST is also not a regulator. So as we talk about approaches and things like that, these are voluntary frameworks and voluntary programs.

And I can provide a little background too, if you’d like, on the work that I'm doing. So I am the Strategic Technology Branch Chief. And in that role, which I've held for about the last five years, we coordinate across all of the CISA mission spaces to identify what our strategic technology requirements are and we translate those into R&D requirements.

And we run a pretty large R&D portfolio in conjunction with the Science and Technology Directorate to deliver capabilities that meet gaps on the mission side. And D, as you might imagine, one of our largest areas of focus has been around the sort of non-sexy area of cyber machine learning. But we've been working on cyber machine learning projects for quite some time.

And as a result, we developed quite a bit of AI expertise in-house. In addition to that, it's put us in a position to build a lot of relationships across government, which we'll talk about during the conversation. 

And previously at CISA I ran our Cybersecurity Assurance program, which worked with other federal agencies to look at their cyber programs, to make them more robust, and to strengthen their protections around things like high value assets, their trusted Internet connections, and the like.

D Dehghanpisheh 03:08

So this is not the first technological transition that you have worked with your partners on. There's been cloud, there's been mobile, IoT, others. 

What's different about this one with AI and ML, from your perspective and the mission at CISA, but more critically for you as the leader here?

Martin Stanley, CISSP 03:26

So, I think that's an awesome question. 

So I think the difference between AI and some of these other technologies – and really there's been a tremendous amount of technology transition just in the last couple of years, but really throughout my federal career and before that – is there's this unique combination of hardware, processors, data, and people that we didn't necessarily have before. When we think about cybersecurity, we think of confidentiality, integrity and availability of systems but when we start talking about AI, we're really focused on people. 

D Dehghanpisheh 04:00

When you say you're focused on people, that obviously feeds into governance, I would imagine, right, if it's really about people. Can you draw out for us a little clearer delineation between, say, what is the tool side, what is the process side, and what is the people side? When you say people, maybe give us some context for that. 

Martin Stanley, CISSP 04:18

Sure. Absolutely. 

Cybersecurity and technology have always been for people, and there's been a people element to it. There's your technical, your managerial, and your operational controls. But we've generally focused on the technical control set. And I think probably one of the big messages out of this conversation today is that technology controls are not going to be sufficient. Those technical measures and those static measurements are not going to be sufficient to get the right kind of assurance that we need out of our AI systems.

And I think you've had some really great conversations here on the podcast about that, about some of these other areas. 

Obviously, AI right now is a big topic because of generative AI. There have been dozens and dozens of hearings and all kinds of efforts. While I haven't been involved in any of the latest conversations with the Congress – thank goodness – I can talk about some of the history, which I think is really important here, because some of the things that we're going to talk about today, particularly the federal approach and the standards efforts that have followed, are the result of a long-term effort.

There were a couple of executive orders in the previous administration around trustworthy artificial intelligence, around preserving American leadership in artificial intelligence, and the like. And there have also been some other kinds of things that have gone on. But those two executive orders in particular were the spark that led to the identification of the need for an artificial intelligence risk management framework, and then the work that NIST has done around developing and releasing it and all of the support materials around it.

And then also their collaboration and work with CISA to get that out to our stakeholders who have all of these needs that we talked about to manage these risks to our critical infrastructure and to all of our critical services.

D Dehghanpisheh 06:06

So, you're currently assigned at NIST to work on the Trustworthy AI Project and a part of that is the AI Risk Management Framework or AI RMF, right? Can you talk about how those two are nested together? 

Martin Stanley, CISSP 06:18 

Absolutely. So as I mentioned, running the R&D program, we had this long engagement with a number of partners, and that included all the work that was going on at NIST.

When the AI Risk Management Framework was released earlier this year, in one of those conversations that I was having with some counterparts at NIST, the questions really became: how do we get the word out? How do we get stakeholders to start to engage and give us feedback on how it's working? How do we identify all the different components that we're going to need in order to make it work?

At CISA, what we're interested in from a strategic technology perspective is, first of all, how can we use strategic technology in our mission? So how can we use AI in our mission space? But we also think about how our stakeholders are going to leverage AI and how that may change the attack surface we have to help them protect.

And then the third area is, of course, how adversaries leverage new technologies for malicious intents. And as a result of this, and as a result of this consistent set of engagements that we have with all these partners, it's a real natural collaboration with NIST to help them evangelize and move the AI Risk Management Framework out, and also provide real world feedback as to how that's going as early adopters and other folks move in and try to leverage the different kinds of capabilities within the AI RMF.

And there's a lot of that going on right now. I can't really talk too much specifically about particular entities that are using it, but there are a lot of federal agencies, there are a lot of large commercial companies that are taking the AI RMF and they're looking at adapting it into their risk management framework.

And I think before we get too far into talking about the specifics of the AI Risk Management Framework, we should really highlight that the AI Risk Management Framework, just like I think MLSecOps is as well, is focused on specifically managing the risks associated with AI. 

There's other mechanisms and other regimes in place for managing cybersecurity and privacy and the like, and they're all related in an enterprise risk management way.

D Dehghanpisheh 08:22

Yeah, at Protect AI, the sponsors of the MLSecOps Community, we think there is a difference between safety, security and governance, right? And when we kind of think about those differences, I'm curious as to (A) if that rings true from your perspective – that there are differences between safety, security and governance – and if so, how? And where do you think is the easiest place to get started in those entities or in those capacities, right? The three distinctly different ones.

Martin Stanley, CISSP 08:53

That's a great question and it's something that I really enjoy talking about, so you’ll probably have to cut me off as we get going. But so, before I worked at CISA, I ran the Enterprise Cybersecurity Program at the Food and Drug Administration.

One of my colleagues there and I co-wrote a book called Digital Health in 2021, which was a benefit risk patient provider framework for applying technology to medicine. What this was ultimately was a socio-technical risk management framework, and that is what the AI Risk Management Framework is. That is what I see a lot running through in MLSecOps.

I think you've got five focus areas as well. There's a lot of different names for a lot of different kinds of capabilities, and you'll see this across all different kinds of regulations or all of the different kinds of approaches to managing risk to AI. I think one of the concerns folks have to have is this conflation between, well, we need safe AI or we need secure AI, or we need this, or that, or the other.

Those things are all good. And we need all of that, right? But we have to understand that when we're talking about that, that may limit the tools and the capabilities that we get. And so we have to be specific about our intent. So what do I mean? 

The AI risk management framework, the NIST AI Risk Management Framework, has a concept of Trustworthy AI, which goes back to the trustworthy Artificial Intelligence executive order, and within the general concept of trustworthy AI, there are seven trustworthy characteristics, and those are safe, secure and resilient – and these all sound familiar – explainable and interpretable, privacy enhanced, fair with harmful bias managed. And you had a great MLSecOps Podcast about that.

The sixth area is accountable and transparent, and the seventh is valid and reliable. There's a structure that's inherent in the AI RMF around those trustworthy characteristics. You have trustworthy systems and you have responsible entities, you know, responsible people that use AI, so those trustworthy characteristics do map, I think to a lot of the MLSecOps areas.
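
For readers who want to put Martin's list of trustworthy characteristics to work, here is a minimal, hypothetical sketch in Python of a coverage checklist built around them. The characteristic names come from the NIST AI RMF as he describes it; the helper function and the evidence entries are invented for illustration only.

```python
# Minimal sketch (not an official NIST artifact): the seven trustworthy
# characteristics, held as a simple checklist so a team can record which ones
# a given system has evidence for. The evidence strings are placeholders.

TRUSTWORTHY_CHARACTERISTICS = [
    "valid and reliable",
    "safe",
    "secure and resilient",
    "accountable and transparent",
    "explainable and interpretable",
    "privacy-enhanced",
    "fair, with harmful bias managed",
]

def coverage_report(evidence: dict) -> dict:
    """Return True/False per characteristic, based on whether any evidence was recorded."""
    return {c: bool(evidence.get(c)) for c in TRUSTWORTHY_CHARACTERISTICS}

if __name__ == "__main__":
    # Hypothetical evidence collected for an internal ML system.
    evidence = {
        "secure and resilient": ["model artifact scanning", "access controls on training data"],
        "explainable and interpretable": ["feature-attribution report"],
    }
    for characteristic, covered in coverage_report(evidence).items():
        print(f"{'OK ' if covered else 'GAP'}  {characteristic}")
```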

And this is, I think, how we ended up talking here in the first place. My early project coming over to work with NIST on this program was to figure out – and we're all still scratching our heads – what does cybersecurity for AI mean? And if there's one thing that this community is really well situated to do, it's to help us answer that question.

But, what does cybersecurity for AI mean? And so, I started going through and looking at the various practices. I looked at DevSecOps, I looked at all the different things, especially a lot of the data security things. And then I came across your website and I started listening to your podcast, and I was like, man, this is like right where it's at as far as this particular topic goes. 

The most important takeaway from this conversation for anyone that's listening is that it's not a question of applying cybersecurity controls or looking and saying, well, we've got an ATO, so we're good to go, we’ve effectively managed all the risks. 

You did a great job. You’ve probably done a lot to help to manage those risks. But the other kinds of risks, the more social risks probably are not managed. Also, you may not have the right kind of engagement with the stakeholders that need to advise you on even identifying what those risks are.

And so I'll stop there. I told you I could go on and on about this.

D Dehghanpisheh 12:19

No, that's great, because you talk about the trustworthy characteristics, right? That was the phrase that comes directly from that. You map that against the social risks. Maybe that's the biggest difference in AI/ML right now: there is no other recent technological transformation that has had as much of a sociological impact at the speed and scale that AI can have.

And I've had other guests come on and educate me and kind of change my worldview. When you think about those, and you think about those social risks, if you will. What is a social risk for a government is not always the same thing as a social risk for an enterprise.

So let's talk about the social risks as they relate to the RMF right now. Do you see a difference in the social risks that the government is trying to address from an AI perspective, for that trustworthy AI element, versus, say, what enterprises are doing?

And maybe you can talk a little bit about that Venn diagram overlap where there is some and then where there is absolutely none.

Martin Stanley, CISSP 13:28

All right. So this is kind of a Pandora's box. So you know, open and we'll–

D Dehghanpisheh 13:33

Look at that, she’s dancing around!

Martin Stanley, CISSP 13:36

And we'll blow right through it. So the answer to the question is there is not one answer. And let's start with the NIST AI RMF itself: as I mentioned, it's voluntary, it's risk based, and it's rights preserving.

So, its intent is around all the things that we're talking about here. I think the biggest difference between the way I think about things as a cybersecurity professional and the way I think about things as an AI risk manager-wannabe is that the risks are contextual. And so, it really matters how you're using the AI and in what context to identify what those risks are. 

And as we've talked, had focus groups, and engaged with different kinds of parties whether they're in government or not, identifying what those risks are has been really, really challenging for people. Going through your existing risks and your existing threats, I think, is a really good start. 

But really, really going through and understanding them is another matter. If you've got somebody who's a CISO, for example, other than maybe an iPhone catching on fire or somebody getting carpal tunnel, you're not thinking about safety. But with these systems, you have to think a lot more about safety, and in ways that we're not necessarily accustomed to in our world.

But there are whole regimes that do that. So the challenge is bringing the right people in to advise on that. So, it's a big challenge. 

D Dehghanpisheh 15:07

I want to talk to you about something that you gave us in a briefing ahead of this podcast, and this is a great place to introduce it. It came up when you and I were having our banter before – and this dialogue, this banter, comes naturally, for those who are listening or watching or reading.

Martin was educating me when I brought him the perspective of, well, what makes us think that having others who are not technologists come into play could work – how could this be roped in, and is there a corollary?

The example you gave me was from the Food and Drug Administration and how the FDA has done both security and safety and socio-technical risk– 

Martin Stanley, CISSP 15:50

Safety and efficacy are the two. And so I don't work for the FDA, I don't represent the FDA, but I certainly know about the FDA, and they regulate 25% of the economy based on safety and efficacy.

D Dehghanpisheh 16:05

And you were talking a little bit about how that was done, and how those types of things could be brought in. Maybe you could draw out some of that conversation that we had, which I was really fascinated by, and we can use that as a comparison for some follow up questions I have. 

Martin Stanley, CISSP 16:21

So, this is related to something that CISA is prioritizing and is very, very interested in as part of the national cybersecurity strategy, in fact. And I will definitely come back to where I think some of those approaches that you mentioned fit in. 

So, secure by design, secure by default is an effort that was highlighted in the National Cyber Strategy. It's something that CISA is working to engage on. And in a nutshell, it's the idea that tech products should be built to reasonably protect against malicious cyber actors.

And to that end, AI must be secure by design as well. We're not regulators, obviously, but to that extent, these are those principles. And the U.S., along with a number of international partners, has published Secure by Design Principles and Approaches. And those are available on the CISA website. And notice we're talking about secure. Right? And this is back to this whole terminology thing of kind of conflating–

D Dehghanpisheh 17:18

That’s what I said. Right? I said there's a difference between secure and safe. 

Martin Stanley, CISSP 17:22

And so we were talking about, when we did the secure by design kickoff, there was this conversation about highway safety and Unsafe at Any Speed. And I never thought I'd hear Ralph Nader referenced again, but there was this conversation about it. And it was great.

D Dehghanpisheh 17:38

A throwback for a lot of older listeners.

Martin Stanley, CISSP 17:41

Right. And it was new to me – and it makes sense, because driving cars is a bit of a passion of mine – that at one point in time there wasn't even an expectation that cars should be safe. Like, there was a time when folks didn't even think that should be a thing. Right? Which explains a lot.

But I came out of an agency, where I worked for four years, where that was a primary function. We had an entire regime for that. And so, whether it's pre-approvals, or labeling, or post-market surveillance, or mandatory reporting of adverse events – all of that kind of stuff. We know how to do safety, whether it's in that regime, or in consumer products, or in transportation, or in the workplace.

We understand safety. Similarly, as we've talked about with some of these other social risks, we understand them as well. It's just that there are different people that we have to bring in. You know, I'll go back to that podcast – I forget the gentleman's name – who was talking about bias. But it was just amazing.

D Dehghanpisheh 18:48

Shea Brown. Dr. Brown

Martin Stanley, CISSP 18:51

Yeah. 

If you don't know anything about that subject, it's worth listening to that episode. 

And it doesn't mean going slow. I wouldn't call it that, because I've said this a couple of times to some tech people and they're like, 'Oh, you just want to go slow.'

I worked at UUNET, I worked at Vonage, I worked at like, ‘innovate till you break’ companies. And, I get it. I understand. I'm not saying go slow. I'm just saying we know how to actually manage some of these risks, if folks will think about it. 

D Dehghanpisheh 19:18

Yeah. It's interesting, right? You said it's risk based and rights preserving. Right? 

And I get on the safety component, the “risk based.” I think the “rights preserving” thing is also a very interesting dynamic. I don't think that we've ever really had a technological transition or transformation that had to encompass that, right? And so what I hear you describing is something that we, especially this season on The MLSecOps Podcast and at Protect AI, we're going to be going into is this concept of Violet Teaming.

So, on the technical side, people who come from, if you're familiar with the purple team concept, red team, blue team – offense, defense – thinking about that from a purely technical element, you get to purple. But the violet teaming really brings in these kinds of socio-technical risk components, right? So, I loved what you said about a CISO now being an AI risk manager as well, because that's primarily one of the things that they do.

They are thinking about risk from a technological perspective. They also need to be thinking about it from a brand reputation risk perspective, which means they have to have a new friend in the CMO, right? They have to have a new friend in HR. 

Martin Stanley, CISSP 20:32

There’s civil rights, civil liberties there. You know, maybe they have an IRB in the organization. 

D Dehghanpisheh 20:39

Yeah, exactly.

And I'm wondering if you're seeing companies who have taken the Risk Management Framework and are seeing this notion of “risk based and rights preserving” and maybe they have their own lens on it, or maybe they have their own aperture for how they're interpreting those terms. But are you seeing anything in the private spaces that are actually taking those two balanced weights of “risk based and rights preserving” and employing that in how they think about taking the AI Risk Management Framework into their operational processes?

Martin Stanley, CISSP 21:15

I don't have any quantitative information to provide about that. And I would like to go back to– 

D Dehghanpisheh 21:21

Hey, anecdotes are always welcome. We love stories here. 

Martin Stanley, CISSP 21:25

Yeah, no. It's the stories that get me in trouble. But I'd like to go back and talk about the Violet Teaming, because I think that's a really great example of the way that the community can help in this. And I promise I'll come back and answer your question. 

D Dehghanpisheh 21:39

No, no, it's okay. This is the answer. 

Martin Stanley, CISSP 21:41

If you remember, my old job before I took my current job at CISA was running the Cybersecurity Assurance branch. And that was basically all the blue teams that would go out and do these assessments, whether it was systems, or networks, or insider threats, or whatever it happened to be.

And when we had to develop an approach for identifying weaknesses to high value assets, which was a big, big deal back in 2015, we realized we needed both. We needed a red team, we needed a blue team. And in fact, we actually also needed an engineering team and other kinds of skills and specialties to come in and understand whether the controls that have been deployed – the ones that check the box – are really meeting the protection need that you have.

And so it's that interdisciplinary approach. That is what we're talking about. This is really, as we talked about at the beginning, people and data and computers, right? All together, really at every level.

As far as who's looking at this, the AI Risk Management Framework is intended to work as part of an enterprise risk management program. So that means that all of those other elements – whether you have a privacy and civil liberties function, whether you have some kind of institutional review board that looks at the way you do human factors and human studies and things like that – it's intended to work in conjunction with all of that, and to help frame and manage and measure the risks of having AI in your environment, or having your stakeholders use it, or having you use it.

I think the great thing that the violet team could bring to all this is helping us start to think about how we measure. Static measurements that you repeat over and over and over again are just not going to cut it for this, right? We're going to need an entirely new measurement science.

D Dehghanpisheh 23:47

It's interesting because when I started my career learning from Andy Grove at Intel, one of the things he used to go around saying was, “You can't manage what you can't measure,” right? And I would say you can't mitigate what you can't manage, right? To a degree.

So, if you take that in and you start seeing the concepts of risk based and rights preserving components, and you've brought that forth in the AI Risk Management Framework, how are you advising governments and are there any success stories that you have from the US government perspective of where the AI Risk Management Framework is being applied?

Martin Stanley, CISSP 24:30

So, I would say stay tuned on that. I don't want to call anybody out here, but there are a number of agencies that are already looking at adopting the AI Risk Management Framework. They're going through – I think there are 74 sub-functions within the AI Risk Management Framework. And for those that aren't familiar with it, there's also an AI Risk Management Playbook, which has specific actions that you can take to meet the particular outcomes that are identified in the framework. And so that's really a great resource.

NIST is leading a generative AI public working group. The intent of the working group in the outcome is going to be an AI risk management framework profile for generative AI.

And there are a couple of other ones out there that are actually being developed in academia. But this one will be specific, and it's being developed by the community. It's going to be really interesting. And having worked with the playbook, having worked with the framework, and working with various stakeholders, there are a lot of great ideas, and you'll see that reflected in the draft, which will hopefully be out in November – if there's not another shutdown threat or something like that – or maybe early December, for public comment. These are all developed through an open public process.
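
As a rough illustration of the structure Martin describes – a framework of functions and sub-functions, with a companion Playbook of suggested actions – here is a small, hypothetical Python sketch for tracking adoption progress. The four core functions (Govern, Map, Measure, Manage) are part of the AI RMF itself; the subcategory identifiers, outcome wording, and actions shown here are placeholders, not the official text.

```python
# Illustrative sketch only: a tiny structure for tracking progress against the
# AI RMF's core functions (GOVERN, MAP, MEASURE, MANAGE). The subcategory IDs,
# outcome text, and actions below are placeholders; the authoritative outcomes
# and suggested actions live in the AI RMF and its companion Playbook.

from dataclasses import dataclass, field

@dataclass
class Subcategory:
    identifier: str          # e.g. "GOVERN 1.1" (illustrative)
    outcome: str             # outcome statement, paraphrased for the example
    actions_taken: list = field(default_factory=list)

@dataclass
class RmfProfile:
    name: str
    subcategories: list = field(default_factory=list)

    def open_items(self) -> list:
        """Subcategories with no recorded actions yet."""
        return [s for s in self.subcategories if not s.actions_taken]

# Hypothetical usage for an organization starting an adoption effort.
profile = RmfProfile(
    name="Example generative AI profile (placeholder)",
    subcategories=[
        Subcategory("GOVERN 1.1", "Legal and regulatory requirements are understood and managed."),
        Subcategory("MAP 1.1", "Intended purposes and context of use are documented."),
    ],
)
profile.subcategories[0].actions_taken.append("Mapped applicable policies to the AI use case")
print([s.identifier for s in profile.open_items()])  # -> ['MAP 1.1']
```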

D Dehghanpisheh 25:53

So I'm going to steer to something that wasn't something we had spoken about prior. But I think the notion of this generative AI working group – I'm curious. AI has been in deployment in government and a lot of places for a long time. It's not just something that all of a sudden arrived on our shores.

What is it that you think got the attention of policymakers that said, oh wow, all of a sudden now it's a thing, whereas three years ago it wasn't a thing?

Martin Stanley, CISSP 26:27

Well, you know, I'm not an expert on generative AI, but, you know, I think one of the things that you have to remember is whatever it is that you're using it for, the output is a prediction. And that prediction is based on whatever data that it was trained on. 

And so your use of that output should be in recognition and understanding of that context. And I think that because a lot of these outputs are so good, that tends to get forgotten. There's, I think, a perception – a personification, if you will – that there's thinking going on. And as far as we know, there really is not thinking going on that's producing those outputs. And not only that – I mentioned the unsexy-ness of cyber machine learning, right? That's not something that the average person can go out and engage with. But my best friend's 80 years old and he plays around with ChatGPT, and

D Dehghanpisheh 27:27

Right. It captures the imagination.

Martin Stanley, CISSP 27:29

Yeah, absolutely. Not to mention the fact that it's multimodal, meaning imagery and language and, most importantly, coding. I think that the most interesting development about it is coding.

So, this generation of content brings in a lot of other kinds of issues, but it's just mainstream. And I think that's probably the biggest difference now: it's mainstream, where in the past it wasn't mainstream and it was very specifically focused on particular use cases.

I think one of the things that we're focused on, at least in the part of the working group that I'm heading up, is this idea that you can change context with a particular generative AI system, and that totally changes the risk, unbeknownst to the person that's changing the context – changing how you're using it.

D Dehghanpisheh 28:21

Yeah, we have an interesting story about that – an interesting one about drug development and drug targeting, which is really going to use this. That same system can be used to create the chemical diagrams that would give you chemical weapons. Right? Like, there are lots of interesting dual-use components, which is part of the abstraction. Right?

Martin Stanley, CISSP 28:42

You just put a negative sign in front of it and then we’ll let you go. Not!

D Dehghanpisheh 28:47

Yeah, no it's not. 

But, to that end, your role in promoting adoption of the AI risk management framework, obviously you're consulting and advising a lot of different parties and entities. What are some of the strategies you're finding to be most effective where people are adopting that, whether it's agencies, or departments, or whole branches, if you will?

Martin Stanley, CISSP 29:13

Well, so this is actually, I think, a great question for a security professional, right? Because this is what we deal with. 

People want risk managed right out of the gate. We say that we accept risk, but generally there's an aversion to accepting risk, and in a lot of places you really can't. To promote adoption, think not necessarily about the end game, but more about getting started: making sure that you have the right people involved, making sure that you're thinking about your use cases and the context of those use cases and how those use cases can impact people. Getting the right people – actors is the term that the AI RMF uses – you want to have them involved.

And at the end of the day, it's almost everybody. But let's back up to what's achievable in the near term, and that's to look at your mission space – in, say, a federal agency, which is my specialty at this point. Boy, the people that know who could be impacted by an output or an outcome of a system are the people that work on your mission side. They work it every single day, they know all the stakeholders, and they're the ones you should listen to.

You have those resources, and I think they may be the ones initially that are going to be able to help you know whether or not – and again, I'm certainly not a scientist, and this is not the policy of any agency that I'm associated with – but maybe we're going to have some kind of science that is about whether you have an expert who's observing the system, and whether that's sufficient as a measurement that it's working okay. Right? Like, you know, maybe–

D Dehghanpisheh 30:58

Less excessive agency type things.

Martin Stanley, CISSP 31:01

Exactly. You know, we did some initial work on this at CISA when we were thinking about how we wanted to evaluate use cases. And it was high benefit, low regret use cases that we wanted to focus on. And you see this again – we talked about this early on, about this sort of careful conflation of all these terms – but at the end of the day, they all come back to those seven trustworthy characteristics.

The other dimensions that we considered were explainability and complexity. So we wanted high benefit, low regret use cases that were highly explainable and not too complex, because we understood that you may not be able to measure every aspect of that particular process once you automated it.

And so, thinking about what's going to work best for your organization is probably the best first step, and getting the right people at the table is a close second.
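
Martin's "high benefit, low regret" framing lends itself to a simple triage exercise. Below is a minimal, hypothetical sketch of that kind of scoring in Python; the scales, weights, and example use cases are invented for illustration and are not CISA or NIST guidance.

```python
# Rough sketch of use-case triage along the dimensions Martin mentions:
# benefit, regret (impact if it goes wrong), explainability, and complexity.
# The scoring scheme is invented purely for illustration.

from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    benefit: int         # 1 (low) .. 5 (high)
    regret: int          # 1 (low) .. 5 (high) impact if the output is wrong or misused
    explainability: int  # 1 (opaque) .. 5 (highly explainable)
    complexity: int      # 1 (simple) .. 5 (very complex)

    def score(self) -> int:
        # Favor high benefit and explainability; penalize regret and complexity.
        return self.benefit + self.explainability - self.regret - self.complexity

candidates = [
    UseCase("Summarize unstructured incident reports", benefit=5, regret=2, explainability=4, complexity=2),
    UseCase("Fully automated blocking decisions", benefit=4, regret=5, explainability=2, complexity=4),
]

# Highest-scoring (high benefit, low regret, explainable, not too complex) first.
for uc in sorted(candidates, key=lambda u: u.score(), reverse=True):
    print(f"{uc.score():>3}  {uc.name}")
```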

D Dehghanpisheh 32:03

You started off earlier in the conversation talking about how, when you talk to technologists, they get this look in their eyes of, you guys just want to slow stuff down. I've asked a number of guests this question, and I'm curious to get your take on it.

And for those who have listened or read the show notes in the past, they're going to see this question coming a mile away, which is: have we created a double standard for machines versus people? When we talk about excessive agency of AI elements and a driverless car has an accident – even if it's a fatality, as sad as that may be – it's like, oh, let's take all driverless cars off the road. But then that same day, how many crashes were there from people making mistakes or just intentionally doing stupid things? And yet nobody's saying we'll pull all the drivers, pull all the cars off the road.

Like, how do we think about this almost double standard component between our infinite capacity to forgive humans and no capacity to let machines learn on their own?

Martin Stanley, CISSP 33:09

So that's a fascinating question. And I'm going to give you a great answer, but I’m going to keep you waiting for a second. 

So, I'm on the record. I've been on the speaking circuit and I've said we have to adopt these AI technologies. We need the expertise because if we don't have the expertise, we're not going to be able to advise our stakeholders who are adopting them. We're certainly not going to be able to understand our adversaries who are adopting and what kind of capabilities that means. So this is not a question of saying we can't innovate. We have to innovate and we probably have to innovate faster than we've ever innovated before.

I think the killer use case for generative AI is going through these troves of unstructured data that we have, and the federal government is probably the biggest store of all that. 

All those rights preserving questions come right into play when you start talking about that, right? So you have to – and I think what we're trying to do is chart a path that ends up with the least amount of regret.

I think the question that you asked before I went on that little tangent is, why do we look at harms caused by machines in a different context, or a different light, than harms caused by humans? I think that's going to be the question, right?

That may be part of this measurement science that develops. We don't know what that threshold is. And I don't think – and this is, again, not the opinion of CISA or NIST or any other federal agency –

D Dehghanpisheh 34:45

It’s a discussion between D and Martin. 

Martin Stanley, CISSP 34:47

I don't think that if you have X amount of harm by people and X amount of harm by machines, that that will ever be tolerable.

It's going to have to be a lot less on the machine end. And I think fundamentally – and we may have talked about this in the preview, but – when you get on the road, and I live in the countryside, and you're driving down the road and there's a single lane on either side and there's this double yellow line down the middle and there's another car coming at you in the other direction and you see a person, you can make certain assumptions.

Now, those might be bad assumptions, but you can make certain assumptions that they have some, you know, skin in the game as to how this whole encounter is going to go with you. Are you both going to just pass or, you know, is someone going to make a mistake? With machines we don't have that.

So you don't have that level of assurance when you're interacting with the machine. And I think that gets to a lot of the concerns that have to be addressed upfront before we can go mainstream with any of that. 

D Dehghanpisheh 35:52

So, that is a really powerful sentiment and I really appreciate that thoughtful answer. Before we go, obviously NIST relies on communities and different community types to help pursue the goals and objectives that you want.

And there are communities like MLSecOps, which is trying to do a similar thing, right? We just have a different spin and a different take on it, which is why we're pleased to be able to engage with you and others today.

In the broader landscape of the mission to begin this AI governance framework, right – and make it a standard practice in government business and, increasingly, maybe even everyday life –

In that mission, how do you see collaborative communities – security, safety, socio-technical communities like MLSecOps – how do you see them playing a role? In other words, what are the things that you want other communities to help do and amplify and give you back in terms of a feedback loop? What's our role here together with you?

Martin Stanley, CISSP 36:56

Well, I mean, first of all, thank you, because I think you're already doing a lot of that, which is to bring awareness to the different kinds of risks that need to be managed and to provide a structured approach to build a practice around managing those risks and implementing, in this case, security. But you're actually doing more than security – even though it says MLSecOps, there's probably some more assurance going on in there as well.

And there's going to be a lot of opportunity. Certainly we'll have this generative AI public working group draft that's going to come out at the end of November or early December. Comments back from industry groups or academia or, you know, actually anybody – because we'll take comments, you know–

D Dehghanpisheh 37:46

Bring it into the MLSecOps Slack channel. You'll get an active audience there with a lot of feedback and I would encourage everybody to join.

Martin Stanley, CISSP 37:53

And some of those folks are actually on the working group, and they check in and they provide suggestions. And I've said the same thing to some of the zero trust groups. As a security professional, I feel like there's a lot that we can bring to address these challenges and to streamline and facilitate, you know, implementation of trustworthy AI systems. We probably already have the most comprehensive set of measurements in place, from cybersecurity and privacy risk management.

We have that stuff in place. It's not sufficient for the full set of risks that we're going to be concerned with from an AI perspective. But how much of that is actually going to be helpful in that realm? And as we develop the measurement science around it, can we leverage those existing, mature measurement areas to help us achieve the kinds of assurance that we're looking for through the risk management framework, using a practice like MLSecOps, for example?
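
Martin's closing point – that existing cybersecurity and privacy measurements are a real but incomplete starting place – can be made concrete with a small gap check. The sketch below is hypothetical: the evidence names and their mapping to the trustworthy characteristics are assumptions for illustration, not an official crosswalk.

```python
# Purely illustrative: check how far existing cybersecurity and privacy
# measurements might carry you toward the AI RMF trustworthy characteristics,
# and where new measurement approaches would still be needed. The mapping here
# is an assumption for the sake of the example.

EXISTING_EVIDENCE = {
    "access control reviews": ["secure and resilient"],
    "privacy impact assessment": ["privacy-enhanced"],
    "system logging and audit": ["accountable and transparent", "secure and resilient"],
}

TRUSTWORTHY_CHARACTERISTICS = {
    "valid and reliable", "safe", "secure and resilient",
    "accountable and transparent", "explainable and interpretable",
    "privacy-enhanced", "fair, with harmful bias managed",
}

covered = {c for characteristics in EXISTING_EVIDENCE.values() for c in characteristics}
print("Partially covered by existing practice:", sorted(covered))
print("Needs new measurement approaches:", sorted(TRUSTWORTHY_CHARACTERISTICS - covered))
```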

D Dehghanpisheh 39:04 

Well, to the community, I want to say thank you for being with us. And Martin, thank you from the community for having this conversation with us today. 

Some fascinating insights. And I really like this concept of a new measurement science that gets applied to this space. What a fascinating concept. Thank you so much for joining us, Martin. 

Our guest again has been Martin Stanley, CISSP, Strategic Technology Branch Chief at CISA.

Thank you for the work that you're doing and that you continue to do.

[Closing] 39:35

Additional tools and resources to check out:

Protect AI Radar

Protect AI’s ML Security-Focused Open Source Tools

LLM Guard - The Security Toolkit for LLM Interactions

Huntr - The World's First AI/Machine Learning Bug Bounty Platform

Thanks for listening! Find more episodes and transcripts at https://mlsecops.com/podcast.
