
Securing AI: The Role of People, Processes & Tools in MLSecOps

 

 

Audio-only version also available on Apple Podcasts, Google Podcasts, Spotify, iHeart Podcasts, and many more.

Episode Summary:

In this episode of The MLSecOps Podcast hosted by Daryan Dehghanpisheh (Protect AI) and special guest-host Martin Stanley, CISSP (Cybersecurity and Infrastructure Security Agency), we delve into critical aspects of AI security and operations. This episode features esteemed guests, Gary Givental (IBM) and Kaleb Walton (FICO).

The group's discussion unfolds with insights into the evolving field of Machine Learning Security Operations, aka, MLSecOps. A recap of CISA's most recent Secure by Design and Secure AI initiatives sets the stage for a dialogue that explores the parallels between MLSecOps and DevSecOps. The episode goes on to illuminate the challenges of securing AI systems, including data integrity and third-party dependencies. The conversation also travels to the socio-technical facets of AI security, explores MLSecOps and AI security posture roles within an organization, and the interplay between people, processes, and tools essential to successful MLSecOps implementation.

Transcription:

[Intro] 00:00

D Dehghanpisheh 00:07

All right, welcome back to the MLSecOps Podcast. With me today is a friend, a colleague, and a bit of a mentor in this space: Martin Stanley, who is the Strategic Technology Branch Chief at CISA [US Cybersecurity and Infrastructure Security Agency]. Welcome back to the podcast, this time as a co-host, Martin. Thanks. 

Martin Stanley, CISSP 00:26

Thanks for having me, D. This is great.

D Dehghanpisheh 00:28

Yeah, we're really excited to have you, along with two guests, Gary Givental and Kaleb Walton. Both have over 20 years of experience leading security architectures at IBM. Kaleb's now the chief architect at FICO, which is a pretty important company. If you're not familiar with what they do or why they're so important in your life, I would encourage you to follow up and check it out. 

But with that, Martin, I'm curious, you know, you've had an interesting journey getting to the role that you're in and in particular, helping lead this nation and, in some capacity, the globe, on building secure AI. Talk to me a little bit about that journey, share that journey with our listeners, and then let's try to figure out what the journey is for Gary and Kaleb.

Martin Stanley, CISSP 01:16

Sure, thanks. Gary and Kaleb, welcome. It's awesome to have you both here, and I'm looking forward to hearing about your contributions in this space. So again as D mentioned, I'm the Strategic Technology Branch Chief at CISA. I'm currently assigned over to NIST [National Institute of Standards and Technology] on the Trustworthy and Responsible AI Project, where I'm working on a number of efforts related to the AI Executive Order.

But also, I have an eye on my day job and working with CISA, which has been putting a real stake in the ground on where the cybersecurity community can contribute. And I think this is really one of the great things about this particular podcast and the outreach, D, that Protect AI is doing around MLSecOps: talking about how, while there are a number of new disciplines and different kinds of expertise that are necessary to get AI systems in place, a lot of the contributions that can be made are from some of the existing disciplines out there.

In particular, the cybersecurity and privacy folks have long experience introducing technology into these mission spaces. And as we talk with, you know, folks with different kinds of expertise from different areas with these security backgrounds, they can help us to really figure out what is, I think, the greatest near-term challenge. D, as you guys post pretty much every day, it's protecting the machine learning pipeline from a lot of the, just you know, immediate cyber threats that we have today. There are a lot of other kinds of AI risk management that we can talk about, that, you know, we're involved in around some of the more sociotechnical elements. But specifically, you know, CISA is focused on how we motivate and assist our stakeholders to safely and securely implement AI systems. 

And, with that, you know, it's great to have you both here. Gary, if you wanna tell us a little bit about your background, it would be awesome to hear how you got to where you are today and, you know, what you'd like to talk to us about.

Gary Givental 03:25

Sure. Well first of all, thank you for having me and Kaleb on this podcast. It's a great pleasure and a great honor to talk to you guys. So my journey is maybe interesting, but maybe not so interesting. I started in cybersecurity at a small startup here in Detroit and, through several acquisitions, ended up in IBM, where I've been for over 17 years now. 

And I've always kind of been interested in artificial intelligence. That's what my Master's was in. And I've been kind of front and center through that entire journey: from very rudimentary, not quite AI, but let's call it AI anyway, types of systems, all the way into expert systems, and then machine learning, and now GenAI [generative AI] being the big hype. So that's kind of been the majority of my career, applying AI in all its various shapes and forms to the cybersecurity domain specifically.

And now, of course, you know, in the last year or so since ChatGPT kind of blew up the industry, there's clearly a whole lot more attention and focus on what the risks are of using this type of technology in the market. And now that everyone wants to use AI for everything, there's just more focus on it, which doesn't necessarily mean there shouldn't have been focus on it before, because certainly we've had machine learning pipelines for many, many years now, and these concerns, you know, have existed for quite a while. But I'm looking forward to getting into that topic as we continue this conversation. 

D Dehghanpisheh 05:08

Oh we're gonna go deep, my friend. Deep and broad.

Gary Givental 05:11

I'm sure. 

Kaleb Walton 05:13

Well, thanks. Yeah. Gary always holds this over me. He hired me <laugh> at that startup. So I've worked with Gary for nearly 20 years at IBM, and yeah, Gary did a lot of the hands-on work with the expert systems and with the AI systems. I helped on the other end of it, which was designing the interfaces that would manage them, and then also securing the pipelines. I had a lot of DevSecOps background, and prior to that, a lot of user interface background to front those expert systems. 

And then most recently after the DevSecOps, it's been MLSecOps, pushing some thought leadership on that inside IBM and applying it to one of our flagship products. Not as much as I would like to have applied there, because the tools that we have available are still maturing quite a bit, but as they mature we're gonna be able to apply a lot more there.

I recently joined FICO as a Principal Security Architect, and it's been pretty sweet finding out that FICO has been doing AI for like 50 years. So it's kind of cool. I'm excited to improve their ML there as well. And yeah, it's a fun space to be in.

D Dehghanpisheh 06:32

Awesome. So I guess, you know, Martin, you guys are doing a lot on Secure AI, right? And how you think about Secure AI. I'm curious, maybe you could guide this discussion a little with Gary and Kaleb about how CISA sees the world from a Secure AI perspective, and then, you know, that complementary MLSecOps component that exists. Maybe give our listeners, viewers, and readers of the show notes a little bit of background about Secure AI, and then we'll kind of get into that.

Martin Stanley, CISSP 07:06

There are a couple of, I think, important releases that CISA has made in the last couple of months related particularly to the topic of Secure AI, and a couple of stakes in the ground that we made. 

So from a systems and data perspective, CISA considers AI to be software. And as a result of that, you know, it applies under the Secure by Design effort, which is part of the National Cybersecurity Strategy, which, you know, drives our priorities. And in particular, what we're interested in is seeing products get delivered into the marketplace that are, you know, mitigating a lot of the basic cybersecurity concerns upfront, so that they're not things that need to be configured, and that they're free from, you know, defects that cause those kinds of harms as well. And so that effort is, you know, probably one of the first areas where we're focusing on, you know, getting the word out and then working with industry and with stakeholders to improve the ecosystem.

And then the second area of focus, you know, for us is this Secure AI guidance that we co-sealed with the UK. There were a number of other partners that co-sealed that guidance, but it really talked about some of the baseline expectations for what a secure AI system would look like, and the kinds of things that we're gonna be looking for from, you know, both the folks that are designing and also deploying and using AI systems. And there are so many other efforts across government to support this. 

For example, under the AI Executive Order we just conducted, as part of my work at NIST, a public workshop on updating the Secure Software Development Framework which is mostly focused on, you know, the existing kind of ecosystem. So we wanted to give it an uplift for generative AI. We're in the process of adjudicating comments on that. 

And so there's a number of efforts across government that are supportive, but specifically back to CISA’s mission, we're talking about how do we, how do we ensure the secure deployment of systems and having, you know, users not be surprised by the behavior of systems. 

And I think, you know, to that end, given your backgrounds, it would be very interesting to hear from both of you what your experience has been in that ecosystem, you know, designing systems and delivering them. In particular, maybe in your most recent background, how did you find that, and what were some of the biggest challenges working with, you know, your development staff and with your marketing and product staffs around that?

Gary Givental 10:05

So, a couple of clarifications for context. So, you know, Kaleb and I, when we were working on all these systems, the context of our use case is very much a platform, right? So, you know, coming from Managed Security Services we weren't necessarily in the same scenario where we're deploying software as a service or we're acting as a vendor product technology where, you know, now it's sitting out in the wild somewhere on somebody's enterprise, and therefore any kind of data coming into that system is, you know, completely like untrusted, and the data pipelines are running on somebody else's environment now. It's a third party vendor which has its own, you know, host of concerns you have to work your way through. So in some ways, our use case is a little bit easier because at least we have control. 

So as an MSSP [Managed Security Service Provider], you know, we are operating our secure environment. We are under, you know, in my case, IBM's constraints for what that is, and security and privacy by design and the various corporate audits and, you know, operating the platform and infrastructure wherever that compute may be, it's within our, you know, guarded walls and we have full control. So that makes it at least a little bit easier because then when we're thinking about, well, how do we secure that environment in general, very much to what you said earlier, Martin, it is software. 

So we started on this journey treating it as just another piece of technology within our guarded walls, and, well, how do we protect those walls? So, that's just kind of one point. Another point, that being said, and I'm sure Kaleb will comment on this as well, is that there are differences between just treating it purely as software - where you secure that software in a variety of ways, you know, looking at third party libraries that it might be using as dependencies, and are those secure? And securing the compute environment that it's running on, securing the data center, any API connectivity - like, these are normal DevSecOps types of motions and actions you have to take. 
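
As a concrete illustration of the kind of dependency hygiene Gary describes here, below is a minimal sketch, not IBM's actual process, of a CI gate that refuses third-party dependencies that are not pinned to an exact version. The file name and policy are assumptions for illustration; in a real pipeline this would sit alongside pip's own --require-hashes mode and a vulnerability scanner rather than replace them.

```python
# Minimal sketch: fail a CI step if requirements.txt contains any
# dependency that is not pinned to an exact version.
import re
import sys
from pathlib import Path

# A dependency is considered pinned if it names an exact version, e.g. "mlflow==2.9.2".
PIN_RE = re.compile(r"^[A-Za-z0-9_.\-\[\],]+==\S+")

def check_requirements(path: str = "requirements.txt") -> int:
    problems = []
    for raw in Path(path).read_text().splitlines():
        line = raw.strip()
        if not line or line.startswith(("#", "-")):
            continue  # skip comments and pip options (e.g. --hash continuation lines)
        if not PIN_RE.match(line):
            problems.append(f"not pinned to an exact version: {line}")
    for p in problems:
        print(p)
    return 1 if problems else 0

if __name__ == "__main__":
    sys.exit(check_requirements())
```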

Well, there are differences with machine learning or, or anything that uses external data to train any kind of models. So we certainly have encountered some of that, where we do have to explicitly think about, “where is this data coming from?” 

And that's where, even in our use case where it's all running under our control, so we can lock it down as we would with, you know, your basic DevSecOps best practices, the question becomes: what about the data? 

Well, the data is coming from our customers’ environments, who we are protecting. And now you have those little holes being poked in that guarded wall, where protecting the data and asking "Is the data itself secure, and what is getting into our training pipeline?" actually does become another dimension - one we previously didn't have to think too much about, because we were just thinking software: library dependencies, licenses, our own bugs in our own software, and anything that was disclosed in those third party dependencies. 

So, it is interesting from that perspective. Now, IBM as a company, when it comes to generative AI and that sort of thing, they have a very specific strategy where - for GenAI and for their Granite models and all the watsonx™ stuff - they have created what they call the blue pile. They do not trust data that is purely external. They take a very different approach strategically than some of the other companies out there. So they are very purposeful in curating their data set in order to train any large language models [LLMs] and do that kind of training.

So, I think that's a good approach. Now, it doesn't quite help us in the MSSP context, because if we're creating machine learning models for any kind of predictive analytics or what have you, and we're using data that is anonymized but collected from the various customer environments and so on, with full disclosure of course, then we still have that same concern about what the data contains and how we use it properly. But the rest of the pieces of the pipeline - and again, I'm not gonna go into too much detail, I think Kaleb would be great to take this one - fall a little closer to your standard DevSecOps type of process.
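
For a concrete picture of the data concern Gary raises, here is a minimal sketch of a pseudonymization step applied before customer data reaches a training pipeline. The column names, CSV layout, and key handling are assumptions made up for illustration, not how any MSSP pipeline actually works.

```python
# Minimal sketch: drop known PII columns and replace identifiers with a
# keyed hash before the data enters a training pipeline.
import csv
import hashlib
import hmac

PII_COLUMNS = {"email", "full_name", "ip_address"}   # dropped entirely
ID_COLUMNS = {"customer_id", "hostname"}             # pseudonymized

def pseudonymize(in_path: str, out_path: str, key: bytes) -> None:
    with open(in_path, newline="") as fin, open(out_path, "w", newline="") as fout:
        reader = csv.DictReader(fin)
        keep = [c for c in reader.fieldnames if c not in PII_COLUMNS]
        writer = csv.DictWriter(fout, fieldnames=keep)
        writer.writeheader()
        for row in reader:
            out = {}
            for col in keep:
                value = row[col]
                if col in ID_COLUMNS:
                    # A keyed hash keeps joins possible without exposing raw IDs.
                    value = hmac.new(key, value.encode(), hashlib.sha256).hexdigest()[:16]
                out[col] = value
            writer.writerow(out)

# Example (paths and key are placeholders):
# pseudonymize("raw_events.csv", "training_events.csv", key=b"rotate-me")
```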

Martin Stanley, CISSP 15:08

Gary, you made a couple of points which I think we wanna highlight before we get Kaleb's thoughts. 

The first one is, you know, sort of the system and data view versus the people harms view, which, you know, we talk about when we think about risks. And you know, when we talk about Secure AI at CISA, we're really focused on that system and data view as opposed to, you know, benefits and risks to people as a result of operating these systems - that more socio-technical view - which is also the view that we're focused on at my NIST work. But specifically with respect to this, you know, Secure AI effort at CISA, that's what we're talking about. So I think, you know, making sure folks understand that we are concerned about both [system and data view vs. people harms view], but we have a different lens.

And then the second thing is something that you talked about very extensively here, but you know, just to kind of bring it up a level and talk about specifically the system boundary that we're trying to secure. And as we bring in these algorithms that someone else may have developed, and data which we don't necessarily know the provenance of, and then we're using all of that, and then people are interacting with the system that may be outside of our traditional boundary and, and the kinds of risks that that creates. And I think you did a nice job of describing all the different aspects associated with that. So Kaleb, I guess throwing it over to you now to talk more in detail.

Kaleb Walton 16:27

So you were asking about some of the challenges, and I will speak a little bit about some of the people challenges. The roles that are involved in an AI/ML sort of solution are different from what we're used to in traditional DevSecOps. 

In our experience, you know, there's a lot of work with researchers, and the way researchers work is not typically the same as the way that developers work, especially in a mature development organization that is following kind of a rigid process, and they've got security controls in place, even maybe back into their IDEs and into their development environments on their Git commits. They've got security controls spread all across. 

When you get to the researchers, the ones that are working on the models and stuff, they don't have any of that. They don't even know what you're talking about. And so that was a tough thing. It's becoming more and more prevalent now, of course, but like - 

Martin Stanley, CISSP 17:24

Well, and they also go under the radar of, you know, security professionals that don't know what they're looking at.

Kaleb Walton 17:31

Right, and that's a good point. We did our best, and there were challenges because, of course, researchers need data; they need data to be able to create these models. So we ran into a lot of challenges there: when writing a piece of software, you don't necessarily need the data, you can mock it all out, it's no big deal. It's different when it comes to creating these models. 

So bridging that gap - being able to create the layers of trust and security controls to give them some form of the data that is still valuable to them, and including that in a repeatable pipeline - we experienced a lot of challenges in that space. 
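
One small, hedged example of what "including that in a repeatable pipeline" can mean in practice is recording exactly which data files a training run consumed, so the run can be reproduced and audited. The paths, file types, and manifest format below are assumptions for illustration only.

```python
# Minimal sketch: write a manifest of the datasets that went into a training run.
import hashlib
import json
import time
from pathlib import Path

def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def write_manifest(data_dir: str, manifest_path: str = "training_manifest.json") -> None:
    entries = [
        {"file": str(p), "sha256": sha256_of(p), "bytes": p.stat().st_size}
        for p in sorted(Path(data_dir).rglob("*.csv"))
    ]
    manifest = {
        "created": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "datasets": entries,
    }
    Path(manifest_path).write_text(json.dumps(manifest, indent=2))
```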

D Dehghanpisheh 18:13

Repeatable and sometimes self-executing, too.

Kaleb Walton 18:17

Sure. And it's not just the people and process - it's also the terminology. I mean, there were challenges across the board, and there's always this undercurrent of security and data privacy, and we’ve got to make sure that we're maintaining it. That was tough. And that's been a challenge.

D Dehghanpisheh 18:39

One of the things that it felt like to me in this march to DevSecOps is that it's almost hyper-distributed. There's not one element in an enterprise or an organization that is, “I'm the DevSecOps person,” right? It's really kind of diffused.

What I've noticed this year more than anything, like if I, if I were to look back last year and look forward to today and compare those two moments in time, this year you actually [now] have a Director of AI Security in a company. And that role seems to be staffed and funded, and, you know, they're building things that kind of standardize the security posture. They're responsible for maintaining that posture with tools, processes, educating people. 

What do you both see? You know, I'm interested to get your take, Gary, from IBM, but also Kaleb, being at a big producer and consumer of AI technologies at FICO: how do you see the notion of securing that perimeter and securing that domain of AI, which are kind of two different things? How do you see that inside of FICO?

Kaleb Walton 19:53

Well, I haven't been here long enough to really give that good consideration at FICO, but in general, it comes down to coming up with a framework. I mean, even with MLSecOps - I know we'll be talking about that in a bit - you need some way to just talk about these things, to visualize them, to see the flow of the data, to see it along a life cycle, in order to even then identify those risks and figure out what mitigations you're gonna put in place.

Using traditional DevSecOps and infrastructure security, that's the basic blocking and tackling. And I think that's a really good place to start, because if you don't have that in place, you really, you need to get that done first, and then you can layer on the AI and the ML side of it. 

But from an AI security standpoint, it's looking at your data. So looking at your data ops, your data pipeline, seeing what data you have, where it's coming in, where it's going out.

And then if you have models yet - which many companies do now, some companies don't - but if you're planning a model, you know, you gotta assess what that's gonna look like. What data's gonna go into it, what data's gonna come out of it, what sort of questions are gonna be asked, and then how are you gonna go about building that model? What are the team structures gonna look like? What are the life cycles going to look like? And you wanna borrow as much as you can from what you already do, which is software development, and layer this in the best way you can onto what you know. Otherwise, you're going to get, I think, too confused. It's too complex. At least that's the way that I look at it. I tried to bring it into a model I was familiar with.

Gary Givental 21:31

Yeah. To add to that, I think, you know, Kaleb, you alluded to this, and Martin you also pointed this out in your comment, that it's not just the software and infrastructure boundary, it’s also the people.

So the challenge that Kaleb was talking about a couple of minutes ago is that, working with data scientists and researchers, there's a dimension to this where software developers and data scientists just want to do their job. Their job fundamentally is, you know, creating the models or doing data science experiments or writing software. And the less friction there is in just doing that from a functional point of view, the more efficient and effective they are in their job. 

So the minute you start throwing in a lot of the security types of controls and processes, it creates friction and inefficiencies, aside from the frustration of it, right? 

So I know that in my experience, those kinds of challenges are fairly significant, because the minute that, as Kaleb was saying, the data scientists need the data - well, what kind of data is it, and not just what's inside of it, but where does it live? Because there can be all kinds of constraints, whether it's GDPR [General Data Protection Regulation] and, well, this is data from our European customer, but my data scientists are in North America, or vice versa, right? Well, how do they access that data? 

So you immediately get into challenges about, well, we need some sort of a secure sandbox where they can access the data. And then you get into even more nuance about, well, what's inside of that data, and should these particular people be able to see that particular data, and how are they even interacting with it in a way that it doesn't leave whatever you consider your sandbox? 

And to work through all of these challenges - you know, the comment you started with - there is now this new type of functional role evolving where someone has oversight of that. And I think that's a great point, because without that, it ends up being a hot potato that is extremely challenging to work through depending on the organization, because that's now a constant tug of war. Well, we just need to do this job. Well, hold on, you can't access this data. Well, now we can't do our job. Who can let us access the data? Well, we don't actually know, because we don't have a good process in place for putting it somewhere in a way that's accessible to the right people but secure, where we can actually have some sort of audit of that - no, you're not taking this out of the sandbox. And then you're getting into all kinds of other issues.

The same goes - and I love the comment you also made about the algorithmic perspective - because it's no longer just a third party library, you know, a JAR file or what have you, that you're adding. It's now an algorithm, which makes it even harder, because now you need to understand what the algorithm is doing, and if that algorithm is open source and is somehow manipulating things in an insecure way, well, how do you know it's not malicious? 

Like, you almost now need to have a fairly technical level of understanding of what these things actually are in order to be able to make decisions about “is this okay for my organization to use as we're building our models?” So, it requires a different kind of expertise than ever before.
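
The sandbox and access questions Gary walks through a moment earlier - who may see which data, from which region, with an audit trail - can be sketched very roughly as a policy check plus an audit log. The roles, regions, and dataset names here are invented for illustration; a real deployment would back this with an identity provider and centralized logging rather than an in-process dictionary.

```python
# Minimal sketch of a data-access gate for a research sandbox: allow a
# request only if role and region match the dataset policy, and log
# every decision.
import logging

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
audit = logging.getLogger("sandbox-audit")

DATASET_POLICY = {
    "eu_customer_telemetry": {"allowed_roles": {"data_scientist"}, "region": "EU"},
    "synthetic_benchmarks":  {"allowed_roles": {"data_scientist", "intern"}, "region": "ANY"},
}

def may_access(user: str, role: str, user_region: str, dataset: str) -> bool:
    policy = DATASET_POLICY.get(dataset)
    allowed = (
        policy is not None
        and role in policy["allowed_roles"]
        and policy["region"] in ("ANY", user_region)
    )
    audit.info("user=%s role=%s region=%s dataset=%s allowed=%s",
               user, role, user_region, dataset, allowed)
    return allowed

# Example: a North American data scientist asking for EU customer data is denied.
may_access("alice", "data_scientist", "NA", "eu_customer_telemetry")
```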

D Dehghanpisheh 25:24

And Gary, to that end, doesn't it also kind of require, at some level, almost a modification of your tooling, right? Because, you know, how you're scanning a model is different than how you're scanning other elements of source code, how you're deserializing - 

Gary Givental 25:38

Absolutely. 

D Dehghanpisheh 25:39

How you’re looking at those commands. Similarly, what Kaleb brought up in terms of, you know, the ability to see across the environment and understand the interconnected nature of these assets - how they're being used, how they're being shared. These aren't just requirements, but almost precursors and starters of the things that actually allow a model to do what it's going to do, and thus enable an AI application to behave in a certain way, right? 

Like, you have to be able to see it, you have to be able to secure it, and to that end, you kind of need some new tools. You need model scanning tools, visibility tools, auditability tools, tracing tools - all those things that complement your existing processes, I would imagine.
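
As one hedged example of what "scanning a model" can mean at the serialization layer D mentions, the sketch below walks the opcode stream of a pickled model file and flags opcodes that can import names or invoke callables when the file is loaded. Note that legitimate models also use some of these opcodes, so findings need review against an allowlist of expected imports rather than automatic rejection; this is illustrative, not a full scanner.

```python
# Minimal sketch: inspect a pickle-serialized model without loading it.
import pickletools

# Opcodes that can import modules or construct/call objects at load time.
SUSPICIOUS = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ", "NEWOBJ"}

def scan_pickle(path: str) -> list[str]:
    findings = []
    with open(path, "rb") as f:
        for opcode, arg, pos in pickletools.genops(f):
            if opcode.name in SUSPICIOUS:
                findings.append(f"{opcode.name} at byte {pos}: {arg!r}")
    return findings

# Example usage (path is a placeholder):
# findings = scan_pickle("model.pkl")  # review before ever calling pickle.load()
```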

Kaleb Walton 26:24

Know what's funny about that? Oh, sorry, go ahead Martin.

Martin Stanley, CISSP 26:26

No, Kaleb, I was just gonna say, I think it's also really important to narrow the problem down and focus on, you know, the particular thing that you're looking at. And that's why, you know, I keep going back to there's this machine learning pipeline, which gets at all these elements that we're discussing. But then there are these broader, you know, risks which we need to be worried about as well. 

But you know, from a baseline perspective, if we can say that we've gone the distance and secured the machine learning pipeline, and we understand, you know, what's happening with the data that’s being used - in our case for a lawful, authorized purpose, and in other cases, you know, under the rules in which you hold that data - I think that goes a long way to get you in the right position to start talking about some of those other kinds of risks. 

And you know, I like the idea of just figuring out how do you secure your infrastructure for this new technology? Sort of full stop there. And that's a huge contribution this community can make.

D Dehghanpisheh 27:25

So along those lines of needing new tools, right - and we've made comparisons to DevSecOps, and one of the big cultural things that was happening in DevSecOps was this notion of having to shift left, right, go upstream, start to secure that. 

How, how do you see the, well, I guess Kaleb, let me start with you and Gary, you're next. Is there a gap in the current mentality of shift left for DevSecOps for AI development? If so, you know, what is it from your perspective, if not, what do you think is needed and where do you start? And same question to you, Gary.

Kaleb Walton 28:02

So I mean, any life cycle has a beginning and an end. It has a left and a right. So this is just more life cycle. With DevSecOps you had the - we've been maturing this over the past 10 plus years. You got your Design, your Develop, your Build, your Test, your Deploy, your Manage, and you can shift left all along that. With AI and ML, you can still look at things in that same sort of a life cycle, but you have different people that you're shifting left to. 

So shifting left to developers is one thing. Shifting left to AI researchers and data science folks, that's different. You're, you're shifting left into a different world than you were with developers. 

Similar, but different enough, even down to basics - like one recent example, Jupyter Notebooks. That's different than a typical IDE for a software developer where they're working on their machine, you know, writing Python or Java code or whatever and executing it. It's a little execution environment where they're doing stuff there. You gotta shift left there where they're working, and even when they're working with the data. 

So it's still the same concept, just the execution of shifting left and the techniques you use and where the solutions need to go are different. There are just more of them and they're a little bit different.
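
A small, illustrative sketch of shifting left into the notebook world Kaleb mentions: scanning the code cells of an .ipynb file for hard-coded credentials before it is committed. The patterns are examples, not an exhaustive secret-detection rule set, and the script is an assumption rather than any team's actual tooling.

```python
# Minimal sketch: flag notebook code cells that look like they contain secrets.
import json
import re
import sys

PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                        # AWS access key id shape
    re.compile(r"(?i)(password|api_key|secret)\s*=\s*['\"]"),
]

def scan_notebook(path: str) -> list[str]:
    with open(path, encoding="utf-8") as f:
        nb = json.load(f)  # .ipynb files are plain JSON
    hits = []
    for i, cell in enumerate(nb.get("cells", [])):
        if cell.get("cell_type") != "code":
            continue
        source = "".join(cell.get("source", []))
        for pattern in PATTERNS:
            if pattern.search(source):
                hits.append(f"cell {i}: matches {pattern.pattern}")
    return hits

if __name__ == "__main__":
    findings = scan_notebook(sys.argv[1])
    print("\n".join(findings) or "no findings")
    sys.exit(1 if findings else 0)
```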

D Dehghanpisheh 29:22

Gary, what about you?

Gary Givental 29:22

Yeah, I think that point about the tooling - you know, just recently there were some vulnerabilities discovered in MLflow, which is one of the tools, right? Like, how do folks who are data scientists do their job, where they have access to some data, they can create an experiment with models, version those, tweak and do feature engineering, and then test the models and see if those even perform? 

So MLflow is one of those tooling frameworks, and when that has vulnerabilities, now the entire lifecycle and its artifacts - the models - are suspect. So that's definitely problem number one. And it feels a lot like the early days of DevSecOps. Like, we all kind of have this gut feeling that the right thing to do is to introspect more into, let's say, the third party libraries, because we're just using them at face value.

You know, everyone used Log4j. Why? Because it did a great job at performing the function that we needed for debugging, until all of a sudden it had this massive problem, right? And it feels like we're in those same very early days. Everyone is doing data science, everyone is creating models or using models off the shelf, whether they come from a reputable vendor like Hugging Face or an unreputable vendor - you know, somebody just creating some model that does something clever. And it's a little bit of that wild west of just a lot of experimentation and work. 

You know, the tools are still very immature. I mean, Jupyter Notebook is one that people are using quite a bit because it's easy to use for that purpose. But does it have all the plugins and all the safeguards that you would need, like IntelliJ has for, you know, GitHub and for static code scans and that kind of code-level analysis? Well, maybe not. All of these tools are still quite immature, so I think there's a lot more conversation that needs to happen, a lot more maturation, before these best-of-breed platforms for doing machine learning - for working with the data, for creating the models, for doing the training - are well integrated with all of the security dimensions of how you scan, control, validate, test, and apply some of those, you know, software-like lifecycle steps.
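
One modest control that helps when the tooling itself is suspect is to pin each model artifact to a hash at registration time and refuse to load it if the bytes have changed. The registry format and file paths below are assumptions for illustration, not a feature of MLflow or any specific platform.

```python
# Minimal sketch: record a model artifact's hash when it is registered,
# then verify it before handing the path to the real loader.
import hashlib
import json
from pathlib import Path

REGISTRY = Path("model_registry.json")

def _digest(path: str) -> str:
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def register(name: str, artifact_path: str) -> None:
    registry = json.loads(REGISTRY.read_text()) if REGISTRY.exists() else {}
    registry[name] = {"path": artifact_path, "sha256": _digest(artifact_path)}
    REGISTRY.write_text(json.dumps(registry, indent=2))

def verify_before_load(name: str) -> str:
    entry = json.loads(REGISTRY.read_text())[name]
    if _digest(entry["path"]) != entry["sha256"]:
        raise RuntimeError(f"artifact for '{name}' has changed since registration")
    return entry["path"]  # safe to hand to the actual loader
```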

D Dehghanpisheh 32:04

So now that we've covered the tools - you know, it's always about the people element that Kaleb talked about and the tools element that you're talking about, Gary - let's talk briefly about processes. And, you know, to keep things tight around here: from your experience, or from IBM's perspective, what's the one practice, the one process, that you would encourage people to start with if they want to implement MLSecOps?

Gary Givental 32:41

Well, I think for me, from my perspective - and I'm certainly not representing IBM in this opinion - it's this challenge of: I'm a data scientist. I want to do a good job with the models and doing my experimentation. Where's the data, and what can I use to actually do a good job? And once I've done that job, where do I deploy that model so that the model itself is also not accessible to, you know, bad threat actors, right? Like that challenge of: someone has the data and has control of it, but someone else needs to use it for their experimentation - be it a data scientist or some sort of research arm of the organization - and do they have the right access? 

Working through those frictions and making that much smoother - whether it's an organizational thing with a person who is in charge of that and can just make those decisions, or just having good conversations in your organization about setting up these kinds of secure sandboxes so that the interaction is as easy and frictionless as possible. I think that would be a great place to start.

D Dehghanpisheh 33:58

What about you, Kaleb? What's, what's your one piece of advice, or actually let me phrase it to you differently. What's the one process you hope you can get installed at FICO as it relates to MLSecOps, given that you're so new, you know, you've gotta have a vision of what you wanna do there?

Kaleb Walton 34:15

Well, I think it's getting the data scientists, AI engineers, developers, and honestly the DevSecOps folks - because I believe they're gonna be the ones adapting into MLSecOps, since they have all the prerequisite capabilities to build on - getting them all working together. 

Like Gary said, reducing their friction - getting the DevSecOps, aka MLSecOps, folks to feel the pain the data scientists feel, and vice versa. The data scientists need to know how all the things they're dreaming up and cooking up are then gonna be operationalized and used, and all the pain they're gonna feel. Spreading that around early and often, and commingling them in some ways - procedurally, I think that's important.

D Dehghanpisheh 35:09

I like that. The process is “spread the pain.” <laugh> And to close us out, to close us out, I'm going to turn it over to Martin.

Martin Stanley, CISSP 35:18

Wow. So you heard it here. I think both Gary and Kaleb did a very nice job describing the community's opportunity here to build out these teams and partnerships, either within organizations or beyond, to achieve these outcomes, which is, I think, where the opportunity is. 

Most of the people that are involved in MLSecOps in their organizations have already gone to great lengths to build trust and relationships within the organization, and so they can leverage them to build out these teams. 

And I think most importantly, as a result of, you know, getting everyone working together and, you know, making sure that we secure that machine learning pipeline, as I say over and over again, we can ensure that the first experience organizations have with this technology is a good one. And we all know how important it is that the first experience be a good one when it comes to, you know, these new technologies. So thank you so much both of you for being here today. It was very, very informative. 

Kaleb Walton 36:22

Thank you. It’s a pleasure.

Gary Givental 36:24

Yes, thank you for having us.

D Dehghanpisheh 36:25

Kaleb, congratulations on your new role, Gary, thank you for coming on. Martin, thank you for co-hosting. I can't wait to have all of you back in some form or fashion. 

And if you want to learn more for all those watching or listening or reading the show notes, visit ProtectAI.com to find out more about the tools that we have that help with some of these things. 

We'll have links in the show notes to tools from IBM, any research that Kaleb would like to share, and of course the Secure AI guidance and things that will come from CISA. Thank you very much to Martin, my co-host. Thank you to Gary and Kaleb, our guests, and we will see you on the MLSecOps Podcast next time. Thanks again.

[Closing] 


Additional tools and resources to check out:

Protect AI Radar

Protect AI’s ML Security-Focused Open Source Tools

LLM Guard - The Security Toolkit for LLM Interactions

Huntr - The World's First AI/Machine Learning Bug Bounty Platform

Thanks for listening! Find more episodes and transcripts at https://mlsecops.com/podcast.
