<img height="1" width="1" style="display:none;" alt="" src="https://px.ads.linkedin.com/collect/?pid=4373740&amp;fmt=gif">
MLSecOps-favicon PAI-favicon-120423 icon3

The MLSecOps Podcast Season 2 Finale

 

 

Audio-only version also available on Apple Podcasts, Spotify, iHeart Podcasts, and many more.

Episode Summary:

This compilation contains highlights from every episode of Season 2 of the MLSecOps Podcast. Thank you to everyone who has supported this show, including our listeners, hosts, and stellar expert guests.

Stay tuned for Season 3!

Highlights:

Cybersecurity of Tomorrow: Exploring the Future of Security and Governance for AI Systems - Martin Stanley, CISSP

D Dehghanpisheh
So, you're currently assigned at NIST to work on the Trustworthy AI Project and a part of that is the AI Risk Management Framework or AI RMF, right? Can you talk about how those two are nested together? 

Martin Stanley, CISSP
Absolutely. So as I mentioned, you know, running the R&D program, we had this long, you know, this long engagement with a number of partners and that included all the work that was going on at NIST.

When the [NIST] AI Risk Management Framework was released earlier this year, in one of those conversations that I was having with some counterparts at NIST, really the question of, how do we get the word out? How do we get stakeholders to start to engage, get us feedback on how it's working? How do we identify all the different components that we're going to need in order to make it work?

At CISA, what we're interested in from a strategic technology perspective is, first of all, how can we use strategic technology in our mission? So how can we use AI in our mission space? But we also think about how our stakeholders are going to leverage AI and how that may change the attack surface we have to help them protect.

And then the third area is, of course, how adversaries leverage new technologies to malicious intents. And as a result of this, and as a result of this consistent set of engagements that we have with all these partners, it's a real natural collaboration with NIST to help them to evangelize and move the AI Risk Management Framework out, and also provide real world feedback as to how that's going as well as early adopters and other folks are moving in and trying to leverage the different kinds of capabilities within the AI RMF. 

And there's a lot of that going on right now. I can't really talk too much specifically about particular entities that are using it, but there are a lot of federal agencies, there are a lot of large commercial companies that are taking the AI RMF and they're looking at adapting it into their risk management framework.

And I think before we get too far into talking about the specifics of the AI Risk Management Framework, we should really highlight that the AI Risk Management Framework, just like I think MLSecOps is as well, is focused on specifically managing the risks associated with AI. 

There's other mechanisms and other regimes in place for managing cybersecurity and privacy and the like, and they're all related in an enterprise risk management way.

From Risk to Responsibility: Violet Teaming in AI - Alexander Titus, PhD

D Dehghanpisheh
I'm trying to figure out how you would guardrail against potential dual use harm while still taking advantage of the need to kind of do maximum elements.

It feels like that asymmetry is almost impossible to manage.

Alexander Titus
Yeah. That is the debate in the biotech and biosecurity world. And that's where my research is really focused on. Because the concern is these large language models and other generative tools make it easier to create the scary version of a virus or something like that. But we want to be able to have tools that understand how to engineer viruses, something like a phage, while not engineering the things that you don't want them to have.

And so is the paradigm that was introduced in a really great Wired article in March called “Violet Teaming” that I spend a lot of my time focused and thinking on is how do you take AI systems that you're concerned about and actually use those AI systems to try to create ways to limit the downside in misuse or unintentional applications of those systems?

And it's not just a red team or blue team where it's purely from a technology standpoint, because you think about it from a blue teaming standpoint, the best way to ever prevent engineering of a virus is to never do it. But the societal missed opportunities that come with completely ignoring that whole application is hard.

And so how do we actually build in responsibility, security by design. You know, there's a whole bunch of different ways you could describe it, but how do you say let's train AI algorithms to not identify things that are associated with transmissibility, lethality while still identifying things that are better attacking at gram-negative or gram-positive bacteria or whatever that goal happens to be. 

Because there isn't a good answer to your question, D. And there are people on both sides that say, let's just not do it at all or let's not worry about it, because how are we ever going to tackle this problem?

Evaluating Real-World Adversarial ML Attack Risks and Effective Management - Drew Farris and Edward Raff, PhD

Edward Raff
I think that's where in the future we might see something like there was a SolarWinds sort of supply chain attack. It wasn't a machine learning supply chain attack, but I could see that being something in the future where especially with these large language models that are very expensive to train, like, okay, yeah, there's going to be some smaller subset of models that people are all pulling from and using.

And yeah, I could see that being a valid threat model, that if someone could pull off a white box attack there, they might do it because it's actually a supply chain attack to get to everyone downstream. 

D Dehghanpisheh
Which, that has a massive ROI to it, right? In that case, all of the engineering that you'd have to do, i.e. SolarWinds, you know, to get into that supply chain and then have that massive downstream blast radius. That's far more likely than trying to detect one endpoint if it's actually a valid input or not. 

Badar Ahmed
Right, like somebody let's say poisons Llama 2 today, that's just going to have a catastrophic effect downstream. 

Drew Farris
Yeah. Or you think about these code generation LLMs, right? Think about poisoning these code generation LLMs to generate code that has exploitable vulnerabilities. 

D Dehghanpisheh
Backdoor scripts.

Drew Farris
Yeah, right. 

Risk Management and Enhanced Security Practices for AI Systems - Omar Khawaja

Omar Khawaja
Yeah, you know, over the time, as I've spent more and more time with it, I've realized that in the world of AI, we have many of the same types of risks and concerns that we worried about in the deterministic world with standard static applications. It's just the words we use are totally different.

So supply chain is a really good example. When we think about supply chain security in the traditional spaces, we think of it as I'm getting code from someone else. So I'm buying, or I'm accessing maybe an open source library, an open source API, or I'm getting some kind of an application. I'm using it. Can I trust that application or not?

So Log4j, SolarWinds, those are examples of I'm getting some code from someone else, I'm putting it into my environment and it may have the unfortunate opportunity to cause harm. And while yes, we can take that and apply that to the world of ML in a very simple way, we can say I'm getting my models from somewhere else.

That's true. But the other part that's also just as much about supply chain – but even for me, this was like an “Aha!” over the last few weeks – is the raw material that makes up applications is code. The raw material that you use to build a model is data. And so supply chain security in AI, it's yes, you should care where you get the model and provenance and all that, but you also care about the data.

Where are you getting the data from internal and external sources that you're using to now train the model, which essentially is to build the model. The model is literally built based on the data that you feed into it, so that data becomes of paramount importance. 

Diana Kelley
Yeah, and interestingly that's something that, you know, as we have assets and inventories within organizations, we're used to inventorying a piece of software. Okay, maybe a model, but it's going a next step to data inventory and starting to track and audit the provenance of the data and who trained with the data. Right? 

Omar Khawaja
Yeah. You're exactly right. So, you know, Diana, what we think of as supply chain security in the world of AI, many of us likely have heard the term training data poisoning.

Training data poisoning is a supply chain security attack. In hindsight, it feels obvious, but I was hearing those terms and keeping it separate from supply chain until I finally, finally connected the two together. You were talking about lineage and being able to track the data. That becomes important. You know, in many ways it feels like the core concepts that we knew were important in our traditional world are ones that are coming up over and over again.

And we're saying this is important. So just like you mentioned, you know, having the lineage and having versioning and knowing the ownership of our applications has always been something that has been sort of the ultimate vision of asset management. And oftentimes those types of activities were relegated to the long tail of controls that we never got to because we were prioritizing patching and authentication and logging and detection and hunting and all of those things.

And asset management felt like, you know, it's a nice to have. We know the regulators and auditors would really be happy with that. And yeah, there's some use cases where our threat team, and others, and our vuln team is saying it would be nice to be able to connect the dots, but it almost felt like those were a nice to have.

And sometimes that was out of sort of sheer practical reality, which is: I only have this much time and these controls just don't fit within the time that I have. With AI, these controls around lineage and asset management and tracking and treating data as an asset, those no longer are nice to have – they become must haves. So for instance, if I want to know why my model all of a sudden started to misbehave, I need to know, what version of the data was it trained with?

And if I just say I've got this data pipeline coming from these sources, well that's not good enough? Well, how do I know what data came from those sources? Did it intentionally poison the training data set and cause the model to behave in ways that it wasn't supposed to? Was it unintentional? Was it a quality issue? Or maybe our suspicion is wrong, and the reason the model misbehaved had nothing to do with the new training data that it got.

But if we don't know what training data it got when that caused that misbehavior to happen, we don't have that lineage, that tracking, that provenance, we’re in trouble. 

Diana Kelley
Yeah, that's such a great point.

Secure AI Implementation and Governance - Nick James

Nick James
And having continuous improvement, continuous integration following the original DevOps model with CI/CD. I think starting to include, as Protect AI espouses, MLSecOps, and following that infinity CI/CD, so that we aren't hampering innovation and we aren't hampering new releases, but also including checks along the way.

I think that's the best way to go about it, because if you are inserting security reviews, impact assessments, model testing and the like, and I believe that Protect AI, the parent company, has a number of these features, as long as you're inserting it within in-line, if you look at it as an assembly line before the shiny new thing is released, I think that's the best way to do it.

Chris King
Yeah, fair enough. And we've seen a lot of use with some of our tools, like our open source tool ModelScan for just one tiny attack is that you can inject malicious code in a lot of models and it runs the second the model is loaded for inference or anything. So having tools to assess those, sign-off that they're okay before they exit and go into another system, those are, like you said, integrating that DevOps mindset is definitely helpful.

Finding a Balance: LLMs, Innovation, and Security - Sandy Dunn

Daryan Dehghanpisheh 
How do you think about an AI/ML Bill of Materials in terms of understanding the concepts that you just articulated? How vital do you think that is or important that is to begin constructing the types of things you're talking about?

Sandy Dunn
I think it's, you know, nutrition labels, however we want to call them, but I think those are critical resources and tools that us as organizational leaders, being able to look at those and understanding, hey, what does this system have and what do I need to know about? 

ReDoS Vulnerability Reports: Security Relevance vs. Noisy Nuisance - William Woodruff

Dan McInerney
So how do you see, where do you see these ReDoS vulnerabilities? So to give you a background, just real quick on the blog post we're talking about - I'm sure we'll have a link in the editing - it's talking about how ReDoS vulnerabilities are not…that developers should push back against all of these really low quality CVSS and CVE submissions that people will send to them. 

And a ReDoS is a “regular expression denial of service.” It's taking advantage of the regex (regular expression) engine because the [regex engine] goes both forward and backwards. So you can kind of trick it and put it in this permanent loop that ends up exhausting resources of CPU and that sort of thing. 

And so, just to get started, I'm kind of curious about what was the thing, I know you had a trigger for this blog post. I know there was - somebody sent you something and you're like, that's it. I have to just go say, tell the entire world, that this is not how this should be.

William Woodruff
Yeah, there definitely was a trigger. The trigger, at least in my case, was I have a lot of other friends in the open source community. I often sort of get tagged on issues just to pop in and have sort of opinions on, on what's going on. 

And for a while we were seeing like these murmurs in the JavaScript community of these reports where people would file a regular expression denial of service report and, and receive either CVE or some other kind of identifier. And at least in the Java ecosystem, these made a little bit of sense because they would be like, it would be a service facing thing on, on a website, for example, where you could hang the website. 

But two or three years ago, we also began to see them in the Python ecosystem, where oftentimes the domain doesn't make any sense for user reachable input or user controllable input.

And so the one that really set me off was a friend of mine has a library and he received a ReDoS report that was not, not only in completely unreachable code but like seemingly had never, ever been used by anyone. 

So like, not only was the code unreachable in the actual library that was reported to, but also nobody had ever called this API function anywhere ever. He did like an entire survey of GitHub's- used GitHub's code search - couldn't find a single use of it. And so therefore it was like sort of doubly useless. 

Not only is it this, this sort of low value vulnerability class, but also there was absolutely no evidence that it was ever even remotely triggerable.

Dan McInerney
Hmm. So we end up seeing, so we run a bug bounty program too on huntr.com, and I'm one of the triagers there. And so I do end up seeing a lot of these reports. And it feels like, and tell me if this kind of jives with your experience, it feels like a lot of people are using static analysis tools on these bug bounty programs like Snyk, and Snyk is a fantastic SAST (static application security testing).

William Woodruff
Yeah, definitely.

Dan McInerney
Like I don't wanna denigrate it at all, but it does feel like a lot of people are just using Snyk and then getting a finding and reporting like copy/paste. [Something like] “Snyk found an XSS (cross site scripting) in this thing that's not even remotely close to an XSS. 

Is that like kind of what you, what you're seeing as well - is that - where do you think the source of these ReDoS vulnerabilities are coming from? Because I don't think BurpSuite really does that that often.

William Woodruff
No, I think, I think it's a trifecta things. One thing that's definitely detecting these kind of like pathological regular expressions is very easy, statically. It is, you know, you can just sort of search through the input and find the at fault regular expression. So that's one part of it. 

I think another part is that, as we've seen vulnerability programs classify them as a genuine attack surface or a genuine vulnerability category, the people who are sort of scraping the bottom of the barrel for vulnerability sources, latch onto that and say, oh, here's another thing that I can just sort of shove into my automated report.

Securing AI: The Role of People, Processes & Tools in MLSecOps - Gary Givental, Kaleb Walton

Kaleb Walton
So I mean, any life cycle has a beginning and an end. It has a left and a right. So this is just more life cycle. With DevSecOps you had the - we've been maturing this over the past 10 plus years. You got your Design, your Develop, your Build, your Test, your Deploy, your Manage, and you can shift left all along that. With AI and ML, you can still look at things in that same sort of a life cycle, but you have different people that you're shifting left to. 

So shifting left to developers is one thing. Shifting left to AI researchers and data science folks, that's different. You're, you're shifting left into a different world than you were with developers. 

Similar, but different enough, even down to basic, like one recent example, Jupyter Notebooks. That's different than a typical IDE for a software developer where they're working on their machine, you know, writing Python or Java code or whatever and executing it. It's a little execution environment where they're doing stuff there. You gotta shift left there where they're working and even when they're working with the data. 

So it's still the same concept, just the execution of shifting left and the techniques you use and just where the solutions need to go. They're just more of them and they're a little bit different.

D Dehghanpisheh

Gary, what about you?

Gary Givental
Yeah. I think that point about the tooling you know, just recently there were some vulnerabilities discovered in MLflow, which is one of the tools, right? Like, how do folks who are data scientists do their job where they have access to some data that they can create an experiment with models and version those and, and, and tweak and do feature engineering and now test the models and see if those even perform. 

So like MLflow is one of those tooling frameworks, and when that has vulnerabilities now the entire the entire lifecycle and its artifacts being the model are suspect. So, that's definitely like problem number one. And it feels a lot like the early days of DevSecOps. Like we all kind of have this gut feeling that the right thing to do is to introspect more into, let's say the third party libraries because we're just using them at face value.

You know, everyone used Log4j. Why? Because It does a great job at performing the function that we needed for debugging until all of a sudden it had this massive problem, right? And it feels like we're in that same very early days. Like everyone is doing data science, everyone is creating models or using models off the shelf whether they come from a reputable vendor like hugging face or Unreputable vendor from, you know, somebody just creating some model that does something clever. And it's like a little bit of that wild west.

AI Threat Research: Spotlight on the Huntr Community - Dan McInerney, Marcello Salvati, Madison Vorbrich

Charlie McCarthy
So organizational risk that we know exists; there are a couple different pools it sounds like. There is the use of some AI-powered technologies or applications that employees might be using - has its own set of risks. And then also if you are actually building and deploying AI, you have engineering teams, ML engineers that are creating systems - that's kind of another category. So for the, the organizations that are building and deploying, many of them are using open source machine learning assets, and these assets can contain vulnerabilities, right. And kind of, and “enter huntr” here. So if we're transitioning over to that group, what it's there for and how it participates in this ecosystem, who are the participants within the huntr community and how do they contribute to identifying vulnerabilities or bugs in AI/ML?

Madison Vorbrich
So I can speak to part of that. So I feel like most of our hunters are either bug bounty, hunters, hackers themselves, security researchers, pen testers. We've also seen like software developers and engineers come in, hacking enthusiasts, some students that want to learn. As far as how they can contribute to identifying vulnerabilities, I'll pass that over to you two to provide more detail.

Marcello Salvati
I think, I mean, in terms of like, actually, I think the biggest part about this is the scale. Like, if you're able to like, get 10,000 people to have eyeballs on a specific project that's open source, like you're guaranteed to like speed up the whole security life cycle of your product almost immediately. Like it's just, it's just a scale that really helps. Like if you never had like any sort of, or have any security background whatsoever, huntr is actually really valuable in terms like, in terms of finding security vulnerabilities in your open source project because - 

Dan McInerney
Yeah. And it's not like, you know, we're charging for these open source projects.

Marcello Salvati
Yeah, exactly.

Dan McInerney
We add these open source projects because they're open source. And then we just help the maintainers find the security issues.

Evaluating RAG and the Future of LLM Security: Insights with LlamaIndex - Simon Suo

Neal Swaelens
I think, you know, we obviously see quite a few large corporates entering the LLM space, and you've probably seen also the report from Andreessen [Horowitz] on the state of AI at enterprise where you see the massive investment flowing into LLM building, like, built right?

What would you say are kind of the primary security considerations really when you think about developing applications with large language models?

Simon Suo
Yeah, I think the way I think about a, the security concern is really about like the input and output of the large language model. 

Like specifically what input it gets, right? Whether that's only the user data that's currently interacting with the large language model itself, or like you are able to surface organizational information as well, who, which are meant for only a subset of users to see, right? And how is the model, like how are we handling that input before that passed into the large language model, right? So I think there's a lot of things people have like thought about security concern, coming from like prompt injection attack’s ability to reveal like system prompts, ability to reveal user information, organizational data. I think a lot of those make the enterprises super, super scared. 

And then on the other side, right, like on the output or action intake is pretty top of mind as well. Like a lot of these applications are being built are co-pilot or assistant applications which have the ability to kind of do something in the real world as well, right? Not just limited to like sending output back, right? A lot of these are being hooked up to like CRMs or like other kind of APIs, so that can trigger some change in the external environment, right? So since now I'm such a stochastic system making sure that the output doesn't have a bad consequence, it's pretty top of mind as well, right?

Neal Swaelens
Yeah, for sure. Yeah, we also think about it in a similar way. Obviously input and output evaluation being the cornerstone also of LLM Guard and in our view LLM security, but especially if you kinda look at, you know, expanding capabilities i.e. connecting LLMs to, you know, downstream systems, that expanded capability set will also lead to expanded blast radius in case of a breach. So, definitely agree.

Oleksandr Yaremchuk
Yeah. I'm also curious; so we, when working on this guest post for LlamaIndex and integration with LLM Guard, we experimented, like, can you do like prompt injection in like when you ingest the data? For example, like we took three resumes of different people and in one resume, like in the document we actually put like prompt injection which was, like, white color text. And in that prompt injection, we kind of asked the model to promote this [person’s resume], although the person had like the least experience out of all three candidates, and the model actually did it. It chose that person because we kind of asked like, yeah, this person needs to pay like money. They don't have, like, they're really struggling. 

And so yeah, like I would like to understand kind of your take and maybe some stories that you hear from the community about like what are the specific things like with AI security, but like related to RAG?

Simon Suo
Yeah, that's a good question. Like by the definition of RAG, right, you're giving the large language model access to a lot more data and sometimes you don't necessarily control exactly what goes into that. Actually it's user uploaded, right? I think the example you gave is super great. I don't know if necessarily I want to use LLM to screen resumes quite yet. We actually played around with that and felt that like the, the potential for bias is quite high, so <laugh>. But yeah, I think the general example holds right, like a lot of time like these five applications really relying on some semantic search or hyper search to be able to surface contextual information. 

So I think that part maybe is a little bit harder to game by these prompt injection attacks, but I can totally see that even after like the relevant chunks are kind of retrieved like sort of embedded malicious things can be passed into LLM to alternative behavior, right? And like you said, like when you attach more systems, you have a higher blast radius.

Another example I can think of is just like when you use these contextual information to decide an action to take, it can actually cause even more harm in the downstream systems, right? So I think right now, like from people who are currently building these applications, the biggest fear is not necessarily quite these kind of harmful prompt injection attacks yet; it's more about just like data leakage, I think, right? It's like it's not maliciously injecting some instruction to do something, but by accident, like the access control is off and then like I don't know, like this employee gets to see the data that a CEO has put in there and that they're not supposed to see <laugh>. 

So I think those kind of fear and concerns are much more top of mind, at least from the conversations I've had. And a lot of that is really just about like great access control configurations as comes to the data, like actually propagating the, the ACLs coming from the data sources into the implementation as relates to like vector database or whatever search engine you have, so that like those information can be safeguarded and have the right access control.

Practical Foundations for Securing AI - Ron F. Del Rosario

Ron F. Del Rosario
So, I think what's happening in the enterprise nowadays is when development teams, they have an existing use case for let's say integrating an ML model in an existing product, or leveraging large language models to be specific, it's usually down, you can classify into two buckets. 

One is it's an internally developed machine learning model, meaning your development team is responsible from zero to pushing it into production. 

And the second most common use case, which is prevalent right now is like, we're just gonna, we’re just going to consume a foundational model offered by a cloud service provider, right? So, like, ChatGPT.

Daryan Dehghanpisheh
Point to GPT or Azure GPT, or AWS Bedrock, or Gemini, whatever it is.

Ron F. Del Rosario
Exactly, so two different - same use case for leveraging ML model in their product, but two different ways to consume machine learning models. And they have like a separate risk profile, in my opinion, right?

The first, the first scenario, it will take more time from you as a product security, as an AI/ML security lead to guide your development teams in the entire life cycle of creating the machine learning model from scratch, right? Because you tend to, you tend to focus: okay, you need to try to understand a problem and how they plan to solve it with machine learning, and you try to understand how they gather their data sets for training, right? Who has access to the training dataset? Where is this training data set coming from?

Daryan Dehghanpisheh
Open sourced? Did I just pull it down from the open source world? Did I just bring it in?

Ron F. Del Rosario
Exactly. Internal trusted sources - meaning from our systems, our infrastructure - is it, are we, do we have the correct proper permissions to use this if it's coming from customer data? 

Daryan Dehghanpisheh
Right. Controls. 

Ron F. Del Rosario
Versus training data that's being pulled externally. This is high risk. I've seen some scenarios in the past with other companies where developers started using publicly available information, either from GitHub or any well-known repo out there, and use that to train their ML models, right? With no, with no vetting of some sort, right? 

So, what happened is any user with malicious intent can modify that third, that training data originating from a third party source. It's coming into your system, and your developers just rely on the raw output of that of that processing, right? Without vetting. So that's high risk. 

Practical Offensive and Adversarial ML for Red Teams - Adrian Wood

Marcello Salvati
Yeah, so like, can you dive into a bit like each category in your OffSec ML Playbook and tool in Wiki here? Yeah. Just to give us an idea.

Dan McInerney
Yeah.

Adrian Wood
Yeah. The adversarial category right now is primarily looking at things to do with LLMs. But I'm adding more things all the time there. And what we're talking about there is attacks against ML systems and that category is broken down basically by the level of access that you have to have to perform that attack, whether that's API access or, you know, like direct model access and so on. And I've noticed that people are starting to use this in threat modeling too, this category, because the - a team will come to them and say, hey, we want to stick embeddings in this location for like LLM RAG systems; is that okay? And people will say like, sure. Because they don't realize that like embeddings can be fully reversed, basically back to text, but with a project from a guy called Jack Morris, I believe, it's called like vec2text, right? Like, so you can use the, the category to basically either attack something or figure out how someone could attack it for those purposes.

The next category is Offensive ML, which is where we're talking about using ML or leveraging ML to attack something else. So that could be, as an example, there's a great project in there, one of my favorites by Biagio Montaruli which is a phishing webpage generator. So it deploys a local phishing webpage detection model and then generates HTML attributes into your phishing webpage until the malware, sorry, until the phishing confidence goes close to zero, and then you use that version. And I'm getting feedback from people all over the world that are using that project that Biagio made and saying like, it works, it's good. I use it myself. It works, it's good. Same thing goes for, say like droppers, you know, using a dropper to - an ML enabled dropper - to make decisions about whether to drop, right? Because you don't wanna drop into a sandbox.

Supply chain attacks, that's got a lot of stuff in there from both of you, actually from Protect AI.

Dan McInerney
Yeah, let's go!

Marcello Salvati
Pat ourselves on the back there. <Laughs>

Adrian Wood
There you go, there you go. So using the ML supply chain, you mentioned things before, like H2O Flow, MLflow, all those things are in there, but also ways of using the models to attack ML pipelines and more. As well as data based attacks and so on.

MLSecOps Culture: Considerations for AI Development and Security Teams - Chris Van Pelt

Diana Kelley
Awesome. We feel that. Um, so you know, Chris, as the founder, you're at a really unique intersection point that I'm fascinated with, which is, as a CISO you obviously think about security, building security and the security program, but you're also the co-founder of an MLOps platform for AI and ML developers. So what are your points, why do you feel it's really critical that we create MLSecOps, that we build security into the ML lifecycle?

Chris Van Pelt
Well, it's really aligned with our original vision for the product, right? Like, you can't have security if you don't know what the final asset is like in this, in this new modeling world. Ultimately you've got a file full of weights and biases that you're gonna deploy somewhere. And the default mode is like some engineers go, they do a bunch of things and then they produce this big file of this model essentially of, of whatever problem you're working on, and then you run it. But the only way to have any, you know, form of security is to understand how that asset was built. So to really, you know, have in essence like a manifest of, okay, this is the data it was trained on, this is the code that was used to train it, these are the hyper parameters that were used for this specific model. So it was natural for us to, you know, think about security or put that as a core value prop of the product. 

Exploring Generative AI Risk Assessment and Regulatory Compliance - David Rosenthal

David Rosenthal
Yeah. But if they source providers from other areas, they will also be subject, because the AI Act doesn't only - it doesn't apply just to the people who sell in the EU. And you mentioned before Switzerland - we in Switzerland, we are not part of the EU, and the AI Act doesn't - it's not Swiss law. Nevertheless, all our clients basically say that if we are subject to the AI Act, we want to comply with it. And the AI Act basically says that if you provide a product on the EU market, then you are subject to it, full stop. And if you are using it outside the EU, so Swiss company or a US company uses a product, AI, and the output of it finds its way - intentionally of course - to the EU, then you are also subject. So if you have an AI tool at a US company that creates customer letters, and you send that out to customers in Europe, then you are subject to the AI Act if you care or not. That's another question. But the bigger companies, they will care.

Charlie McCarthy
Yeah.

Alex Bush
That's great.

Charlie McCarthy
And to dive just a little bit deeper, kind of jumping back to a really great question that Alex posed about these key compliance requirements for high risk AI systems and you know, the consequences, can you foresee some other challenges that organizations might face in complying with not just the EU AI Act, but AI regulations on a larger field? You know, you've mentioned cost, and you've mentioned possibly some issues with innovation, slowing things down there, but any other challenges that you could foresee within an org - even from the board level down - when trying to kind of create their plan for how they're going to attack and comply with some of these upcoming regulations?

David Rosenthal
There are a number of issues at different levels. One, if you talk about the board level, the main issue we see there is that the board or management doesn't really understand the risks we're talking about. And then you often hear, okay, we've done a ChatGPT workshop with the management or with with the board, and they know now how to do the prompts, et cetera, but that's not the kind of knowledge we're talking about. They in, at least here in Europe and regulated companies even more, they need to understand at these levels what is actually really the risk behind it. The security risks, the whole - the issue of how bias other types of, of risks, drift risks in the models, et cetera, all these kind of things, they need to understand that. But there are not many people who tell them how this works.

So we see actually, and it's a bit surprising, that there is not so much training at this level where board level language is used to show the people and, and get a - bridge actually the gap between the techies, so to speak, and those who have to then say, okay, that's something we want to do.

[Closing] 


Additional tools and resources to check out:

Protect AI Radar: End-to-End AI Risk Management

Protect AI’s ML Security-Focused Open Source Tools

LLM Guard - The Security Toolkit for LLM Interactions

Huntr - The World's First AI/Machine Learning Bug Bounty Platform

Thanks for listening! Find more episodes and transcripts at https://mlsecops.com/podcast.

SUBSCRIBE TO THE MLSECOPS PODCAST