
Unpacking the Cloud Security Alliance AI Controls Matrix

Episode Summary:

In this episode of the MLSecOps Podcast, we sit down with three expert contributors from the Cloud Security Alliance’s AI Controls Matrix working group. They reveal how this newly released framework addresses emerging AI threats—like model poisoning and adversarial manipulation—through robust technical controls, detailed implementation guidelines, and clear auditing strategies. Tune in now!

 

Transcript:

[Intro]

Charlie McCarthy (00:08):

Hello everyone, and welcome back to the MLSecOps Podcast. My name is Charlie McCarthy. I'm one of your MLSecOps Community admins, and occasionally a host on this show where we dive into some of the latest in AI security, governance, and a whole lot more. 

So today we have a really fascinating discussion lined up regarding the Cloud Security Alliance AI Controls Matrix (CSA AICM), which is an initiative from CSA that's essentially a framework of control objectives to support orgs in their secure and responsible development, management, and use of AI technologies, including GenAI. So if you're working in AI or cybersecurity, this is definitely something you're going to want to keep on your radar. And joining us today, we actually have three of the working group members who have been deeply involved in shaping this framework. Let's meet them quickly now.

Faisal Khan (00:59):

Hi, I'm Faisal Khan. I'm a senior software engineer at Protect AI, where I work on model security. I also serve as a co-chair of the AI Controls Matrix working group for the Cloud Security Alliance.

Sam Washko (01:13):

Hi, I'm Sam Washko. I'm a senior software engineer for Protect AI, and I also lead one of the task groups for the CSA AICM. We wrote some of the control specifications, and now we're working on implementation guidelines.

Marina Bregkou (01:30):

Hello, my name is Marina Bregkou. I am a principal researcher and associate VP with CSA. I'm managing the AI Controls Matrix initiative, as well as coordinating and contributing to all of its five parallel tasks, which we will mention a little bit later.

Charlie McCarthy (01:50):

Okay, everybody, thank you again for being here. This is a treat. Why don't we start with some foundational context for our listeners, maybe with you, Marina. Can you give the audience an overview of the AI Controls Matrix: what it is, why it was created, how long the group has been working on the initiative, and maybe some of the expert personas that are part of the working group?

Marina Bregkou (02:15):

So the AI Controls Matrix is what you said a little bit earlier, Charlie: a control objectives framework designed to help organizations securely develop, implement, and use AI technologies. It uses the CSA Cloud Controls Matrix as a foundation, but extends it to address AI-specific risks. We've structured the AICM around 18 security domains, with a total of 242 controls covering everything from model security to governance and compliance.

This initiative began with a focus on defining AI-specific threats and ensuring appropriate mitigation strategies, and it was created precisely to address the growing need for trust in AI systems. As GenAI becomes more pervasive, stakeholders from policy makers to end users are demanding accountability, and that's what the AICM provides: measurable controls to mitigate risks like data poisoning, model theft, and so on. We launched this project in the fourth quarter of 2023, and we've been working on it ever since. Contributors include industry experts across cybersecurity, AI/ML security, governance, risk, auditing, and standardization bodies, and it's a large group.

Charlie McCarthy (03:51):

Excellent. Turning it over to you, Faisal and Sam. Can you talk to us more about what problem or problems this initiative aims to solve, or I guess, you know, why people should care about these things that the initiative is addressing? And maybe from your perspectives, what makes this framework significant for AI developers and security teams specifically?

Faisal Khan (04:14):

Yeah, so I think you have your traditional security requirements for an organization. And as organizations rapidly adopt AI, they are bringing more complexity into their systems. They are now introducing data pipelines, they're introducing models, and they are relying on external models to build, improve, or fine-tune their own models. All of these things introduce additional attack vectors that need specific security controls. That's where the need for the AI Controls Matrix came in.

CSA already had a history of providing cloud-based controls that organizations can implement. And within the last few years, because of AI, the threats it introduces, and the new components it brings, the controls matrix needed updating. So all of this effort is focused on providing those controls so that organizations and developers can see how to securely deploy AI in their organization.

Sam Washko (05:33):

I think it's also very useful in taking the broad ideas of the new threat vectors that AI is introducing and distilling them into actionable controls that organizations can actually implement. There have been a lot of publications, like the OWASP Top 10 for LLMs or MITRE ATLAS, where they lay out the different challenges that AI brings, but they don't spell out how you can protect yourself against them. This gives you specific line items addressing those different threat vectors in actionable ways that your organization can implement and feel secure about.

Charlie McCarthy (06:20):

You kind of read my mind; the very next question was going to be, why do we need another framework? Because there are so many frameworks and standards out there that are starting to address AI security. You mentioned the OWASP Top 10 for LLMs and GenAI, and actually they're launching an agentic AI security initiative, which is pretty cool. You mentioned MITRE ATLAS; there's also the NIST AI Risk Management Framework. But it sounds like, correct me if I'm wrong, the key differentiator with this controls matrix is the actionable piece, that's kind of the keyword. Actual actionable steps laid out for organizations to take. Is that...

Sam Washko (07:01):

Yes, exactly.

Charlie McCarthy (07:02):

Fair to say?

Faisal Khan (07:04):

I think, I mean, you have highlighted some of the great frameworks that already exist, and they all serve specific purposes. The AI Controls Matrix offers something that is actually complementary to what already exists. For example, the NIST AI Risk Management Framework is more outcome-based. It doesn't list the specific actions an organization needs to perform. It will tell you, for example, that data needs to be secured or encrypted at rest, but it might not have the specific controls an organization should implement to achieve that outcome. So in that regard, the AI Controls Matrix complements it.

And NIST is also working on providing additional frameworks. So there is clearly work that needs to be done, because the pace of development in AI is so rapid that we are still catching up in terms of security controls. The AI Controls Matrix is a great effort to get ahead of that: not only to complement those frameworks, but also to provide security guidelines sooner, rather than waiting for more frameworks to get updated.

Sam Washko (08:32):

And for the existing frameworks that are out there, we do have mappings from the AICM to a couple of the different frameworks and pieces of legislation, like the EU AI Act, and to international standards, like ISO.

Faisal Khan (08:54):

So yeah, if an organization implements the controls in the AI Controls Matrix, they can then also be fulfilling the compliance requirements that might be needed by another framework. So it offers a kind of bridge between outcome-based frameworks and more control-specific frameworks.

Charlie McCarthy (09:20):

Got it.

Marina Bregkou (09:20):

I'd like to add here that what the AI Controls Matrix does is provide you with a list of controls that mitigate different AI threats, along with implementation guidelines, some recommendations on how to implement them and what to do. And on the other side, for auditors, it provides steps on how to audit the actors that are involved in AI systems as well. So it is a holistic approach, I would say.

Charlie McCarthy (09:57):

Yeah, very comprehensive. And one of the things that I like about it: it sounds like the working group comprises a pretty diverse group of professionals with various backgrounds and expertise. Can we talk a little bit more about how it all came together? Marina, could you maybe walk the audience through how the working group was formed, what the collaborative process looks like, and how all of the working group members came to be involved?

Marina Bregkou (10:27):

So all CSA working groups, all CSA initiatives, are usually open and consensus driven. That was also the case for this working group, which is called the AI Controls Framework Working Group and which has developed the AI Controls Matrix. We bring together global experts from diverse sectors like cloud security, AI research, tech companies, auditors, and policy makers, and all of them collaborate through in-person meetings whenever possible, because they are not just from diverse sectors but also from different countries around the world. At the same time, we also have weekly calls and peer reviews for the work being developed. So that is how it usually works.

Charlie McCarthy (11:19):

Awesome. Yeah, that's another really good call out. It's a global initiative. So you've got inputs from various geos and, you know, people thinking about policies that are maybe being formed or shaped within their own geographic location, and they get to bring that knowledge with them as well. 

Any key discussions or maybe debates you can talk about throughout the process? I'm envisioning that when you get a larger group of people together with diverse perspectives, or you're trying to understand a problem in order to help other people understand how to solve it, there occasionally can be, friction is the wrong word, but disagreements about how to approach certain things. Were there many of those types of disagreements, or how did the group work together to come to consensus if there was debate?

Marina Bregkou (12:14):

Yeah, when you have a lot of people working on the same topic, there is always the possibility, and it happens, that the scope gets expanded. You lose focus, because everyone has a different perspective and brings their own expertise and experience, and it is very difficult to always stay concentrated and remember what the focus is. So that is one main point that can come up.

Staying in your lane and remembering what you are trying to achieve, because of course you'd like to work on this, you'd like to bring in something else that another organization is doing, something that you just encountered today and that makes perfect sense to develop to address tomorrow's problems. But yeah, you need to stay focused and remember what you are trying to achieve, and also be able to communicate it to the other stakeholders, to the other people, and make sure that everybody gets along and communicates in a nice and effective way.

Sam Washko (13:28):

Yeah, I think we had some challenges with having people with very deep subject matter knowledge and being able to distill that into actionable controls. You know, we had one person write up five pages for a single control, and that's more than someone can look at and understand. So trying to take a lot of valuable knowledge from people in all of these various domains and consolidate it is a big challenge.

Faisal Khan (14:08):

And if I could add, the field itself is in flux. Things change, and new things keep coming in. For example, a few months ago agentic AI was still just being discussed, but now it's in full swing. So adding those pieces, defining those terms, agreeing on those definitions so that people can understand what we are actually talking about.

And then providing very concrete control language that says exactly what needs to be implemented, and later providing implementation guidelines: okay, these are the steps you can take to secure this. Keeping in mind that for some of these things we are still in the early stages in terms of the risks we might encounter. So, as Marina said, working together and defining these things for the challenges of the future, that's where most of the discussion happens.

Sam Washko (15:13):

Yeah, defining terminology has definitely been a big challenge: having an agreed-upon taxonomy of the different threats, the different parts of the AI system, and the different actors in it, like model provider, application provider, orchestrated service provider, AI customer, and what the scope of all of those actors in the pipeline is. That's been a big challenge.

Faisal Khan (15:43):

And I think for me and Sam, since we have more of an engineering and development background, it's a little different to step out of the weeds and think about things at a higher level, in more abstract and definitional terms. So I think we have really enjoyed the last few months working with these groups. And also, as you said, there are very diverse backgrounds and very different perspectives, so you learn a lot from what other people are working on and thinking about in terms of AI security. We can take the feedback and learning from that to improve our own work as well.

Charlie McCarthy (16:23):

So to your point that the field of AI is evolving at breakneck speed: agentic AI is now a conversation that people are having, and even within the last week, the new, I don't want to say hype term, but the term that's getting attention is Model Context Protocol (MCP) and its security implications. In the future, frameworks are going to have to account for all of these new developments. Is there a version of the AICM available right now? What's its status? Can the public see it? And will there be future iterations? Is this an ongoing working group where y'all will continue to meet and update it? How does that work?

Marina Bregkou (17:07):

Yeah, it'll be ongoing. Right now we have come out of public preview; it ended in February. Because we received a lot of feedback, which we're very grateful for, we need to comb through all those comments one by one, see whether they are applicable, and make sure to address them, and then come out with a final version of the AI Controls Matrix, which will be published in June. It will come together with the implementation guidelines, maybe a little bit later for those, and the auditing guidelines, as well as the different mappings that Sam mentioned earlier to different standards and regulations, in order to cover compliance with ISO 42001, NIST 600-1, the EU AI Act, and so on.

And all this work will continue afterwards. We will refine this first version and enrich it with new, upcoming threats, because it's all still evolving, and who knows where it'll go and when it'll arrive. So we'll refine the work that we've already done, which will be published in June, and also progress it and incorporate new things.

Charlie McCarthy (18:37):

Excellent. Another question for the three of you. In the same vein, what were the biggest challenges personally, that you encountered while working on the matrix? Were there any areas that you, each of you found personally difficult to define or any logistical pieces? Just, can you speak to some of your personal experience about the process?

Marina Bregkou (18:58):

If I have to start, I would say that it is a lot of work, and very time sensitive. And as I said earlier, it's not just one, let's say, work stream. We have five parallel tasks that are all going on together, and if you make one change to the AI Controls Matrix, it has to be applied to the other four and updated there. So we are having a lot of working group calls. I think seven.

Faisal will get sick of working with me by the end of June, I suppose. I hope not sooner, at least. But the other side of it is that it is really interesting, and you learn so much every day from others, from yourself, and from this field. So it's also rewarding.

Charlie McCarthy (20:00):

Absolutely.

Sam Washko (20:01):

Yeah, I think the fact that it's time sensitive is also hard, because we want to give all of these discussions the weight, time, and depth they deserve. But we're also trying to finish, to have a document to put out while this is all still relevant, and that's been difficult. And because everyone's working on a volunteer basis, they have a limited amount of time and energy they can devote to this. So it's trying to get through discussions while we have the various experts on the calls, balancing being able to drill into things with actually getting things done.

Faisal Khan (20:51):

And if I could add, one of the challenges that I think I faced is that there are, I think, 250 controls in the controls matrix, and some of them are actually carried over from the existing Cloud Controls Matrix, but now we need to update them with AI-specific language. If there's a control, let's say input validation, that already exists in the Cloud Controls Matrix, does it also apply to AI as it is, or do we need to update the language? So it's about going in and understanding the context around the existing ones.

There are some areas where we have expertise, but there are some areas we didn't know that well. So we tried to get some familiarity with them, going back to the original Cloud Controls Matrix and seeing if there are things we can read and import into this one.

Faisal Khan (21:50):

So that was a little bit of a challenge, because a lot of the traditional security controls still apply to AI, but sometimes you just need to update the language to be more AI-focused, so that people can go back to the existing controls and see whether they also take care of these new threats that are emerging because of the use of AI. I think that's still a bit of a challenge even now, but it's something we're working through via discussion and just reading more, learning and improving ourselves.

Sam Washko (22:29):

Yeah. There's often the challenge of: technically, in principle, this fits under an existing control, but is it worth adding a new control to give visibility that this is a new thing you need to be doing with AI? Or does it technically work under the old one?

Charlie McCarthy (22:50):

Right. Yeah. Thank you all for sharing that. There are so many publications rolling out on a pretty consistent basis that as a consumer of these things you start to get a sense of, okay, there's a new publication, there's a new framework. And I think people tend to forget, or just not realize, how much work goes on in the back end, the conversations, how long the process takes, and the real thought that's put into these things. So it's nice to pull back the curtain, talk to you three about what goes on behind the scenes, and really see what a massive effort this whole thing has been. I hope you're all very proud. It's pretty amazing.

Let's talk a little bit about the meat of it, the contents of the AICM, if we can. Are there particular controls within the framework that you'd like to highlight for the audience, maybe specifically related to security? I'm just going to throw a few things out here: data integrity, adversarial robustness, model explainability. Is any of that covered, or are there more important things you would want to draw folks' attention to right now?

Faisal Khan (24:09):

So when we initially started thinking about the new domains that needed to be added to the AI Controls Matrix, we broke down the full AI supply chain. You start with the data, then you have your models, and then you also have your inference application. We took each of them apart and looked at what kinds of risks people have encountered, or what the potential risks are, at each step of the pipeline, and then we mapped them to the domains that already exist. And where we couldn't find a domain, we came up with a new one.

For example, the model security domain, MDS. MDS is just model security. That's a new domain that we came up with, and then there are controls specific to it.

Sam Washko (25:08):

Yeah, I think it's important that we added a whole new domain for model security, the MDS domain, and that covers a lot of the attacks on machine learning models and what controls you should be following for them. An important one is model scanning, artifact scanning: scanning the artifact files for the models for attacks that can be present in the file itself, which might be executed when it's deserialized or at runtime, or might be architectural poisoning. That's one control that is important. It's important for the model provider to do it, to show after training that the model is secure.

But it's probably even more important for application providers, orchestrated service providers, and consumers if they're getting their models from a third party, whether that's an open-weight hub like Hugging Face or a private third party. If they're not developing the model themselves, or if they're using it as a foundation model, they need to make sure they're not opening themselves up to attacks by taking this foreign asset into their organization.
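To make the artifact-scanning idea concrete, here is a minimal sketch of statically inspecting a pickle-based model file before anyone loads it. This is not the AICM control text or any particular vendor's scanner; the file path and the list of flagged imports are illustrative assumptions. The point is that the opcode stream can be examined without ever deserializing (and therefore executing) the payload.

```python
import pickletools

# Module/callable pairs whose presence in a pickle stream usually means
# arbitrary code runs at load time. Illustrative list, not exhaustive.
SUSPICIOUS = {
    ("os", "system"),
    ("posix", "system"),
    ("subprocess", "Popen"),
    ("builtins", "eval"),
    ("builtins", "exec"),
    ("builtins", "__import__"),
}

def scan_pickle_artifact(path: str) -> list[str]:
    """Walk the pickle opcode stream of a model artifact and report imports
    that point at code-execution primitives, without unpickling the file."""
    findings = []
    with open(path, "rb") as f:
        data = f.read()

    strings = []  # recent string pushes, used to resolve STACK_GLOBAL references
    for opcode, arg, pos in pickletools.genops(data):
        if opcode.name in ("SHORT_BINUNICODE", "BINUNICODE", "UNICODE"):
            strings.append(str(arg))
        elif opcode.name == "GLOBAL":
            module, _, name = str(arg).partition(" ")
            if (module, name) in SUSPICIOUS:
                findings.append(f"{module}.{name} at byte {pos}")
        elif opcode.name == "STACK_GLOBAL" and len(strings) >= 2:
            module, name = strings[-2], strings[-1]
            if (module, name) in SUSPICIOUS:
                findings.append(f"{module}.{name} at byte {pos}")
    return findings

if __name__ == "__main__":
    hits = scan_pickle_artifact("downloaded_model.pkl")  # hypothetical path
    if hits:
        print("Refusing to load model; suspicious imports found:")
        for hit in hits:
            print(" -", hit)
```

Production scanners cover many more serialization formats and attack classes, but the underlying pattern, inspect the artifact before it is loaded, is the same.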

Sam Washko (26:39):

So that's an important control. I think another one is adversarial attack analysis: thinking about adversarial ML, things like jailbreaking and evasion, where the input to the model is trying to get things out of it, or outcomes you don't want, in the output. So it's about hardening against that.
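As one illustration of what adversarial attack analysis can look like in practice, here is a minimal evasion test in the spirit of the Fast Gradient Sign Method, run against a toy logistic-regression scorer. The model, weights, and epsilon value are all made up for the example; the point is simply to probe how far a small, bounded perturbation moves the model's output.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x: np.ndarray, w: np.ndarray, b: float, y_true: float,
                 epsilon: float = 0.1) -> np.ndarray:
    """One FGSM step against a logistic-regression scorer: move the input a
    small amount in the direction that increases the loss, to see how easily
    the decision can be flipped by a bounded perturbation."""
    p = sigmoid(x @ w + b)
    grad_x = (p - y_true) * w          # d(binary cross-entropy)/dx
    return x + epsilon * np.sign(grad_x)

rng = np.random.default_rng(1)
w, b = rng.normal(size=4), 0.0        # toy "model" standing in for a real classifier
x, y = rng.normal(size=4), 1.0        # a benign input labeled positive

print("clean score:      ", round(float(sigmoid(x @ w + b)), 3))
x_adv = fgsm_perturb(x, w, b, y, epsilon=0.5)
print("adversarial score:", round(float(sigmoid(x_adv @ w + b)), 3))
```

For LLM-style systems the analogous exercise is red teaming prompts rather than perturbing numeric features, but the goal is the same: measure how easily inputs can push the model toward outputs you do not want.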

Marina Bregkou (27:11):

A couple of the most critical security controls that we also have in the matrix concern model poisoning mitigation. As a reminder, model poisoning occurs when adversarial actors manipulate the training data in order to introduce biases or security vulnerabilities, and we have some controls that help mitigate this. Data poisoning prevention and detection is one of them: the DSP domain gives you recommendations on validating data sources, establishing baselines to detect deviations in your data, monitoring and analyzing variations in data quality, and identifying potential poisoning attacks. We also have a control on data integrity checks, and another on data differentiation and relevance, which ensures dataset variety based on geographical, behavioral, and functional factors, reducing the biases that can be introduced.
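As a rough sketch of the "establish a baseline, then watch for deviations" idea, the snippet below records per-feature statistics from a vetted reference dataset and flags features in a new training batch whose mean has drifted well beyond that baseline. The threshold and data are illustrative assumptions; real pipelines track many more statistics and route alerts into data-quality review, but the shape of the check is similar.

```python
import numpy as np

def build_baseline(reference: np.ndarray) -> dict:
    """Record per-feature statistics from a trusted, already-validated dataset."""
    return {"mean": reference.mean(axis=0), "std": reference.std(axis=0) + 1e-9}

def flag_deviations(batch: np.ndarray, baseline: dict, z_threshold: float = 4.0) -> np.ndarray:
    """Return indices of features whose batch mean drifts far from the baseline.
    A hit is a signal to review the data source, not proof of poisoning."""
    standard_error = baseline["std"] / np.sqrt(len(batch))
    z = np.abs(batch.mean(axis=0) - baseline["mean"]) / standard_error
    return np.where(z > z_threshold)[0]

# Usage with synthetic data: one feature in the incoming batch is shifted.
rng = np.random.default_rng(0)
baseline = build_baseline(rng.normal(0.0, 1.0, size=(10_000, 8)))
incoming = rng.normal(0.0, 1.0, size=(500, 8))
incoming[:, 3] += 0.8   # simulated poisoning-like shift in one feature
print("features to review:", flag_deviations(incoming, baseline))
```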

Faisal Khan (28:23):

We also updated some of the existing controls, for example around identity and access management, for things like agentic tools. If you are building agentic tools that need to take certain actions on your behalf, then they also inherit those access privileges, so you need to regulate that, and there are specific new controls for that. And some of these things might change, as you mentioned with MCP. MCP is very fresh off the press, so we might actually have to go back and tweak the language or make sure that we've covered it. And I think with AI this kind of race will keep going: something new comes in, you have to think about its security, and this catch-up game keeps going.
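To illustrate the "agents inherit privileges, so scope them" point, here is a minimal deny-by-default tool-call check. The tool names, policy fields, and path scopes are hypothetical, not drawn from the AICM; the sketch just shows giving each agent its own narrow allowlist rather than the full privileges of the user it acts for.

```python
from dataclasses import dataclass, field

@dataclass
class AgentPolicy:
    """Per-agent scope: which tools it may call and which paths it may read.
    Field names and tool names are illustrative, not taken from the AICM."""
    allowed_tools: set[str] = field(default_factory=set)
    read_only_paths: set[str] = field(default_factory=set)

def authorize_tool_call(policy: AgentPolicy, tool: str, args: dict) -> None:
    """Deny by default: check every tool call against the agent's own narrow
    policy instead of letting it inherit the full privileges of its user."""
    if tool not in policy.allowed_tools:
        raise PermissionError(f"tool '{tool}' is not permitted for this agent")
    if tool == "read_file":
        path = str(args.get("path", ""))
        if not any(path.startswith(prefix) for prefix in policy.read_only_paths):
            raise PermissionError(f"path '{path}' is outside the agent's read scope")

# Usage: a documentation agent may search and read public files, nothing else.
policy = AgentPolicy(allowed_tools={"read_file", "search_docs"},
                     read_only_paths={"/srv/public/"})
authorize_tool_call(policy, "read_file", {"path": "/srv/public/report.txt"})   # allowed
try:
    authorize_tool_call(policy, "delete_file", {"path": "/srv/public/report.txt"})
except PermissionError as err:
    print("blocked:", err)
```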

Marina Bregkou (29:18):

The mouse and the cat game.

Faisal Khan (29:19):

Yeah, mouse and cat, yeah. The game keeps going.

Charlie McCarthy (29:25):

Nice. How would you three like to see this framework evolve as AI continues to advance? I know you mentioned that you went back in and accounted for some agentic AI considerations. If you were to stare into a crystal ball and predict the future, maybe for the last six months of 2025, is there a particular topic outside of agentic AI or MCP that you'd predict you might need to address or you know, more within those two realms? Just like, where do you want to see this go or where do you think it's going to go?

Marina Bregkou (30:03):

Maybe quantum AI.

Charlie McCarthy (30:05):

Ah, ah!

Marina Bregkou (30:12):

Who knows? I mean...

Charlie McCarthy (30:15):

Yeah, that's interesting.

Marina Bregkou (30:15):

It's a moving target, right? So.

Charlie McCarthy (30:20):

Hmm.

Marina Bregkou (30:20):

Yeah. Emerging threats. You have supply chain security for AI models. AI powered deception threats. Everything.

Sam Washko (30:32):

Yeah, I think it's just important to treat it as an evolving document, being able to respond to the new things as they come up, as we've been doing, and to update it as needed.

Faisal Khan (30:49):

Yeah. For me personally, I think some of these things might become more standard. For example, the controls around model documentation could become just a part of the model development life cycle. Some of this is still in its early stages and there's still a lack of education, something MLSecOps has been doing a good job of addressing, so people are learning from this. One hope is that some of these things become so standard that you don't have to go and talk about a specific control; you can just say "model development lifecycle, and these are the things you need to do," so that part can shrink, making room for additional technologies that we can address in the framework as we go forward.

Charlie McCarthy (31:39):

That makes sense.

Sam Washko (31:40):

Yeah, I'd like to see that with cryptographic signing too. It's a pretty standard part of shipping software artifacts, and it should be a standard part of shipping models: sign the model and have the provenance there.
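As a small sketch of what that could look like, assuming an Ed25519 key pair and the Python cryptography library: the artifact name and key handling here are simplified for illustration, and production signing would use managed release keys and established signing infrastructure rather than keys generated inline.

```python
from pathlib import Path
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Stand-in for a trained model file; in practice this is the real artifact.
artifact = Path("model.safetensors")
artifact.write_bytes(b"demo model weights")

# Producer side: sign the artifact bytes after training, publish the signature.
private_key = Ed25519PrivateKey.generate()  # in practice, a managed release key
signature = private_key.sign(artifact.read_bytes())
Path("model.safetensors.sig").write_bytes(signature)

# Consumer side: verify against the publisher's public key before loading.
public_key = private_key.public_key()  # normally distributed out of band
try:
    public_key.verify(Path("model.safetensors.sig").read_bytes(), artifact.read_bytes())
    print("signature valid: artifact matches what the publisher signed")
except InvalidSignature:
    print("refusing to load: artifact was altered or the signature is wrong")
```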

Charlie McCarthy (31:54):

Yeah, absolutely. Okay, so we've got a few minutes left here, team. Kind of a closing question: when the final document is available and companies are looking to implement these controls, they get their hands on the AICM, what's the best first step? And I come at that question from the perspective of, say, a business leader. I get my hands on this controls matrix, and I can imagine some folks looking at it and finding that it has so much knowledge and information in it that it could be a little overwhelming. So when you get it in hand, what's the best place to start, would you say?

Marina Bregkou (32:35):

So a company can start by mapping their existing security policies to the AICM in order to identify the gaps. This way they start from what is already familiar to them, from what they know, and compare it to this new artifact. Then they can adopt controls based on their AI systems' risk profile and their compliance requirements as well.
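A trivial sketch of that gap-mapping exercise is below. All control IDs, titles, and policy names are hypothetical placeholders; the real domain codes and controls come from the published matrix. The mechanics are just: annotate each existing policy with the controls it already satisfies, then list what is left uncovered.

```python
# Hypothetical control IDs and titles, purely illustrative.
aicm_controls = {
    "MDS-01": "Model artifact scanning before deployment",
    "MDS-02": "Adversarial attack analysis",
    "DSP-07": "Data poisoning prevention and detection",
    "IAM-12": "Scoped privileges for agentic tools",
}

# Existing internal policies, each annotated with the controls it already covers.
existing_policies = {
    "SEC-STD-004 (data handling standard)": ["DSP-07"],
    "SEC-STD-011 (access management standard)": ["IAM-12"],
}

covered = {c for mapped in existing_policies.values() for c in mapped}
gaps = sorted(set(aicm_controls) - covered)

print("Controls with no mapped policy yet:")
for control_id in gaps:
    print(f"  {control_id}: {aicm_controls[control_id]}")
```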

Faisal Khan (33:02):

Yeah, I think the same thing. It's a fair observation that there's a lot to process and digest when it comes to AI security. A good place to start is to take your existing policies and see how much of them overlap with the AI Controls Matrix, and what still needs to be done. Especially if you are building and deploying AI applications, there are new threats you should be aware of. So it's a lot to take in, but there's also a large attack surface that you might be exposing if it's not done properly and securely.

Charlie McCarthy (33:48):

Yeah. And the impacts if you don't.

Faisal Khan (33:51):

Yeah. And impact.

Charlie McCarthy (33:53):

Could be reputational, possibly legal consequences in the future, financial... Just, sky's the limit.

Sam Washko (34:00):

Yeah 'cause a lot of these attacks can result in just arbitrary code execution on your system and that could be just disastrous.

Charlie McCarthy (34:11):

Yeah. Okay. Before we wrap up here, is there anything we didn't touch on that we meant to, or any calls to action or key takeaways you'd like the audience to leave with?

Marina Bregkou (34:26):

Actually, we have volunteers joining every day, people who are interested. As we said earlier, this work is ongoing and progresses every day, so new people are welcome; the call for action, the possibility of joining, is always open. The CSA website is where the AI Controls Matrix will be published soon, and as with all CSA artifacts, it will be publicly available and downloadable from the CSA website. That's the place to go.

Charlie McCarthy (35:03):

Oh, wonderful. And it's a free resource? Okay, awesome. Y'all, this was a fantastic conversation. Thank you again for being here. I know the MLSecOps Community is going to glean a lot of knowledge from this. We'll include links in the show transcript to the CSA website that Marina mentioned, in case you're interested in getting involved or just want to see some of the other initiatives, and keep an eye out for the June release of the AICM. Marina, Sam, Faisal, thank you for being here, and I hope we get to talk to you again very soon.

Marina Bregkou (35:37):

Thank you for having us.

Sam and Faisal (35:39):

Yeah. Thank you for having us.

 

[Closing]

 

Additional tools and resources to check out:

Protect AI Guardian: Zero Trust for ML Models

Recon: Automated Red Teaming for GenAI

Protect AI’s ML Security-Focused Open Source Tools

LLM Guard: Open Source Security Toolkit for LLM Interactions

Huntr - The World's First AI/Machine Learning Bug Bounty Platform


Thanks for checking out the MLSecOps Podcast! Get involved with the MLSecOps Community and find more resources at https://community.mlsecops.com.

SUBSCRIBE TO THE MLSECOPS PODCAST