Secure AI Implementation and Governance
Audio-only version also available on Apple Podcasts, Google Podcasts, Spotify, iHeart Podcasts, and many more.
Episode Summary:
In this episode of The MLSecOps Podcast, Nick James, CEO of WhitegloveAI dives in with show host, Chris King, Head of Product at Protect AI, to offer enlightening insights surrounding:
- AI Governance
- ISO - International Organization for Standardization ISO/IEC 42001:2023-Information Technology, Artificial Intelligence Management System
- Continuous improvement for AI security
Transcription:
[Intro] 00:00
Chris King:
Hey everyone, I'm Chris King. I'm head of product here at Protect AI, part of driving a lot of our research offerings, some of our open source initiatives into actual usable products. You've seen some of those with our products like AI Radar. You've seen some of our open source work with ModelScan, and here today I'm hosting a podcast with Nick James.
Nick James:
Hey there. Thanks for having me on. My name is Nick James. I'm the CEO of WhitegloveAI. We are a managed AI service provider or a MASP. I like to describe us as a collective of cybersecurity professionals who are highly experienced and very passionate about cybersecurity.
In fact, we have over 130 years of combined years of experience in cybersecurity. And across the team we're very proud to say we have 68 years of total combined active service in the US armed forces. And we've come together to explore both the promises and the risks that AI presents to humanity. Thanks for having me on.
Chris King:
To get started, Nick, I was wondering if you can help set the stage for the audience and give some background or a little bit of backdrop for the rest of the episode and start by just walking us through and talking about AI governance in general.
Nick James:
Yeah, absolutely. AI governance is... I think let's just talk about governance in general first.
Chris King:
Okay.
Nick James:
When a collection of people, processes, and technology come together in order to achieve a unified and singular mission under the guise of a company or an organization, those activities need to be governed. The activities need to be orchestrated and they need to be controlled in a manner, in an orderly fashion, and in a manner that's in keeping in alignment with the objectives and goals of the organization.
So that, I would say, is not the Merriam-Webster definition of governance, but that is my definition of governance off the top of my head. And I think it is fairly accurate. And so when we talk about AI governance or even, let's start with cybersecurity governance, I was really championed by ISO 27001 in the 27000 family of ISO standards. And really, once again, going back to my definition of governance, in particular helps organize all the activities, the people, process, and technologies that help drive towards the outcomes, the business outcomes, and the goals of the security program.
And in keeping, we're just changing from security to now AI, obviously, there's a number of various nuances that make it fairly different than governing an information security management program. And keeping with that, AI governance is, once again, it is the governance of all of the activities related to an AI management system. And driving towards keeping in alignment with certain specific security controls and even controls under ethics, fairness, bias, responsibility, safety, and security of the use of AI, either from an external source and consumption internally, or the development of artificial intelligence or machine learning models within the organization. So I hope that that answered the question.
Chris King:
Yeah, definitely. And building off that, if you look at let's say, governance risk compliance or GRC for short, it's one category of something we see in MLSecOps. I'm kind of curious how you think about that and your team thinks about it at WhitegloveAI to think about AI governance, particularly in the context of the ISO 42001:2023 standard.
Nick James:
Yeah, absolutely. So at WhitegloveAI, I believe what I said earlier is very much the definition we espouse, and perhaps it shared broadly amongst industry leaders in this very nascent and new industry. And I think that the dynamics that generative AI brought to the table and to the world in late 2022 really propelled the need for governing AI in terms of what is unique to artificial intelligence. I like to describe it as, and just zooming out a little bit at a macro level, if we look at all of the industrial revolutions, this is the first industrial revolution where the centerpiece technology that was driving the revolution actually challenges humankind or our species' intellect.
Someone put it to me that way and it really, I had to take a second to let it sink in. If you look at the industrial revolution, the agriculture revolution, both of those challenged our physical attributes, our strengths, our ability to sow seeds across a farmland. And that was the agricultural revolution with the introduction of machinery. But this is the very first revolution that challenged the intellect of mankind.
So it definitely has a number of, even nuances I think is a word that isn't big enough to describe the differences in how we need to govern artificial intelligence. But we're all uncovering and unpacking as we go along. 42001 is fairly new. It was released in December 2023. However, it's been out for comments for a while now, but the final version was just released. So we have a lot of work to do, needless to say.
Chris King:
Looking at some of the questions I wanted to ask, certainly was why is this important? And if you've got a concept where something is, one, providing guidance or information, that's really important. If something is challenging your intellect or requiring you to bring your own really cognitive focus to edit, is this actually correct information or not? Obviously, that has to be one of them.
But with something that critical, what are some of the challenges or differences from how you would approach governance in this scenario versus, let's say, a traditional application or more of a static system that you'd see in the past?
Nick James:
Yeah. I think we would have to look into what those nuances are. Number one, this, I like to look at it almost like an orb, and it's this glowing orb. It's small when it starts. Let's pick on open source small language models like a Mistral or Ollama. It's generally pre-trained on just language. And as you start to feed it, it continuously starts to learn and adapt to that information. If you're talking about fine-tuning. If you're fine-tuning that model, and I'm not talking about building a foundational model from scratch, but fine-tuning a small language model, it's continuously learning and adapting.
There's no other technology that's done that. So that's one nuance that definitely stands out. And then number two, I think, is that if we look at how we can observe and measure how the artificial intelligence arrives at a conclusion or an output or technical term completion, I think that, once again, that's another nuance that we just haven't had to worry about in the past. So we've measured that through humans making decisions, not machines making decisions. Perhaps, and not as large of an issue, was when we started using robotics process automation. But once again, that's very human driven.
Now we're looking at something that's continuously learning and adapting and, if given the permission to, can make its own decisions. And then what comes after decisions is actions. And that's really stepping into the world of autonomous agents. So I don't want to jump into that rabbit hole just yet, but that's another thing. So opacity and explainability, observability.
And I think another thing that really underscores the need for governance is the rapid pace of innovation in this space. It seems like every morning I wake up and there's just some completely new zero to one, in keeping with Peter Thiel's book, some zero to one advancement in AI that completely broke everything we've ever known, and it's very melodramatic. But keeping a finger on the pulse of innovation has never been more important than now. I think the cybersecurity industry definitely brought heat to the need for open source intelligence and threat intelligence and keeping a finger on the pulse.
But now we need to keep a finger on the pulse of innovation because with new innovation presents new and unique risks. And then I think I'll end it with the regulatory fog around it. As an organization, you want to embrace it, you want to leverage it, but then you also have to teeter on the line of, okay, there's looming regulation happening. EU comes out with something. What is the United States going to do? What are the impacts to that? Are we going to have losses of investment because we chose to do something that now is prohibited? I'll end it there.
Chris King:
Yeah, it's fair. And it's something I've certainly encountered working with customers in our profession. We've seen with language models, there's this really cool paper that came out and described this attack where you prompted a language model to repeat poem indefinitely, and then eventually they would start to leak data into the user's response where it was training data. And it was seemingly at random. And so we learned that most language models remember about 30% of their information, at random, just stored inside them.
What was really interesting, if you start fine-tuning any of this giant search of information you fine tune on, any of that now, potentially could get disclosed later. So how do you build those kind of controls is I think going to be an interesting one. Just like you said, from novel risks that crop up. But shifting gears a little bit out of the risk, one thing that we've seen a lot over the past year is a focus on ethical AI. Maybe it'd be really good at the beginning to get your take on exactly what is meant by ethical AI in the context especially of governance and things like 42001.
Nick James:
Sure. I think that in large part, the humans of the organization that have control over the AI need to have leaders across all departments where this AI will touch. I'm using it very ominously as this ubiquitous AI thing. So machine that stretches across the organization. It could be that, but effectively it's going to be looked at as an IT service. And it's probably going to fall under information technology or the CIO, CDO types. But ultimately, I think that when it comes to governance and ethics, we get to influence what it learns and how it adapts to those learnings.
And this comes down to training and training data. And in the case of fine-tuning, what data we use to fine tune the models when it comes to open source, small language models can heavily influence how that model makes decisions. And so I think ethics have, there is no one worldview decision on what it means to be ethical. It varies from country to country and region to region. But whatever ethical guidelines that your organization espouses, promotes, advocates for, and teaches their employees should also be interwoven into the training data that's used to fine tune those models.
Chris King:
Got it. And makes sense really, building on the existing ethical principles you have, you want to carry that into any autonomous agent certainly responding on your behalf or within your space. Have there been any kind of guidelines or ethical standards or guidelines that have really stood out as being useful for you?
Nick James:
I would say there are a number of standards out there that focus on ethics. I don't personally recall. We do have experts on our team that can probably speak more closely to ethics specific guidelines. I know that, obviously, 42001 touches on ethics as well as a section, but I wouldn't say that there's one overall governing ethical body of standards that stood out to me. Perhaps you can educate me on that.
Chris King:
No, this is all new and novel to me as well. So coming in, trying to get up to speed and really understand it. But if I look at one other area of 42001, it was encouraging, innovation within bounds. Curious to get, what's your interpretation of that concept and how would you guide, let's say, a customer for WhitegloveAI on how to navigate the fine line between creativity and security within their AI development?
Nick James:
Yeah, absolutely. I would say answering, prescriptively in the context of 42001, the standard does suggest that organizations pursue AI development with a focus on creativity and advancement. But also while simultaneously respecting the ethical boundaries, the legal requirements that they're encumbered by, and operational constraints. So innovation's a very creative process. Because innovation is typically, I think people out there say necessity breeds innovation. But I also think innovation, it's the process itself of solving a problem.
Typically the necessity is the problem that needs to be solved, but the process of solving that problem is very creative. So I think that the power that is harnessed by artificial intelligence and its ability to take in large sets of data and make inferences and learn and continue, adapt on that data and grow in knowledge and intellect really presents us with a new challenge of how fast we should go and how do we choose to innovate within the bounds and the guardrails of ethics, regulatory requirements, and the operational constraints of our organization.
And obviously, aiming to understand how what you do within your organization and what you're teaching your artificial intelligence, what societal implications and ramifications would it have if it were to start to interact with the world outside of your organization?
Chris King:
And I think that might feed into one of the next questions I had, which was the standard also mandates this AI impact assessment. So not just data for example, but how might that process work to generate an impact assessment of an AI for an organization?
Nick James:
Yeah, absolutely. Within the bounds of the organization, I think conducting the detailed impact assessment is obviously, like you said, mandated by 42001 if you choose to follow it. But it would follow a structured and systematic examination of how AI might affect individuals alone, a group of individuals, and then the broader society across a number of different dimensions, which can include fairness, accountability, privacy, security, safety, and environmental impacts.
Chris King:
Gotcha. And building off that, how does the standard really shape any uniqueness or perceived uniqueness in vulnerabilities to running AI or ML systems?
Nick James:
Yeah, I think that the way that the standard is structured across risk management, the impact performing the AI impact assessment before taking any action, talking about security and resiliency, the lifecycle approach and a key factor, human oversight through governance committees, putting in measures for observability, transparency and explainability, privacy, security. And I'd say the robustness of the system itself, I think, is adaptive enough to address the unique dynamics and risks that AI presents.
Chris King:
Thank you. And if we start to think about, you've talked about AI getting new data, continually learning, continually evolving, similar to what we have, continuous improvement or continuous delivery on AppSec, how might we start to effectively think about implementing awareness of different parts of the lifecycle and continuously improving our security posture there?
Nick James:
Yeah, a plan-do-check-act is nothing new. I call it PDCA. PDCA came from 27001. So if you dissect that, so you plan, you plan, you do the impact assessment, you do, you execute and build out the AI management system, and then you have measurements along the way against KPIs, KRIs to measure how effective the system is and if it's achieving the objectives of the system. That's check. And then after you check you act.
Those check and act, those two last parts of PDCA really address holistically continuous improvement. And I think continuous improvement just simply says that you are going to set out to do something and establish an AI management system. Just establishing it alone and letting it run is not enough. You have to measure its performance and then make adjustments along the way to refine and fine tune to ensure that the management system is performing at the level of the expectations at the outset.
Chris King:
Got it. And going a little bit further, you've got your plan in place, you're assessing it, you're evaluating, especially after the checks. What are some really practical checks that you've seen really help a security posture advance for machine learning or AI?
Nick James:
Sorry, say that one more time?
Chris King:
What are some practical improvements you've seen organizations make around operating machine learning models or AI systems?
Nick James:
I'd say MLSecOps. DevSecOps really influenced the infinity, the CI/CD pipeline. And having continuous improvement, continuous integration following the original DevOps model with CI/CD. I think starting to include, as Protect AI espouses, MLSecOps, and following that infinity CI/CD, so that we aren't hampering innovation and we aren't hampering new releases, but also including checks along the way.
I think that's the best way to go about it, because if you are inserting security reviews, impact assessments, model testing and the like, and I believe that Protect AI, the parent company, has a number of these features, as long as you're inserting it within in-line, if you look at it as an assembly line before the shiny new thing is released, I think that's the best way to do it.
Chris King:
Yeah, fair enough. And we've seen a lot of use with some of our tools, like our open source tool ModelScan for just one tiny attack is that you can inject malicious code in a lot of models and it runs the second the model is loaded for inference or anything. So having tools to assess those, sign-off that they're okay before they exit and go into another system, those are, like you said, integrating that DevOps mindset is definitely helpful.
Nick James:
That, and I'd say AI securing AI. At WhitegloveAI, outside of the framework, our AI security framework, I've come up with something called the AI security triad. It's a streamless infringement on the age-old security triad, which was confidentiality, integrity, and availability. But in the AI security triad, it's a way of thinking about the intersections of artificial intelligence and cybersecurity.
The first side of the triangle I call security of AI. That's really where Protect AI would fall the logo on that side of the triangle. So how do you secure AI within your environment? Doing things like governance and implementing an AI management system, model testing, and things like that.
On the second side of the triangle is security with AI. How do you augment your cybersecurity teams to use AI to provide security services, whether that's to the AI management system itself or the remainder of the organization as they have been for -
Chris King:
Blindly running shell scripts that come out of your language model. That's a good example.
Nick James:
And then just covering off the last side of the triangle is security through AI, which is how do we enable autonomous agents to provide security service to the organization with minimal human interventions, obviously with human checks and human in the loop, but do that at a scale that we haven't been able to do before. And I think that that second and third side of the triangle really act as a force multiplier on how frequently we can do things because human time is finite. Humans get sick. We take PTO. We have federal holidays.
There are a lot of things that stand in the way of us continuously checking, measuring, improving, checking, measuring, improving. But now we have a tool or we have a new being, AI, that can do that for us. Once again, innovation within bounds, as long as we are able to control how it measures something, how it decides what to do based on that measurement, and how it takes that action, we don't have to wait for a human to do it now.
Chris King:
Definitely. And speaking of you guys, and understanding that other side of the triangle, you've recently announced you have your own AI security framework. I was wondering if you could tell a little bit, I guess tell us all a little bit more about the framework and how it might align with any kind of existing risk assessment practices.
Nick James:
Yeah, absolutely. The framework is really distillation of 42001. It really helps distill and simplify 42001 and it also integrates, which there's a very clear crosswalk to the NIST AI risk management framework and how combining those two frameworks together and from a control standpoint, leveraging the OWASP Top 10 for LLMs.
And we're also looking to include the MITRE ATLAS framework. How do we start to unify and harmonize those four standards and frameworks together to come up with something more holistic. Albeit, ISO is agnostic. It's not pitching a vendor or anything like that. OWASP is agnostic, NIST is agnostic, MITRE is agnostic. I think bringing those agnostic standards together and leveraging them for their strengths and bringing all the strengths of all four, that's really what our framework espouses.
Chris King:
Gotcha. And so it's nice to know that this approach is really blending with a lot of other frameworks people might be familiar with. Extensions, like you said, the OWASP list for LLMs or ML is an extension of what you saw from them for traditional AppSec.
With that in mind, is there a specific risk that this type of approach that you're defining really helps mitigate?
Nick James:
There's not a specific risk, I would say. I think there's a collection of risks which include threats to privacy, data security, and potential for AI driven decisions to create biases or unfair outcomes of the model itself. And I think, in whole, I'd say the approach really promotes and advocates for continuous risk assessments and treatment plans and adhering to more technical standards like the OWASP Top 10 for LLMs.
And in particular, I'd say that there are a number, I guess, of unique risks that come with how we can manipulate the behavior or take advantage of a model. Obviously, number one, everything for artificial intelligence and machine learning, everything starts with the data. There's an old adage, garbage in, garbage out. It's similar to how you would raise a child. If you teach the child bad things, the child will do bad things. Teach the child good things, the child will do good things.
And in going back to continuously learning and adapting, if an attacker was able to get ahold of the pipeline of data that's being used to either train or fine tune a model, which is called data poisoning, that's number one.
Number two, I'd say that there's a concept called model theft. So where an organization chooses to fine tune a model with proprietary data and puts in system prompts, before user prompts, to limit the access to such data. If it happens to be publicly facing, threat actors can leverage manipulation tactics to extract that intellectual property from that model and effectively taking control of the model itself. So I'd say, those would be the top two.
Chris King:
Gotcha. It is similar for things that we've seen. If you guys have looked at any of the Protect AI, like threat research that gets published. We look at the ability to attack a particular running language model to extract either particular insights, like you said, the prompts, or maybe to extract enough of the information like reason about the IP. But we've also had a lot of success just hitting the model pipelines themselves.
You probably are picking up very similar automation tools that for what you were using with traditional ML to start fine-tuning your language model. You've got the control to your data. You've got your source control systems. They're all there. And we've seen those be incredibly vulnerable. And so that lets you pretty much just not immediately grab the model asset right out. So it's cool to see vendors are starting to really take a look at those and start to meaningfully improve security. Rising tide lifting all the boats there.
Nick James:
That's absolutely right. And I think that my peers, my CISOs out there, chief information security officers, they're a little bit puzzled because I think that this has caught us by surprise. We are very adverse to risk and we don't like shiny new objects. So sometimes we have a tendency of shooing them away. It's a fade, it's going to fade away. It's just a fad.
Chris King:
Just like the internet.
Nick James:
Just the internet. What was that when, it was like a late night shows, I forget the guy's name was interviewing Bill Gates and he said the internet was a fad.
Chris King:
Famous times we've gotten it all wrong.
Nick James:
We got it all wrong. But I think we have a tendency of feeling puzzled now because our business leaders, our counterparts, our revenue generators in our business are now bringing this to the table and they're being very serious about it. And we were very puzzled. We were like, "I don't know how to control it. I don't know how to secure it. Let's just block it."
And that is the worst answer we can give right now. Right now, if there was a time to embrace and leverage AI, it is now. Otherwise, you will get rapidly left behind. So I think educating CISOs and helping them with the right awareness, education, and tooling so that they become more familiar with how to protect the model pipelines, the models themselves, prevent against manipulation and hijacking. I think that that's why I'm just personally a huge fan of what Protect AI does, too.
Chris King:
Yeah. So of course you were just going through this risk idea for a particular organization where they might think about putting their head in the sand and really blocking access to generative AI models versus having a particular vendor that maybe they've decided they want to align with and they've assessed.
Do you have an example of really trying to walk a corporation or an organization through that risk of totally sidelining themselves as this tech rolls forward versus ways they could assess current solutions to figure out what might be appropriate for them and their risk posture?
Nick James:
Yeah, that is an amazing question, and it tees up really why WhitegloveAI exists. The matter of fact is there's over 67,000 AI ML products out there, and they do a slew of things. They do A to Z. And there's just a lot of confusion now on what's the right one to select because perhaps there's 50 in one category. What makes this one better than that one?
So I think there are, number one, to answer the first part of your question, what's the risk of sidelining it is opportunity. The data already suggests, the forecasts we're sitting in, AI, the industry itself, is going to be sitting at 5.6 trillion by 2032. And if the industry itself is sitting at that, think about how your business could suffer if it's not leveraging the artificial intelligence itself for the business's purposes. And there's a myriad of ways that businesses could use artificial intelligence, either internally or externally.
And then the other one, I think that we have something called the Discovery AI, it is a service offering. We come in and we partner with business and technology leaders with typically there being a champion, someone who even sought us out and said, "Hey, we have a lot of questions about, number one, what questions should we ask? Number two, where can it be applied? Number three, what's mature enough? Number four, what's the SWOT analysis?"
We include an AI SWOT analysis as a part of our service offering. "Number five, how do we perform due diligence on these vendors? Number six, how do we prepare our environment, to prepare it to receive, to start to ingest, consume, adopt, and put this thing in our environment? And then finally, how do we take care of it over the long run?"
What's the reputation of that company? Is it going to be around in two years when there's a zero day that comes out? Are they going to be around to patch it? There's a lot of questions that need to be asked, and I think that that's really the whole reason for the exist. The whole reason I even started the company was because of the amount of fog and confusion that's out there.
I really want us to be that trusted AI adoption partner and walk you through, shoulder-to-shoulder through the process and make sure that we are taking the right steps and taking the right moves and steps to get you to a point where you are mature enough to adopt artificial intelligence. Because, boy, I think, I don't know if you've worked at really large companies, but on the outside it looks like they have their stuff together. But when you walk in the front door, it's a mess.
Chris King:
I think that's a consistent one that we all keep bumping into. And building off that, we have this podcast, it's out. We're really trying to target this community and grow it of MLSecOps. And so we've got this broader audience. We've got all the folks that are on Slack. Is there one takeaway item you really want to recommend and make sure people focus on and take away from this episode, especially as they think about really incorporating AI into their production systems?
Nick James:
I would say be cautiously optimistic and eternally curious. Two things. The pace of change is just so rapid that if you don't stay eternally curious, and I always like to say I'm just a child of the industry. I go to work. I'm just playing. I'm tinkering. I'm finding cool new tools. I'm testing them. I'm poking them, kicking. I'm just hip checking the door. Be eternally curious, but also cautiously optimistic, meaning don't just see the shiny thing and fly towards it and then get zapped. You have to be cautious.
You have to have controls in place to measure, number one, the impact, and number two, the risks of adopting it. So I would say those two things, be eternally curious and cautiously optimistic.
Chris King:
Got it. I think that's pretty sound advice working in this space. So thanks certainly for being on. And once again, everyone, I'm Chris King. I want to thank all of our listeners, the broader support of the MLSecOps community. It really is our mission here to continue to find AI security professionals to keep improving the material that we're offering, make sure we're putting material, like this podcast, other resources, some of the other open source efforts. Thanks again to our sponsor, Protect AI, my employer. And really especially, thanks again, Nick James, CEO of WhitegloveAI for being our expert on today. And be sure to check out the show notes. We'll have links for Nick's contact details, any other resources that we may have mentioned throughout here. And we'll see you all next time. Thanks, Nick.
Nick James:
Thanks, Chris.
[Closing]
Additional tools and resources to check out:
Protect AI’s ML Security-Focused Open Source Tools
LLM Guard - The Security Toolkit for LLM Interactions
Huntr - The World's First AI/Machine Learning Bug Bounty Platform
Thanks for listening! Find more episodes and transcripts at https://mlsecops.com/podcast.