Audio-only version also available on Apple Podcasts, Spotify, iHeart Podcasts, and many more.
Episode Summary:
In this episode, Charlie McCarthy sits down with Chris McClean, Global Lead for Digital Ethics at Avanade, to explore the critical intersection of responsible AI, AI governance, and security. They discuss how to integrate responsible AI into business processes, develop a robust governance framework, and balance innovation with risk management. The conversation covers actionable strategies for enhancing compliance, managing ethical risks, and preparing for evolving AI regulations—making it a must-listen for organizations aiming to harness the power of AI while safeguarding against potential pitfalls.
Transcript:
[Intro]
Charlie McCarthy (00:08):
Hi, everybody! Welcome back to the MLSecOps Podcast. I'm one of your MLSecOps Community leaders, Charlie McCarthy. It's great to be back with you this week. Let's dive in and meet our new guest.
Chris McClean (00:18):
Hi, everyone! Great to be here. My name is Chris McClean. I am Global Lead for Digital Ethics at Avanade. Avanade is about a 60,000 person consulting firm. And in my job, I really have two roles. One is internal and one is externally facing. On the internal side, I look after everything that we do related to responsible AI and AI governance. That includes policy, principles, training and awareness, all the integration with backend systems, and things like that. And then on the external side, I lead our practice around AI governance and responsible AI, helping our clients build responsible AI and AI governance programs. I really look after all things responsible tech and responsible innovation. It just happens that AI has been the most salient, the most interesting area of technology in the last two years.
And then as far as background, I spent about 12 and a half years at Forrester Research. I was a research director and covered governance, risk and compliance. So primarily helping companies build their risk and compliance programs. And ethics was a big part of that. I have a master's degree in business ethics and compliance, and I'm currently a candidate for a PhD with the University of Leeds studying applied ethics with the focus of risk and trust relationships. So, great to be here.
Charlie McCarthy (01:33):
Chris, thanks so much for joining us on the show. It's great to have you here for the first time—hopefully the first of many appearances. Let's dive right in. Part of the inspiration for this episode was an article that you published earlier this month called "Responsible AI, Good Governance with or without the government."
Chris McClean (01:53):
That’s right.
Charlie McCarthy (01:54):
Yeah. This is very relevant to this particular audience because we have several topics in the community that we talk about, like trusted AI, which is kind of an umbrella term we use to cover terms like responsible AI and ethical AI that sometimes get used interchangeably. So I think this will be a really special episode for our audience. You and I have been hearing these terms discussed and tossed around, so we probably have a little more context than some of the early learners in the audience.
So for some of those earlier learners, can you kind of help level set how you define responsible AI? And then, leaning on the article a bit, you emphasized that good AI governance is just good business. Can you elaborate on what that means and how it contributes to business success?
Chris McClean (02:45):
Sure. It makes sense that a lot of these words and phrases are used interchangeably. I think that's okay. I don't get caught up in fighting over how these different words are similar or different. What I think we're all trying to do is figure out how we create technologies that work better for everyone: technologies that are fair, secure, transparent, accountable, and so forth.
So, in my mind, responsible AI is the process of instilling those types of values throughout the development lifecycle, through implementation, operation, and eventually offboarding. AI governance, for me, is the bigger picture: how do you, from start to finish, think about as a business what you are trying to accomplish with AI? What are the objectives?
And then how do we create the kind of framework, structure, and really decision criteria so that throughout that, you know - design, development, implementation, operation - throughout that life cycle, that we're doing a good job really steering the AI toward these objectives. So within there, you would find ethics and you'd find responsible AI, but AI governance would be the big picture.
Charlie McCarthy (03:55):
Right. Awesome. I want to double-click on one of the words you just mentioned: framework. What can you tell us about some key components of a robust AI governance framework that organizations should consider implementing?
Chris McClean (04:11):
Yeah. So there are a lot of ways to segment or categorize this. I think of people, process, technology, and then the oversight that facilitates those three. But if you want to dig in specifically to AI governance, I think there are five components that are really helpful to think about.
So the first would be your AI principles and your policy, your guidelines or standards, anything that helps people basically to set expectations for their behavior and their contribution. So that's number one.
Number two would be the processes and oversight system. Kind of what everybody's expected to do and when during that kind of lifecycle that I explained earlier. So in design, what kind of questions are we asking? In development, in training, and in testing, what kind of questions are we asking? And then who's accountable at each step for making sure those things are happening?
Chris McClean (05:01):
And that usually includes some sort of oversight body, an AI governance committee or a task force. Maybe it's a single individual depending on the team. But that would be the kind of structure and process.
The third component would be the assets or the artifacts that you're using to make these decisions. So, like your risk assessment questionnaire, your control framework, your impact assessment. If you're doing security, it would be things like, let's say, how you run vulnerability tests or pen tests. What kind of questions are you asking? What kind of methodologies are your red teams using, and what kind of processes are they carrying out? So that would be the artifacts and the assets.
Fourth would be the training and awareness and the aspects of culture. That's more about behavior, right?
Chris McClean (05:47):
What kind of incentives do people have? What kind of metrics are they being judged on? And things like that to make sure that they have the space and time to do the right thing.
And then finally, the technology. So, I guess I'll put technology into two categories. One would be the technology that facilitates good governance. So your workflow automation kind of technologies, asset inventory, you might have reporting, you might have a GRC platform that kind of tracks these requirements and the controls. And then there are also technologies that actually monitor and enforce those controls. So that's where you would have your security, your access control, your content moderation or content filtering. You might think of, you know, different tools you're using for explainability or for fairness testing. Those all fit in that last category.
Charlie McCarthy (06:34):
Okay. Five categories. That's—
Chris McClean (06:37):
It can be a lot. It doesn't necessarily mean that you have to have 20 people in each category and you have to have a million-dollar budget for each thing. But you should be thinking along the lines of: I know that I need policies and principles; I know that I need an oversight body; I know that I need these assets and artifacts; and you need to be doing something in each category, even though it doesn’t have to be like level 5 maturity for each of those right away.
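[Editor's note: to make those five components concrete, here is a minimal sketch (not from the episode) of how a team might track them as a simple checklist with a maturity score per component. The component names follow Chris's list; the example items, the GovernanceComponent structure, and the 1-5 scoring are illustrative assumptions.]

```python
from dataclasses import dataclass


@dataclass
class GovernanceComponent:
    """One of the five AI governance components described in the episode."""
    name: str
    examples: list[str]
    maturity: int = 1  # 1-5; you don't need "level 5 maturity" on day one


framework = [
    GovernanceComponent(
        "Principles and policy",
        ["AI principles", "acceptable-use policy", "standards and guidelines"],
    ),
    GovernanceComponent(
        "Process and oversight",
        ["lifecycle checkpoints (design, training, testing)", "AI governance committee"],
    ),
    GovernanceComponent(
        "Assets and artifacts",
        ["risk assessment questionnaire", "control framework", "impact assessment"],
    ),
    GovernanceComponent(
        "Training, awareness, and culture",
        ["role-based training", "incentives and metrics"],
    ),
    GovernanceComponent(
        "Technology",
        ["asset inventory / GRC platform", "access control, content filtering, fairness testing"],
    ),
]

# Lightweight status report: flag any component with nothing meaningful in place yet.
for component in framework:
    status = "needs attention" if component.maturity < 2 else "in progress"
    print(f"{component.name}: maturity {component.maturity}/5 ({status})")
```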
Charlie McCarthy (06:59):
Right. For folks who are consuming this episode and find some of that a bit overwhelming - because it can sound like a lot - what is your perspective: is this all just brand new stuff that they need to implement? Or are the things that you're talking about basically add-ons that can be built into some of the processes they already have around GRC?
Chris McClean (07:22):
I love that question. Add-ons as much as possible. So you are most likely - if you have an organization that has any kind of regulatory oversight—any kind of compliance organization or a risk management group, or a security program—you’re already doing risk assessments, you’re already doing some kind of screening for applications. You know, what is the size of this application? Does it use PII in any way? Does it have access to our backend financial systems? These are kind of basics of risk management.
Those would also inform the AI risk conversation. You're most likely doing business continuity impact assessments, and you're going to reuse some of those questions for AI. Any privacy program you have will definitely overlap as well. And even the process that an application would go through, again, from start to finish through design, development, and implementation, is going to feel a lot like GRC or, you know, security programs. That muscle memory should feel familiar.
Your audit team will also start to incorporate, you know, AI questions into their audit program. Some new questions. And especially when you get into the ethical impact, those will be new questions, because security professionals have not tended to focus on things like fairness or bias (accountability, maybe) in the same way that we might with these AI ethics questions. But again, the processes, the technologies, the workflows are going to feel similar.
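[Editor's note: as a rough illustration of the "add-ons" idea, here is a hypothetical intake screening sketch that reuses familiar GRC-style questions (PII, backend financial access) and bolts on a few AI-specific ones. The question set and the needs_ai_review routing logic are assumptions for illustration, not any organization's actual questionnaire.]

```python
# Hypothetical intake screening: existing GRC questions plus a few AI add-ons.
EXISTING_GRC_QUESTIONS = {
    "uses_pii": "Does the application process personally identifiable information?",
    "touches_financial_systems": "Does it have access to backend financial systems?",
    "business_critical": "Is it in scope for business continuity impact assessment?",
}

AI_ADDON_QUESTIONS = {
    "uses_genai": "Does it call a generative AI model (hosted or third-party)?",
    "makes_decisions_about_people": "Does it score, rank, or make decisions about people?",
    "user_facing_content": "Does it generate content shown directly to users?",
}


def needs_ai_review(answers: dict[str, bool]) -> bool:
    """Route to the AI governance review if any AI add-on question is answered yes."""
    return any(answers.get(question, False) for question in AI_ADDON_QUESTIONS)


# Example: a resume-screening pilot flagged by both the old and new questions.
answers = {"uses_pii": True, "uses_genai": True, "makes_decisions_about_people": True}
print(needs_ai_review(answers))  # True -> goes through the AI risk assessment
```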
Charlie McCarthy (08:44):
Okay, so this is good news. TL;DR: this is good news - hopefully enhancing processes you already have in place with a lot of the team members you already have in place.
Chris McClean (08:54):
Absolutely. And you're still thinking about, you know, the same kind of objectives, right? You want to build solutions that people feel comfortable using, they trust with their data, they trust, you know, adopting and engaging with that system. That's still the key objective here.
Charlie McCarthy (09:07):
So, coming back a little bit to the article you wrote that we referenced, this next question is from the community. You discussed the role of government regulation in AI. How can companies proactively establish responsible AI practices in the absence of comprehensive governmental guidelines?
Chris McClean (09:26):
Yeah, that's a very salient question right now. I think on one hand, there is government oversight and government requirements. It's just not in the US right now. We have a very comprehensive EU AI Act that's enforceable as of next week. So there's a lot of impetus—if you're working for a company that’s doing any business in Europe, just like with GDPR, your EU AI Act requirements will still apply. Most of the really complicated requirements actually don’t come into force for another year and a half, but next week - so this is February 2 [2025] which is next week - there will be requirements in the AI Act that will be enforceable.
But in the States, what we're seeing is, kind of like with privacy, a hodgepodge of different states pushing different pieces of regulation, and they will have some impact on AI development as well as implementation. But even without those kinds of regulatory requirements, there are still standards out there that I think are very helpful. So the two that come up most often are ISO 42001 and the NIST AI Risk Management Framework.
Charlie McCarthy (10:32):
Yes. We know a lot about those.
Chris McClean (10:33):
Yeah, they're great. I mean, they don't do everything. I have, you know, quibbles with each, but just for risk management, I think NIST does a really good job giving you kind of a checklist of things not to forget. It still requires you to do a whole lot of work, though - like, how are you going to accomplish each step along the way?
Charlie McCarthy (10:51):
Right, the “Map, Measure, Manage, Govern,” and everything underneath those.
Chris McClean (10:54):
Yeah, it doesn't say, “here's the methodology that you should use to map.” It just says, map your organization. It's helpful to remember that. It's helpful to have that reference point, and you can say, okay, we align to NIST, so your customers or your partners will feel better about engaging with your organization, but you still have to do a lot of work. ISO 42001, I would say, is a little bit more comprehensive for AI governance: the management side, things like budgeting, objectives, and so on. It has the feeling of what you would normally think of as good IT governance, even good corporate governance. Risk management is just a piece of it. So you might even think about using those two in conjunction.
So yeah, there is no comprehensive US regulation yet, and I don't expect to see one anytime soon. Kind of like with privacy. But in the meantime, if you are getting questions from customers, or if you are expecting to do business in Europe or some of the other countries that are talking about regulation, there are some really good kind of industry standards that you can start with.
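[Editor's note: the four NIST AI RMF core functions named above (Govern, Map, Measure, Manage) are real; the sketch below shows one hypothetical way to track how an organization's activities line up against them. The example activities and the coverage_gaps helper are illustrative assumptions, not language from the framework or ISO 42001.]

```python
# The four NIST AI RMF core functions are real; the example activities below
# are illustrative placeholders, not text from the framework itself.
NIST_AI_RMF_ALIGNMENT = {
    "Govern": ["publish AI policy and principles", "stand up an oversight committee"],
    "Map": ["inventory AI use cases", "document intended purpose and context"],
    "Measure": ["run fairness and robustness tests", "track incidents and model drift"],
    "Manage": ["apply controls per risk tier", "define escalation and decommission paths"],
}


def coverage_gaps(completed: set[str]) -> dict[str, list[str]]:
    """Return, per function, the example activities an organization has not yet done."""
    return {
        function: [activity for activity in activities if activity not in completed]
        for function, activities in NIST_AI_RMF_ALIGNMENT.items()
    }


done = {"publish AI policy and principles", "inventory AI use cases"}
for function, gaps in coverage_gaps(done).items():
    print(function, "->", gaps or "no known gaps")
```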
Charlie McCarthy (11:57):
Okay. Do you see any of the states within the US kind of leading the charge on forming some of this policy? A past guest we talked to pointed out that within the privacy space (GDPR), California was really quick to jump on the wagon, and then California was kind of used as a foundation for building out regulation at the federal level.
Chris McClean (12:19):
Yeah. I don't track all of this in a great deal of detail, but I know New York, for example, has AI regulation regarding anything you're doing with HR—so things like hiring, resume screening type of work—where if you're doing that, you have to have some kind of transparency, and I believe you have to have consent and an opt-out capability.
Colorado has more comprehensive AI regulation, and that actually looks a little bit more like the EU AI Act. It's risk-based, and it requires a comprehensive set of risk mitigations. I believe it was Illinois that had like video surveillance use of AI that was regulated. So it is very much a hodgepodge right now, but again, very much like what we saw with California and privacy, I think that's what we should expect over the next few years.
Charlie McCarthy (13:07):
California—I can't remember the number of the bill, but we'll find it and link it in the show notes [SB 1047, vetoed]—did have a bill last year that ended up not being pushed through, and one of the hot debates around that was regulation stifling innovation. So, can you tell us a little bit about your view there in terms of federal regulations versus corporate self-governance as it relates to AI, and what the ideal relationship is there?
Chris McClean (13:37):
Sure. And actually, just very quickly before we get to that: there was one big California regulation that was vetoed, but there were, I think, four others that pertained to social media and other aspects of the use of technology that will touch on the use of AI. And that's actually one thing that I forgot to mention earlier: we already have all kinds of regulations that will be used to regulate AI. It is currently not legal to show any kind of prejudice in the workplace with respect to who you hire or who gets promoted.
So if you're using AI and that happens, it's still illegal, right? So a lot of the enforcement agencies have been saying over the last couple years that we will use our existing laws to regulate AI based on existing rules. So there are things that companies need to think about in that aspect.
Charlie McCarthy (14:33):
That brings up another question. I know we just asked one; we’ll come back to that, but just double-clicking on what you said: for example, a firm that might be using AI-powered technology to review resumes and push them through to hiring managers—we know it's illegal to discriminate against job applicants based on protected classes and that sort of thing. Do you have any thoughts about where the liability lies, or the shared responsibility model, if an AI algorithm does end up exhibiting bias and some people are harmed by that outcome? Like, whose fault is it?
Chris McClean (15:15):
This is the kind of question that's going to come up a lot. We have no precedent for that particular question. But we have precedent for all kinds of questions like that with, let's say, an automobile, which is made by 40 or 50 different companies along the way. You have a company that makes the steering wheel, a company that makes the seat belts, and a company that makes the engine. And so it's going to be, I think, something similar: every one of those pieces of the puzzle has a contract behind it, an assumption of liability, or a clause in the contract that says we are responsible for this part of the function and not for this other part.
So I think there's going to be something similar in the lifecycle of AI, where somebody is making some kind of claim about what this AI system can and cannot do.
Chris McClean (15:59):
And if it's used in ways that are not part of that stipulation, then the user could be liable. If it is used in that way and it malfunctions in some way, then the company that produced it is liable. So we don't know for sure if that's how it's going to play out. I think it will come down to lawsuits.
And, you know, if there's a company that has been using AI to screen resumes over a period of time, and somebody can show they've been harmed, and it's because of a systematic error in the process, then we will see some kind of litigation and a precedent will be set. I'm not advocating for this, and I'm not predicting this, but one thing that could happen is that the court system finds the company that implemented this AI had put the system in place without sufficient oversight, and that it was doing systematic harm to a population of people. That would be setting precedent.
Chris McClean (16:57):
We’re just not quite there yet, but that's the kind of way it might play out, based on how it's played out in a lot of other industries.
Charlie McCarthy (17:03):
Okay. That makes sense. So, back to the innovation versus corporate self-governance and federal regulations...
Chris McClean (17:10):
I can see how people might be concerned about this. I think there are ways that regulation can be maybe heavy-handed, maybe costly to adhere to. You know, in the last week there's been a lot of buzz in the AI industry about DeepSeek and some of the constraints that they had, and maybe they innovated because of those constraints.
So we often see that it's not the case that if you're just freewheeling, with access to every possible dataset you want, all the infrastructure you could ever need, and all the energy you could ever need, you're just going to innovate forever. There are all kinds of ways to innovate with constraints. That doesn't necessarily mean regulation is going to help there, but a lot of the reason I am in favor of AI governance and better oversight is that governance requires you to set objectives and set priorities.
Chris McClean (18:02):
You're going to use decision criteria to eliminate a lot of use cases that probably don't make a lot of sense. I know for us (you know, we're a 60,000-person consulting firm, but we do a lot of innovation ourselves to build new AI), we had very strict AI governance policy and practices for just about everything. But we set aside some resources and kind of an environment for people to experiment. That was a governance decision. And we couldn't have done that if we just said, okay, everybody can do whatever they want; the whole system would fall apart, right? But if we make very conscious decisions about where we are spending our resources, what kind of basic guardrails we need, what kind of data we're okay experimenting with, or what kind of platforms we're comfortable with, then within that ecosystem, within that set of decision criteria, there's a lot of innovation happening. So there are definitely ways to do that. Again, there's got to be a balance between heavy-handed regulation and good-quality oversight. But there's all kinds of innovation with restrictions that happens all the time.
Charlie McCarthy (19:04):
You just said the magic word: balance.
Chris McClean (19:06):
Sure yeah. Ideally yeah.
Charlie McCarthy (19:08):
Right. Okay. Awesome. These are fantastic insights. Let's talk a little bit more, Chris, about risks resulting from ethical mishaps with AI. Can you talk about some of the ethical concerns that are top of mind for you and how companies can go about identifying them—maybe a little about mitigations if we are even there yet, if we have enough real-life examples?
Chris McClean (19:36):
Sure. It depends almost completely on what your use cases and applications are. So, many AI systems will just have a whole lot of privacy risks, right? We already know this; we've seen this play out in the industry. We saw very early on, when ChatGPT came out, that there were very large companies that said, we are starting to see some of our employees use ChatGPT and put information into prompts that we would consider company confidential.
And so they had to put blanket policies in place saying nobody can use large language models or generative AI for the time being. So those were missteps because companies didn't anticipate some of these tools and didn't really have good governance or policy. As far as the dangers, maybe the ethical concerns that we haven't seen in the past: privacy and security don't feel completely unfamiliar.
Chris McClean (20:26):
There are some new kinds of nuances with AI, but things like fairness, things like content moderation, things like transparency—they haven't really been part of our normal kind of vulnerability assessments or security framework or control framework. So those are the newer things that are coming up. I think for me, there's so much power that we're putting into the hands of AI tools—not just for recommending, you know, what video we watch next, or who do we talk to online, or just kind of basics like that—but, you know, we're starting to ask AI to make decisions for us, maybe to actually take action for us. You know, we have these new, like agentic kind of AI systems that will just go off and accomplish a goal for us, that's a lot of power. And the more power that we put into the hands of AI, the more it can go off the rails.
Chris McClean (21:13):
So if we're saying, "Okay, we're going to allow AI to drive a car," that's a significant amount of power. And I don't think the risk assessment process for that is very different from the risk assessment process for a recommendation engine. So, when I think about, okay, what does our risk tolerance look like? Or even what does our risk framework look like? I tend to go very broad and say, here's a whole category of risks, here's a whole set of categories of impacts. I think our responsible AI impact assessment is 50 points. So we look at ethical impact to individuals, to society, to the environment, and then a 20-point control framework of how we control all of these different potential impacts.
So it is quite broad and it is completely dependent on the type of AI you're doing. Again, are you trying to make AI that's going to fly a drone across the city? Or is it going to be a chatbot that talks to people about what insurance plan they should have? Very different risk profile between the two.
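[Editor's note: below is a toy sketch of the kind of use-case-dependent impact assessment Chris describes. The impact categories (individuals, society, environment) echo the conversation; the 1-5 scales, the residual-risk arithmetic, the control counts, and the example scores are all made-up illustrations, not Avanade's 50-point assessment or 20-point control framework.]

```python
# Toy impact assessment: categories echo the conversation; scales and scores are invented.
IMPACT_CATEGORIES = ["individuals", "society", "environment"]


def risk_profile(name: str, scores: dict[str, int],
                 controls_in_place: int, controls_total: int) -> str:
    """Combine raw impact scores (1-5 each) with control coverage into a coarse rating."""
    impact = sum(scores[category] for category in IMPACT_CATEGORIES)  # range 3..15
    coverage = controls_in_place / controls_total                     # range 0..1
    residual = impact * (1 - coverage)
    tier = "high" if residual > 6 else "medium" if residual > 3 else "low"
    return f"{name}: impact={impact}, control coverage={coverage:.0%}, residual tier={tier}"


# Very different profiles for the two examples mentioned in the episode.
print(risk_profile("city drone pilot", {"individuals": 5, "society": 4, "environment": 3}, 8, 20))
print(risk_profile("insurance chatbot", {"individuals": 3, "society": 2, "environment": 1}, 15, 20))
```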
Charlie McCarthy (22:10):
Yeah. Totally depends on the use case. Okay. Can we talk a little more about the pitfalls or consequences if there's not sufficient AI governance in place? We've talked a little bit on the show about, you know, possible legal consequences once there are more regulations in place, but also financial, maybe reputational. Would you consider those to be the main three? Or are there others that should be a primary concern right now?
Chris McClean (22:36):
Yeah, I haven't put it in those terms, but that feels right to me. I think I have probably seven categories of impact that have felt different to me with generative AI. So let's see if I can do all of them. Information security and privacy, like I said, are not completely new, but there are definitely nuances there that are different from what we have done in the past. What goes into a prompt, what comes out of a prompt? Even the way that AI tools now, like in the workplace, can scan through all of your different file shares and things like that—you know, that's a common problem that we've dealt with for the last couple decades, but the way that AI grabs that information is different. So that’s a new consideration.
Chris McClean (23:16):
There's transparency, which again feels kind of new to me: if somebody is dealing with a customer service agent, do they know that that agent is a computer versus a human? And there's just that kind of discomfort (maybe this is part of the reputation damage) if they're interacting with somebody they think is human and it's not. And maybe that computer system was programmed to show something like empathy, and it is saying nice things, or trying to say, "Oh, you know, I've been through this before." And then they realize, "Okay, this is a computer. They obviously haven't felt what I'm feeling right now."
Charlie McCarthy (23:49):
There’s a bit of an ick factor there.
Chris McClean (23:50):
Yeah, so it's like reputation damage, but it's more of a personal harm. There's intellectual property protection, which is again something that we've dealt with in the past, but it's not a typical IT security risk category. How were these AI systems developed and trained? What data was used to train them? And am I infringing on somebody else's copyright by publishing this material? But then also, if your organization publishes material—you know, books, magazines, research—at least in the US, my understanding is you cannot copyright it if it was primarily generated by AI. So that's a different kind of risk if you start publishing things with a copyright expecting that you can make money off of it, for example. That's a risk factor that we hadn't really considered before.
Charlie McCarthy (24:35):
Right.
Chris McClean (24:36):
And then there are bigger picture impacts—just categories of impact: human impact, social or societal impact, and then environmental impact. So I don't talk about those necessarily in terms of risks; they could be benefits and they could be detriments. But with the human factor, that is, you know, the ick factor: Am I being surveilled at work? Is this making my job or my customer experience better or worse? Or my well-being better or worse? On the societal side, you could see AI play out in disinformation, phishing, scams, or things like that. That's a societal impact. That's not just one or two people being impacted; that's the way our social fabric works, the way things like politics, finance, or healthcare function.
Chris McClean (25:22):
And then for environmental impact, again, I don't always think about that as a risk because it's a known harm, it's a known cost. Like we know that these tools just have a tendency to use a lot of water, electricity, even the way that the hardware was built to support these systems—there's a lot of environmental extraction damage. So that should be considered an impact, even though it wouldn't be a typical risk factor.
Or if you're thinking about companies that do voluntary carbon emissions reporting, I believe this would factor primarily into Scope 3 emissions, like are they tracking the emissions as part of their carbon footprint, knowing that AI actually has a fairly big carbon footprint.
People talk about hallucination—so, the accuracy of the content. There's also a tendency for bias, as we know, and then just inappropriate content. So, is a chatbot that you are using as a customer service agent exhibiting bias or inaccuracies? Or is it saying things that are inappropriate? That’s -
Charlie McCarthy (26:16):
Like harmful output.
Chris McClean (26:17):
Totally. That's a new consideration that has not normally shown up in, you know, COBIT or ITIL or ISO 27000. This is a new category of risk consideration.
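[Editor's note: to illustrate the "harmful output" control category that comes up here, below is a toy output-moderation check run on a draft chatbot response before it reaches a customer. The regex rules and category names are placeholder assumptions standing in for a real content moderation model or policy engine.]

```python
import re

# Toy output check: a handful of regex rules standing in for a real content
# moderation model. Patterns and categories are purely illustrative.
MODERATION_RULES = {
    "inappropriate_language": re.compile(r"\b(damn|idiot)\b", re.IGNORECASE),
    "unauthorized_advice": re.compile(r"\b(guaranteed returns|cannot lose)\b", re.IGNORECASE),
}


def review_output(text: str) -> list[str]:
    """Return the rule categories a draft chatbot response trips, if any."""
    return [category for category, pattern in MODERATION_RULES.items() if pattern.search(text)]


draft = "This plan has guaranteed returns, trust me."
flags = review_output(draft)
if flags:
    print("Blocked before sending to the customer:", flags)  # ['unauthorized_advice']
```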
Charlie McCarthy (26:30):
Okay. So say I'm a business owner, Chris, right now I have some concerns around all of this. What are some actionable things that I can do this week, next week, this month to start to prepare for things like, future regulations or shifts in public expectations around AI and the impacts?
Chris McClean (26:50):
Yeah. I think the best place to start for me is getting some kind of catalog of how your organization is using AI. We know that just over two years ago, AI went from being kind of a back office, data science team kind of thing, to all of a sudden every single function in the organization having really cool ideas about how AI could transform their business or their process. And so people are downloading tools, or AI is starting to show up in tools that you didn't even know it was going to show up in.
Charlie McCarthy (27:22):
Shadow AI, I think some people call it.
Chris McClean (27:27):
Yeah, exactly. So getting that kind of catalog of where AI is showing up and how, I think, is the first thing you have to do. You can't possibly have a conversation about risk or reward or objectives or anything like that without it.
Chris McClean (27:36):
So from a governance perspective, you have to say, okay, what do we want out of AI? But also, where are we already? And with most of the regulations and standards, the next step is generally going to be something like: what are we trying to achieve, and then what are the risks of us achieving that?
On the risks, I would propose that most of the time, risk standards, even NIST to some extent, are very inwardly focused. Most enterprise risk management programs are inwardly focused: what is the risk to the organization? Like you say, reputation damage, legal, strategic, operational—it could be IT risk. But there's this other category of risk when you talk about responsible AI, which is the ethical impact that I mentioned: the human, societal, and environmental impact. So I would say, if you can incorporate those impacts into your review, that would be my preference.
Chris McClean (28:29):
But, yeah, that catalog to begin with, the risk and impact assessment—what are we trying to do with AI and where could it go wrong, and then even a basic control framework to say, what are we doing to make sure that we are, as much as possible, mitigating some of the biggest risks? And then hopefully you're actually enhancing some of the benefits. And I know that there's so much we can talk about for two hours just about what fits into that sort of control framework or the process and technologies. But for me right now, so much of it comes down to training, awareness, and culture.
So we know that there are technologies for things like content moderation and access control, and maybe even IP protection. But for every one of those, there are ways to get around them. So if your people don't feel like they understand the risks of AI, and they don't understand why they have a part to play in making sure AI doesn't harm people, none of those technical controls will work. So I would say the cataloging (you know, the IT asset inventory of AI), the risk and impact assessment, and then the cultural element would be the three things that I'd focus on to begin with.
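[Editor's note: here is a minimal sketch of the kind of AI catalog Chris recommends starting with, including fields for surfacing "shadow AI." The AIUseCaseRecord schema, field names, and example entries are hypothetical, offered only to show what a first-pass inventory might capture.]

```python
from dataclasses import dataclass


@dataclass
class AIUseCaseRecord:
    """One row in a hypothetical AI catalog; field names are illustrative."""
    name: str
    owner: str                  # accountable business owner, if known
    vendor_or_model: str        # e.g. an embedded SaaS copilot vs. an in-house model
    data_categories: list[str]  # e.g. ["PII", "financial", "company confidential"]
    sanctioned: bool            # False = discovered "shadow AI"
    assessed: bool              # has it been through the risk and impact assessment?


catalog = [
    AIUseCaseRecord("HR resume screening pilot", "HR ops", "third-party SaaS",
                    ["PII"], sanctioned=True, assessed=False),
    AIUseCaseRecord("Marketing copy generator", "unknown", "public chatbot",
                    ["company confidential"], sanctioned=False, assessed=False),
]

# First governance conversation: what exists, what is shadow AI, what is unassessed.
for record in catalog:
    print(f"{record.name}: shadow={not record.sanctioned}, needs assessment={not record.assessed}")
```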
Charlie McCarthy (29:43):
Beautifully said. Why don't we end on security? This is the Machine Learning Security Operations Podcast, so if I can toss a little security question to you: securing AI systems, or making sure that the GenAI-enabled technologies you might be using in your organization don't end up causing harm through harmful or unwanted outputs, or, I mean, just all of these risks. Do you consider the security piece to be a part of that governance build you were talking about before? Just that whole end-to-end assessment of what you're doing?
Chris McClean (30:26):
I think it has to be. So when we formed our AI governance committee at Avanade two years ago, security was one of the five pillars to begin with, and maybe is the most important. You know, as I mentioned, security most likely has the most experience doing this kind of thing in the organization.
So creating that inventory, conducting the assessment—everything that I just mentioned—security understands how to do all of that. Even the training and awareness, you know, doing things like phishing attacks. There are definitely ways to get people involved in security and in privacy so that we can take that muscle memory and apply it to AI. So I would say security is very high on the list of stakeholders that need to be involved.
Chris McClean (31:09):
And maybe even lead the charge in some ways. Most of the time, the questions that we're asking, as I mentioned earlier, are augmenting some of the risk assessments we're already doing. The vulnerability or pen testing that we're already doing—we're just going to think through how the attack vectors will look slightly different, or how the risk profile will look slightly different. There will be a whole realm of these ethical considerations, like the human impact that I mentioned earlier; HR should be involved in that, and your legal organization should be involved in that. But for the more technical aspects—those teams have no idea how red teaming works or how penetration testing works, or even the simple data elements of where the data resides and how it gets used for training—security is going to be front and center in all of those conversations.
Charlie McCarthy (31:55):
Yeah. You literally just read my mind as you were talking, when you mentioned stakeholders. I was going to ask a follow-up question about—we talk a lot about building your MLSecOps dream team and the stakeholders that need to be involved, making sure that end-to-end you're sufficiently governed and you've got that security. Are you able to share a little bit more about Avanade's process, or the additional stakeholders beyond the security team? Like who you pulled in for those conversations? HR, legal, security?
Chris McClean (32:26):
All of those.
Charlie McCarthy (32:27):
ML engineers I'm assuming, or—
Chris McClean (32:28):
Yeah, yeah. Data science for sure. That's a broader category because I am not a data scientist, I'm also not a security professional, so…
Charlie McCarthy (32:37):
We don't know these things.
Chris McClean (32:39):
Exactly. So I can talk all you want about frameworks and policies and things like that, and I can design oversight programs, but as far as -
Charlie McCarthy (32:47):
There are knowledge gaps we need to bridge.
Chris McClean (32:47):
Yeah, absolutely. And I do feel like security is a massive portion of that ecosystem that needs to be part of the conversation. Even just if you look at why we want to do AI well. Why would we have AI governance in the first place? The core value that we're going to get from AI is only going to come from the data, right? So if you think about why security has been valuable to companies over the last, let's say, 25 years, it's because data's being used to make decisions and run the organization. That's happening more and more now, right? So we're using AI to do things that we haven't before, like drive cars or, you know, plan, like, let's say your next five years of strategic objectives, right?
Chris McClean (33:33):
The integrity of that data—like where it has come from, who has access to it—all of that is more important now because of the power that we're putting in the hands of AI. So if anything, the sort of value drivers of security are enhanced because of AI.
And even the basics: if we are doing something like a workforce enablement type of AI tool that, as I mentioned earlier, would scan through all your shared drives and things like that, the security—let's say the data integrity, even data classification, the access controls, permissions, that whole gamut of controls—AI doesn't work unless you're doing all of that well. It's not valuable unless you're doing that well. So again, it feels like the business drivers for security are actually enhanced by AI.
Charlie McCarthy (34:24):
Well, these have been fantastic insights. I'm so appreciative that you took some time to sit with us today. Any key takeaways that you want to leave the audience with about governance? Anything from your article or just top-of-mind things for you?
Chris McClean (34:37):
Yeah, I would say, I mean, there's a lot going on. There's so much research that we should be doing. There's all kinds of, you know, news articles and new technologies to keep up with. But from a lot of what I've seen, good AI governance, the security and risk aspects, the ethical aspects—it all comes down to how much motivation a company has to put in the work, the investment, the effort, the energy to do this right. And I've seen, you know, security go through its own transformation of being in the back corner and having to scream and yell, "Hey, how come nobody's paying attention to security?" And I think over time, security has become very much front and center in a lot of conversations about, you know, what we do as a business and how we protect it.
Chris McClean (35:20):
Privacy has gone through that transformation in the last 10 years. I would say, as we're kind of waiting to see what happens with regulation—some of these big risk factors we don't clearly understand yet. So a lot of the momentum, a lot of the impetus for doing good AI governance and good responsible AI will be grassroots.
We've seen this across a lot of companies where they just say, like, somebody on the data science team says, "Wait a minute, I've thought about the impact of this new technology," you know, resume screening or this new kind of healthcare system that we've built that will help determine what kind of treatments, you know, people should get based on their symptoms, diagnostics. There are all kinds of really cool uses of AI, but if we get that wrong, we know that people could be harmed substantially, right?
Chris McClean (36:02):
And we have seen that, you know, whether it's people being put in jail longer because of an algorithm predicting recidivism—there have been these harms. There's one, if you look up MiDAS, from the Michigan unemployment agency. They had a system in place that over the course of five years misidentified tens of thousands of people as fraudulent. They garnished wages, they cut people off from federal funds in a lot of different ways. The impact was, I mean, it tore families apart—literally tore families apart.
And the same thing happened with the Dutch Tax Authority. It was an algorithm that was supposed to identify fraud. And after a couple of years they said, "Oh, this system's working really well. We're identifying all kinds of cases of fraud." And then it comes out, again, that tens of thousands of people were misidentified as fraudulent and had their wages garnished. They put red flags on people, saying, "You know, this person's fraudulent," so they had a hard time getting access to other government services. And it was more likely to flag people if they had names that didn't look quite Dutch, or if they had multiple visas—basically had come from another country—so people who were already potentially marginalized were more at risk because of these algorithms. Anyway, all of that to say, we know that some of these systems are very beneficial.
Chris McClean (37:32):
There's a lot of reasons to keep pursuing AI, but we also know that because we're putting so much power in the hands of these systems, there's also, you know, a big responsibility for us to do the right thing. And again, a lot of that attention will have to come from the grassroots—like the people that are getting their hands on the keyboard, programming, doing the data science, doing the sort of systems engineering work and software engineering work—until we have a more solid regulatory framework and better guidance and that tone at the top that's really driving these efforts.
Charlie McCarthy (38:08):
Okay. Awesome. Fantastic use cases and case studies that you've shared—we'll have links to a lot of the references Chris made in the show notes at mlsecops.com. Thank you again for joining us today. Thank you to our sponsors at Protect AI, and we will see y'all next time.
Chris McClean (38:28):
Thank you, all.
[Closing]
Additional tools and resources to check out:
Protect AI Guardian: Zero Trust for ML Models
Recon: Automated Red Teaming for GenAI
Protect AI’s ML Security-Focused Open Source Tools
LLM Guard Open Source Security Toolkit for LLM Interactions
Huntr - The World's First AI/Machine Learning Bug Bounty Platform
Thanks for checking out the MLSecOps Podcast! Get involved with the MLSecOps Community and find more resources at https://community.mlsecops.com.