
AI Governance Essentials: Empowering Procurement Teams to Navigate AI Risk


Audio-only version also available on Apple Podcasts, Spotify, iHeart Podcasts, and many more.

Episode Summary:

In this episode of the MLSecOps Podcast, Charlie McCarthy from Protect AI sits down with Dr. Cari Miller to discuss the evolving landscapes of AI procurement and governance. Dr. Miller shares insights from her work with the AI Procurement Lab and ForHumanity, delving into the essential frameworks and strategies needed to mitigate risks in AI acquisitions. They cover the AI Procurement Risk Management Framework, practical ways to ensure transparency and accountability, and how the September 2024 OMB Memo M-24-18 is guiding AI acquisition in government. Dr. Miller also emphasizes the importance of cross-functional collaboration and AI literacy to support responsible AI procurement and deployment in organizations of all types.

Transcript:

Intro 00:00

Charlie McCarthy 00:08

Hello, everyone, and welcome back to the MLSecOps Podcast. I'm delighted to be here with you today; my name is Charlie McCarthy. I'm the MLSecOps Community Manager and I am thrilled to be hosting, yet again, a very special guest who some of our longtime audience may recognize: Dr. Cari Miller, who joined us in our very first season of the podcast back in April 2023. Dr. Miller, it's a pleasure. Thank you so much for joining us again on the show.

Dr. Cari Miller 00:36

I am so glad to be back. Thank you.

Charlie McCarthy 00:39

Yes, it's a delight. So, before we dive into a couple of the things that we want to talk about today: you've been very busy, up to a lot in the realms of, you know, AI governance and procurement and - to some degree - [AI] ethics. Can you highlight some of the projects you've been working on over the last year and a half, maybe some initiatives, and give the audience a little insider peek into what you've been up to?

Dr. Cari Miller 01:05

Yeah. You know, it's funny, this space - there is no shortage of work <laugh>. There's always stuff to be done. So I tend to focus on a couple of primary domains. My biggest domain is workplace technology, which spans the entire employment lifecycle from hiring all the way to separation. I don't only focus on hiring. That kind of makes me a little nuts when we do that. So I like to look across the whole lifecycle. That includes productivity monitoring, coaching, benefits delivery, all sorts of things. And so one of the projects that I'm completing is an AI audit criteria catalog with ForHumanity. We've been working on that literally for 18 months. I think there are 61 use cases that we're covering in that criteria catalog. So I do focus on that as a domain, but I also focus on functions.

Dr. Cari Miller 02:06

And so the two functions I focus on are AI audit, obviously, but procurement is the other function that I focus on. Every time I look at this stuff, I keep coming back to, well, you know, why would you let the bad stuff in? Maybe you should just do a little procurement practice there and not let the bad stuff in. So I tend to talk about why procurement and audits should be cornerstones in every AI governance strategy. So procurement's been a huge focus - so much so that I started a nonprofit organization with one of my research partners, Giselle Waters, and we called it the AI Procurement Lab. Because we're not as creative, I guess, as we could be <laugh>. And so that's been keeping us pretty busy.

Charlie McCarthy 03:00

Amazing. And yeah, let's double click a bit on the work at AI Procurement Lab. Maybe a bit about the mission of that organization and some of the tools and frameworks that y'all have been working on developing; you and Giselle.

Dr. Cari Miller 03:14

Yeah, sure. So Giselle actually is the chair of an IEEE standard that's been in development for three years. I joined her about two and a half years ago. And the standard is "procuring AI." And so when you work with IEEE, you get this collection of people from all across the globe that come in to build a standard based on consensus. So it's pretty rigorously debated. So we're just about to publish that standard. I bring this up because - as we have been working on the standard at IEEE - we've been getting calls from across the globe: governments, Australia, UK, Europe, Brazil, everywhere, saying, is it done yet? Can we see it? We're like...there's something here. People need information. They were craving the process. What do you do? How do you do it? What do I need to teach my people? And so that was what prompted us to stand up the AI Procurement Lab.

Dr. Cari Miller 04:14

And so the mission basically is to help organizations build capacity to be able to buy AI and make sure that what you're buying is good and healthy, and that you are able to mitigate the risks that are inside of it. We know none of it's perfect, but you should be able to mitigate risks or at least, you know, spot them and deal with them. And so that's what we've been doing there. One of our cornerstone tools that we work with is a risk management framework for procuring AI that we co-wrote together.

Charlie McCarthy 04:55

Awesome. One of the things that I was hoping to chat more with you about today, Dr. Miller, was the AI Procurement Risk Management Framework (RMF), which can be found on the AI Procurement Lab's website. Something that we chat a lot about within the MLSecOps Community and on this show is not only securing the AI lifecycle from end to end, but a lot of other considerations for Machine Learning Security Operations like GRC (Governance, Risk, Compliance) and also Trusted AI. [Trusted AI is] kind of a term that we coined to envelop other terms - like Responsible AI, Ethical AI - that are sometimes used interchangeably, and there are a lot of those responsibility and ethics pieces when it comes to responsible AI acquisition or procurement. The procurement risk management framework that y'all developed - you know, I've looked at it several times, and it's an elegant governance control mechanism that I think a lot of the audience could benefit from hearing more about. Could you maybe walk us through it - I think there are five steps in the framework - just at a high level, and maybe using an example of a real-life business need to help boost some of our understanding about how other organizations might use it.

Dr. Cari Miller 06:08

Yeah, you know, I purposefully designed it to follow what we're all familiar with in, I think, technology risk management. And that is the COSO/ISO 31000, very traditional, risk management framework. So we have the NIST AI RMF and, you know, they serve their purposes, but this [AI procurement] risk management framework, I just wanted to be as close to traditional as I possibly could. And so what I did was I took some core components. So as I talk through them, I think everyone will recognize these. I did sneak in one prerequisite, and so I'll talk about that one first because I think it's important. The prerequisite is that you have a legitimate business need and that you've defined the problem that you're trying to solve. Because I think we're finding more often than not, unfortunately, that organizations are being approached by vendors and they're given the opportunity to pilot something - or "let me just do a free trial with you" - and then there's adoption without a rigorous procurement process. And so they don't even know if they have a need for this tech. It just sounds good and we'll try it, and then all of a sudden, here it is. So my preference would be: you have a legitimate business need. Okay. So that's the prerequisite.

Dr. Cari Miller 07:18

And then inside of the framework, yes, five steps: risk appetite, risk-aware solicitation, risk assessment, risk controls, and risk monitoring. And so when we do the risk appetite - this one was really important to me. There is a calculator inside of this thing; it's a thought exercise, and it helps you think about the risks that each and every procurement brings to you. Because they're all different. So we can't just say, oh, in medical, that's always going to be high risk. No, it's not. It's really not.
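
To picture what a calculator like that does, here is a minimal sketch of a per-procurement risk-appetite scoring exercise. The dimensions, weights, and tier cutoffs are hypothetical illustrations, not the actual calculator in the AI Procurement Lab's RMF:

```python
# Hypothetical risk-appetite scoring for a single AI procurement.
# Dimension names and weights are illustrative only.

RISK_WEIGHTS = {
    "impacts_rights_or_safety": 5,  # life, limb, or legal rights at stake
    "direct_to_end_user": 3,        # acts on people vs. back-office use
    "decision_autonomy": 3,         # fully automated vs. human-in-the-loop
    "data_sensitivity": 2,          # biometric/health data vs. inventory counts
    "scale_of_deployment": 2,       # small pilot vs. organization-wide rollout
}

def risk_appetite_tier(ratings: dict) -> str:
    """Combine 0-5 ratings per dimension into a rough risk tier.

    The tier drives how rigorous the downstream controls need to be.
    """
    total = sum(RISK_WEIGHTS[dim] * rating for dim, rating in ratings.items())
    max_total = 5 * sum(RISK_WEIGHTS.values())
    ratio = total / max_total
    if ratio >= 0.6:
        return "high"    # e.g., diagnostic lung imaging
    if ratio >= 0.3:
        return "medium"
    return "low"         # e.g., a just-in-time supplies inventory pilot

# Two procurements in the same medical domain land in different tiers.
xray = {"impacts_rights_or_safety": 5, "direct_to_end_user": 5,
        "decision_autonomy": 4, "data_sensitivity": 5, "scale_of_deployment": 3}
supplies_pilot = {"impacts_rights_or_safety": 1, "direct_to_end_user": 0,
                  "decision_autonomy": 2, "data_sensitivity": 1,
                  "scale_of_deployment": 1}
print(risk_appetite_tier(xray), risk_appetite_tier(supplies_pilot))  # high low
```

The point, which Dr. Miller illustrates next, is that two procurements in the same domain can land in very different tiers.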

Dr. Cari Miller 08:15

Now, if you're going to x-ray my lungs and find out if I have cancer, I'm gonna tell you that's high risk, for I think all the reasons we could all say that that's gonna be high risk. That's, you know, life and limb, right? Now, if you're going to run a pilot study with AI to try to pulse disposable goods - gauze and, you know, ear wax removers and things - to the hospital, and that's just a small pilot and you just want just-in-time inventory; it's also in the medical domain, but I would not call that high risk. It's a secondary use. It's not direct to the patient. So this is why we do the risk appetite: to understand where we are on the risk scale so that we know how rigorous we need to be with our controls.

Dr. Cari Miller 09:03

And so that brings us to the risk-aware solicitation requirements, which are really important as you go to market asking for solutions from the vendors. Because what we're finding is there are no cookie-cutter solutions. We're not commoditized in most of this stuff right now. Everybody's coming up with new ideas to address the same kind of unique need. So we get the risk-aware solicitation requirements, which means: ask smart questions, make sure they understand your privacy needs and your security requirements, and who your stakeholders are and why explainability is important to those stakeholders. Put that in the solicitation so they know what you're working with.
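
One way to picture a risk-aware solicitation is as a structured set of declared requirements, each paired with a question the vendor must answer. A minimal sketch, with entirely hypothetical requirement text and field names (not RMF language):

```python
# Hypothetical risk-aware solicitation sections. The requirement text and
# vendor questions below are illustrative examples only.

SOLICITATION = {
    "privacy": {
        "requirement": "Our records contain PII; no vendor training on them.",
        "vendor_question": "How is our data stored, retained, and excluded "
                           "from model training?",
    },
    "security": {
        "requirement": "Encryption at rest and in transit; audited controls.",
        "vendor_question": "Which certifications cover the hosted system?",
    },
    "explainability": {
        "requirement": "Stakeholders include applicants and reviewers, who "
                       "need human-readable reasons for each decision.",
        "vendor_question": "What explanation method backs each output?",
    },
}

def render_solicitation(sections: dict) -> str:
    """Render the risk-aware sections for inclusion in a solicitation."""
    lines = []
    for topic, body in sections.items():
        lines.append(topic.upper())
        lines.append(f"  Requirement: {body['requirement']}")
        lines.append(f"  Vendor must answer: {body['vendor_question']}")
    return "\n".join(lines)

print(render_solicitation(SOLICITATION))
```

Declaring these up front means vendor responses can be scored against the same requirements during the assessment step.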

Dr. Cari Miller 09:43

And then you go into the assessment, which is evaluating the responses. It's as obvious as it sounds. And then the other "gotcha" that we're seeing in our research is the risk control area. And another way to say that is "use your contract wisely." And so we like to put into the contract some obvious clauses, like: what is the purpose and the use of your (the vendor's) AI [solution]? And don't change that without notifying us - or preferably, just don't change it. And other things like, by the way, don't make a significant change without us first reviewing and approving. So use your contract wisely to either arrest the risk, share the risk, or somehow control the risk. You can do that with the contract. And then monitoring, which you all know very well, <laugh>, what that side of the coin looks like. I don't need to go there with you.

Charlie McCarthy 10:39

<laugh> Right. So you bring up a really good point, Dr. Miller, about keeping a close eye on your contract. And you mentioned something quickly about shared responsibility. And just from my own personal curiosity, I was wondering if you have opinions or recommendations on, or how you view the world in terms of, shared responsibility and risk liability between the vendor and the user?

Dr. Cari Miller 11:04

Thank you. That's a great question. So it kind of brought my mind to an area we all like to refer to as incident management. I think at this point, everyone's familiar with incident management. The challenge I see with incident management is that the term is borrowed from traditional IT - we've been using this term for two decades, right? And it kind of means, oh, my password didn't work, or, you know, there's a system outage. So we have this old-school notion of what an "incident" is. When we use it in AI, we might say, oh, there was an incident. And we kind of look back to the tech team and say, "well, you all will fix that, right? Because it's an 'incident.'" However, in AI, I would like to introduce another phrase: "appeals." So when something is off the rails with a system, yes, it may be an incident - maybe someone or a class of people are being rejected for a loan - but the appeals process also needs to be handled. So all the people that were rejected because of the incident that occurred - all those people's appeals need to be processed and dealt with. So that is a shared risk responsibility. The vendor has something to do, the person that was impacted has something to do, and the buyer has something to do in that equation. So it's a little trifecta of shared risk right there.

Charlie McCarthy 12:39

Makes sense. For some of our audience, these concepts may be newer - you know, a lot of them are going to be familiar with them because we've been talking about it for a while - but when we're talking about assessing some of this risk, one of the things we like to point out is that buyers face certain types of risks with AI-powered tech. Which could be, you know, financial consequences as a result of an incident. Or, you know, in the future as more regulation comes around, potentially legal consequences. Or - one that's been big to date - the reputational consequences of, say, a hiring platform whose algorithm is found to have some sort of bias in it that is, you know, keeping certain folks from being considered for positions, or even from having their application reviewed by a recruiter. Or the example that you've mentioned, home loans - you know, AI being used to assess people who are applying for loans. In those examples of where there might be bias, I'm - I could be thinking about this wrong, but in my mind it seems like a lot of the responsibility in those examples might be more on the vendor because it is their application.

Charlie McCarthy 13:56

And so, you know, they're responsible for training these models, essentially, and the algorithms that are behind some of these machine decisions. And then the buyer still has some responsibility to be auditing, perhaps, and making sure that the systems are giving output that is not going to cause any harm. But how, where does that all come together? I mean, it seems so convoluted, and I know I'm kind of just waxing on about this here, but any insights there? It's a lot.

Dr. Cari Miller 14:33

No, it is, and it's complicated. And this is one of the roadblocks that we're running into, and part of why the AI Procurement Lab exists - it's one of the barriers that we need to address, because it is complicated and it feels very overwhelming. And so here's what we tell people about how to start to address this. The first thing would be to understand the system, understand the underlying parts of the system. When you go through the procurement, when you're doing your assessment, you understand, and you're asking all the questions you need to ask. Where was the training data from? Was it representative? Was it robust? You know, all of the questions. You gather as much information as possible. To the extent you're finding risks - which you will - everybody's new at this. And no machine is perfect, nor is a human, right?

Dr. Cari Miller 15:24

And so you're gonna set up KPIs in your contract. That's what we recommend. I recommend them as a separate schedule. So don't bury them in clause 43, Section F, you know, D9iii. Put them out as a separate schedule so you can refer to them, so that your team monitoring this can manage them and get them reported on routinely - monthly, quarterly, whatever makes the most sense for the use case. And keep an eye on them. And here's the trick: when they are triggered, when you've breached them, when you approach them - when you're within 5% - start a trigger response action plan so that you know, I better do something here. I better go back and check that data. I might have to shut the system down for a minute and take a real hard look at what's going on here. So those are the things we can do in the short term until these systems get better and better. But that's the best we can do right now. At least that's our advice.
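
To make the "within 5%" trigger concrete, here is a minimal sketch of the kind of check a monitoring team might run against a KPI schedule each reporting period. The KPI names, thresholds, and observed values are hypothetical; the separate-schedule and 5% early-warning ideas come from the conversation above:

```python
# Sketch of a trigger-response check over contract KPIs kept in a separate
# schedule, with an early warning when a metric comes within 5% of its
# contractual threshold. All KPI names and numbers are hypothetical.

from dataclasses import dataclass

@dataclass
class KPI:
    name: str
    threshold: float        # contractual limit from the KPI schedule
    higher_is_worse: bool   # direction in which the metric degrades

    def status(self, observed: float) -> str:
        margin = 0.05 * self.threshold   # the "within 5%" early-warning band
        if self.higher_is_worse:
            if observed >= self.threshold:
                return "BREACH: start trigger response action plan"
            if observed >= self.threshold - margin:
                return "WARNING: within 5% of threshold, investigate now"
        else:
            if observed <= self.threshold:
                return "BREACH: start trigger response action plan"
            if observed <= self.threshold + margin:
                return "WARNING: within 5% of threshold, investigate now"
        return "OK"

# Example monthly report for a hypothetical loan-decisioning system.
schedule = [
    (KPI("false rejection rate", threshold=0.08, higher_is_worse=True), 0.077),
    (KPI("demographic parity ratio", threshold=0.80, higher_is_worse=False), 0.91),
]
for kpi, observed in schedule:
    print(f"{kpi.name}: {observed} -> {kpi.status(observed)}")
```

Keeping the thresholds in a schedule rather than buried in a clause is what makes a routine check like this possible in the first place.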

Charlie McCarthy 16:26

Right. That makes perfect sense. And, and we're all kind of in it together, you know, learning together and improving upon practices and policy and advising. Okay. So this has been a beautiful way for us to set the stage for what I was really interested in diving into with you today. All of your knowledge and expertise within this procurement space. There was a memo that was released last month, September [2024], you know which one I'm talking about already. <Laugh> from the Executive Office of the President in the US Office of Management and Budget. And gosh, I have my note here. The official title of the memo was "Advancing the Responsible Acquisition of Artificial Intelligence in Government." So I'd love just to get your overall take on it. Does it align with what you see as best practices for acquiring AI systems responsibly just, you know, across the board and also within governments? Any initial thoughts on where it's taking us?

Dr. Cari Miller 17:26

Yes, it does. Okay. So you're referring to OMB Memo M-24-18. So yeah, it was a really fascinating memo because it's a continuation: you know, we started back with the "Blueprint [for an AI Bill of Rights]," and then there was Executive Order 14110 [Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence], I believe, which started to set up structure: have an AI officer and put some structure in place. Which is, you know - when you think of governance, you have to have people, policies, and then processes. So the Executive Order started to set up the “people” part. M-24-10 [Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence], which also came from OMB, started to set up some “policies” - not this kind of AI, but yes, we're gonna watch this kind, and hey, by the way, get your reporting house in order. And now we're finally at the “process” part. So that's what M-24-18 is - rubber on the road. Like, this is how I want you to do this stuff.

Dr. Cari Miller 18:26

And so yeah, it really lined up with a lot of what we're doing, what's in the RMF that we published. Several other organizations have published on this topic, and it lines up with them. There is a chorus of us in this space, in the AI procurement space. We have the same hymnal and we are singing in tune together. And we have been talking, in our various parts and pieces, to OMB, and they have been listening. And when it came out, we were all like, they got it. Good job. So is it perfect? Is it whole? No, but they got it, and they got it in a measured way, which was probably more appropriate - we were being very aggressive as advocates. You know, they gave the government a little leeway to say, well, if it's this kind of a use case in my kind of a domain in my situation, I might relax this or that, or I might be more stringent about this or that. So, I thought it was pretty good.

Charlie McCarthy 19:28

Okay. Well, that's reassuring to hear - that so many of these groups are all... I mean, obviously one would assume that you all know each other and are cross-collaborating and talking with everyone. So that's good to hear. One of the big themes in the memo, Dr. Miller, is about cross-functional collaboration in AI procurement. And there are some calls to action within the doc - I'll share a link to the memo in the show notes so that our audience can read it - but one of the recommendations was to formalize cross-functional collaboration to manage AI performance and risks. And I'm just gonna finish kind of reading out part of the quote here for the audience. "Each agency must establish or update policies and procedures for internal agency collaboration to ensure that acquisition of an AI system or service will have the appropriate controls in place to comply with the requirements of this memo, and that the agency's use of the acquired AI will conform to the memo," and this is the big piece, "within 180 days of issuance of the memo, agency Chief AI Officers must submit written notification to OMB identifying the progress made toward implementing this requirement" - that's the "process" piece that you mentioned - "and identifying any challenges."

Charlie McCarthy 20:45

What do you think are the best ways that these agencies are going to be able to approach breaking down the silos in their teams - like procurement, IT, security, even budgeting, legal, civil rights - in order to ensure smoother AI adoption and that they're kind of meeting the requirements of this memo together? I mean, where do you even start when you're trying to get everybody on board for something like that?

Dr. Cari Miller 21:13

This - I applaud them for doing this. And by the way, this is not unique to government. This is what all organizations should do: you should know what AI you have, and your groups - it's a team sport. If you think one functional area is gonna handle all this, okay, good luck. No, it is absolutely a team sport. So I applaud what they did here. They also call for knowledge sharing - like, very deliberate knowledge sharing and collaboration stuff. So I look at this as a very traditional change management situation. I'm a certified change manager and I'm like, okay, what do we do here? Well, first thing you do is explain to everybody: "Huddle up. Why are we doing this? Well, okay. Because there are things that could harm people, and that's not what we want to do with our taxpayer dollars."

Dr. Cari Miller 22:07

If you're a regular organization, you kind of just don't wanna harm people. It's not good for your reputation, it's not good for your bottom line, it's not good for your employees. So you explain the change - why it's important. You make sure that you have some specific goals - which in this memo they outline the specific goals, so that's helpful - and milestones and things like that. And then you designate. You name names. You know, where's the belly button I'm gonna push to make sure this happens? And you put together a plan that's reasonable. I think the most important thing when we go through exercises like this is to follow the feedback. You know, when we do change management, we sometimes forget that part. It's okay to be nimble and agile, and, like, something might not work. Like maybe where we're putting our knowledge bank isn't, like, accessible to everybody, or, you know, "so and so" felt left out. Oh, okay. Bring them in. Like, it's okay. We can be flexible. Yeah.

Charlie McCarthy 23:11

Yeah. I'm kind of - I'm gonna put you on the spot with this next question because it's a little bit outside of the prep that we did for the show, but have you seen examples in real life, maybe with clients or orgs that you've talked with, who have begun to embark on that knowledge sharing or change management process, and what things they're doing specifically that have, in your mind, been successful as part of that knowledge sharing? Like, is it outside of that initial huddle? Is it, you know, months-long training? Is it sending them to third-party workshops? Any recommendations there for how that knowledge sharing could be facilitated most effectively? It's such a new thing that -

Dr. Cari Miller 23:57

No, it is, but not new to some government organizations. So, I'm drawn to the Department of Homeland Security and the IRS. And so the Department of Homeland Security started this journey probably two years ago. I mean, I'm sure they would be like, "oh, we've been at this for way longer than that." But in my opinion, <laugh>, about a couple of years ago they really huddled up, set up some teams, did some exploratory work. They really started experimenting. They're pretty disciplined about how they go about their research and what systems are gonna work and whatnot, and how they want to procure those things. And they have their house in order. And to your example of, like, are they doing workshops or whatever - I feel like what they're doing is everything, everywhere, all at once. Like, it's just a little bit of everything for everybody.

Charlie McCarthy 24:58

It's just attacking it from any angle that you can.

Dr. Cari Miller 25:00

Yeah, they really are. Yeah. And they have great people - like, they've hired really well: you know, butts in seats, in the right spots, and the right kinds of people. The IRS has done something similar, too, but they've narrowed their scope on what they really want to deep dive into. They're sitting on piles of data, and so they are - I call it getting ready to get ready. So most of what they're doing is data stuff, like cleaning up data and making sure the data's going to be good enough so that when they get to be able to use AI that can tell them here's trends, here's patterns, here's whatever, they've got their data house in order. So they're kind of doing the same thing, but you don't need the entire organization yet because you're just focused on organizational hygiene for this one area so far. Those are the two.

Charlie McCarthy 25:56

Got it. Okay. That's a good segue, actually, into my next question. The memo, the OMB memo, also places emphasis on privacy, security, and data ownership. Can you suggest any strategies or describe anything you've seen for managing those types of risks during the AI procurement or acquisition process? I mean, you might have touched on this a little bit when you were talking about contracts, but those things specifically; like data ownership and security and privacy.

Dr. Cari Miller 26:25

Yeah, so part of it is, again - it's all use case specific. And so it depends on if you're using someone else's data or if they're going to use your data, and understanding how good your data is looking. But at the very beginning, if it's gonna be rights impacting or safety impacting - which is what the memo, the OMB memo, mainly focused on; they don't call it "high risk" like they do in the EU, they call it "rights impacting" and "safety impacting" - so if we're in that territory, then you really should do an impact assessment first. And that's before you start writing your solicitation. It's just a good exercise to understand, like, this is the territory I'm about to walk into - or in other cases, the spider web I'm gonna walk into with my mouth open. Like, that's not something you wanna do. So let's just do that first exercise. And then that helps you write a good, thoughtful solicitation, right? So you can ask the vendors questions and lay out some terms and conditions. And then, yes, put it into the contract, and then, you know, you conduct routine audits, which people are like, oh, I don't wanna do that. But you have to, if it's risky, you know?

Charlie McCarthy 27:39

Right. It's important.

Dr. Cari Miller 27:40

Yeah.

Charlie McCarthy 27:43

Okay. One last memo-related question before we kind of wrap up here. You've given the audience a couple really great examples about some high-risk areas, like related to hiring programs and also healthcare. There was another really interesting call-out in the memo related to acquiring generative AI (or GenAI) and also AI-enabled biometric systems, which could be super high-risk areas. Could you help me help the audience understand, Dr. Miller, why those technologies specifically, like AI-based biometric systems, could be higher risk, and what some suggestions are for considerations that agencies should be thinking about when acquiring those types of technologies? Like biometric systems, you know - if I were to try to explain that to a relative, there are some of them that just don't know what that is. So maybe we could start at that level and then kind of help us understand, like, where the risk is there.

Dr. Cari Miller 28:46

Yeah, absolutely. And so for the record, you hit on the two use cases - GenAI and biometric - that, as I was reading the memo, I'm like, that wasn't in the playbook. Where'd that come from? <Laugh> So those of us that have the hymnal, that are singing, you know, in the choir - that wasn't in the playbook, but I'm so glad that they called that out. Okay, so biometric - gosh, these systems were popping up everywhere, right? We use it to open up and unlock our phone. We use it when we clock in at McDonald's with our fingerprint, or we're driving Uber and we need to clock in for Uber - they will use face print sometimes. So these systems are taking any kind of feature that's unique to you, no one else, and -

Charlie McCarthy 29:33

Hence the "bio" prefix.

Dr. Cari Miller 29:35

Exactly. Yeah, exactly. And then they're using that for, typically, some kind of a security measure, because it's unique to you, right? So yeah, they put that in the memo and they put some parameters around it, as in to say: be extra, extra, extra, extra careful if you're gonna go down this path. Now, the EU was kind of like, whoa, whoa. Not even sure about this. But the US was like, okay, but you gotta be really careful. And mainly because it is really suspect for discrimination issues. So think about ADA (Americans with Disabilities Act) issues. One exact feature that biometrics can pick up - think about if you're on a platform for a train station. You might have a camera that watches the platform to make sure there's no shenanigans going on, right? Well, as it's watching the people on the platform, you could have someone come through with a severe limp - maybe they have cerebral palsy - and the camera could flag that person, and then it just flags that the police officer needs to go check on someone. That seems kind of unequal and unfair. So there's discrimination involved in these kinds of systems.

Dr. Cari Miller 30:52

And then the other thing is data. What are you doing with it? How are you keeping it? Were you gonna keep my face forever? What if I grow a beard? What if, you know, growing old as a lady, I could grow a mustache. You don't know, like <laugh> that [image captured via AI-based biometric technology] might not be my face for very much longer. So there's all kinds of data issues that we need to be concerned with, with biometrics. Can I talk about GenAI for a second? Because that one surprised me the most, I think.

Charlie McCarthy 31:21

Please, please.

Dr. Cari Miller 31:22

Oh my gosh. Okay. So this one - their main gripe, I guess - I'll say it that way, because I can - in the memo was vendor lock-in. This happens all the time in government, no matter what kind of contract it is. Vendors - you know, as a government contractor, you're always trying to angle for, like, oh, switching costs, and you're gonna stay with me forever. And so they took that and looked at generative AI, and they said, you know what, actually, we don't want you to just adopt ChatGPT for whatever it is you're doing there. We want you to compete, and we want you to go out and look at the generative AI systems that are available. And we want you to really understand what your use case is. And we want you to really understand, [for example], is ChatGPT the right one for me, or should I use Llama because Llama gives me more privacy, or Llama's training data was better for my use case. And so they're kind of pushing this requirement for competition. And in the process you get transparency: more visibility into pricing terms, and control over pricing possibly. They're requiring better visibility into licensing agreements, understanding of security requirements, definitely data controls, and, you know, control over future change management and things. So I thought that was great, because it says, you know, stop just blindly adopting stuff and treat generative AI as you would treat anything else you're gonna do in the government.

Charlie McCarthy 33:00

Oh, interesting. I'm glad that you called that out. I had overlooked that piece when I was reading through.

Dr. Cari Miller 33:05

Yeah, it's a good one.

Charlie McCarthy 33:06

Yeah. As we look toward 2025, what do you see as our biggest hurdles to jump? Or I might reframe that and say, what are our biggest opportunities in AI procurement and governance and what should organizations, you know, government and otherwise be prepping for in your opinion?

Dr. Cari Miller 33:28

Yeah, it's a single answer for both challenge and opportunity. And that is literacy. AI literacy. So there's levels of literacy, right? I think we have all established some terms that, you know - fairness and equality and responsible AI, and, you know, oh, Siri is AI and Google Maps is AI. Like, we get it. Now we need to move to level two of Donkey Kong here, which is our domains. So if you work in procurement, if you work in marketing, if you work in HR, you need to understand the AI and the issues, concerns, risks, and opportunities specific to your domain. And so I hear companies talk about upskilling all the time, and I never hear this level of detail. Like, we don't have training programs for this yet. So that was part of why the AI Procurement Lab popped up: because what we can do is train procurement professionals. There's things that you need to know. You're not changing your job. You're adding to it. So I gotta get you over that learning curve so you can keep sailing. So that is an opportunity and a threat. If we don't do it, it's a threat. If we do it, it's a great opportunity.

Charlie McCarthy 34:47

Yeah. And I love, love, love the idea of these procurement professionals being trained on this, and, I mean, it makes sense - them being an entry point for these applications that could be coming on. Like, if the procurement professionals understand what some of the risks are, even what they need to be looking for, then they're armed with the knowledge to loop in relevant stakeholders within the organization and make sure that these things are being done responsibly. They can bring everyone together.

Dr. Cari Miller 35:13

That is the trick. We're not teaching them everything; we're just teaching them to know how to go fish over there, and go fish in that pond because you need koi today, or salmon, or - yeah, absolutely. It's a team sport.

Charlie McCarthy 35:24

Yep. I love it. Okay. Can you share anything else with us, Dr. Miller, about what's next on the roadmap for the AI Procurement Lab, or how the audience can stay up to date with what you're working on and where you might be speaking?

Dr. Cari Miller 35:43

Well, I say follow me on LinkedIn; it's cari-miller. And yeah, follow the AI Procurement Lab. My next - let's see, what do I have next? Well, the IEEE standard is going to publish at the very early part (we're hoping) of next year, so that's gonna be pretty significant. I'm also working - we're getting close - on publishing an approximately 200-page document on audit criteria for automated employment decision tools. That's gonna be big. And <laugh> something that's under my skin that I'm probably gonna do a giant rant on is "regulation stifling innovation," because I have had it up to here with people talking about that. Regulation is a good thing. It spawns innovation more than it stifles innovation. So someone is probably going to hear from me on that front.

Charlie McCarthy 36:41

That's been a debate for a while now - I mean, a couple of years, especially since the ChatGPT explosion and all of this. Yeah, I'll be very interested to read that. <Laugh> I'll keep an eye out for it. I hope that you do it. Alright, well, once again, thank you so much for your time. It was a super treat for me to get to chat with you again 18 months after the first time. And I would also like to give a shout out and thanks to the sponsors of the show, Protect AI, and all of you listeners in the MLSecOps Community. You can go find the very first episode that Dr. Miller joined us for at community.mlsecops.com. I'll link to that in the show notes, as well as many other resources there. And until what I hope will be next time, Dr. Miller, thank you again so much.

Dr. Cari Miller 37:25

Thanks for having me.

[Closing] 


Additional tools and resources to check out:

Protect AI Guardian: Zero Trust for ML Models

Protect AI’s ML Security-Focused Open Source Tools

LLM Guard - Open Source Security Toolkit for LLM Interactions

Huntr - The World's First AI/Machine Learning Bug Bounty Platform

Thanks for checking out the MLSecOps Podcast! Get involved with the MLSecOps Community and find more resources at https://community.mlsecops.com.
