
Unpacking AI Bias: Impact, Detection, Prevention, and Policy

 

What is AI bias and how does it impact both organizations and individual members of society? How does one detect if they’ve been impacted by AI bias? What can be done to prevent or mitigate it? Can AI/ML systems be audited for bias and, if so, how?

The MLSecOps Podcast explores these questions and more with guest, Dr. Cari Miller, Founder of the Center for Inclusive Change and member of the For Humanity Board of Directors.

This episode delves into the controversial topics of Trusted and Ethical AI within the realm of MLSecOps, offering insightful discussion and thoughtful perspectives. It also highlights the importance of continuing the conversation around AI bias and working toward creating more ethical and fair AI/ML systems.


Transcription

Introduction 0:08 

Welcome to the MLSecOps Podcast presented by Protect AI. Your hosts, D Dehghanpisheh, President and Co-Founder of Protect AI, and Charlie McCarthy, MLSecOps Community Leader, explore the world of machine learning security operations, aka MLSecOps. From preventing attacks to navigating new AI regulations, we'll dive into the latest developments, strategies, and best practices with industry leaders and AI experts. This is MLSecOps.

Charlie McCarthy 0:38 

Hi, welcome, everyone. Thank you for listening to this episode of The MLSecOps Podcast. I'm Charlie, D is here with me today, and we're talking to Cari Miller. Cari is the Founder of The Center for Inclusive Change and also on the Board of Directors of For Humanity, which examines and analyzes risks associated with AI and automation. Cari, welcome to the show. 

Dr. Cari Miller 1:02

Thanks for having me. 

Charlie McCarthy 1:04 

Absolutely. So you've been involved in a variety of activities, including with For Humanity, something called Independent Audit and Governance of AI Systems. Can you tell us a bit about what that actually means in practice? 

Dr. Cari Miller 1:19 

All right, so Independent Audit of AI Systems is really an approach that Ryan Carrier came up with. He's the founder of For Humanity and basically what we do: we first took a look at GDPR [EU]. That's the leading law that kind of started a lot of this stuff.

And we said, how do companies know to follow this law? So he decided to put together criteria, sort of a checklist, if you will, of things that companies need to do. And then he said, well, what good is this going to do? We should train people to be able to go and look at a company's practices and certify that they are following these practices, sort of in the same way you would conduct an accounting audit.

So you want a third party, independent, completely isolated, objective person coming in, or people coming in to say, yes, they're doing what they're supposed to be doing. That way there's a trust factor involved. It's a public trust situation. So that's the premise behind an independent audit of AI systems.

So we started with GDPR, and then from there, we looked for other laws that made sense. So we looked at the Children's Code. There are some AI rules surrounding the Children's Code and what you're allowed to do when you're working with children on these platforms.

We've done the EU AI Act, so we're leaning forward on that one. And as they make changes, we adjust the criteria. And a few others. We just finished one working with an organization called PEAT, which works with disability and inclusion.

And so we follow the ADA and those things to make sure that companies can comply. There are all sorts of things with AI and disability that would just shock you. Very surprising stuff. So we've helped put criteria together for that, too.

D Dehghanpisheh 3:23

That’s great. So one of the things that you mentioned was this kind of Trusted AI, right? And I would imagine that part of Trusted AI is talking about the bias in AI systems. And I guess that brings us to our first element for today, which is defining what bias in AI is.

What is bias in AI and can you give us a sense of how it's different from, say, human bias?

Dr. Cari Miller 3:52 

It's interesting that there's a suggestion that it is different. It's not different, actually. It's very much the same. The challenge with it is AI is machine based, which means it can scale.

So the example that I like to use is, if you have one hiring manager who is biased. In other words, let's say they have this thing where they feel like, I really don't want to hire someone who's older than, I don't know, 55, because they're probably going to retire or they'll have health problems, or whatever it is in their mind that makes them prejudiced against that person. And so they just don't hire those people. That's one person making a decision, and so the impact that one person can have is relatively small.

When you parlay that over into a machine and scale it to millions of resumes, we’ve got a problem. So that's a difference that AI is making in this space.
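To make the scale point concrete, here is a minimal sketch (not from the episode; the candidate data and age groupings are made up) of how decisions from an automated resume screen could be checked for disparate impact using the EEOC's four-fifths rule:

```python
# Minimal sketch with hypothetical data: checking automated hiring decisions
# against the EEOC's four-fifths (80%) rule for adverse impact.
from collections import Counter

# Each record: (age_group, advanced_to_interview)
decisions = [
    ("under_55", True), ("under_55", True), ("under_55", False),
    ("under_55", True), ("55_and_over", False), ("55_and_over", True),
    ("55_and_over", False), ("55_and_over", False),
]

totals, advanced = Counter(), Counter()
for group, ok in decisions:
    totals[group] += 1
    advanced[group] += ok

# Selection rate per group, compared against the highest-rate group
rates = {g: advanced[g] / totals[g] for g in totals}
best = max(rates.values())

for group, rate in rates.items():
    ratio = rate / best
    flag = "POTENTIAL ADVERSE IMPACT" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f} -> {flag}")
```

Run at the scale Cari describes, millions of resumes, the same check simply aggregates many more records per group; the point is that the disparity only becomes visible in the aggregate, not to any single rejected candidate.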

D Dehghanpisheh 4:55 

So it’s really about kind of the scale and speed, I guess, at which that bias could run rampant through an organization or a system or however one thinks of it. Right. It's not just scale.

But I would also imagine that there's a speed component as well, because humans can only operate so fast, whereas machines seem to be able to operate on an infinitely faster curve.

Dr. Cari Miller 5:23

It's not just scale and speed. You're exactly right on those two fronts. It's also hidden. The ability to detect whether that is going on or not is nearly impossible for the individual that was harmed. You just literally don't know.

If I'm sitting eyeball to eyeball with that one person and I'm over 55, I'm going to get a vibe. They're asking some weird questions, there's something going on that maybe I'll pick up on. Once it's inside the machine, I may never know.

And how will I ever be able to go back to the EEOC and make a claim on that? How on earth are we going to figure that out? To say, oh, yeah, there was something going on there with all the people over 55?

I mean, that part is very hard, which makes it extra, extra problematic. 

Charlie McCarthy 6:16 

Cari, how prevalent would you say is bias in AI, and how do we know? What information do we have to support that prevalence? Do we have numbers? What are the numbers? 

Dr. Cari Miller 6:30 

That's a trick question. It's use case by use case and developer by developer. So, for example, there are some developers that are very aware, very evolved, and they really try hard to mitigate. So they have processes in place, they have ethicists in place.

They really challenge themselves on every single question. I know, like, for example, I've talked to some people at Modern Hire, and they really go out of their way.

They have I/O psychologists on staff, and they really think through, okay, are these people giving us this purposefully or involuntarily? What does that mean for how we should let the machine treat that information? I mean, they are very deep in what they're doing.

There are other developers that are only schooled in churning data, and so they don't bring in the psychology or the sociology of what's behind the data. And those are going to present more harms because they haven't been as thoughtful about how they're preparing their system.

Now that's more - I'm going to use a phrase here - structured data is one set of problems. The audio and the video data is a different set of problems depending on how it's trained.

So, for example, NIST set out and did a set of tests on facial recognition providers, and what they found was, even if they were very diligent about what they were doing, decay happens on images. So they trained on a bunch of really good photographs, and the training set was really robust and very dynamic.

But over time, decay happens because people age, and you wear glasses, and you grow a beard, and you get a little saggy in your face, and all of a sudden the facial recognition doesn't work as well. And then your pigment changes, and pigment issues are always going to be a problem in facial recognition. So you asked a trick question, so I gave you a very long answer. So you're welcome.
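As a rough illustration of the kind of subgroup measurement evaluations like NIST's rely on (a sketch with invented numbers, not NIST's actual methodology), one could compare a face matcher's false non-match rate across demographic groups:

```python
# Hypothetical sketch: comparing a face matcher's false non-match rate (FNMR)
# across demographic groups. The match results below are invented.
from collections import defaultdict

# Each record: (group, genuine_pair_matched) for same-person image pairs
results = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", True),
]

pairs, misses = defaultdict(int), defaultdict(int)
for group, matched in results:
    pairs[group] += 1
    misses[group] += (not matched)

for group in pairs:
    fnmr = misses[group] / pairs[group]
    print(f"{group}: FNMR {fnmr:.2f} over {pairs[group]} genuine pairs")
# Large gaps in FNMR between groups, or growth in FNMR as the enrollment photo
# ages, are the kind of signal such testing is meant to surface.
```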

Charlie McCarthy 8:51 

Okay, so AI is biased. People are biased. Regarding AI bias, why should people care? Why should the public care? Why are we considering AI bias to be harmful and maybe more harmful than human bias?

You mentioned the scaling issue. How are people affected by this? How are they going to be affected by this?

Dr. Cari Miller 9:13 

Yeah, so the EU is great, they're sort of the front runners in this, and what they've done is they've drawn a line, a red line, and they said some systems are okay, and some systems, we're going to qualify them as high risk systems, which is very interesting.

So, for example, a system that's only going to be used to, say, paint a car door so there are no flaws in the paint, is that high risk? No. Is that going to hurt someone? No, it's not.

Here's the high risk system: does it affect someone's education, the opportunity to get education, health, housing, employment, security, policing? These are high risk systems.

So anything that's going to affect your dignity, your human rights, your economic status, those things are high risk. And so the reason we care is because there are no laws about this stuff right now.

There's some regulation, but there are no laws that are really good that say, hey, developers, when you create a high risk system, we're going to need you to go ahead and make sure you don't do this, that you do do that, and whatever those things need to be.

And there is a lot, actually. And we want you to register your system. We want you to have a checkup every year, whatever the things are. So that doesn't exist right now. So we're wholly banking on people doing the right thing, which is fine, except for we haven't always taught them to do the right thing.

And sometimes it's expensive to do the right thing. So you're kind of like, we'll do that later. We just want to get to market. We put a lot of money in this.

So there are a lot of reasons why we don't do the right things all the time, which means the risk gets passed right onto the buyer, and the buyer just goes, well, it sounded good when I bought it. And we all accept the risks.

D Dehghanpisheh 11:20 

So Cari, given the absence of, kind of, policy if you will, at least in the US market, maybe others, and constantly balancing that against kind of the Darwinian nature of corporations, how do you think about getting organizations to care about this in the absence of requirements?

In the absence of laws, in the absence of - I had a mentor once who said companies do things for fear, greed, or regulation. Those are the only three motivators - how do you get companies and organizations to care about doing the right thing before they're being told that they must do the right thing? 

Dr. Cari Miller 12:02

I think whoever told you that was exactly right. It's either fear, greed, or regulation. And that's not going to change. So the only thing that I can think to do: I'm an ethicist, I'm an advocate, so I'm going to push the fear button for them. That's all I can do because I'm working on the legislation part.

But it takes a while. It's a marathon, not a sprint, and we need a lot of learning to happen there. So I have to work on the fear side.

So what I can do on that side is I can train procurement people to start asking some really hard questions. You know what, procurement people? Here's what we're going to ask. We're going to ask to see your model cards. We're going to ask to hear about your training data.

I'm going to arm them with the toughest questions, and then I'm going to help them understand how to adjudicate those responses, and we're going to kick out the ones that aren't answering correctly. That's one way.

The second way is if I can find my way to the insurance companies, that's my next line of defense, because insurance companies are going to start paying out claims.

Because when you do get to the legal side of this stuff, the fines are hefty, and by that I mean EEOC, FTC, those are in the US. Those are the two bodies, I think, that are backstops right now.

The FTC is basically saying, don't put junk out into the market. Don't mislead people, don't do dumb stuff. We've seen a few examples of them saying, yeah, hey, you did something dumb. That's it, I'm taking your algorithm. Or here's a fine. And so that's the backstop on that.

So if we can get the insurance companies to understand, hey, if you don't have a proper set of governance structures in place, we're probably going to have to charge you more for insurance. Now we've hit them in the wallet. Now we get some traction that way.
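As an aside on the model card request Cari mentions above, here is a hypothetical sketch of the sections a procurement team might ask a vendor to fill in, with a simple completeness check; the field names loosely follow the commonly cited model card format, and the vendor content is invented:

```python
# Hypothetical sketch of a buyer-facing model card summary and a
# completeness check a procurement team could run on vendor responses.
REQUIRED_SECTIONS = [
    "model_details", "intended_use", "training_data",
    "evaluation_data", "metrics_by_subgroup", "ethical_considerations",
]

vendor_model_card = {
    "model_details": "Resume-screening ranker, v2.3, gradient-boosted trees",
    "intended_use": "Shortlisting applicants for recruiter review",
    "training_data": "Historical applications, 2018-2022, US only",
    "evaluation_data": "Held-out 2023 applications",
    "metrics_by_subgroup": None,  # vendor did not provide subgroup breakdowns
    "ethical_considerations": "Bias testing performed annually",
}

missing = [s for s in REQUIRED_SECTIONS if not vendor_model_card.get(s)]
print("Missing or empty sections:", missing or "none")
# A buyer might treat missing subgroup metrics as grounds for harder questions
# or extra contract protections, in the spirit of the adjudication Cari describes.
```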

D Dehghanpisheh 14:06 

Right. Makes sense. That also plays on the greed. 

Dr. Cari Miller 14:09 

Yes, exactly.

Charlie McCarthy 14:12 

Let's move to talk about impact, maybe, for some of these organizations in two fields specifically. You've talked to us, Cari, about AI bias in two domains ahead of this recording: employment and education. Let's dive into those a bit.

In a recent LinkedIn post of yours, you highlighted a publication from your organization, The Center for Inclusive Change, and that publication was called A Taxonomy of Employee Data. Let's talk about employees and employers.

How are employees impacted versus candidates when we're talking about employment decisions and AI bias? How do we think about biases in AI across the employment journey, all the way from spotting a possible role to being promoted…or fired?

Dr. Cari Miller 14:58 

That's a great question. This is a whole area. California is attempting to tackle this a little bit, helping employees access their data. So there's a power imbalance here, right?

The employers have all the data and the employees, it's really their data, but the employers control it, which is a little bit dangerous in this day and age where algorithms can make decisions about who gets promoted, how much money you make, what work assignments you get, which may or may not build your resume up, what your quota is going to be.

The algorithms can do a lot of things that impact you as a person, financially and from a well-being standpoint. If an algorithm is deciding what your quota is every day and the algorithm doesn't realize you're a human instead of a cyborg, that can make you nuts.

I mean, you can really go out of your way and all of a sudden you're like, I'm so burned out, I just can't deal. So the data that these systems collect up and use becomes - it's the old saying, data is the new oil.

And so the importance of an employee having access to their data, understanding what you have on them and the employer, understanding their responsibilities and being careful with that data, being respectful of it, that was the point of that piece.

And what I see happening is there are very large service providers in the industry that are collecting up a lot of this data. And when a service provider has a lot of data, there's always this phrase that we keep seeing over and over and over.

Companies will say, we'll use your data to improve our service for you. Which is a fun way of saying we'll use your data to make more money for ourselves. 

So they tend to use the data to create new products they sell back to companies. Well, basically, translation: they have used your employees' data to create another new product that may or may not harm your employees.

They may not disclose all that's inside of that, and that can be a problem. So employee data is this little black box of stuff going on. There's no legislation around it, except that California is starting to do some things, and it's just this ripe field of, like, what are we doing here, guys? We need to look at this and be careful.

D Dehghanpisheh 17:56

How do you think about data in the context of employment activity as, like, metrics for thinking about output and workload balancing and kind of just understanding?

There's a lot of useful signatures and useful prediction components that could be used in these AI systems. It's not just all harm. I would assume that there's really good components that could also be used.

But in both those worlds, in both the good world and maybe the not-so-great world, I would assume that companies kind of have a right to what that employee is doing within the context of their employment. Does that hold or not in this case? Is there something different about AI versus, say, a human inspecting how many phone calls were made if you're a sales rep, or how many meetings you went to, or how many leads you generated?

How do we think about the difference between standard collection methods and the application of AI in those performance evaluations? 

Dr. Cari Miller 19:12 

Oh, you got me.  The reason I say that is because as an AI ethicist, I have this awful tendency to be like, oh my gosh, the world's on fire. I can't believe it. It's a dumpster fire. And the reality is there are so many good uses of AI, especially with ADA activity. I mean, AI has helped a lot of disability situations, but it can also be harmful there, too.

So to answer your question, yeah, that is a real interesting dilemma. It doesn't have to be a dilemma, but it often is a dilemma.

So you're right. I get that a lot. It is, well, wait a minute, it's the company's data. And the answer is both can be true. And both are true.

Frankly, that's fair, because it mainly comes down to who has hold of the data and what they're doing with it. And so it is the company's data in the sense of, what did you produce?

And if you're going to project resource demand and resource need out into the future, and how many bodies do you need and all that stuff, that's fine. That's my inner operations girl coming out. It's not okay when it's - here's where the problem comes in.

Let's say that a large service provider has hold of the number of widgets that you sold ten years ago at a different company, and you're now at a new company. And let's say at the old company, you had an awful boss and they gave you the worst leads and you could have done better, and you missed your quota all the time, and you literally quit working there.

Now you're at this new company. The problem is I don't know what the algorithms are going to do if they will pick up. Like, oh, Cari. Yeah, she's always been in sales, so I'm just going to use all of her data to figure out what we should be paying Cari, what we should make Cari's quota. Should Cari be promoted? How has Cari performed over the life of her career as a salesperson?

That's where it gets more dangerous, and the visibility to that stuff is not always available. 

Charlie McCarthy 21:34 

The organizations that employ AI to help make those types of decisions that you just described; what is the risk there in terms of bias? Risk to - you've talked a little bit about it already, risk to the individual - but risk to companies at large.

Dr. Cari Miller 21:50 

And that is another great question, because right now the risk sits wholly on the employer. The risk is not necessarily shared with the service provider that actually may have created that algorithm. 

D Dehghanpisheh 22:08 

So just to be clear, when you say service provider, you could be talking about, say, an HR platform, or processing platform or an employment platform.

I think it's just important to nail down when you say service provider, can you give a little bit more context as to how you think about that?

Dr. Cari Miller 22:25

Yeah, you're exactly right. Those are the external companies you hire to help you get the HR work done, the payment providers and the people that help you process payroll, and the people that help you deliver service award programs and things like that.

D Dehghanpisheh 22:44 

So one of the things that you had talked about was the way in which you're capitalizing on fear, greed and regulation to get companies to care, right?

And you had mentioned this notion of, kind of like, hey, if you're going to be sued, an insurance company who's insuring the board or has directors insurance or whatever the case may be, is carrying some liability or obligation in some way.

It's like, hey, we'll backstop you only if you do A, B, and C. Which raises the question, are there particular cases right now that are in the news, or cases that you're aware of that you can speak to on the record, that highlight this?

This kind of tension that's building in the system between suppliers of systems that are used in a variety of business functions, employment functions, decisioning functions, the businesses, and the employees. Where an employee or former employee feels that they've been wronged by an AI system and is now taking that to court.

Are there any particular lawsuits that really are catching your eye here that might be precedent setting? 

Dr. Cari Miller 24:01 

The one that comes to mind right now is Workday. And it is interesting because like I just said, normally the EEOC would say the employer or the hiring party -

D Dehghanpisheh

Company X

Dr. Cari Miller

Exactly - is the responsible party. So this is an African American gentleman who is, I think, older than 40, I believe, who felt as though he had put his resume in and wasn't hired due to age discrimination, I believe was or is the case, and decided to sue.

And I could have some of these facts wrong, but he decided to sue Workday because of it. And so that would be very precedent setting. And the premise is Workday was acting as a recruiter, in which case the recruiter would be liable, so treating the algorithm as -

D Dehghanpisheh 25:01

Yeah. I think for our listeners, that's the Derek Mobley case, right? The Derek Mobley case. Derek Mobley v. Workday. Right. And I know that the EEOC, to your point, is actually taking a position in this, I believe.

So that'll be interesting for our listeners to continue to watch if they haven't been watching that. And if you dive into that case, are there any particular things that you're really interested in watching? What the court might rule on, or what the settlement is like? If this settles, it's probably never going to establish anything.

So are you hoping that it kind of runs its course through the courts, or are you anticipating a settlement that comes along? 

Dr. Cari Miller 25:46 

I can tell you what I've looked at that caught my curiosity is, first of all, I'm glad that it's out and it's causing some conversation, because I think that's - just that alone is important in the space of governance. Right. We have to talk about these things.

What caught my attention is, well, wait a minute. Workday, they're huge. What are they doing? They can't possibly have just buried their head in the sand and said, oh, we're not doing anything to mitigate bias.

Like, surely they're doing something to mitigate bias. And they do say that they're mitigating bias, and I think in earnest they do attempt to mitigate bias.

That's the interesting part of algorithms. You can't 100% mitigate bias, but they don't disclose that publicly.

And the stuff that they do disclose publicly, I found to be - it's not as strong as it could be based on my experience. And so I wonder if they could be a little better at what they were attempting.

But they put a lot of their explainability behind a firewall, so you can't really see what truly they're doing. So there's room for improvement. But it's not like they weren't doing anything. I mean, they weren't sitting on their hands. They were trying. 

D Dehghanpisheh 27:14

Yeah. So we've been talking a lot about impact, and to a lesser degree, defining how we think about the scope and scale of bias in AI systems.

And I want to maybe shift gears a little bit into kind of detection and prevention. How should employees, maybe parents, students, corporate leaders, how do they think about bias in AI, and how do you even know if you're impacted?

I mean, I guess the question in the case that we were just talking about would be, how did the plaintiff even know if he was impacted or not? 

Dr. Cari Miller 27:55 

Right, exactly. I know, it's a gut feeling, I swear. It's the most human aspect of life. It's like, wait a minute, I saw your job description. I know what my resume is. I know I'm a ringer for this, and yet no call, no nothing.

Not even like, I'm in the top 50. Like, literally nothing. That's suspicious, right?

D Dehghanpisheh 28:19 

So one of the things you mentioned was, if I can get into the insurance industry and we say, hey, I want you to give me the model card. I want you to prove to me, kind of that trust-but-verify maxim of, like, yeah, I trust that you're doing the right thing.

Maybe give me a sense of what kind of frameworks, particular policies or procedures, you and your organization, or other organizations like the two you're a key member of, are putting forth to help companies navigate this very obtuse area that is, quite frankly, very abstract.

What kind of frameworks do you direct companies and entities to when you come across these conversations?

Dr. Cari Miller 29:12

Well, it's always unique to each company. It always has to do with their level of risk tolerance and risk appetite and what they're willing to assume and absorb,  even from the government's perspective.

I was literally just at a conference yesterday, and they were talking about procurement. Procurement is a linchpin for companies. That's where it either gets in the door or doesn't get in the door. Right.

And so this is government talking, and the guy on the panel says, our government has a new law, one of the few, that says procurement people should be trained on how to buy AI.

It's still very new, so not everybody's trained. And so he says in this panel, well, some of them are still learning and that's fine. And sometimes, this was the catch, leadership just says, hurry up and get it in.

So even in government, you would think as public servants, the risk tolerance would be very low. So what works for some doesn't always work for others. Good frameworks; we would want to look towards NIST.

NIST (the National Institute of Standards and Technology) has a fantastic AI Risk Management Framework and a playbook that sort of walks you through each aspect that you need to look at.

The For Humanity audit criteria. The very foundational course that we offer on the audit criteria for auditing AI is a great framework for understanding what the governance structure should look like.

It talks about - have an ethics committee, have an algorithmic risk committee. There's some just basic stuff in there. Here are some policies you should have. I'm working right now on a - this is for procurement, but a - rubric to identify the AI governance maturity of a supplier. Are they high, medium, or low? And that's not to say whether you should use them or not.

It's just to say, this is how much risk you're going to be accepting. And it might inform how you write a contract, you know, how many clauses, how much you want to protect yourself in that contract.
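Cari's actual procurement rubric isn't public, so purely as an illustration, a supplier governance-maturity rubric along the lines she describes might look something like this (the criteria, weights, and thresholds are hypothetical):

```python
# Hypothetical sketch of a supplier AI-governance maturity rubric.
# The criteria, weights, and thresholds are illustrative only, not
# Cari Miller's rubric or the For Humanity audit criteria.
CRITERIA = {
    "publishes_model_cards": 2,
    "discloses_training_data_sources": 2,
    "has_ethics_or_algorithmic_risk_committee": 1,
    "third_party_bias_audit_in_last_year": 2,
    "provides_subgroup_performance_metrics": 2,
    "incident_response_process_for_ai_harms": 1,
}

def maturity(answers: dict) -> str:
    """Score a supplier's yes/no answers and bucket them high/medium/low."""
    score = sum(weight for name, weight in CRITERIA.items() if answers.get(name))
    total = sum(CRITERIA.values())
    if score >= 0.75 * total:
        return "high"
    if score >= 0.4 * total:
        return "medium"
    return "low"

supplier = {
    "publishes_model_cards": True,
    "discloses_training_data_sources": False,
    "has_ethics_or_algorithmic_risk_committee": True,
    "third_party_bias_audit_in_last_year": False,
    "provides_subgroup_performance_metrics": False,
    "incident_response_process_for_ai_harms": True,
}
print("Governance maturity:", maturity(supplier))
# The bucket informs how much risk the buyer accepts and what contract
# clauses to add; it is not a pass/fail decision on its own.
```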

Charlie McCarthy 31:49 

Right. Very cool. Okay, so we have talked a little bit about how to define AI bias, Cari - its impact, detection. Let's shift to the policy front for a minute. Guardrails would help everybody, right? So, how do you think about that? 

Dr. Cari Miller 32:07 

That would be wonderful. I am loving what I'm seeing in Europe. It's absolutely not perfect, but bless them. I mean, they are moving and shaking and doing. They have the same problem we have in the US, with big business coming and lobbying and making tweaks and changes that soften and dilute their policies and procedures.

But like I said earlier, they are defining: this is a low risk system, this is a high risk system, these we will not tolerate.

We're never going to allow this stuff. So they have a few things going on in Europe. I would love to see some of that start to happen at the federal level here. I actually am seeing some state movement, which on the one hand I love.

If I were a business, I would be freaking out right now. Because when states start moving out on their own - as a business, you might as well just throw in the towel because how on earth…

D Dehghanpisheh 33:15 

There's too many rules of the road.

Dr. Cari Miller 33:16

Yeah, it's just a recipe for disaster for business. It's just ridiculous. But the states - I looked at AI regulation in the employment space, and there are six states doing stuff on quotas, like warehouse quotas that they don't allow. I call it the Amazon law, where they're going to disallow algorithmic quotas that work in certain ways. They have to be checked, there have to be humans in the loop, all kinds of stuff like that. The New York City stuff, I mean, that's at the city level even.

D Dehghanpisheh 33:56 

Yeah. So hey Cari, honestly, when you were saying, hey, businesses don't like to have a fragmented regulatory environment, and amen to that, I guess the question then turns to, okay, well, how do businesses think about self governing models, right?

Are there any self governing models that you have seen be successful in, say, non AI spaces that maybe could be adopted in an AI domain?

Or is there a particular industry that you think is thinking about this in a more forward-looking way, in terms of AI bias and trusted AI systems, than other industries?

Or are we all just in the same bad boat together?

Dr. Cari Miller 34:45

I mean, none come to mind. The only thing that came to mind when you were asking that question are the handful of people that I've talked to.

And it's not like a large sample size or anything, but I've talked to people that have exited some of the larger players in the AI space, coming out of the big social media firms and the Googles and stuff like that, and I have had people tell me what their approach to self governance is. Well, we've seen some of it.

I mean, when you see companies start firing their content moderation teams and their responsible AI teams because they, quote, found something or whatever, that tells you what self governance at some of these companies looks like.

But I have had people tell me, oh, yeah, their approach is they just set aside a big chunk of money and then they hope they don't get sued. But if they do, then they've got money set aside. That's their literal approach right now, because it's really hard to detect whether or not you've been harmed. And then what? You got a law? Go ahead and bring it.

There are no real laws around some of this stuff. And if there is a law, who is there to actually investigate this stuff? We have a shortage of these types of people. I mean, the risk is like, I'll set aside some money, whatever. That's where we're at today. Very unfortunate.

But there are people that need to look themselves in the mirror at night, that have kids in schools if they're doing school technology. I mean, there are honest and good people out there. I don't mean to say that everybody is like that, but you don't know which is which, and that's the problem, which is why procurement is your linchpin.

Charlie McCarthy 36:43 

Right. There are a lot of people out there looking to have these conversations. They want to do the right thing. They want to start implementing best practices around this stuff, around AI bias and mitigating risk. So I guess the takeaway, Cari, what would you say? How should companies specifically get started in addressing this? If there is one simple thing that they could do to get started today, what would that recommendation be?

Dr. Cari Miller 37:08 

Learn. Start learning. You would be surprised at how many people in your organization kind of need to know this stuff. It's not just the IT guy. Your HR person needs to know. Your merchandising people need to know. Your procurement person needs to know. Just learn, just start. Take a webinar, go to a little conference, do a little Coursera, just start learning. Crawl, walk, run.

Charlie McCarthy 37:37 

Any specific places you would send them for those types of resources? Good…

D Dehghanpisheh 37:43 

Besides The MLSecOps Podcast!

Charlie McCarthy 37:45

Obviously [laughs]

Dr. Cari Miller 37:47 

I mean, honestly, podcasts are fantastic. People - they exercise, they walk, they drive to work. Podcasts are - that's how I've learned a lot. Just listening to people use the language in context, it's really helpful. Coursera is good. There's a lot on Coursera. For Humanity classes are free. We charge for the certification, which is a pretty minor fee. But you get a lot out of just taking some entry level classes.

Charlie McCarthy 38:15 

I've dug around in there on the For Humanity website. There's some really valuable stuff.

Dr. Cari Miller 38:19 

Yeah, even just the basic class is really good. Associations, too - I think associations are starting to understand they need to offer this kind of information, just upskilling kind of stuff. So you can start there too.

D Dehghanpisheh 38:41 

So the message is get started, get learning, get talking. And we hope that all happens on The MLSecOps Podcast. Cari Miller, our guest today, thank you for talking to us about Trusted AI and bias in AI. A fascinating topic that we're all surely going to be impacted by in one way or another. Thanks again, everybody.

Closing 39:03 

Thanks for listening to the MLSecOps podcast brought to you by Protect AI. Be sure to subscribe to get the latest episodes and visit MLSecOps.com to join the conversation, ask questions, or suggest future topics. We're excited to bring you more in depth MLSecOps discussions. Until next time. Thanks for joining.

Additional tools and resources to check out:

Protect AI Radar

Protect AI’s ML Security-Focused Open Source Tools

LLM Guard - The Security Toolkit for LLM Interactions

Huntr - The World's First AI/Machine Learning Bug Bounty Platform

Thanks for listening! Find more episodes and transcripts at https://mlsecops.com/podcast.
