
The MLSecOps Podcast

AI Audits: Uncovering Risks in ML Systems

May 03, 2023 26 min read

Shea Brown, PhD, speaks with The MLSecOps Podcast about "AI Audits: Uncovering Risks in ML Systems."

Shea Brown, PhD, explores with us the “W’s” and security practices related to AI and algorithm audits. What is included in an AI audit? Who is requesting AI audits and, conversely, who isn’t requesting them but should be?

When should organizations request a third party audit of their AI/ML systems and machine learning algorithms?

Why should they do so? What are some organizational risks and potential public harms that could result from not auditing AI/ML systems? 

What are some next steps to take if the results of your audit are unsatisfactory or noncompliant? 

Shea Brown, PhD, is the Founder and CEO of BABL AI and a faculty member in the Department of Physics & Astronomy at the University of Iowa.


Transcription

Introduction 0:08 

Welcome to The MLSecOps Podcast presented by Protect AI. Your hosts, D Dehghanpisheh, President and Co-Founder of Protect AI, and Charlie McCarthy, MLSecOps Community Leader, explore the world of machine learning security operations, aka, MLSecOps. 

From preventing attacks to navigating new AI regulations, we'll dive into the latest developments, strategies, and best practices with industry leaders and AI experts. This is MLSecOps.

Charlie McCarthy 0:38 

Hi everyone, and welcome back to The MLSecOps Podcast. My name is Charlie. I'm here with my co-host, D, today, and we are talking to our guest, Dr. Shea Brown. Dr. Brown is a faculty member in the Department of Physics and Astronomy at the University of Iowa and founder and CEO of BABL AI. Shea, welcome to the show. 

Shea Brown 1:02 

Hello, thank you for having me. 

Charlie McCarthy 1:04 

Let's start by learning about your journey to the AI and machine learning space. So your background is in physics and you have a PhD in astrophysics from the University of Minnesota. Talk to us about how those disciplines intersect with AI/ML and what drew you to machine learning. 

Shea Brown 1:23

Yeah, so they don't necessarily intersect. And I think the thing that really got me into artificial intelligence and machine learning is that the volume of data that we get from telescopes nowadays is so huge that it's literally impossible for us to have humans looking through that data. And this wasn't always the case. It's kind of a relatively recent advancement. 

But now if, I mean, if we had everybody on Earth staring at that data for their entire lives, we wouldn't be able to actually look through all of it. And so it kind of necessitates some sort of automation. And people have been doing automation for a long time, but machine learning has become so powerful that you can't ignore it as a possible option for sorting through those data. So I got into machine learning for my research and trying to automatically detect and classify things in the sky. 

Charlie McCarthy 2:20 

And so then, what about BABL AI? Tell us about your motivation for founding that company and a little bit about the company's mission. 

Shea Brown 2:28 

I’ll be honest, my original motivation was I just wanted to do something with AI. I knew it was the future. I was excited about it and I figured starting a company would be a great way to do that. And my original intention was to actually sort of build artificial intelligence to operate things like space stations, or operate robots on Mars, and that kind of thing. And I didn't know anything about business. And so I learned about this idea of customer discovery, which is go out and talk to people and figure out what they need. 

Well, as I went out and started talking to people, I branched out outside of the aerospace industry and started talking to companies that use AI. And I quickly realized that there was this really urgent problem out there. And this was around the time, 2018, around the time of Cambridge Analytica. ProPublica had come out with their piece on recidivism algorithms being biased. And I sort of pivoted and decided that I think what was urgently needed right now is not robots on Mars, but people to start looking out for how we use, and develop, and deploy AI here on Earth. 

And so BABL AI became – and it stands for Brown, which is my last name, and then the Algorithmic Bias Lab. That's the ABL of BABL. People often ask why because it's kind of a random acronym. And so algorithmic auditing became the thing that I felt like my brain–the scientific part of my brain–could really tackle well. And I happen to know a lot of ethicists, and so it felt like a very natural pairing. 

D Dehghanpisheh 4:13 

So with that Dr. Brown, bias in AI and the need for understanding these AI systems–AI auditing, if you will–is entering the mainstream conversations, thanks to ChatGPT, large language models, and it's really kind of getting a lot of buzz. And you talked about the recidivism predictive rates that ProPublica was talking about. What are some harms that you're concerned about right now that you think more AI auditing needs to be applied towards?

Shea Brown 4:46 

Almost anything that you can think of, there is a potential for harm. And so, I'll speak very generally real quick, because I think oftentimes people will focus on a particular use case. For instance, recidivism algorithms, or AI used in hiring processes that could be biased, or giving people loans. Those are all things where it's very obvious that if you have a biased algorithm, it could cause harm, could discriminate against people. And if you have an algorithm in some cases, like facial recognition, it doesn't even have to be biased–and often they are, of course–it could simply be misused and cause harm. 

But I think in general, if you take a process that has typically been done by a human in the past, and then you try to automate that process, almost by definition you have these filter effects that happen. Because you're taking something where you have this very holistic decision that was made by a human, and you have to squish it down and conform it into some automation–select certain features, select information, and turn it into an automated process–and that is going to give rise to some sort of biases or some sort of filter effect. Almost without fail. You can't avoid it. 

And so I think all of these systems, whether they use machine learning or not, those sorts of things have to be considered, and typically they haven't been, really, in any kind of rigorous way. So in my mind, almost any automated system that makes some consequential decision about a human's life–whether they get a loan, or a job, or social services–has to be inspected through this lens of potential for harm.

Charlie McCarthy 6:34 

Switching from what we might consider to be public harms to more organizational risk. What types of risks do organizations face without audits and risk assessments of these AI/ML systems and/or their automated decision making tools that are powered by algorithms? 

Shea Brown 6:53 

The biggest one is probably reputational risk–the biggest one meaning the first one. I think the biggest risk that emerged early on was reputational risk. Someone would write something about an algorithm that was doing something wrong, and that was not good for the companies operating those algorithms. And that was sort of the first wave of interest in doing something about this. 

Recently, there has been another wave of liability risk–people are starting to sue companies for this–and now regulatory risk is rearing its head, because the EU is very active, there are local laws in the United States, and federal laws and federal enforcement agencies are starting to address these issues. And so for an organization, these are kind of the top-level risks. 

There's also a whole lot of things that could happen on the ground, so to speak, where you try an AI initiative, you spend a lot of money on it, it turns out to be not good for a variety of reasons, and you end up wasting a lot of money. And so there's some sort of organizational efficiency and money lost basically, for not having considered potential downsides of these tools as opposed to just focusing on the upside. 

D Dehghanpisheh 8:11 

You've talked about reputational risk. You've talked about, let's call it brand risk, or what used to be called “New York Times front page risk,” right? And you also talked about the risk, if you will, that capital, time, and human capabilities may largely be wasted in the absence of these types of audits. 

I guess that brings back the center question of who is asking for AI audits? Especially since they're not necessarily being mandated yet. Who in organizations are raising their hands saying, “You know what, we absolutely need this, we must invest in this,” whether it's time, people, technology, or something else?

Shea Brown 8:55 

Yeah, good question. So people are asking for them, and it kind of comes with a similar wave. So the first wave that we saw of interest, where people really wanted this done, was coming from that reputational wave. So people who had reputational pressure, it was very obvious that they had to do something about it in a way that was thoughtful, and increased transparency. And auditing or assessments of their algorithms by external parties is kind of a clear way to go. And now there are laws like New York City Local Law 144, which requires auditing for hiring algorithms– that's forcing a lot of people. A lot of people we're working with right now have to comply with that law and need to get audits done. 

But what we're also seeing is procurement. So, these bigger companies, the enterprise companies, Fortune 50 companies which have had some reputational risk and are seeing the regulatory risk on the horizon, are now, in the procurement process, asking tougher questions of their vendors. So, someone who produces an AI and is trying to sell it to an enterprise organization is going to have to answer some tough questions: have you tested for bias? Have you taken care of potential adversarial or cybersecurity risks that are unique to AI? And those sorts of things are being required–these large enterprises are requiring them of their vendors. And so we're starting to see this market push of trying to put the ball in the vendor's court and have them actually do something.

D Dehghanpisheh 10:42 

So that makes me think that there are really, kind of, four corpuses, if you will, of people that are asking for this. You have the people in charge of protecting the company's brand and image–so you've got the chief marketing officer, the head of sales–and the more digital channels they have that are exposed, the more risk there could be. You have the finance aspect and the CFO who's saying, hey, if my algorithms that help with financial reporting, or filing my 10-Ks, or closing the books are off, I've got risk there. 

You've got kind of the hiring policies where maybe there's some legal components. You've got this weird intersection of almost chief legal counsel and chief people officers or human resource executives. And then you have one of the things you talked about, which is the system itself, AI and ML security. Is there anyone in particular that you think is leading the charge here, or is it largely dependent on the business you're playing in, the regulatory environment that you're playing in, and your technical maturity? How's that balance? 

Shea Brown 11:49 

Yeah. It's a tough question to answer because it has been so heterogeneous. Sometimes it's a product owner, a product person who comes to us and they've had their clients reach out to them and say, we're worried about X, Y, or Z about your product. And then they reach out to us or people like us to try to help them. A lot of times we see chief general counsel. We'll see the legal side, where they're clearly worried about liability and regulatory risk. And so I'd say if I had to count the most common one, it would be the legal teams. 

D Dehghanpisheh 12:32 

Interesting. And is that a function of, I like to say that there's kind of this Maslow’s hierarchy of capitalism, which is fear, greed, or regulation. So, is it the regulatory instinct that is coming down and pressuring that?

Shea Brown 12:50 

Yeah, I would say yes. I think there's enough rumbling, in all the jurisdictions that these big companies care about, of regulations coming down the pipeline: the EU AI Act, and at the federal level, all of the enforcement agencies–the EEOC, the FTC. They're all talking about this as a priority for them. 

D Dehghanpisheh 13:20 

Yeah. So we've seen things from the EEOC, and the FTC in particular, about how you can't stand behind the vendor, saying, oh, well, it's the vendor of my AI. It's like, no, you chose to deploy that. There's a lot of debate there over where that responsibility sits. How does that work its way back into, say, the technical supply chain of resources that are required, especially in the era of ML, where so much of it is dependent on open source technologies? How do you think about that?

Shea Brown 13:52 

It's hard to think about that. I think that there's a lot of confusion about who's responsible and who is going to require what of whom. And so, we've seen lots of cases where we've worked with vendors who have a product. They are relying on, let's say, an API from some big organization, or potentially open source. The people who created that sort of model have done no testing whatsoever. They have very little documentation about the training data. 

And so these vendors are kind of forced to grapple with: how do we audit this? How do we test for the right kinds of things when we have missing pieces of information? And then they're getting pressure from their clients to start doing things. And there's no top-level guidance on what you can ask of people in the supply chain. 

D Dehghanpisheh 14:57 

Interesting. So there are a couple of standards. I know that the IEEE is working on a purchasing kind of framework. There are some things happening at the federal level, and then there are things like Executive Order 14028 for software supply chains. But it feels like all of these things are almost independent from one another in terms of how you understand what really is a need for a machine learning bill of materials, if you will, with some auditability that is held across that supply chain, so that you can figure out who does what, where, when, and how, right? Am I interpreting that correctly?

Shea Brown 15:37 

Yeah, I think you're absolutely right. Part of the problem is a lot of the people who are working on this don't actually understand the interdependencies there and how that supply chain actually works. That's one problem. And so they don't necessarily know on the ground what are the urgent issues in terms of what kind of transparency you need and what kind of obligations and contractual clauses you might want to have or what kind of information needs to flow down that chain to allow for auditability or allow for appropriate testing and mitigation of risk. 

And so you do get a lot of things coming from very different angles, requiring things at different levels. Some are really high level. Like, transparency is a great principle that you want, but what does transparency mean in detail? And so, there's a ton of confusion, and I think it's probably the businesses themselves who are sorting through a lot of how that actually– 

D Dehghanpisheh 16:37 

You mean like lines of businesses or the enterprises? 

Shea Brown 16:40 

Yeah, like business at all levels. People who are ultimately going to be deploying the tools, the people who are creating and packaging the tools, the people who are creating and developing components that will go into those products, and the procurement people who are working sort of between the lines, trying to put this all together. 

Charlie McCarthy 17:07 

Okay, so we've talked about who is requesting these third party audits, or who should be. Let's switch gears a little bit and talk about what they should expect once they hire a third party group to come in and do this for them. Shea, on the BABL AI website there are a number of fantastic videos, resources, and talks that you give during Lunchtime BABLing that break down an AI audit and explain some of BABL AI's capabilities. You talk about functions of a process audit, an algorithm risk and impact assessment, and a responsible AI governance gap analysis. Are these things independent, or do they work together? 

Shea Brown 17:54 

Yeah, so they are independent. Let me go through the most superficial and then go down into the deeper levels. So an AI governance gap analysis is a consulting service. So it's not an audit, per se, in the sense that we don't have– At the output of that, we are going to give you recommendations, which is not typically what we want to do with a more formal third party independent audit. 

So what we're basically doing is because we've done a lot of research and we've been in a lot of companies who have implemented AI governance, what we're going to do is we're going to look at what you currently have in terms of the way you're governing, the way you're testing and monitoring your AI, and just identify where you have potential gaps and ways in which you can fill those gaps based on your industry and the use case that you have. So, that's a consulting service. It's fairly high level, and it's really just to get people started on where do we fill those gaps? 

We have a compliance audit, which is what we call a process audit. It's really a criteria based audit, which is a third party independent assessment. And we have criteria that we use, we have our own criteria. It depends on the law. It depends on what you're being audited for, but it's really an assurance mechanism. So what we do, essentially, is there's a list of things which you need to have done as an organization to manage, govern your system, test your system. Those things are spelled out very clearly. And what we do is we come in and we assess whether you've done those or not. 

So typically, you will submit documentation about what you've done to test your system for bias, for instance, or to govern it, or the risk assessments you might have done as an organization. And our job is to assess that documentation, to figure out whether it actually fulfills the criteria. And then the next important part is claims verification. We have to then go in and make sure that you actually have done what you say you did. And then in the end, we essentially submit a letter saying that we have done this engagement and we have reasonable assurance or limited assurance that you have actually done what the criteria say you've done. 

The risk and impact assessment is something that's sort of our bread and butter, the algorithmic risk and impact assessment. And that's a very deep dive, typically very expensive process where we actually go in and we will test your system sort of by hand, right? If you don't have data that you need to test it, we'll find the data, we'll figure out where we need to get data to test your system. We'll be looking at all the stakeholders. We'll be talking to stakeholders. We'll be interviewing your development team. We'll be interviewing people, the end users who are using it, people who it's getting used on. 

And the main goal there is to do a holistic assessment of what is the potential risk– not just ethical, but reputational, liability, safety, regulatory risk associated with your use case. And then we're going to give a lot of recommendations. And so, that's not an audit per se either. It's a consulting engagement. But it's meant to really uncover the places where your algorithmic system, the sociotechnical system, could fall over and cause a lot of problems. And so that's the range of things that we do. And recently it's been a lot more of these sort of compliance based audits because of regulation. But the consulting is still really critical. We just can't do it with the same clients, of course. 

D Dehghanpisheh 21:44 

So, it sounds like there's a very intensive, for lack of a better term, human in the loop element to all of this. And I'm curious how we think about that manual process, those third party auditors. How do they objectively evaluate an AI system and how do they do that on the elements of, say, performance, accuracy, fairness, and even security? Like, what's the benchmark that a third party auditor is using to make any stated claims in that? 

Shea Brown 22:22 

Yeah, so that's the million dollar question. And that's where there's a serious gap, because we don't currently have industry-wide accepted benchmarks for almost anything. Now, there are a lot of frameworks–like you mentioned, IEEE is doing a lot of work on this. I'm a fellow at a nonprofit called For Humanity whose main goal is to come up with these sorts of benchmarks and criteria. But none of them are accepted globally, or even regionally, as the gold standard. The same goes for the technical testing–the things about accuracy and robustness, and whether you have the appropriate security measures or resistance against adversarial attacks. Those aren't really decided upon. 

And so the way we at BABL are filling that gap is that we have to rely on some standard. And so, it could be different for different organizations. And it might be that, okay, you decide that this is, let's say, NIST, the NIST risk management framework. You want to hold yourself to this standard. Well, what we do then is say, okay, you've held yourself to this standard, demonstrate to us that you have. And what we're going to do is come in and make sure that you have done what you say you're going to do. That's the classic assurance sort of mechanism. 

D Dehghanpisheh 23:53 

So in other words, it's in consultation. So at the start of this journey, it's in consultation with your clients and customers who then say, hey, you know what, this is the framework that we're using. And it's kind of like saying, hey, in the absence of a standardized notion, here's actually something that gives us a starting point. And then you work backwards from that. That's the benchmark that you're then establishing. Are they complying or not complying and where are they in that space? 

Shea Brown 24:20 

Yeah. And that's not always the case. So for instance, for New York City, it's very clear. New York City is the one that is governing hiring algorithms. They have a very clear idea in the law of what it takes to do a bias audit, for instance, and what needs to be disclosed in those cases. And the criteria that we're using are based on those legal requirements. And so there, they don't have a choice. And when we audit for that particular law, they have to have fulfilled everything that we said and that, based on the law, is required. And it has some metrics, like the four-fifths rule is in there, but it's not necessarily a requirement. I think in the ideal world, there would be something more like financial accounting–really hard-coded limits that everyone can follow. And so you–

D Dehghanpisheh 25:20 

Like a GAAP standard for AI, with a follow-on regulatory and compliance structure, like a Sarbanes-Oxley or something, that is kind of bolting up the standard of how you measure–the measurement, the calipers themselves–and then how you're reporting on said measurements, right? If you think about it, from– 

Shea Brown 25:37 

That's where we want to go, eventually. The auditors want to go there because that limits our risk. And the companies will eventually want that because they want to know what to do. At the end of the day, they just want to comply with the law and they want to do the right thing, for the most part. And it's much easier when it's very clear exactly what needs to get done. 
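(A quick aside for readers: the four-fifths rule mentioned above compares each group's selection rate to the highest group's selection rate, and ratios below 0.8 are commonly treated as evidence of adverse impact. Below is a minimal sketch of that calculation, using purely hypothetical applicant and selection counts.)

```python
# Minimal sketch of the four-fifths (80%) rule check referenced above.
# All counts are hypothetical, for illustration only.

def selection_rate(selected: int, applicants: int) -> float:
    return selected / applicants

def four_fifths_check(rates: dict) -> dict:
    """Compare each group's selection rate to the highest group's rate.
    Ratios below 0.8 are commonly treated as evidence of disparate impact."""
    best = max(rates.values())
    return {group: rate / best for group, rate in rates.items()}

rates = {
    "group_a": selection_rate(selected=48, applicants=100),  # 0.48
    "group_b": selection_rate(selected=30, applicants=100),  # 0.30
}
ratios = four_fifths_check(rates)
for group, ratio in ratios.items():
    flag = "below 0.8 threshold" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
# group_b's ratio is 0.30 / 0.48 ≈ 0.62, which would fail the four-fifths test.
```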

Charlie McCarthy 25:58 

After one of these audits, Shea, suppose something like bias is detected or the results of the audit are unsatisfactory, what are the next steps for addressing things like that after a company brings in a consultant? 

Shea Brown 26:13 

That's also a tough question and something that we're struggling with. So in the case for these really strict compliance based audits, the only answer really is you got to go fix it. So, if you have a deficiency in a particular area, there's no way around kind of going and getting that fixed. Now, speaking of Sarbanes-Oxley, we do follow Sarbanes-Oxley, and so we're very strict about not consulting with the same client that we're auditing. We don't let ourselves do that. 

And so part of the problem is that there just aren't that many auditors in the world currently. Algorithmic auditors, let's call it that, because it's a relatively new thing. And there aren't a huge number of consultants out there that are specialized and knowledgeable enough that have the technical skill and also have the AI ethics and more broad kind of governance skills to come in and help these organizations. 

So, the ecosystem is young enough that it's actually kind of difficult to point an organization to people who can go help them that aren't us, for instance. It's not to say that they aren't out there. It's just that there's a limited number, and there are many, many companies. So, that's actually a problem. We need more auditors, and we need more pre-audit service providers who understand the standards, or at least the emerging best practices, and who can help these organizations get on the rails and right the ship, so to speak. 

D Dehghanpisheh 27:48

So then, along those lines, that challenge of kind of limited talent, certified auditors, et cetera, how do we get to implementation? And I guess what I'm asking is how should we be thinking about what's programmatic versus what is manual and human inspected? From your view, what should be programmatic in these spaces? 

Shea Brown 28:09 

Programmatic should be– Monitoring, and testing, and the auditability of your machine learning systems should be programmatic. 

D Dehghanpisheh 28:23 

Meaning like the programmatic attestation of that supply chain and how it was tested. Going, I would say–not to put words in your mouth or take liberty with your views–but I'm assuming it's like, you have to go well beyond a model card, is what I kind of hear you saying. 

Shea Brown 28:41 

Yeah, a model card is good, but that model card has to be verified. So, those are words on a piece of paper or a PDF, but the things that we need to look for are: you said that you're using Model 1.0, and that model used training data–whatever you want to label that training data–and that training data was collected at this time, and you were using an API from this organization. 

We have to be able to see all of that. Who signed off on it? Who said that was okay? When did that get deployed? How much production data went through it during the period of time that you said it went through? There are a lot of things to track, and there's no way of getting around that eventually being automated and made easy for auditors to follow. 
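(To make the list above concrete: a verifiable provenance record for one model version might capture fields like the following. This is only an illustrative sketch; the field names are assumptions, not any particular standard or BABL AI format.)

```python
# A hypothetical provenance record sketching the fields described above;
# names and values are illustrative, not drawn from any real standard.
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

@dataclass
class ModelProvenanceRecord:
    model_version: str                 # e.g. "1.0", as stated on the model card
    training_data_label: str           # which dataset snapshot was used
    data_collected_on: date            # when that training data was collected
    upstream_api: Optional[str]        # third-party API or base model relied on
    approved_by: str                   # who signed off on deployment
    deployed_on: date                  # when the model went into production
    production_requests: int = 0       # volume of production data processed
    notes: List[str] = field(default_factory=list)

record = ModelProvenanceRecord(
    model_version="1.0",
    training_data_label="applicants_2022_q3",
    data_collected_on=date(2022, 9, 30),
    upstream_api="vendor-llm-api",
    approved_by="governance-board",
    deployed_on=date(2023, 1, 15),
    production_requests=125_000,
)
print(record)
```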

D Dehghanpisheh 29:34 

So, then let me put out a hypothetical, okay? You've got this explosion of large language models and API and service layers like ChatGPT, which is the front end of the GPT foundational model. And you have a whole bunch of other things that are becoming increasingly more API services. And the level of scale that they're getting to and the level of impact that they're having is going to be massive. And it seems like it's only a matter of time before an automated auditing tool that is built on some large language model foundation comes to market, if it hasn't already. 

Those might be in the control of the vendors themselves, right? So now you've got kind of the old Dracula in charge of the blood bank problem. How do you prevent that? And are you worried about that at all as you think about auditing of AI systems, where inherently more of the AI functionality is drawn back into these massive players and the question of independence becomes now loosely affiliated? 

Shea Brown 30:43 

Listen, I think there's no way of getting around this kind of automation. The real question is: how do we build an infrastructure of trust around those systems? And, okay, you could have an auditing system or a monitoring system which is AI monitoring AI, or machine learning monitoring other machine learning. So what we have to do is figure out what is the sufficient accuracy, robustness, and accountability of this AI that's doing the monitoring. What does that have to be? What bar has to be set for that in order for me to trust the output for these other AIs? 

And it will not be a house of cards all the way up. At some point, there will be people and those people are going to be shepherding these systems. And they could be AI auditors, they could be regulators, they could be at some point, people within organizations who are doing internal audit and keeping track of these sorts of things. And I think eventually there will be a few people who are managing many systems. And those systems have to be designed in such a way that that can happen because that's what efficiency is, and that's what the market will drive. 

But right now, we're at the stage of that first layer. Because when I first started BABL, everybody said, why aren't you building a platform? Why aren't you automating this? Why aren't you having some platform? And that makes total sense. I'd probably have a lot more VC money right now if I did that. But what I said is, I know that's where we're going. But right now, people have to do the hard work to figure out what the hell we're doing first, and what's effective, and what's going to work. 

And then after that, we automate that in a kind of an incremental, trustworthy way. And then we test that. We test that. We test that. We get confidence in that. We have trust. And then we maybe go up another level. Let's try this. Test it, test it, automate and have trust. And so this is eventually going to happen. And I'm not worried about it per se. I just think that we need to do it in a way that does build that trust and that trustworthiness in the whole infrastructure. I don't know if that makes sense. 

Charlie McCarthy 33:14 

It does. Yeah. And my follow up question would be building the trust into the infrastructure. It starts with the people, right? And it sounds like there's already, maybe, an insufficient number of auditors in the market. So, when we add people to this process, or presumably as more people seek to become certified, to become auditors, how can we ensure that auditors remain independent and objective throughout the auditing process? 

Shea Brown 33:44 

Well, I think we got to do it the same way that we do it with financial auditing. And I think a lot of people criticize this because they say, look at all the problems we've had with financial auditing. And, yeah, there's been problems with financial auditing. And you're not going to come up with a system that can't be subverted in some way by a bad actor. That's just not going to happen. But such that it is, the financial auditing system is quite successful, modulo those limitations and a few bad actors, at maintaining independence. 

And so, what we have to do is think about incentive structures and make sure that the incentive structures are such that you are going to limit the likelihood of collusion or people working together to subvert the system. And so financially, if I'm going to audit you, I can't get any money from you for anything other than the audit. And because of that, I have to be very, very careful and strict about whether I pass you or not, or whether I give you a stamp of approval, so to speak. Because I'm not getting a huge amount of money, basically, I'm wearing all that downside risk of false attestation. 

And so it's going to be in my interest to fail you if you are not doing something right, because otherwise I'm not going to be in the game anymore. 

D Dehghanpisheh 35:14 

It's the Arthur Andersen/Enron timeline now in the age of AI, right? That makes a lot of sense. So given the need for more auditors, Dr. Brown, and this need to have a variety of skills, what is needed for someone who might be interested in becoming an AI auditor? What is needed? How do they go about the process of becoming an AI auditor, and where do they start? 

Shea Brown 35:40 

Yeah, really great question. I think this is something that we're struggling with as an organization ourselves. One thing you can't avoid, which I think a lot of people wish that they could avoid, is having the technical understanding of how these systems work. That's a critical piece, because you need to be able to speak the language that developers speak. You need to understand what their concerns are, what their typical workflows are like, and where the risks lie in those algorithmic systems. And so that's sort of component number one. 

The second component–or really there's a broad range of other components–includes the ability to identify risk in a system. So, you have to understand where things have fallen over in the past and where harm has happened. And so that's a critical skill which takes time to get. And you have to do– currently, the path for that is mostly your own research. Increasingly, there are more and more courses, and potentially certifications, around things like AI ethics that will have a survey of typical algorithmic harms. 

And then there are other things, like understanding AI governance, or governance in general within organizations. And then the sort of on the ground auditing skills, things like independence. How do you actually engage with clients, and how do you engage with systems? And then technical testing, which I think is different than just being able to use machine learning or develop machine learning, is how do you interrogate a system for bias, for accuracy? And that's something that often gets missed in typical data science education. 

When you produce a number, what's the uncertainty in that prediction? It's not a typical thing that a machine learning boot camp will talk about. How do you estimate that? How do you estimate your certainty in the results and the robustness of it? And so, that's a starting point. And it's not easy. Like, where do you start? That's a good question. I mean, I can turn that back to you. 
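(For readers wondering what estimating that uncertainty can look like in practice: one common approach is to bootstrap a confidence interval over evaluation results. The sketch below is illustrative only, not a method prescribed by Dr. Brown.)

```python
# Minimal sketch: bootstrap a confidence interval for a test-set accuracy,
# one simple way to answer "how certain are we in this number?"
import random

def bootstrap_accuracy_ci(correct_flags, n_resamples=2000, alpha=0.05, seed=0):
    """correct_flags: list of 0/1 indicating whether each test prediction was right."""
    rng = random.Random(seed)
    n = len(correct_flags)
    accs = []
    for _ in range(n_resamples):
        sample = [correct_flags[rng.randrange(n)] for _ in range(n)]
        accs.append(sum(sample) / n)
    accs.sort()
    lo = accs[int(alpha / 2 * n_resamples)]
    hi = accs[int((1 - alpha / 2) * n_resamples) - 1]
    return sum(correct_flags) / n, (lo, hi)

# Hypothetical evaluation results: 870 correct out of 1000 predictions.
flags = [1] * 870 + [0] * 130
acc, (low, high) = bootstrap_accuracy_ci(flags)
print(f"accuracy {acc:.3f}, 95% bootstrap CI [{low:.3f}, {high:.3f}]")
```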

D Dehghanpisheh 37:51 

The MLSecOps Podcast is probably a good place to start, with conversations like this one with Dr. Shea Brown. 

Shea Brown 37:56 

That's right. And we actually are offering what's called an AI and Algorithm Auditor Certificate Program, which is starting this May, which I'm totally nervous about. And we're really trying to crack this code of how do we take people from a variety of backgrounds who are interested in this–and there are many, many people who are interested–and upskill them so that they have, like, a baseline level that they can kind of get entry level positions in this sort of industry, which is just the industry itself is just growing. And so it's an exciting time to be doing this. Also, we're standing on the house of cards as we're building it, so we need a lot of glue. 

D Dehghanpisheh 38:44 

So along those lines, Dr. Brown and again, thank you for taking the time to be with us today. Fascinating conversation. As we leave our listeners and readers, if you're reading the transcript, what is the call to action that you want to give people who are reading or listening to this? 

Shea Brown 39:04 

Well, so I would say if you're an organization that uses AI or develops AI, my call to action would be if you have not started on this journey of understanding what AI governance is, what auditing might be useful for you, or risk assessments, you need to start now. And there's no way of getting around it, the regulations are coming, and so you need to figure out some path that is going to get you to start that journey. 

That would be my call to action for organizations. And for individual people who might be thinking about this as an industry I would say, yes, do that, because everything is getting automated now. And the one thing you're not going to automate away are the shepherds of these algorithmic systems. The people who are looking out for everybody else and making sure the harm doesn't get propagated. That is a job that you aren't going to automate away, at least not in the near term. And so, if you're thinking about job security, getting into this field is something that just makes sense, in my opinion.

D Dehghanpisheh 40:22 

Well, everybody, Dr. Shea Brown from BABL AI, astrophysicist extraordinaire. Thank you very much for joining us on The MLSecOps Podcast. And until next time, everybody, be well! 

Closing 40:10

Thanks for listening to The MLSecOps Podcast brought to you by Protect AI. Be sure to subscribe to get the latest episodes and visit MLSecOps.com to join the conversation, ask questions, or suggest future topics. We're excited to bring you more in depth MLSecOps discussions. Until next time, thanks for joining.

Additional tools and resources to check out:

Protect AI Radar

Protect AI’s ML Security-Focused Open Source Tools

LLM Guard - The Security Toolkit for LLM Interactions

Huntr - The World's First AI/Machine Learning Bug Bounty Platform

Thanks for listening! Find more episodes and transcripts at https://mlsecops.com/podcast.


Supported by Protect AI, and leading the way to MLSecOps and greater AI security.