Exploring Generative AI Risk Assessment and Regulatory Compliance
Audio-only version also available on Apple Podcasts, Google Podcasts, Spotify, iHeart Podcasts, and many more.
Episode Summary:
In this episode of the MLSecOps Podcast we have the honor of talking with David Rosenthal, Partner at VISCHER (Swiss Law, Tax & Compliance). David is also an author & former software developer, and lectures at ETH Zürich & the University of Basel.
He has more than 25 years of experience in data & technology law and kindly joined the show to discuss a variety of AI regulation topics, including the EU Artificial Intelligence Act, generative AI risk assessment, and challenges related to organizational compliance with upcoming AI regulations.
*In this transcript, mentions of the "EU AI Act" or the "AI Act" refer to the EU Artificial Intelligence Act.
Transcript:
[Intro] 00:00
Charlie McCarthy 00:08
All right. Welcome back everybody to the MLSecOps Podcast. Thank you for joining us today. I'm excited to be jumping back on to host again. I'm your MLSecOps Community leader, Charlie McCarthy, and today I have the privilege of being joined by and introducing a couple first timers to the show, although definitely not new to this AI rodeo. So I'll start first by introducing my co-host for this episode. Her name is Alex Bush. Alex is Head of Marketing at Protect AI. Alex, thank you so much for joining me to co-host today.
Alex Bush 00:40
Yeah, thanks for having me, Charlie. I'm really excited to be here. And as Charlie said, I'm Head of Marketing at Protect AI and looking forward to this conversation with our guest. David, I'd love it if you could give us a quick introduction of yourself and a brief overview of your role at VISCHER.
David Rosenthal 00:57
Yes, sure, sure. I'm actually a lawyer by training. I have been working on tech law, data law, everything that has to do with bits and bytes for over 25 years, first at another law firm and now here at VISCHER, where I head the data and privacy practice and some other things going on here. What may be a bit special in my case is that I started off as a journalist and as a software developer. So I also have a technical background, although I'm happy that I'm not responsible for security at VISCHER and don't have to deal with those kinds of issues here, but I do of course like these technical aspects. The same is true with AI, where I have actually jumped into the inner workings of machine learning models, et cetera, because I have not only a legal interest but also an interest in technology. And as I had to find out in the case of artificial intelligence, a lot of it is about mathematics.
Charlie McCarthy 02:04
Fantastic, thank you. So let's dive right in. I think one of the first things we wanted to level set on for today, David, is the EU AI Act. I know you're based in Zurich, and Switzerland is not a member of the European Union, but I think you must be very familiar with this act, because eyes all around the world have been watching it as a landmark body of work that will presumably influence a lot of AI regulation moving forward.
So could we start with maybe you walking us through the classifications? I think they're unacceptable risk, high risk, limited risk, and minimal risk, and maybe explaining for our viewers what examples might be in each of those categories or giving us a bit of an overview there.
David Rosenthal 03:06
Yeah, sure. So I think the first point to note with the EU AI Act is that it is not a general regulation of AI. A lot of the things we talk about, for example that AI systems should not discriminate against people, should not have bias, these general concerns are not the core of the EU AI Act. The EU AI Act is more like something that wants to regulate certain higher risk products, so to speak, and to restrict how they can be put on the market. So it has a much narrower focus than many people believe.
There are, as you rightly said, different categories in there. There are a few applications where the lawmakers basically say, we don't want to have that, and so it's simply prohibited. That could be, for example, social scoring. That's one of the examples. But what may be more important for most people is that there is also a prohibition on using, for example, emotion recognition at the workplace. So if the boss at a company wants to know whether people are actually happy and wants AI to find that out, then that is not okay, and that is not something that may be used. There are, of course, always a lot of questions around that. Is it only about having biometrics involved, or would it also cover, for example, a sentiment analysis of text? I personally believe rather not, but those are the kinds of questions that come up. So there is this prohibited category. That's one part, and you will always have to test whether your application falls into one of these categories.
David Rosenthal 04:53
And the other big bucket is basically the high risk application bucket. Those high risk applications are a defined list, so it's a conclusive list. It's not something where you have to make an assessment of whether it is high risk or not; it's basically a defined list of use cases, and if you are in those use cases - there are still some exceptions - then basically you have to follow certain rules. Examples of those high risk applications or use cases would again be, for example in the workplace environment, assessing job candidates. So if you apply on a company website and put in your CV [Curriculum Vitae or résumé], et cetera, and AI then automatically assesses it - which happens today at many companies - that would be a high risk application. Or if a bank would use an AI to judge your creditworthiness, that would be a high risk application.
David Rosenthal 05:54
There are also a lot of high risk applications and prohibited applications with regard to government use cases. But I think that's less of an issue for us here, because we deal more with commercial setups, and there are some other cases, but it's not a very large list. One type of application also included in that list is all those products that are already very strictly regulated today. So if you have a medical device, for example, that needs to be certified and have the CE marking in Europe, and you then use AI for it, then that also automatically becomes a high risk application. And if you are in that category, then you have to do a whole lot of things, which many would say, we already do that, because it's nothing special. And the AI Act is very generic in these requirements.
David Rosenthal 06:51
For example, it says you have to have a quality management system. It says you have to have risk assessments. Of course, in the area of security, it says the system has to be resilient, it has to be tested, it has to be secure, you have to do these red team approaches. So all the kinds of things that you would say are basically good practice are in there, and you need to document it, and you need to have certain things registered. So there is also quite a lot of red tape around it. And that mainly applies to those who provide those systems on the market; it applies to a much lesser extent to those who use them. And that's basically the core. Then the AI Act also has, for selected types of applications, some additional obligations, but those are, I would say, only transparency obligations.
David Rosenthal 07:50
So for example, if you have a chatbot that interacts with people, then people have to understand that this is AI, which in most cases they will understand anyhow. Or if you generate synthetic text or images, then it has to have watermarking. And some people say, we don't know how to do that with regard to text, because it can be easily taken out - these kinds of questions. There you see that, if you look at the AI Act, they have basically seen a lot of concerns about this technology, and they have said, okay, if we have concerns about whether a text has been machine generated, then let's just require the providers to watermark it and let them find out whether that's possible and how it works. So we have a set of transparency obligations, and that's most of it. There is a number of other things in there - for example, a literacy requirement, you need to train people - but that's essentially the core of the AI Act. So most products that you will have will actually not really be impacted. We also have some rules about models, but they are not all too strict, and that's somehow like a separate chapter that was also introduced only lately. So the expectation is that maybe 5-10% of the applications of a large corporation could be in that area, but it's not the majority.
Charlie McCarthy 09:23
Oh, interesting. Only 5-10%.
Alex Bush 09:26
Yeah. Yeah. It's interesting, and it's complex, and it's new, right? And you talked about the companies who have those high risk AI systems. So what are some of the first steps, David, that companies should take to ensure that they meet these standards, and are there any consequences that they might face if they don't?
David Rosenthal 09:47
Consequences are always in there; as you know with EU regulations, it comes with hefty fines. So up to 7% of your global annual turnover could be a fine, although the normal fine would be up to 3%, of course, and that's a bit lower than under the GDPR [General Data Protection Regulation]. I don't expect that fines will be relevant in the beginning. Don't forget, the [EU] AI Act will come into force on August 1st [2024], but most of the provisions will only be active or relevant after two years. So there's still quite some time to prepare. The prohibited ones will be relevant as of next February [2025]. And until then, I think what most companies should do, and what we see them already doing, is to get an understanding of where they are actually using AI. And also the question, what is AI?
David Rosenthal 10:49
Because if you look at the definition in the AI Act, it is very complicated. And you will find out, if you do a bit of a deep dive, that you already have a lot of AI running around or being used. That will probably be the initial step that companies need to take, and where we also see them finding it quite difficult: creating some kind of inventory and then being able to assess whether they fall under the AI Act or not, and if they do, in which role. So are you simply a deployer, meaning you use it, or are you a provider, which means you have to do much more? And this type of "put it in the right bucket" exercise is what I think companies should do first, after of course understanding what the AI Act is all about.
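To make that "put it in the right bucket" exercise a bit more concrete for technical readers, here is a minimal sketch of such an AI inventory in Python. The category and role names loosely mirror the buckets David describes; the example entries and the code itself are illustrative assumptions only, not legal advice or part of any official tooling.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical buckets loosely mirroring the AI Act categories discussed above.
class RiskCategory(Enum):
    PROHIBITED = "prohibited"      # e.g. social scoring, workplace emotion recognition
    HIGH_RISK = "high_risk"        # e.g. CV screening, credit scoring
    TRANSPARENCY = "transparency"  # e.g. chatbots, synthetic content
    MINIMAL = "minimal"            # everything else

class Role(Enum):
    PROVIDER = "provider"  # you place the system on the market: most obligations
    DEPLOYER = "deployer"  # you merely use it: fewer obligations

@dataclass
class AISystem:
    name: str
    description: str
    role: Role
    category: RiskCategory

# A toy inventory; in practice this would be collected from business units.
inventory = [
    AISystem("CV screening", "Automatic assessment of job applicants",
             Role.DEPLOYER, RiskCategory.HIGH_RISK),
    AISystem("Support chatbot", "Customer-facing Q&A assistant",
             Role.PROVIDER, RiskCategory.TRANSPARENCY),
    AISystem("Spam filter", "Mail classification",
             Role.DEPLOYER, RiskCategory.MINIMAL),
]

# "Put it in the right bucket": surface the systems that need attention first.
for system in inventory:
    if system.category in (RiskCategory.PROHIBITED, RiskCategory.HIGH_RISK):
        print(f"Review required: {system.name} ({system.role.value}, {system.category.value})")
```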
Alex Bush 11:49
Yeah, yeah, for sure. And there's obviously a lot of complexity there. So as we start looking at when these regulations come in, for those who are the providers of AI and ML who have to provide that inventory and make sure that everything is compliant, how do you see this act impacting the development of AI in the EU and beyond?
David Rosenthal 12:16
I think it will indeed have impact beyond the EU, because everyone who offers and creates AI tools will want to sell into the EU. And what I would also expect is that the big buyers of technology, the big companies, will come and say, you have to be AI Act compliant; we don't want to run into a problem just because you haven't done your homework. And that leads us exactly to what you say: do the homework. A lot of it is about what you would already do in a diligent manner. You will have to document more, you will have to show that you have asked yourself all these questions. And there are a lot of judgment calls involved in that, because the AI Act is very, very generic. So if it says you have to do adequate testing or things like that, then what does that mean?
David Rosenthal 13:12
And we will see a lot happening in that area, because the AI Act basically says there should be something like an AI Office of the Commission, an official body that will publish templates to help you do that. But I fear that will take quite some time and the templates won't be very practical. So there will be a lot of work for advisors. And I also expect there will be a lot of work especially for the audit firms and those companies that basically come to you and say: you're a company, you want our stamp of approval that you are "AI Act fine"; you spend a lot of money, and we give you that, for what it's worth. That's something we already see coming, even though I believe that many, many companies will probably spend more than is necessary in that regard, because they're just afraid and they don't know how to deal with it. That's something we always see.
Alex Bush 14:12
Mm-hmm. <Affirmative>
David Rosenthal 14:12
I have some concerns about startups. How will they deal with that? We already see that in other areas. Now, the people who have drafted the AI Act always say this is for innovation. I believe it's the opposite. I don't think it will make innovation easier; it will make it more costly. And they may come and say, we have things like the sandbox, so you can go to an authority early on and ask them to find out whether what you do is okay. From all the experience I have with authorities, that will be worthless; they won't be able to tell you what to do, or it will be terribly impractical. And so therefore, I think it will add cost. It may change the products to a certain extent; they may indeed spend some more time testing and these kinds of things. But I see a lot of costs coming.
Alex Bush 15:13
Mm-Hmm <affirmative> Yeah, for sure. And then it might stifle innovation even more, as you say, within the EU, and force people to source providers from other areas for -
David Rosenthal 15:24
Yeah. But if they source providers from other areas, they will also be subject, because the AI Act doesn't apply just to the people who sell in the EU. And you mentioned Switzerland before - we in Switzerland are not part of the EU, and the AI Act is not Swiss law. Nevertheless, all our clients basically say that if they are subject to the AI Act, they want to comply with it. And the AI Act basically says that if you provide a product on the EU market, then you are subject to it, full stop. And if you are using it outside the EU, so a Swiss company or a US company uses an AI product, and the output of it finds its way - intentionally, of course - to the EU, then you are also subject. So if you have an AI tool at a US company that creates customer letters, and you send those out to customers in Europe, then you are subject to the AI Act, whether you care or not. That's another question. But the bigger companies will care.
Charlie McCarthy 16:39
Yeah.
Alex Bush 16:40
That's great.
Charlie McCarthy 16:42
And to dive just a little bit deeper, jumping back to a really great question that Alex posed about these key compliance requirements for high risk AI systems and the consequences: can you foresee some other challenges that organizations might face in complying, not just with the EU AI Act, but with AI regulations more broadly? You've mentioned cost, and you've mentioned possibly some issues with innovation, slowing things down there, but are there any other challenges you could foresee within an organization - even from the board level down - when trying to create their plan for how they're going to approach and comply with some of these upcoming regulations?
David Rosenthal 17:31
There are a number of issues at different levels. One, if you talk about the board level, the main issue we see there is that the board or management doesn't really understand the risks we're talking about. And then you often hear, okay, we've done a ChatGPT workshop with the management or with the board, and they now know how to do the prompts, et cetera, but that's not the kind of knowledge we're talking about. At least here in Europe, and at regulated companies even more, they need to understand at these levels what the risks behind it actually are. The security risks, the issue of bias, other types of risks, drift risks in the models, et cetera - all these kinds of things they need to understand. But there are not many people who tell them how this works.
David Rosenthal 18:32
So we actually see, and it's a bit surprising, that there is not so much training at this level where board-level language is used to show people and actually bridge the gap between the techies, so to speak, and those who then have to say, okay, that's something we want to do. The other challenge, the second challenge I see, is that there is a lot of fear, uncertainty and doubt (FUD) about AI. There has been this case about the Air Canada chatbot that gave a wrong response. You may have heard about that. Now people say these chatbots are dangerous. I think the main issue that went wrong there is that it actually got to a court case and gave them so much negative PR; that has been much more instructive than the actual case. They should have just paid the person who received the wrong answer.
David Rosenthal 19:30
Every day in a call center, you have people who tell you things that are not correct. And it nevertheless saves you a lot of money if you use those chatbots. So you have to start getting a bit more reasonable about where the risks actually are and what the use case is. A lot of people are so afraid today, even in big companies, that people could put the wrong information into an AI system or something like that, that they simply prohibit them. And we need to overcome that. We need to get more used to it, and this will actually happen.
So in the beginning - I've been working on internet regulation and these kinds of things for a very long time. There was a time when people said the internet is dangerous: you cannot connect your work systems to the internet, this is an absolute no go. You're smiling, but I had these discussions. And then came the cloud, where people said the cloud... never.
And now you have this situation where you say, AI, okay, what actually is AI? I mean, if you look at it, technically every system that has been trained is AI. My phone contains AI, the fingerprint reader is AI, and do we have a problem with that? No, we're used to it. So this type of getting used to it is something that will have to happen. And then people will start to think about the actual use cases. Is it a problem if my chatbot makes a mistake? If I calculate and say, okay, I want to pay those people who got the wrong advice, then [employing a chatbot] may still save me something.
David Rosenthal 21:18
And the last point where I see a lot of discussion is about the concepts we write into regulations because we feel uncertain; we say, for example, that we want to have explainable AI.
Alex Bush 21:36
Mm-Hmm. <affirmative>
David Rosenthal 21:37
People say it has to be explainable. And that makes them feel better, because then they can come to the techies and say, now you tell me, why is this the output? And the techies have to tell you, we don't know. We don't know exactly why a neural network creates a certain response. And that is where the lawmakers and the lawyers clash with the others, because they want to have a clear-cut world where they can say, if you can tell me that everything is explainable, then I can tell you it's okay. And that's not how it works, because if I show you the picture of a cat and then ask you, why do you know that that's a cat?
David Rosenthal 22:20
Then you will have to think about why that's the case, and you cannot explain why you knew that this is a cat, and we need to get accustomed to that as well. We have a lot of regulators that now come up with such requirements, and that's something that we need to get acquainted with and maybe scale back a bit on certain requirements which have just been created out of fear, while not looking at other requirements. You're doing security, for example. Security is a much bigger concern in my view than, for example, bias, because you can do so many things with these machine learning models, with these chatbots, et cetera, which I fear much more than certain other aspects that most people jump on. We need to get people focused on the right risk aspects, which is happening, but it takes some time.
Charlie McCarthy 23:19
Absolutely. And that is a perfect transition into my next question. So as we're talking about risk and trying to understand, you know, the AI attack surface and where those risks lie for organizations who are either deploying AI and machine learning or using AI powered applications within their organizations: VISCHER, I think it was late last year, released the Generative AI Risk Assessment tool?
David Rosenthal 23:48
Yeah.
Charlie McCarthy 23:49
And I might be getting that name wrong. Okay, good. Can you talk to us, David, about -
David Rosenthal 23:54
<laugh> Don't worry.
Charlie McCarthy 23:55
Generative AI Risk Assessment tool. Okay. Will you tell us a bit about the purpose of that tool and is it something that businesses can leverage to prepare for and kind of mitigate some of the risks associated with their AI powered tech?
David Rosenthal 24:10
Yes, sure. They can, and they do. It's no magic, it's nothing special. There are a lot of risk management frameworks, and they're very complicated if you look at them. The experience we had is that companies came to us and said, David, do a risk assessment with us. And in such cases you have to push back and basically say, you have to know the risks, but I can provide you a tool where you can put all these risks in a nice order and where I will lead you systematically through these risk lists. So identify all the typical risks, which we all know, and then give you a tool where you can rate how big the risk is and what the impact is. And the tool is not really the special part of it.
David Rosenthal 25:04
It's going through it and basically leading the business, the stakeholders, through this exercise, having them think about the risks in a structured manner - and that's the whole magic. So what we've created is not very special, but it's an Excel sheet that has grown since then, where you have two different types of risk assessment forms, which you can complete for your project. So let's assume you have a chatbot and you want to do that, and you want to show management that you have done a risk assessment; then you can go and complete that form. It will guide you through all these questions, and it will ask you the questions that everyone else would also ask. And you can document your answers and basically use that to say, we've done the exercise. That will take you some time, but not too long. And you can then say, I have considered these aspects.
David Rosenthal 26:02
And most people use the light version of it, which is for medium and lower risk projects, because most projects are medium and lower risk. It's something where you can simply go and select, and it's an open source tool, so it's free for everyone. You can even put your logo on it if you want to. We've put it out as open source because we've done a lot of open source work, and we believe this type of thing should be available to everyone. It even has an AI Act checker in there, where you can find out whether you're subject to the AI Act, and you can download it and basically use it for your purposes. And many companies already do.
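The VISCHER tool itself is an Excel workbook, but the underlying idea David describes - walking stakeholders through a predefined risk list and rating each item by likelihood and impact - can be sketched in a few lines of code. The risks, scales, and thresholds below are made-up assumptions for illustration only, not the content of the actual tool.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    likelihood: int  # 1 (rare) .. 5 (almost certain)
    impact: int      # 1 (negligible) .. 5 (severe)

    @property
    def score(self) -> int:
        # Classic likelihood x impact rating used by many risk registers.
        return self.likelihood * self.impact

# Made-up example entries for a chatbot project; a real assessment would
# walk stakeholders through a much longer, predefined risk list.
risks = [
    Risk("Chatbot gives incorrect advice to customers", likelihood=4, impact=3),
    Risk("Confidential data pasted into prompts", likelihood=3, impact=4),
    Risk("Prompt injection exposes internal data", likelihood=2, impact=5),
]

def bucket(score: int) -> str:
    # Hypothetical thresholds; tune to your own risk appetite.
    if score >= 15:
        return "high - mitigate before go-live"
    if score >= 8:
        return "medium - mitigate and monitor"
    return "low - accept and document"

for r in sorted(risks, key=lambda r: r.score, reverse=True):
    print(f"{r.name}: score {r.score} -> {bucket(r.score)}")
```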
Charlie McCarthy 26:47
Fantastic. And we'll link to that tool in the show notes. Just one quick follow up question. How often is it recommended that folks complete that exercise? Is it on an annual basis, a quarterly basis? I'm assuming risk fluctuates, right, or new risks come into play.
David Rosenthal 27:08
Actually, for most of the companies the concern is to do it the first time at all. So <laugh>, it's getting the project -
Charlie McCarthy 27:15
Oh yeah, I mean fair enough! <laugh>
David Rosenthal 27:17
- in this process and having it reviewed for the first time. The technical answer would be that you should redo it when there is a significant change or within a reasonable period. And I would say one year is a typical assessment period. It could also be more; it depends on how stable the situation is. I believe these issues are more stable in general. But what you will see in practice is that, of course, you start having issues that you already need to tackle in advance. For example, if you update models, then you have new risks. If you use models but the input changes over time, then this is concept drift; that's a kind of problem, but it's a problem you already need to address in advance.
David Rosenthal 28:17
It doesn't mean that you have to do a new risk assessment, but part of what you will do is to regularly test whether the system still performs. Now, that's not renewing the risk assessment; that's actually a measure you already defined in the beginning. And these kinds of things you will probably do more often just to keep your system stable, especially if you depend on it. For the other risk assessments, we see periods between one year and three years. I mean, generative AI has only now become a topic, whereas other machine learning systems have already been in use for a long time, and there you already have established practices.
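As an illustration of the kind of recurring performance check David mentions (a measure defined up front rather than a new risk assessment), here is a minimal sketch in Python. The function names, metric, and threshold are assumptions chosen purely for the example.

```python
# Re-run a fixed evaluation set against the system and alert when quality
# drops below the level that was accepted at go-live.

def evaluate(system, test_cases) -> float:
    """Return the fraction of test cases the system still answers acceptably."""
    passed = sum(1 for question, check in test_cases if check(system(question)))
    return passed / len(test_cases)

def periodic_check(system, test_cases, baseline_accuracy: float, tolerance: float = 0.05) -> bool:
    """True if performance is still within tolerance of the accepted baseline."""
    accuracy = evaluate(system, test_cases)
    if accuracy < baseline_accuracy - tolerance:
        print(f"ALERT: accuracy {accuracy:.2f} below baseline {baseline_accuracy:.2f}; investigate drift")
        return False
    print(f"OK: accuracy {accuracy:.2f}")
    return True

# Example with a stubbed 'system' and trivial checks:
if __name__ == "__main__":
    system = lambda q: "42" if "answer" in q else "unsure"
    test_cases = [
        ("what is the answer", lambda out: out == "42"),
        ("unknown question", lambda out: out == "unsure"),
    ]
    periodic_check(system, test_cases, baseline_accuracy=1.0)
```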
Alex Bush 29:00
Yeah. That's actually a perfect segue, David, into looking ahead and understanding your perspective on the future of AI regulation and the role of innovation in addressing those challenges. Right? This is very complex. There are lots of frameworks out there, there are lots of tools out there. So what do you foresee in the future in terms of the landscape of AI governance and data protection, and how innovation can help meet those challenges?
David Rosenthal 29:29
It's an interesting question, because I see on the one hand that regulators and supervisory authorities, which play an important role at least in Europe, have certain expectations, and they are rather vague because they understand that technology has its limitations. Just these days, a data protection authority published a paper in which they said - they put it forward as a position for discussion - that a large language model doesn't contain personal data, which I believe is outright wrong; you have to look at it in a more differentiated way. But the interesting thing about this is that you now have an authority that tries to create a situation where they don't have to deal with the question of what happens if there is personal data within a model, by declaring, with reasons I don't think are correct, why it's not in there.
David Rosenthal 30:31
And that's a bit of what we will see. We now have these general requirements, and when it comes down to practice, we will see that they will be very unsure, because they understand that they can't stop it. They cannot come and say this doesn't happen. So they have to be very careful not to paint themselves into a corner. And this already happened, by the way, in data protection in Europe with the whole Schrems II issue you may have heard of, with the transatlantic data transfers. So they already had one problem where the data protection authorities, which in Europe will be very relevant for AI regulation, came up with terribly unreasonable positions. And they now, I think, understand that they have to be very careful. And there, I think, technology and understanding technology will be very important.
David Rosenthal 31:30
We are now helping in a very large language model creation project, where a big model is being created, and new technologies or new methods come up essentially every week on how to avoid this problem and that problem and so on. So we are still in an area where we are actually still finding out how the beast works that we are trying to regulate. And that interplay will be very interesting, because we don't really know that much about AI. We have been doing a lot of these things for many, many years - and I'm not talking about deterministic models - but if you go into generative AI and you talk to top researchers in this area and ask them why exactly is this the answer I'm getting, they will tell you, I have no idea. And when do certain things happen?
David Rosenthal 32:28
There are a lot of things we don't know. I once read a paper, I think from MIT or so, which said it's like the early days of physics, where you do a lot of experiments and find out how things work, but you cannot yet fully explain them. And that is, of course, relevant with regard to how you regulate it. There, I think regulators will probably be careful not to kill everything, and that will drive things forward. But what will happen is they will say, okay, if we don't know how to do this, we'll just put the obligation on the big providers: you find some solutions. And that's what's actually happening. I gave the example of watermarking synthetic text and images. With images, it's not a problem, but with text everyone says that doesn't work yet - and yet they have the obligation.
David Rosenthal 33:20
And so now they have at least two years' time to find out how to do that. Even though we see that many companies already try to implement the AI Act now, which is also interesting. So we have people who want to implement it as fast as possible, whereas certain answers and certain things will only emerge in the future. And the more time that passes, the more used to it and relaxed about it we get, because we get used to the problems, and we get more focused on where the business cases really are and what we can do with that. And so that's, I think, what will happen over the next years.
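For readers curious why text watermarking is considered harder than image watermarking, here is a toy sketch of one published idea, a "green list" statistical watermark, reduced to a word-level detector. It is an illustrative assumption, not any provider's actual implementation, and it also hints at the fragility David mentions: paraphrasing the text washes the statistical signal out.

```python
import hashlib

# Toy "green list" detector: during generation, a watermarking model would be
# nudged toward words whose hash, seeded by the previous word, falls into a
# "green" set. A detector then counts how many adjacent word pairs are green.
# Unwatermarked text hovers around 0.5; biased generation scores noticeably
# higher, and rewording the text pushes the score back toward 0.5.

def is_green(previous_word: str, word: str) -> bool:
    digest = hashlib.sha256(f"{previous_word}|{word}".encode()).digest()
    return digest[0] % 2 == 0  # roughly half of all words are "green" for a given context

def green_fraction(text: str) -> float:
    words = text.lower().split()
    if len(words) < 2:
        return 0.0
    green = sum(is_green(prev, cur) for prev, cur in zip(words, words[1:]))
    return green / (len(words) - 1)

print(green_fraction("the quick brown fox jumps over the lazy dog"))
```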
Alex Bush 34:03
Yeah. Yeah. Wonderful. And just a few closing questions here. Obviously the EU AI Act is one thing, but we've seen Canada follow with their [Artificial Intelligence and Data Act], and the [US] Biden administration towards the end of last year put out their Safe, Secure, and Trustworthy AI Fact Sheet, so this is something we need to go do here in the US as well. Are there any other regulations or areas of this topic that you would recommend our listeners go familiarize themselves with?
David Rosenthal 34:37
So there are a number of other regulations that people may want to look at. And the AI Act, by the way, is very careful not to step into the territory of those other regulations, including the data protection regulations, the GDPR. So it's very focused, but there are some others, such as the Data Act, which is about all devices that are somehow connected to the internet; if you offer something like that, that's something you want to look at. If you offer some type of online platform to Europe, then the Digital Services Act will give you some more red tape that you have to deal with, and it is actually already in force now. And if you have products that you offer - any type of product that could be hacked - there is the Cyber Resilience Act, which means that you have to create systems that are basically secure, have updates, et cetera. And that all comes together. So a product that you may be offering to Europe may have to fulfill various different regulations apart from the AI Act and, of course, the GDPR, which you all know and have to deal with in Europe.
Charlie McCarthy 35:50
Correct. Okay, fantastic. You are a wealth of knowledge, and I so appreciate you coming on the show to share your very valuable insights with the community. As we wrap up, David, any final thoughts you'd like to leave listeners with, or perhaps a call to action or a takeaway that you might like them to leave this show with?
David Rosenthal 36:14
As a last word? I would say that in five years we won't be talking about or caring about AI anymore, and it won't be a topic anymore, because it will be so obvious for everyone and so much part of our work that it's just there, like the internet. We don't talk about the internet today anymore, because it's just here. And that will happen with AI as well, I believe. It takes some time, but we will get used to it.
Charlie McCarthy 36:48
We will. Absolutely agree.
Alex Bush 36:50
Yeah, for sure.
Charlie McCarthy 36:51
Okay. Well once again, big thank you to you, David, for being here. Alex, pleasure having you on as a co-host. I appreciate your time.
Alex Bush 36:57
Appreciate it. And thank you, David, for the time. It was great to talk to you.
David Rosenthal 37:02
Thanks very much for this opportunity to talk to you. It was great.
Charlie McCarthy 37:07
It was a pleasure. And to all of our listeners, thank you again for joining us. This is the MLSecOps Podcast brought to you by Protect AI. We will put links to references that we made throughout the show in the transcript, as well as contact information for David if you are so inclined to get in touch. Thanks again, everybody.
[Closing]
Additional tools and resources to check out:
Protect AI Radar: End-to-End AI Risk Management
Protect AI’s ML Security-Focused Open Source Tools
LLM Guard - The Security Toolkit for LLM Interactions
Huntr - The World's First AI/Machine Learning Bug Bounty Platform
Thanks for listening! Find more episodes and transcripts at https://mlsecops.com/podcast.