MLSecOps | Podcast

Finding a Balance: LLMs, Innovation, and Security

Written by Guest | Feb 22, 2024 11:33:00 PM
 

 

Audio-only version also available on Apple Podcasts, Google Podcasts, Spotify, iHeart Podcasts, and many more.

Episode Summary:

In this episode of The MLSecOps Podcast, special guest, Sandy Dunn, joins us to discuss the dynamic world of large language models (LLMs) and the equilibrium of innovation and security. Co-hosts, Daryan “D” Dehghanpisheh and Dan McInerney talk with Sandy about the nuanced challenges organizations face in managing LLMs while mitigating AI risks.

Exploring the swift pace of innovation juxtaposed with the imperative of maintaining robust security measures, the trio examines the critical need for organizations to adapt their security posture management to include considerations for AI usage.

Transcription:

[Intro] 00:00

Daryan Dehghanpisheh

Sandy Dunn, it is a pleasure to be here with you today. Thank you for joining The MLSecOps Podcast with myself and Dan McInerney, security threat researcher from Protect AI. Let's start with a little bit about you. Tell us about yourself and how we got here today.

Sandy Dunn

I have a  long history in cybersecurity. I've been part of it before we even called it cybersecurity. I got my start in doing digital intelligence, competitive intelligence for HP back in 2001, and no one was really even talking about security. 

And I dove in and thought, well, if we're sending off these, you know, really multifunction printers - printers were the first IOT devices - and so I thought, well, if we're sending off these, we should care about security. So I dove in and at the time there really wasn't a lot of people talking about it. You had to jump onto IRC channels, and there was a couple of different podcasts that were out there, so started experimenting. And as I became more knowledgeable, different opportunities came up, became available.

I ended up on HP's cybersecurity team, then became a CISO at Blue Cross of Idaho. I have experience in manufacturing, obviously a CISO in healthcare, and a CISO in a startup. So I've been doing security for a really long time.

Daryan Dehghanpisheh

Across a vast set of experiences. That's wonderful. So that vast set of experiences, you know, your tenure in this industry, obviously you've been through several digital transformations and several technological domain evolutions, if you will, right? The move to cloud, the move to mobile, web 3. 

How do you see safety, security, and governance of AI applications at this moment in time?

Sandy Dunn

I think when ChatGPT was first released, I have to admit that I was a little bit numb to the hype cycles. As you said, we've been through so many. And once I started using ChatGPT, I found it to be the most exciting and most terrifying technology that I'd ever seen and so different than anything that we've seen before. 

I think the challenge for organizations is they will have to be - this moves them into a very uncomfortable space because this is so different than anything that we've seen before, and trying to manage it with, in the ways that we've done traditionally, just won't be effective. 

And so helping them understand the bigger landscape and then being able to find that - being uncomfortable enough to know that they're going to probably have to have a larger risk appetite than they've had in the past.

Dan McInerney

What was the terrifying aspect of that? Like what made you uncomfortable when you first started using it?

Sandy Dunn 

Just what was possible. Like I quickly identified all of the different ways - and of course, always thinking about how could this be used against us? Thinking about how quickly an adversary could use these tools to accelerate attacks against an organization - that always move more slowly or are bound by processes - that [the attackers] can now use this.

We had already seen [attackers] taking on ransomware as a service, them being able to actually move and be more agile and attack more frequently before we had these type of tools. And then I saw that my Black Hat isn't very black and I could come up with a thousand ways that I could use it to really attack organizations.

Daryan Dehghanpisheh 

So as you thought about the myriad of ways that Black Hats could be using this technology, you came up with kind of a way to think about it, and a way to contextualize and compartmentalize different areas of risks or threats. Talk to us a little bit about that and your framework and how that got - the genesis of that.

Sandy Dunn 

When I was first diving in and trying to understand the different threats, I really came up with initially four categories. 

You know, there was the threats of using the models. So the nuance of that you actually - because of how you communicate with generative AI - you can't firewall all of the possible ways that it could be attacked, you know, all of the different ways that, because the code is actually part of the command, there's no way to actually theoretically come up with a firewall that always prevents the way that it could be used and manipulated, abused. 

So the threats from using the models themselves, and then of course the threats to the AI models; all of the different ways that you can do the stealing and do the poisoning on the models themselves.

Then of course the threats from the AI models, which I think is really the most interesting; the AI and regulatory threats. 

And then lastly, I finally added another category of threats of not using the models. There's the risk of not acting on this and not engaging in using them to improve your environments.

Dan McInerney

So I love all the work that you've done around LLMs especially because I can tell that you've thought so deeply about this, like just sitting there late at night pondering, oh my gosh, you could also use it for this. 

And so, how do you see an organization that starts employing? They're at ground zero and they start employing LLMs into their organization. What's like a 10,000 foot view of the first few categories they have to think about in order to help secure their organization after they introduce this into their network or their workflow or whatever.

Sandy Dunn

So one of the questions that occurs to me is, is the more controls we put in front of it, the less useful it gets. So I could see an organization saying, hey, we wanna use this, making this huge investment and then trying to lock it down and actually spending a lot of money to create a tool that isn't very useful. 

So how do we find that balance of this tool that isn't trustworthy? You know, my description of it is a nitroglycerin filled Tasmanian devil crossed with a cobra. Like it is, you know, it is dangerous. 

Daryan Dehghanpisheh

Sounds delightful. 

Dan McInerney

My kind of pet.

[laughing]

Daryan Dehghanpisheh

Where do I go buy one?

Sandy Dunn 

But yeah, but the more you try to contain it and control it, the less effective it is. So how do you find that balance between giving your people access to the tools to be able to use it, without trying to block all of the ways that it's effective?

Dan McInerney 

Yeah, I always felt that same way about jailbreaks

Like ChatGPT, for instance, has a bug bounty program, and they don't include jail breaks in their bug bounty program because that's like a safe...it's sort of a gray area of safety because you can Google most of the bad information that jailbreaks are preventing you from getting, but then you just limit its utility, you know? 

Like a security researcher can't ask it to go find an exploit in this vulnerable code. They can't really - and that's a use of it that helps humanity. You're on the security team. So it's interesting that you bring that up, that all these safety constraints around them end up just nuking their ability to help you get jobs done.

So what do you think needs to change in that conventional sense of security going forward? Because I kind of like to think of these changes, like D was saying, [for example] mobile app security, when that was brand new. Web applications, when those were brand new back in the 2000s. Everyone had to kind of tweak their security program, and I'm curious where you think the most useful time is spent for a security team in tweaking their program to include the LLMs in general, to protect the organization from the extra threats that these might pose when you start using them.

Sandy Dunn

Yeah, I would say that, you know, first off you have to understand the business case. I mean, the worst thing you can do is just start throwing LLM or GenAI technology into your environment without understanding the problem that you're trying to solve. 

GreyNoise did a great podcast where they talked about, you know, they identified a business problem. They came up with metrics on how they were going to measure what they were solving with it, and then they could effectively come through and show how using the model actually was a business benefit to them. 

I think that's the number one thing, is understanding the problem that you're trying to solve. And you see a lot of people making the mistake of just adding a chat bot, you know, misunderstanding what it's used for, seeing it as more of a way to be a knowledge management system. Or, you know, misunderstanding what it's useful for and then applying it to a problem that they probably could solve more effectively a different way, more cost effectively a different way.

Dan McInerney

Yeah, like that Chevy dealer that ended up using OpenAI's ChatGPT as a website helper. And then people are like, all right, you are going to make a legally binding agreement with me that you're going to sell me a Chevrolet for $1

Now you're just kind of applying the LLM just because you can, and you open a security risk without understanding even what you were trying to solve there. Are you trying to replace a customer service rep? It's not really what [LLMs are] best at, and now you've just opened up security issues.

Sandy Dunn 

Well, and so positive use cases: [example] threat modeling. I mean, looking at all of those different areas that we know we should be doing, that we don't have the time and resources. 

I think that, and then understanding the risks and benefits. I had a gentleman connect with me today on an AI risk register, which I think is a fantastic idea. I mean, we've never done risk registers well within cybersecurity. And now, you know, we have this amazing capability

depending on how you implement it and what your metrics are, how you validate that. But now you all of a sudden can move your program forward heavily or massively.

Daryan Dehghanpisheh 

Yeah, Sandy, you mentioned threat modeling, and the recent MLSecOps event this last month had Adam Shostack talking about threat modeling and, you know, when you think about threat modeling in the context of a business process and the technology that underpins it, if you will, in trying to understand both the perimeter of defenses you need to invoke as well as kind of the technological domain, nuances, or novel attack surfaces that might be there - when you think about threat modeling in that space for business, you know - applied AI - how do you guide?

How do you think about guiding your clients and customers and other entities that you are talking with all the time about changing particular elements of their security operations, specifically on that threat modeling element? I'd like to kind of get your thoughts on that.

Sandy Dunn 

Well, I think you're exactly right. So that's exactly how I recommend approaching it, is understanding what you're not doing well today. Or, you know, start with the threats. 

All of that technical debt, all of those skeletons that you still have, that everyone knows within the cybersecurity team, the IT team, that you're just waiting for something bad to happen. Now you all of a sudden have a tool that you can actually get ahead of it. 

If you think through the problem correctly and understand where to apply it, you now have something that you can use to get ahead. 

And that was part of what I got really excited about when I looked at ChatGPT, because all of that grind, front work that we have to do, that takes a lot of resources and time, we now have a tool that can get us ahead of that. So, I would say to look at the problem holistically.

One benefit that ML is really bringing to the conversation is around S-BOM. Don't look at LLMs in isolation. Think about the whole business and think about your entire digital ecosystem and saying, what do we do? What are the services that we offer to our customers? What aren't we doing well where LLMs could benefit us to offer a better service? 

And then based on our attack surface, what does LLMs broaden and do we need to think about protecting because we now have this inside and outside of our organization. 

Also the creativity, the thing that is fascinating to me as I have these different conversations is I think we all use them differently. They're a little bit like a personal assistant where if I have a personal assistant, maybe I don't want them to get me coffee. Maybe I don't want them to do you know, plan my flights because they're terrible at it, but you do, D.

Daryan Dehghanpisheh 

Don't do my inbox!

Sandy Dunn

Yeah! And so, you know, everybody's relationship with these will be different and how we use them to effectively do our work will be different. 

So I think organizations will benefit from really giving their employees the freedom to understand, hey, how do I apply this to my work so I work better?

And I think that's uncomfortable. I mean, where I think we're very used to as an organization coming up with the technology and then kind of from the top down distributing it. Now we're almost saying, you know, passing it out to our employees and saying, you tell us, you tell us how we best engage with this.

Daryan Dehghanpisheh 

Hey, I have a follow-up to that, maybe a slight detour. You mentioned the S-BOM as a construct, if you will, and a core element of security postures in other spaces. The sponsor of this podcast is Protect AI, and they are a leader in AI/ML Bill of Materials, which is kind of an equivalent to that. 

How do you think about an AI/ML Bill of Materials in terms of understanding the concepts that you just articulated? How vital do you think that is or important that is to begin constructing the types of things you're talking about?

Sandy Dunn

I think it's, you know, nutrition labels, however we want to call them, but I think those are critical resources and tools that us as organizational leaders, being able to look at those and understanding, hey, what does this system have and what do I need to know about? 

Things like even bias is such a very complex conversation. I was listening to “This Day in AI” podcast, and they were talking about one of their users asked for the President of the United States into [Google’s chatbot] Bard and it popped up a black woman. Well that's [-] obviously they're over-correcting [the model tuning] in some way that, you know…they're so concerned that they're going to be biased one direction, they've almost over-tuned the other way. 

One of the examples I give is I asked DALL-E for an image of a university professor and it gave me a happy person, blonde, attractive, and then I added the word “cybersecurity,” and she looked like a witch. I mean, that's hidden. So being able to understand all of the different ways that bias can impact, like all of the nuances of what is bias. 

Daryan Dehghanpisheh

Wow.

Sandy Dunn

And I think, and that D, I think that's the other interesting thing, is we keep throwing around this word “intelligence.” But what, you know, we don't even really have a great grasp of what human intelligence is and how we measure and convey what that is. We've all met that super intelligent person who can't find their way across town, that they don't have common sense. And so how do we measure what is intelligence and what is a correct interpretation of that?

What I would hope organizations recognize is this is truly the jagged frontier. I mean, this is absolutely the very beginning of exciting technology, but being able to think through it and understand just how new it is and how much we don't know.

Dan McInerney

Yeah, I find that there's a lot of gray areas here, like the whole LLM security, because there's the question of bias. In a traditional security metric, something either has remote code execution or it doesn't. There's not a lot of gray area. 

The gray area lies in, well, was it supposed to do remote code execution? Did you protect it well enough? And LLM bias feels like it's riding the edge of this gray area, where if you have a bad model and it just categorizes things poorly but it has a security context, that feels like a vulnerability, something that you could probably label with a CVE or something. 

But then there's this issue of social biases and stuff like that, and it's like, if you're an organization who's importing, let's say, Llama, an open source model, is this…you don't even control that. You didn't train the model. You're using somebody else's model. 

And so I feel like this might be a consideration for organizations in how they're kind of doing the threat model itself. There's only so much you can control if you're using open source models. Like you can't control the safety mechanisms, you can't really control anything like that. 

So when you're putting these systems together, you're probably gonna use an API or some other tool in front of this open source model, and I feel like that's kind of an underrated area of security. It's not just all these kind of gray area vulnerabilities in the models themselves, but the tools you stand up in front of them.

Do you have any kind of thoughts on the attack surface of this whole ecosystem of tools that have stood up around the open source models and how those might impact all the things you've talked about so far?

Sandy Dunn

Right, and how much of that do you control? Because the reality is, is for most organizations, they're going to be using a Databricks solution or they're going to be using an AWS solution. Like there's this shared responsibility. So we come out with this huge list of potential threats and vulnerabilities. And how much of that do you have visibility or control over? 

So I, yes, I agree with you, Dan. I think there's just [it’s] unknown right now exactly how much you can trust any of it, which, you know, really my response to that is to go back, but how much can we trust anything? You know, look at the issues with Avanti this week. So my hope is that we take some of, we look at the entire digital ecosystem and say, hey, we should really have model cards for our Cisco equipment. We really should have model cards for everything that we have.

So, and then being able, the dynamic part of it, being comfortable that saying that there's a lot of risk here that I can't control. I don't know. And then being able to monitor and being able to get in front of it quickly.

Dan McInerney

So what if you were like a new CEO or CISO [Chief Information Security Officer] of a small organization and they said, we're going to start using Llama2, and we're going to use this for some sort of business context. And we'll say it's maybe not a super sensitive like business context. It's not filtering spam emails or anything like that. What are like the top three things you would tell this organization they have to think about in order to mitigate the worst risks that you see in the AI sphere as a whole?

Sandy Dunn

So it's a great question. And I do think a lot about this, which is there's so many different benchmarks and tools out there. 

And my guidance would be understanding the business problem that we're trying to solve, putting tests and measurements in place that I can dynamically test, and then going through and monitoring it with those tests consistently.

I think that a little bit like a, you know, if you're on a battlefield and you're trying to win a war, there's so much that you can't control and you don't know, but you have a mission. 

And so being able to have that, um, real time information on what is the threat and what you're trying to, what could possibly have an impact to your organization will be how you move forward effectively using the tools with, not trying to constrain it too much, but being aware that there are a lot of threats.

Dan McInerney

So let's talk about practical examples. If you are the CISO of this organization, and they put an LLM somewhere in their network, either publicly facing or internally, what do you think would be one of the worst things that could happen to it? 

I mean, given the examples in the AI Vulnerability Database and things of that nature, how do you see the first breach occurring as the lowest hanging fruit?

Sandy Dunn 

Yeah, it’s a great and excellent question. I don't have a great answer. I think anyone who's in the space right now recognizes that it's just a matter of time before there's some huge disaster that none of us anticipated. I think that's what keeps us all kind of focused on understanding what the threats are. 

I mean, you look at the models that are out there, they've been shared and everyone's implementing them really quickly. No one really knows their provenance. There could be backdoors. We know adversaries are actively always trying, you know, even on GitHub, they're always trying to sneak in different ways that they can compromise systems. 

And I think right now, Dan, it's just an unknown. So, you know, part of that is just segmentation and making sure that you're not letting it impact knowing, you know, putting the LLM into a place where it is in production, but you're isolating it from the things that would absolutely devastate your business.

Dan McInerney

Yeah. And so I was thinking about this because you just mentioned you think that there's probably going to be some kind of event in the future. That's probably pretty catastrophic. It has something to do with AI security. And we don't know the unknown unknowns of course in this case. 

But I was thinking about this a lot too. And we actually spoke about this recently about how we both kind of feel like the whole agency thing with LLMs is really going to blow the door open on the practical security concerns of deploying LLM in production.

Because, so for those that don't know, agency and LLM kind of means you allow the LLM to make commands on your behalf. You allow it to make system commands, stand up a web server, write some code and deliver it to GitHub. That kind of thing is terrifying to me and I have this feeling that you may agree with me on that. 

Did you want to talk a little bit about how agency kind of changes everything? That's probably coming in the next year or two. I know OpenAI is working on it right now.

Sandy Dunn

Yeah, the agency, I mean, I'm confused on why anyone would trust an LLM with agency because my personal experience is I'll ask it to do something, I did this recently - I asked it to identify - I had a weed and some hay and I took a picture of it, uploaded it to ChatGPT and said “what is this?” And it was able to answer the question quite, you know, identified the weed, told me exactly what it was.

Two days later, I took the same picture, asked it again, because I was going to add it to some slides, and it said, sorry, I can't help you. I can't help you with that, I don't know. And so it failed. 

Daryan Dehghanpisheh

It didn't have its coffee, Sandy. Didn't have its coffee. Early morning. Time of day.

Dan McInerney

[laughs]

Sandy Dunn

Or that experience where you ask it to do something and it refuses, and you say, yes, you can do this, do it now, and then it says, oh yeah, okay, I can do it. So, you know, like it's so tricky to engage with it. For me personally, and I don't consider myself an expert prompter, it fails so frequently. I would never hand it a task that was mission critical because I just don't trust it.

Dan McInerney 

Yeah, I kind of feel the same way. Right now, I feel like the agency aspect of these is really scary because it gets the code wrong. I ask it to write code all the time, and about 30% to 50% of the time, there's a bug in the code somewhere. And yet there's these tools like AutoGPT that came out fairly recently, I guess. 

Sandy Dunn

No, over a year ago, I played with it, Dan. 

Dan McInerney 

And it's got 100,000…a year ago, right? Really?

Sandy Dunn

Yeah, I played with it. And yeah, you know, spent a couple of days getting it all stood up because I thought, oh, this is the answer and then found it failed all the time. Now that was a year ago, so I'm sure they've improved it since then. 

But I've consistently found that problem personally that when I ask it to do something, you know, my answers are incorrect. So I agree with you that I think agency is something that, you know, and obviously everyone has their different prompts that they use to improve the quality, but you know, I myself have not been able to have success with it.

Dan McInerney

Yeah, I haven't either. So one thing I've been thinking about, like kind of a parallel here, is whenever MacBooks started doing facial recognition, not MacBooks, iPhones started doing facial recognition, I remember all my friends, all my hacker buddies are all like losing their minds. They're like, I'm never getting an iPhone. 

And then here we are several years later, and they all have an iPhone, and they're using facial recognition. 

And they're like, well, the convenience just kind of overweighs my security concern. And I feel like we're going down that path on the LLMs because people are…they want the convenience.

Daryan Dehghanpisheh

Yeah.

Dan McInerney

And even if it fails today, in probably a year or so, there'll be tools that kind of figure out what it's good at and what it's bad at to give it that agency. And I feel like that is when the attack surface blows up. 

That attack surface starts going from data leakage to remote code execution through the prompt itself into the model. And so these things are gaining power exponentially every day. So I completely agree that these unknown unknowns in the future risks of these is probably a lot more significant than the current risks that we can think of. You know, the data leakage, the model theft, that sort of thing. Those still feel slightly theoretical. 

Did you have any thoughts on like the theoretical attacks, like the model theft versus a practical attack like prompt injection and LLMs and where kind of companies should be focused on defending in those situations?

Sandy Dunn 

Yes, I completely agree with you. A lot of the actual machine learning model attacks, there's some of the poisoning attacks and the different things that they come up with do feel a little bit like, you know, thinking about attackers, I mean, they're always, you know, the end game 95% of the time is money. And they're, you know, they're going to do whatever they can to access, you know, impact you financially. And if it takes a lot of effort, they're not going to do it. 

But as we close down and we mitigate against those kind of threats, those ones that seem theoretical all of a sudden become attractive to attackers. So I don't think we can ignore them in the long term, but the short term, I think that we need to really put on that Black Hat and think about how an attacker will want to impact our organization and the low hanging fruit right now, which is absolutely just tricking end users into doing something really stupid. 

We see the phishing attacks, you know, last year, QR codes that were used effectively in phishing attacks because our email filtering systems weren't prepared for QR codes as a method of attack. 

So again, I think that we have to really put on that Black Hat and think about how attackers will want to use this and focus on the most likely attacks and maybe ignore the noise, the research noise around the theoretical attacks.

Dan McInerney

Yeah. A slight detour here into what you were talking about with the phishing attacks. So we've been talking about how to defend your own LLMs, but you had mentioned in the beginning about how you think deeply also about how attackers can use LLMs to change the landscape of the threat field for general security of an organization. 

And I remember, I have an example here of how that's actually happening right now. One of my buddies was on a pen test and he was supposed to reset a password for a user so he could get access to the network. And he put on a wig, got a couple photos, I think he literally only got two pictures of an employee and used a deep fake software to impose her face onto his face. 

He prints off a fake ID at Kinko's and he calls up Help Desk, and Help Desk had a policy where you can put up your ID to the camera and you can show your face and you can say, request a password reset and they’ll just reset your password and it worked. He used the deep fake technology to reset his password and I see this probably happening a lot more. 

So when we’re talking about organizations that are employing LLMs, you don’t have to just think about securing your LLM. You have to think about your holistic security. as to how you defend against things like deep fakes and reach and change, you know, your password reset policies, for instance, or your phishing emails because LLMs are fantastic at writing phishing emails. 

Did you have any other kind of thoughts on how attackers might use LLMs to attack an organization, you know, whether they use AI or not?

Sandy Dunn 

Well, to your point, Dan, which is one of the interesting outcomes that we're seeing with the deep fakes is attackers actually taking actual information and saying that it's a deep fake. So creating churn and noise by not only creating deep fakes, but making people question whether real information is accurate. 

So right, we're at a point right now where really the internet, you absolutely, it's never been more untrustworthy than it has been right now. But one of the fascinating and positive outcomes I see is what I see in many of the AI groups that I'm part of is we now have circled around to where we're connecting with the human. We're actually distrusting. There's so much noise, so much information that I'm reaching out and I'm saying, oh, I trust Dan. I've seen what Dan produces, I've watched his YouTubes, he's credible to me, I'm gonna trust what he says. 

So we're now seeing where my personal experience is that we're actually putting more emphasis on that human connection than on the digital connection.

Daryan Dehghanpisheh

Yeah, you know, you were talking about use of GenAI in both kind of a professional context and then kind of like what I would call out-of-band use, right, where somebody's gonna use the tool. It reminds me of how, you know, back in the day everybody tries to block social media in their enterprises. That did not work, right? And I think it goes back to a comment that Dan made where utility trumps all. 

So knowing that utility is available and knowing that there is a spectrum of entrepreneurs, business owners, directors, managers that are listening to this podcast, what are recommendations you have for businesses whose employees might be using GenAI today in the workplace, but the company itself has no formal policies in place? 

Like what are some of the approaches that those businesses can take to build an AI security strategy considering they have that they're starting from scratch. And I mean, isn't that kind of how your proposed OWASP LLM AI Security & Governance Checklist that you've been working on kind of comes into play? Maybe we can talk a little bit about that.

Sandy Dunn 

Yeah, I would say that just like you've mentioned, and I think blocking or trying to prevent it as an organization, you create a bigger problem than you solve. So as a CTO, I've always made a huge effort to build a relationship with my DevSec team, my DevOps team, because I always felt like the worst thing is that they're trying to hide something from me, and I get surprised by it. Like I'd much rather make sure that they brought issues to me so I could get in front of it.

And so my response to that as an organization, I think that you have to make sure that you have policies, educate, explain to people what bad, you know, the negative, if something, if they use these incorrectly, they impact the organization, but give them the space to be able to use the tools because you know they are going to. They have it on their phone. You know, there's so many different applications that are out there that it's impossible to block, but give them the freedom. 

And then of course, monitor, I mean, trust, but you still have that responsibility to look for those rogue people who are using it in a negative way or something, but finding that balance between bringing it into the organization but also encouraging. I mean, your employees are the ones that are going to help you with the use cases. They're down in the trenches knowing the work that's slowing them down and where GenAI could actually help them. 

So I think trusting and building together as an organization will move the organization faster more quickly, putting those guardrails in place, making sure that you're doing it responsibly, but trusting them to do it, to use the tools.

Daryan Dehghanpisheh

And we'll put in a link on the show notes, obviously, for your incredible, it's worth a read. Everybody should take a look at it, the kind of proposed OWASP LLM AI Security & Governance Checklist. It's a really cool framework. 

And I guess that brings me to, as we kind of think about closing out here, a call to action, right, for the listeners of the MLSecOps audience, which includes practitioners, managers, developers, pen testers, hackers. Just interested parties generally. 

One of the things that we talked about early on was that you kind of described this almost as asymmetrical warfare, right? And that we needed to start thinking and acting cohesively and collectively. From your perspective in this asymmetrical theater of, you know, attack possibilities and potentials and, you know, defenses, what is the call to action that you would want this MLSecOps audience to take to try to advance the capabilities and advance the defenses and advance the mitigations of these coming AI attacks and AI risks?

Sandy Dunn 

Yeah, I see this as an opportunity. We're at a tipping point. I mean, the internet was really an experiment that got loose quickly. You know, it's always advanced quickly. Security has always been an afterthought. 

The impact to people from, you know, even from a mental, you know, we're seeing our children that are impacted negatively from all of the social media and comparing themselves. I think we're at a point now, this is so serious and the impacts can be so devastating in both, but there's also a ton of positive potential too, where we reestablish our relationship with technology in general and come up with, how do we use this effectively so that it benefits our organization and benefits us all as people, as we move into this new frontier, this jagged frontier. 

I think this is a tipping point from, you know, kind of a relationship that has moved quickly, not always in a good direction. And now we get a chance to really use these tools to reestablish, you know, how does it serve us and not us serve it?

Dan McInerney 

Right. We're kind of at that peak Gartner hype cycle right now where the utility is slowly not meeting our expectations, but we're only a couple of years away from these things just taking off like a rocket ship and either taking jobs, or hopefully benefiting humanity by getting rid of things that we don't want to do already and creating value.

Sandy Dunn 

But Dan, but you know, even as you know, look at our kids in schools. You know, they're bored to death. They're not inspired. It's the same, you know, stuff over and over again. And people talk about, you know, the fact that kids can't communicate and they're not interested. Well, you know, that's a problem that we created as a society. 

Now we have a tool where we can customize and create an education system that's unique for every student that they get challenged every day and they're excited to come in. I love learning. You know, how come kids don't love learning? Let's understand that and try to inspire them about what's possible.

Dan McInerney 

I'm so glad you brought that up.

Daryan Dehghanpisheh

Unless it's consumed via an iPad, right? My nine-year-old; I-P-A-D, the four most dangerous letters in my household.

Dan McInerney

Yeah, because we didn't get to talk about education here, but man, I have found ChatGPT to be the greatest tutor on earth. It teaches me about everything I want to know. It's like my fountain of curiosity, it can't be large enough. There's nothing I can't go investigate using Chat GPT. 

So, yeah, I think it's absolutely critical to implementing an education specifically so that we can let kids go down these crazy paths to learn about all the dinosaurs they could ever wish to know that their parents may not know about.

Dan McInerney

And unfortunately, I don't necessarily see that happening in the education field right now. I see a lot of - my uncle is a professor at a major university. And he was asking me, well, how do I detect plagiarism from ChatGPT? I'm like, man, I mean, you got a couple of tools you can use, but these things are evolving so fast. I mean, ChatGPT is updating every couple of months. What's the point? Try integrating this into your curriculum. Stop trying to fight it off.

And I feel like that's kind of also the same thing that we should be sending to security people. You don't necessarily have to worry all the time about LLMs. You can use LLMs to help secure your infrastructure too. Hey, write me a checklist of things I have to change if I implement an AI engineering department. Type that into GPT. I'm sure it will give you some good ideas.

Daryan Dehghanpisheh

[laughs]

Sandy Dunn

Well, just think about CharacterAI. Instead of having kids learn about Joan of Arc in a book, now all of a sudden they can talk with her and say, what motivated you? Now all of a sudden they're having an engagement with a historical figure. Napoleon Bonaparte, teach me about leadership. What are the three critical things? And again, but there's a negative side to it too. The AI girlfriends that people...it's a very difficult, it's a fine line between disaster and success with AI. And I think that's the thing that is the most interesting and also the most terrifying.

Daryan Dehghanpisheh 

Well, to end on education and learning, I want to thank you, Sandy, for joining The MLSecOps Podcast here. We really appreciate your leadership, your ability to weave LLMs and other tools into the education of this field of MLSecOps. So I want to thank you for coming on.

And for everybody else, make sure you check out the show notes. Go have a look at the proposed OWASP LLM AI Security & Governance Checklist from Sandy. It's a wonderful read and I'm sure commentary would be amazing. 

You can find Sandy in our MLSecOps Slack channel where she is very active and guiding a whole lot of people in how to think about this stuff. And once again, Sandy, thank you so much for coming on and thank you to Dan, my co-host, for today.

[Closing] 


Additional tools and resources to check out:

Protect AI Radar

Protect AI’s ML Security-Focused Open Source Tools

LLM Guard - The Security Toolkit for LLM Interactions

Huntr - The World's First AI/Machine Learning Bug Bounty Platform

Thanks for listening! Find more episodes and transcripts at https://mlsecops.com/podcast.