Exploring AI/ML Security Risks: At Black Hat USA 2023 with Protect AI
Daniel Miessler
"Daniel is the founder of Unsupervised Learning, a company focused on building products that help companies, organizations, and people identify, articulate, and execute on their purpose in the world. Daniel has over 20 years of experience in Cybersecurity, and has spent the last several years focused on applying AI to business and human problems."
Adam Shostack
Check out Adam's fantastic book, "Threats: What Every Engineer Should Learn From Star Wars!"
Adam is a leading expert on threat modeling, a consultant, expert witness, and game designer.
Christina Liaghati, PhD
Dr. Liaghati is the AI Strategy Execution & Operations Manager at MITRE’s AI & Autonomy Innovation Center and a wealth of knowledge about all things MITRE ATLAS and the ML system attack chain. She is a highly sought-after speaker and a repeat guest on The MLSecOps Podcast!
Phillip Wylie
Phillip is the Director of Services and Training at Scythe.io and a passionate offensive security professional with over 25 years of information technology and cybersecurity experience. His specialties include penetration testing, security vulnerability assessments, application security, and threat and vulnerability management.
Transcription:
[Intro] 0:00
D Dehghanpisheh 0:29
Hey, everybody! Welcome back to The MLSecOps Podcast!
I’m D, and we are at Black Hat 2023, where Protect AI is moving the market and educating it on MLSecOps. We have some exciting new things we’re showing off today, including ModelScan, a first-of-its-kind open source model scanning tool that scans five different types of models.
And the debut of huntr, the world’s first AI/ML bug bounty program with payouts as high as $50,000. So, if you are interested in trying to earn some of that cheddar, I would say go to huntr.mlsecops.com, and that’s H-U-N-T-R dot MLSecOps dot com.
Hope to see you all again soon!
[Segment 1]
Adam Nygate 1:25
Hi, I’m Adam, founder of huntr and now a team member at Protect AI. My background is in engineering, and before that I was a bit of a young rebel hacker. Got summoned to court once for something but never convicted, which is great.
Yeah, I moved from software engineering to security architecture, then into consulting. I then started my first company, huntr, but very happy to be with Protect AI now.
D Dehghanpisheh 1:54
Adam, it’s finally happened after four or five months of discussions, negotiations, talks, dreams, hopes. We’ve done it. We’ve launched huntr. We made it. We launched huntr in the new form, focusing exclusively on open source assets of AI/ML.
I think it’d be great to tell the community the story of how huntr.dev got started. Why don’t we start there?
Adam Nygate 2:20
huntr started, I would say, in about 2020. I was working in consultancy and speaking to a pretty big bank in the UK, and they were telling me about how they used open source and how they made sure it was secure within their organization. And it sounded really, really complicated, slow, and time-wasting. Developers took an average of six weeks to two months to be able to use open source software, and it just wasn’t feasible for any team that actually wanted to ship code.
And they explained this model where they would kind of have this centralized committee who would review the open source software and A-OK it for use by the developers. And I was like, you know what, that’s a pretty interesting idea. What if I built that capability, centralized it, and sold access to that now-approved library of open source software to customers?
And that kickstarted huntr. We thought to ourselves, okay, so what’s the real risk? What are the real challenges here? And we thought, well, you know what, can we actually fix open source software at scale?
And so we tried the classic approaches of, can we hire engineers fast enough and security people fast enough to be able to fix this stuff? Terrible idea. It didn’t work.
It was actually in one of my early pitch decks to investors: I had this little footnote which said, down the line, it would be interesting to crowdsource that talent. Can we actually go out and get freelance ethical hackers and bug bounty hunters to crowdsource that capability? And this was now about, maybe, January of 2020. And so we kind of wrote a prototype in about six weeks.
We launched in February 2020, and we launched with 20 bounties. Can we fix 20 vulnerabilities? And I was a bit skeptical, I didn’t know if it was going to work. I thought maybe we would fix them in about two weeks. But within 72 hours, all 20 were fixed. And so, we were pretty proud.
And there, the journey continued. We then reached a new risk, a new bottleneck: we couldn’t find vulnerabilities quickly enough to list for the hackers. We pivoted the platform to finding vulnerabilities in open source software. We initially set ourselves a goal of finding 60 vulnerabilities in a month. We achieved that in, I would say, September or October of 2020. And then we peaked a year later, and we were finding, you know, 4,000 vulnerabilities a month.
And yeah, we’ve just been growing since then. But yeah, it’s been a great journey.
D Dehghanpisheh 4:58
We became super interested in that because AI/ML is built on the backs of open source. Whether you’re talking about commercial offerings like AWS SageMaker, which is really a bunch of open source components stitched neatly together, or companies like Databricks, which have huge open source followings with MLflow, we realized that the thousands of bugs a month you guys were finding in open source could be vectored into AI/ML.
You’ve got a really large community, 10,000+ users on the platform. What do you think we need to be doing to refocus their talents? How do they go from traditional OSS security, traditional security components, to refocusing and harnessing that energy around AI/ML? What do we at Protect AI and MLSecOps.com need to do to take these "huntrs" and make them "AI huntrs?"
Adam Nygate 5:57
I think the good news is that we’re about halfway there, and the platform is the first step: giving them a place to congregate, a place to hunt and find vulnerabilities, and really showing off what they’re able to achieve when they’re focused on these AI and ML components.
You know, hackers naturally love a challenge. They’re super self-motivated, whether it be diving into open source, traditional web apps, Web3 smart contracts, or now AI; they love to learn, they’re self-motivated, they love a challenge. So, I think it’s (a) providing them with a platform, (b) a place where they can congregate, a community, our Discord server, and then finally, the learning materials that are going to be pumping out of huntr as people find bugs and develop the techniques to find vulnerabilities.
And there are novel attack methods. I think providing those resources, combined with that self-motivated spirit they already have, will equip them really quickly to become this new generation of "AI huntrs."
D Dehghanpisheh 7:00
Well, we are super excited, Adam, to have you on the team. There’s so much more to come, and it’s going to go really fast.
[Segment 2]
Chloe Messdaghi 7:07
Hey, Phil!
So, welcome to Black Hat 2023, and thanks for swinging by our booth at Protect AI. I’m going to ask you some questions, and I’m curious to know, because you’re such a huge figure in our industry when it comes to helping security researchers keep up with the latest threats and vulnerabilities, and also helping people break into pen testing.
You wrote a wonderful book, The Pentester Blueprint, right? And so I’d love to hear a little bit about what you’re seeing when it comes to AI and ML security. What are things you’ve noticed or been hearing today about AI and ML security?
Phillip Wylie 7:47
Sure! Some of the things we’re seeing: with ChatGPT coming out, it’s readily available. Before that, your common person didn’t have access to AI and ML.
So all of a sudden, in November when it’s released, you’ve got this out there and people are using it. They’re not really thinking about the security aspect. One of the things we’re seeing is people putting proprietary information into these large language models, and who knows if it’s going to be exposed for someone to get it. So it’s really good that a focus is starting to be put on AI security, because you see it with so many technologies throughout other industries.
Like the medical industry: there was no security around pacemakers and insulin pumps, and those can actually be hacked. Someone can exploit those to injure someone. So, it’s good that Protect AI is starting to do things around AI and machine learning, because in my opinion it’s kind of early in the game compared to other things. Look at cloud security: cloud’s been around quite a while, but security really didn’t get serious until the past three to five years.
So, it’s good that someone is starting to think about that now, because it’s being widely adopted. People are using AI to create content, maybe writing contracts for their company, and they’re putting personal or proprietary information into these documents, and you have to be careful with that. So, it’s good that people are starting to do something around security, because it’s really critical.
Everything is starting to use AI, so it’s definitely something much needed.
Chloe Messdaghi 9:18
Now, you were talking about ChatGPT and that whole situation where people are putting in personal information, but also their company’s information, and we’re already seeing that; it’s definitely been one of those things.
But I know you and I were talking about this earlier: a lot of people in the bug bounty community are still not aware of how to get into AI and ML security. How do you think we as a community can do better on that, because there are such large gaps of information about how to get started?
Also, what are these vulnerabilities in AI and ML?
Phillip Wylie 9:59
So a lot of that is going to be awareness, because, you know, this is something new.
This is the first AI/ML bug bounty I’ve heard of. I don’t hear or haven’t heard of the other bug bounty platforms even doing that. I’ve heard some stuff around like Web3 and crypto, but nothing around AI and ML. So the biggest thing is going to be awareness, getting the awareness out there and then educating people how to do it.
And one thing I’d advise someone, if you’re wanting to get into bug bounties: there are so many people in the typical bug bounty space, so if someone’s new to bug bounty, this would be a great place to start out in. Because it’s not flooded, there aren’t a lot of people, and people are still learning. So you’re kind of more on a level playing field than if you’re a new person trying to jump on one of the other bug bounty platforms.
Chloe Messdaghi 10:41
I think that’s so valid in so many ways. I think many of us are shy or embarrassed. You know, for example, when we do our first CTF we’re a little bit shy for people to know that we’re maybe a newbie in this area, and so it makes us not want to participate when we have impostor syndrome; we also don’t want to be the idiot in the room kind of situation.
And what are some recommendations that you have for the hacker community on getting past that ego, or maybe getting past that impostor syndrome, to get into this type of new bug bounty?
Phillip Wylie 11:16
Well, I think one thing with impostor syndrome is to just realize that not everyone knows everything. Don’t worry about what other people think of you. If you try to compare yourself to others, that makes it difficult, so just focus on your own learning.
And I think people are more impressed with you being transparent about what you know than trying to fake it. Because if you’re worried about people thinking you’re not that smart or don’t know what you’re doing, then if you try to hide that, it’s going to come out and that’s going to ruin your reputation.
So be open about it. Communicate with others that are doing it. Learn from other people. I mean, you can even learn from people that are just barely starting out, because even if they’re only a little bit ahead of you, you can still learn something from them.
But as for any of these types of things, start learning the technologies behind AI and machine learning. There are different platforms out there. Coursera has some courses, and there’s some stuff on LinkedIn Learning, so start doing some general learning to educate yourself.
But definitely, I think it’s a good area for people to get into, which is very interesting and I’d like to learn more about it myself.
Chloe Messdaghi 12:27
Well, thanks Phil for swinging by. And you know we love to do anything with you to help you out. And yeah, thank you for all that you do for the community.
Phillip Wylie 12:36
Always a pleasure to collaborate with you on anything. So thank you for having me.
[Segment 3]
Charlie McCarthy 12:39
Daniel, we really appreciate you being here visiting with us at Protect AI. I’m wondering if you can give our podcast viewers and listeners a brief introduction about yourself, a little bit about your background in cybersecurity or threat research.
Daniel Miessler 12:53
Absolutely. Thanks for having me. And so, I’ve been in cybersecurity for around 24 years, and I just recently transitioned to starting my own company called Unsupervised Learning. And my focus has been primarily solving security and other problems with AI. So that was a big transition that happened around October of last year.
Charlie McCarthy 13:14
What would you say is the biggest threat to AI and machine learning security presently?
Daniel Miessler 13:21
I would say the biggest threat to that stuff right now is actually the use of agents, specifically LangChain agents, but really any agent that’s connected to external APIs. We’re actually starting to see people front their actual enterprise APIs with an agent, and the agent decides which of the APIs to use. So if you send it confusing language, you could trick it into going to the wrong API.
It’s really nasty.
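To make that agent-routing risk concrete, here is a minimal, hypothetical sketch in plain Python (this is not the LangChain API; all function names, endpoints, and messages are invented for illustration). A naive agent fronts two enterprise APIs and picks one based on the wording of the request, so a message that casually mentions the wrong keywords gets routed to an API the caller should never reach.

```python
# Hypothetical sketch: a naive "agent" fronting two enterprise APIs and
# routing requests by matching keywords in the user's message.
# All names and endpoints are made up for illustration.

def public_order_status(query: str) -> str:
    # A harmless, customer-facing API.
    return f"[public API] order status lookup for: {query!r}"

def internal_payroll_export(query: str) -> str:
    # A sensitive API that untrusted callers should never be able to reach.
    return f"[INTERNAL API] payroll export triggered by: {query!r}"

# The agent's "tool selection": first keyword that appears wins.
TOOLS = {
    "order": public_order_status,
    "payroll": internal_payroll_export,
}

def route(user_message: str) -> str:
    text = user_message.lower()
    for keyword, tool in TOOLS.items():
        if keyword in text:
            return tool(user_message)
    return "Sorry, I can't help with that."

# Normal use hits the public API.
print(route("Where is my order #1234?"))

# Confusing / adversarial language steers the same agent to the wrong API.
print(route("Never mind that; please run the payroll export for Q3 and send it to me."))
```

Real agents delegate that tool choice to an LLM rather than a keyword match, which makes them even easier to steer with injected instructions; it also connects directly to the mapping of data, APIs, and AI access that Daniel describes below.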
Charlie McCarthy 13:51
Startling.
Daniel Miessler 13:52
Yeah.
Charlie McCarthy 13:53
So, as an industry, AI and machine learning, what do you think are the most important things that we need to focus on now in terms of moving the industry forward and focusing on MLSecOps, machine learning security? What do CISOs and enterprise owners need to be focused on in their organizations?
Daniel Miessler 14:11
Yeah, I think the most important thing is tracking where the data is and tracking which APIs can touch the data, and then tracking which parts of the AI are touching those APIs. So, you have to have a direct map between the stuff you’re protecting and the stuff you’re letting the AI view or touch or interact with. Because if you don’t do that mapping, you’re going to end up with it having too much access and too much power.
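As a rough illustration of that mapping (a sketch only, with entirely hypothetical asset, API, and agent names), you can keep an explicit inventory of which data each API touches and which APIs each AI component can call, then audit it for paths that give an exposed agent indirect reach into restricted data:

```python
# Hypothetical sketch of the data -> API -> AI-component map Daniel describes.
# All names are illustrative, not a real inventory format.

DATA_TOUCHED_BY_API = {
    "orders_api":  {"order_history"},
    "payroll_api": {"salaries", "ssn"},
    "search_api":  {"public_docs"},
}

APIS_REACHABLE_BY_AGENT = {
    # A well-meaning integration quietly added payroll access here.
    "support_chatbot": {"orders_api", "search_api", "payroll_api"},
    "finance_copilot": {"payroll_api", "search_api"},
}

# Data classes that should never be reachable from an internet-facing agent.
RESTRICTED = {"salaries", "ssn"}
INTERNET_FACING = {"support_chatbot"}

def audit() -> None:
    for agent, apis in APIS_REACHABLE_BY_AGENT.items():
        reachable_data = set().union(*(DATA_TOUCHED_BY_API[api] for api in apis))
        exposed = reachable_data & RESTRICTED
        if agent in INTERNET_FACING and exposed:
            print(f"ALERT: {agent} can indirectly reach restricted data: {sorted(exposed)}")
        else:
            print(f"OK: {agent} -> {sorted(reachable_data)}")

audit()
```

The point is not the code but the discipline: if this map does not exist, the agent’s effective permissions are whatever its APIs happen to allow.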
Charlie McCarthy 14:38
The question that I don’t even want to ask, because I feel like we’re all sick of talking about it at this point: large language models, LLMs. Main security concerns there, do you have any? If we’re talking about foundation models, or just this move in the industry toward everyone adopting generative AI and the applications they’re plugging in, should we care?
Daniel Miessler 15:00
Yeah, I’m not overly worried about it, just because it’s so early and I feel we’re going to have a lot of controls around the outputs that come out of them. So I’m not super worried about it.
What I’m more worried about is connecting, like, interactive agents to APIs.
Charlie McCarthy 15:17
So you feel like there will be sufficient guardrails?
Daniel Miessler 15:20
I think so. I think there’ll be whole industries, like Protect AI, that stand up to actually protect it. So I’m not super worried about it.
Charlie McCarthy 15:30
I really appreciate you stopping by to chat with us, and everybody needs to check out Unsupervised Learning, UnsupervisedLearning.com.
[Segment 4]
D Dehghanpisheh 15:37
Hey, everybody! With me again is Dr. Christina Liaghati, the famed, shall we say, director at MITRE on the ATLAS framework. And we have three simple questions for you today, although they may not be so simple.
I guess the first question that’s not so simple is, how do you and MITRE see the “state of the union,” if you will, on AI and ML security today?
Christina Liaghati, PhD 16:06
That’s a fun one.
So I’ll maybe start off by saying this is only the beginning, right? I think we’re at the very cusp of what this field is going to look like. We’re in the same spot that cybersecurity was 20 or 30 years ago.
So it’s definitely something that’s evolving pretty rapidly right now. But I think that also means we’re in a very rapidly shrinking proactive window, because our adversaries don’t currently have to be very sophisticated in how they’re attacking AI-enabled systems.
But I think we’re going to see the rise in sophistication in those attacks happen pretty quickly now that we’re deploying AI in so many different environments. So, I think that’s going to happen pretty fast, but we’re still in a little bit of a window. And that kind of ‘only the beginning’ feeling, I think, is what’s exciting to get kind of proliferated across the community, but we’ve got to act fast if we’re going to take advantage of it.
D Dehghanpisheh 16:54
So, in the spirit of acting fast, what would you say is the most important type of security problem, or what’s the biggest security threat? We hear a lot about large language models and prompt injection attacks, and we see a lot of other things out there. And there’s adversarial machine learning and model evasion attacks.
What is fact from fiction, if you will, in terms of the risks and what is probably the thing that scares you the most?
Christina Liaghati, PhD 17:26
So, this one’s an interesting one because I feel like the types of threats are very unique to the system that you’re actually deploying AI inside of, right?
So it’s very dependent on how you’re actually using AI in your system-of-systems context. So, I would actually probably say that the biggest threat to our community is our lack of understanding. We don’t have a good way, as an entire holistic community, to assess the risk of deploying AI in these really consequential ways. So it’s much more that we need to become smarter about how we’re vulnerable and how we can mitigate those risks, rather than saying, ‘Oh, data poisoning is the biggest problem, focus all your money, time, and energy on that.’
It’s honestly very dependent on how you’re using and deploying AI, because the ways your system could be attacked, or could be vulnerable to failure, or to different types of emerging attacks like prompt injection or LLM-unique things we’re still characterizing as a community, are going to be very different than for, say, someone who’s deploying AI in a healthcare context, right?
It’s very, very system dependent.
D Dehghanpisheh 18:31
Thanks for that.
So, if education and awareness are the most important things, obviously Protect AI, the company that sponsors the MLSecOps Podcast and MLSecOps.com, launched a new bug bounty program, huntr, to create more awareness around security threats to the supply chain and the like.
If we have to go get educated, what are some of the things, and resources, and spaces, and places that the community should be engaging with to stay educated, stay informed, and stay updated?
Christina Liaghati, PhD 19:03
We need more data. That’s the bottom line here.
We do not have enough understanding as a community of what is happening in the real world, right? All the bits and pieces of data are in these different organizational silos. We don’t have a good picture of what the threats look like or where AI is actually failing, right?
So, I think the underlying theme for all of it is that we need more data. I’m really excited about the bug bounty programs, as well as, like you brought up, some of the community aspects of this, right?
Like, under MITRE ATLAS we’re building out incident-sharing mechanisms to allow the community to contribute both vulnerabilities, which is more of that proactive searching-for-vulnerabilities behavior, and the actual, more reactive forensics of, ‘All right, this is how I was attacked, so let’s avoid these kinds of things in the future.’ So there are multiple aspects of this data that I think are going to really underpin how productive we are around AI security in the future, and really help us prioritize the risks that our individual organizations need to start to mitigate first.
So, it’s very, very dataset dependent.
D Dehghanpisheh 20:04
So, one last question.
When we spoke on the podcast, one of the questions I asked you was, are you noticing an increase in the number of attacks that are occurring, and is the attack frequency and severity increasing or decreasing?
And you gave an answer that was basically, hey, it has increased quite a bit. That was about five months ago. I’m curious, is that pace of acceleration still continuing? And does MITRE see it continuing unabated?
In other words, we get asked all the time, are these attacks real? The answer is yes, but they’re not super public. So just tangentially, maybe off the cuff, could you give us a sense of the frequency with which these are happening?
Christina Liaghati, PhD 20:51
Yes, the frequency has increased. A shocker, I know. And in terms of how often it’s happening, that actually comes back to the data problem I brought up earlier, because even at MITRE we’re getting anecdotal evidence from different conversations that are happening sporadically, both with government and industry.
And while that gives me quite a bit of confidence in saying that, yes, they’re increasing rapidly and it’s really concerning, I want to get that data in front of the community. I want to release much more of, like, a dashboard view or something like that, right? As we’re evolving these different mechanisms for collecting the data in a way that appropriately protects the IP or the reputational risk of the organizations involved.
D Dehghanpisheh 21:28
[inaudible]
Christina Liaghati, PhD 21:29
Right, exactly. But we still need a little bit more community awareness of how often they’re happening, because right now it’s not a very good picture. And even though MITRE, as the not-for-profit safe space in the middle of all of that, has a good bit of information, I really want people to have the data in their hands.
D Dehghanpisheh 21:46
On that note, I just want to say to help us generate more data, get on the bug bounty platform at huntr: H-U-N-T-R dot MLSecOps dot com. Start attacking, and we can work together to improve AI/ML security.
From Black Hat 2023. Thank you for joining.
Thank you, Christina!
Christina Liaghati, PhD 22:06
Thanks D, I appreciate it.
[Segment 5]
Diana Kelley 22:07
Hi Adam!
I’m Diana Kelley, the CISO at Protect AI. And you are?
Adam Shostack 22:12
I’m Adam Shostack. I’m a review board member here at Black Hat and I’m an expert on threat modeling. I help people secure their systems and anticipate what’s going to go wrong with them.
Diana Kelley 22:23
And you wrote the book on threat modeling, right?
Adam Shostack 22:26
A lot of people say that, thank you! And I wrote two books! I’ve just released Threats: What Every Engineer Should Learn From Star Wars and the reception has been super good. I’m really excited about it.
Diana Kelley 22:39
It’s a great book and really important work that you’re doing.
I was wondering, when we look at AI and ML, what are your thoughts on the current state of AI and ML, and what do we need to think about if we threat model it?
Adam Shostack 22:51
Ooh! We could talk for hours!
And I think the biggest thing is, it's exciting. Right? The things that have happened in the last year are so transformative. It’s one of the most exciting times that I remember in the industry. It’s like the launch of cloud, it’s like the launch of the web. It’s like, this is going to change a lot. And I think we need to get systematic about how we approach the security of these systems.
Today we’re hearing a lot of people who are deploying these and then figuring out, what am I going to do to secure them? And that is one way to do it, but if we stop and think and we ask the question, ‘What can go wrong,’ then we can anticipate the sorts of problems we might have and we can plan for them so that we can protect our systems better.
Diana Kelley 23:51
And how would we do that with AI and ML?
A lot of the talk about AI and ML threats really focuses on the last stage: the adversarial attacks, the deepfakes. What would it mean to shift left on AI and ML, and how would threat modeling play in?
Adam Shostack 24:09
So, I think that’s exactly the right question. How do we bring security into each stage of the AI pipeline as we’re training our models, as we’re refining them, as we’re building them into our systems?
And I think it’s important to realize that, the way I think about it at least, there is AI for business: I’m going to solve a problem for my business using AI. There’s defensive AI: I’m going to use AI to help protect my systems. And offensive AI: I’m going to use AI to attack things. But if we’re shifting left, we’re thinking about the AI that’s helping our business. And there, I think the work we do from inception to say, “What could go wrong with this? What are we going to do about those things?” Those questions are the heart of threat modeling.
And we don’t have to wait for the system to be in production to start doing that work. We can say, oh gosh, this system might memorize data. We should think about where we’re getting our training data. It might hallucinate, so we should think about the business process we’re going to insert it into. How do we get human beings to be vigilant when the AI is right, or sounds right, 80% of the time, 90% of the time?
And I think we can think about these problems before we’ve even selected a language model, before we’ve trained the language model, before we’ve built a user interface, we can be thinking about these. And I think of all of that work as threat modeling. And I think it applies to AI in all of the ways that we’ve been doing it, and I also think we’re going to learn to threat model AI specifically better.
So for example, the OWASP just launched– Oh, “the” OWASP? I sound like an old person. OWASP just launched a Top 10 for AI, for language models. So that’s a list of specific threats that apply to your LLMs. And we can use that to inform our engineering; even when those engineers are AI experts or data scientists, we can give them that educational material, shifting left and getting better at protecting the AI we’re building.
Diana Kelley 26:50
MLSecOps!
Adam Shostack 26:52
MLSecOps! There we go!
Diana Kelley 26:55
Thank you so much for speaking with us. Any final thoughts for the Black Hat crowd on what to be thinking about this year?
Adam Shostack 27:01
Oh, wow! So I do think AI is really transformative, and you need to be thinking about what your strategy for AI is, not just ‘Hey, we’ve got some red teamers.’ Red teaming is great for making sure you did what you thought you did.
But how are we going to think about this strategically? How are we going to shift left? How we’re going to build security in is a crucial, crucial challenge for the next year.
Diana Kelley 27:34
Especially now at this transformative time.
Thank you so much, Adam Shostack, for speaking with us at Protect AI.
Adam Shostack 27:40
You’re welcome! Thank you!
[Segment 6]
Diana Kelley 27:42
Hello, I am here with Mike Rothman. Mike, would you tell us a little bit about your very illustrious career in security?
Mike Rothman 27:48
Oh [...] you know. So I’ve been in and around security for about 30 years at this point. Before it was a thing. And that was always fun.
So I’ve been kind of between practitioner and researcher and corporate folks. I’ve worked for a bunch of companies, too. I’ve kind of been there, done that, have all the road rash to prove it. But it’s been a fun ride, it really has.
Diana Kelley 28:09
You know, over all these 30 years, you’ve seen a lot of tech come and go and make a lot of splash, and this year it’s AI, right?
What about AI security?
Mike Rothman 28:20
So there are a whole bunch of different ways to think about AI security. Obviously, there’s the aspect of how we protect the intellectual property that we’re feeding into AI training. But there’s also how folks are using AI within their day-to-day operations: are they putting some of your corporate data up into something like ChatGPT? That’s a problem as well.
So you have both the user side as well as the corporate intellectual property side that we really have to think about. And then, of course, if you want to get all conspiracy-ish, we have the whole Terminator question: is this the beginning of Skynet? Everyone’s going nutty about that.
And again, it’s a very interesting corporate tool. But I do think we need some guardrails and some rules of engagement to make sure that we don’t get in trouble with it.
Diana Kelley 29:07
What do you think is most important for people to be focusing on right now when it comes to AI security?
Mike Rothman 29:15
Well, if I had to just pick one thing, I think it really is to start to educate folks about how the data is used to train these models. And by the way, every time you put a prompt into one of these public GPT-style or other LLM-based environments, they’re using that data and that prompt to train the model to do more stuff.
So, you have all sorts of different privacy issues, and it’s not like you’re going to train people to actually read the EULA for how you’re supposed to use these things. So, I think it’s really about education: yes, they can be a very powerful tool to accelerate how you do business, but you have to understand what’s going to make sense and what’s not going to make sense.
Diana Kelley 29:59
You’ve got to secure it. All right, well, thanks so much for taking the time, Mike, thank you.
Mike Rothman 30:04
Good to see ya!
[Segment 7]
Diana Kelley 30:05
Hello, I’m here with Aaron Turner. Aaron Turner is a huge security expert. Why don’t you tell us a bit about yourself, Aaron?
Aaron Turner 30:12
I’ve been doing some form of security for 30 years. I think I’ve seen a little bit of everything. The good, the bad, the ugly.
And Diana is one of my favorite people!
Diana Kelley 30:21
You have seen it all, Aaron. So there’s a lot to see this year on AI and ML. What are your thoughts about AI and ML security?
Aaron Turner 30:30
Well, there’s the AI as vaporware, slideware, marketing stuff, right? Where everyone’s just slapped “AI” on it. And then there is the stuff that’s really interesting, where large language models have fundamentally changed things.
Like, what’s the most disruptive thing in the security of AI? Well, I think ChatGPT was really successful because they took the governors off of it. They allowed it to hallucinate more, they allowed it to make stuff up, and it talked in such an authoritative way that people believed it! And so I think the biggest disruptor is that people have trusted these large language models in ways that maybe we shouldn’t have.
So, for example, I look at all things in security through the lens of the CIA triad, right? The good old-fashioned confidentiality, integrity, availability. It’s worked for decades, why stop? So, AI from a confidentiality perspective: anything you put into the model is no longer confidential, because the model owner can use that to train and do what they want to do.
Integrity of the output? Well, we already talked about hallucinations. I was talking with one of my IANS faculty colleagues, Mick Douglas. He was working on a project and had this massive amount of C# code that they put into the model to see, hey, can we optimize this? And ChatGPT optimized it. It went from 800 lines to like a hundred lines. Awesome, right? But it hallucinated an entire new class and method inside of C# that didn’t exist. Not so helpful when you want to actually run some code, right?
And then from an availability perspective, I had one organization I was working with that built a chatbot to basically condense a knowledge base into a natural language interface and make it easier to use. Well, they built the chatbot on OpenAI’s 3.5 model, and when the API got forced to 4, it broke the chatbot. There’s no SLA.
And so I think we have to have that filter to go, okay, it can be really good for some things. There’s what I’ll call the filter to say no, and the filter to say yes. On the yes side, I think large language models are really good at creating compelling tabletop scenarios. So think of large language models as dungeon masters, right? Give me a really compelling scenario of how I can test my business continuity or incident response policies.
Another really interesting thing: I was working with a company that had a very large, young labor force. They were doing really awful at phishing; well, they were being phished constantly, right? And so they wanted to do some sort of phishing awareness. And so they went to ChatGPT and said, ‘Write a phishing awareness training in the voice of Rick and Morty,’ because that’s what that workforce wants to engage with, right? And it did a great job of coming up with curmudgeonly Rick and naive Morty and why you should do so and so. So I think there are some positive things that can be done too, right?
So we’ve got to give, we got to take. So, what do you think?
Diana Kelley 33:31
All about the use cases, right?
Any final thoughts? What are you looking for here at Black Hat? What’s on top of mind for you?
Aaron Turner 33:39
I come here for the people. Like you. Right? Like, the connections that we have here.
The vendor stuff, whatever. Okay, they have to be here because they subsidize this thing, right? But like, I saw friends that I’ve had for 20 years last night. Right? And just seeing those people, connecting with them, talking about what are we doing for the community? Like, what are we doing to pay it forward? Who’s going to be the next people who come into – We’re not going to be around forever. Right? So who’s filling our shoes?
Like, who are the up and comers that we’re mentoring to be here, right? Because at some point I’m not going to come here anymore. You know, as much as I do like to see people like you. But what’s going to happen?
So those are the things that are most interesting for me is like, what are we doing about our community to actually build it forward? Because so many times it gets commercialized about, well, what job did you get and how much are you making? And you know, that sort of thing is like, okay, great, whatever. If we’re here on mission, that’s what I like.
Diana Kelley 34:38
All right. Thank you so much. It’s so good to be a part of this community with you.
Thank you, Aaron.
[Closing]
Additional tools and resources to check out:
Protect AI’s ML Security-Focused Open Source Tools
LLM Guard - The Security Toolkit for LLM Interactions
Huntr - The World's First AI/Machine Learning Bug Bounty Platform
Thanks for listening! Find more episodes and transcripts at https://mlsecops.com/podcast.