
AI Threat Research: Spotlight on the Huntr Community



Audio-only version also available on Apple Podcasts, Google Podcasts, Spotify, iHeart Podcasts, and many more.

Episode Summary:

Learn about the world’s first bug bounty platform for AI & machine learning, huntr, including how to get involved!

This week’s featured guests are leaders from the huntr community: 

Dan McInerney, Lead AI Threat Researcher

Marcello Salvati, Sr. Engineer & Researcher

Madison Vorbrich, Community Manager


[Intro] 00:00

Charlie McCarthy 00:07

Okay, welcome back to the MLSecOps Podcast everybody. I am your MLSecOps community leader, Charlie McCarthy. It's great to have you back.

And today we're joined by some leaders from the huntr community. huntr is the world's first AI/ML bug bounty platform. Welcome, everyone. Thanks for being here. We're going to give our listeners a little bit of background about huntr: what it is, who it's for, who can join, and just do a little bit of an introduction.

So with me today, we have Madison Vorbrich, Dan McInerney, and Marcello Salvati. Dan has been on the show many times before as a host, but now we've got him in the hot seat.

Dan McInerney 00:46

I've been demoted.

Charlie McCarthy 00:47

Yeah, no <laughs> he's just back in the hot seat answering questions instead of asking them. So, we know who y'all are; all three of you are employees of Protect AI, but maybe we could just go down the line starting with Marcello and you can tell us about your role within the huntr community; what you do there.

Marcello Salvati 01:05

Yeah. I mostly do report triaging with Dan. So whenever you send a bug bounty report to huntr.com, we'll be the first people to actually look at your report and triage it. And I also do a lot of backend automation stuff for the platform itself. That's basically my role in terms of the <inaudible>.

Dan McInerney 01:25

Yeah, I do pretty much the same thing. And then I also do, I mean, we both do some original research in all of our free time at work, too. 

Dan McInerney 01:36

Looking for new bugs and new attacks and things of this nature because that's what we've done for, I mean, combined 20 years, 30 years. 

Marcello Salvati 01:42

20 years, yeah. 

Madison Vorbrich 01:44

And I focus on the community aspect. So basically I'm in charge of bringing all of us together, getting hunters in the spotlight, celebrating your wins and accomplishments, and yeah.

Charlie McCarthy 01:56

Awesome. And “hunters” are like the community members that make up the pool of bug hunters.

Madison Vorbrich 02:01

Yes, exactly, yeah.

Charlie McCarthy 02:02

Awesome. Right on. Okay, let's dive in you guys. So to level set for either first time listeners to this podcast, or maybe listeners who have been here a while, but they're not super familiar with the bug hunting space, let's do just a brief overview about why the security of AI and machine learning (ML) systems is a concern in today's technology landscape. Why should we care?

Dan McInerney 02:24

We could start over here. Yeah.

Marcello Salvati 02:26

Oh, you want me to start? Well, I think primarily it's because it's a very new field, and there hasn't been a security focus on AI and ML, which is normal for most new things that pop up in the tech world. But the main difference between this wave and previous new technologies, in my opinion, is just the pervasiveness. And that tends to have a lot more security implications than usual.

Dan McInerney 02:58

Yeah. It's like the speed of development. 

Dan McInerney 03:01

Because I mean, everyone just latched on as soon as, you know, ChatGPT came out, it was like - 

Dan McInerney 03:07

Yeah. CEOs are like, well, are we using this to do everything? It's like, well, no, not yet, but I mean, we can. And so let's do it. And then they try and push something out, or people have little side projects that help themselves at work, and they end up publishing that in open source. 

Dan McInerney 03:22

And now people are like, oh, this is really useful. But it was just a side project. There was no professional security testing of this. They're just pushing it as fast as they can to get users, because with speed of development, you know, the early bird gets the worm, I guess.

Marcello Salvati 03:37

Yeah. Yeah, basically. And it's also just like, [the people] in organizations are gonna be hooking this stuff up more and more to different kinds of systems. So like there really needs to be a really big focus on the security aspect of it if you're gonna start hooking this up to like, you know, surgery robots or something that's - 


Dan McInerney 03:22

Let’s hope so. Yeah. 

Marcello Salvati 04:01

Yeah. It's <laugh>, which is totally gonna happen too. Like there is definitely gonna be a use case in like five, six years down the line. 

Charlie McCarthy 04:06

Right. Probably gonna want to run a scan or two first.

Dan McInerney 04:09

You know, it happens. Chatbots can already pass medical exams and stuff. So it’s a matter of time.

Marcello Salvati 04:11

Yeah, exactly. 

Madison Vorbrich 04:13

Yeah, and like bar exams. 

Marcello Salvati 04:15

Yeah. Just a matter of time before, you know, people are gonna start hooking this up to physical stuff.

Dan McInerney 04:18

But it’s - I don't wanna also sit here and be like, “I can't believe all these developers didn't hire penetration testers to test all their software.” Like, I get the speed of development is a necessary part of new technologies. It's just a fact of life. 

Now, if you slow down and add lots of security to your product before you release it, that's fantastic. But you might get beat to market. You know, you might actually lose money on that, so I'm not here to, like, proselytize that “oh my gosh, everyone's so stupid for not focusing on security as much as they have.” I mean, there are economic reasons that a lot of these have not been thoroughly tested or had security even thought about yet.

Marcello Salvati 04:57

Well also, the developers aren't necessarily security experts either. So they might not even take that into consideration.

Dan McInerney 05:02

That’s something weird I noticed too in the AI world - so I started studying AI and machine learning and doing my own machine learning projects a couple years before ChatGPT came out - and what I noticed is that AI engineers tend to have stayed in their lane for a long time. What I mean by that is, a lot of the AI engineers and machine learning engineers I've met went to school for machine learning or statistics or something, and they just continued down that path: data scientist, junior machine learning engineer, and so on and so forth.

But in the general software engineering world, you see a lot of overlap between, like, system administrators and developers, or penetration testers and network engineers. And I just don't really see that in the AI world. I see a lot of ML engineers that stayed on the ML engineering path and never really went into professional software engineering or anything like that.

Marcello Salvati 05:49

Yeah. Exactly. Yeah.

Dan McInerney 05:50

Which is a contributor to some of these issues.

Charlie McCarthy 05:53

Yeah, and that's something that we've talked about in some other episodes of the show. As you're talking about building, you know, whether it's an MLSecOps “dream team” or an AI security team within your organizations, how there is this knowledge gap between practitioners or developers in this field specifically and security experts, and kind of how we bridge that and bring those groups together for sure. 

Dan McInerney 06:14

Mm-Hmm. Yeah.

Charlie McCarthy 06:15

So in an organization then, if we switch tracks a little bit, you know, to your point that there are all these new AI-powered technologies and applications that kind of - since that ChatGPT or LLM explosion happened [in the market], what, close to a year and a half ago now -  individuals within organizations have adopted them. Maybe the organization knows that some of their employees are using this stuff. Maybe they don't. Organization-wide, they've adopted certain tools and are building things on like open source assets, that sort of thing. We've also talked on the show a lot about the practicality of certain attacks or whether a vulnerability that's found is actually valuable or it's something that an organization should care about. 

What do you think are some of the, like most common or most impactful vulnerabilities or risks to ML systems that, that maybe orgs specifically should be concerned about? What's practical?

Marcello Salvati 07:14

I think the stuff that would probably be the most concerning is the stuff that's sort of out of your control in an organization, so like supply chain attacks.

Marcello Salvati 07:29

Like, that's the kind of thing - considering just the state of the ecosystem right now, supply chain attacks specifically, I think, are probably the main concern, because it's something your organization can't really control. 

Like, you can have basic security hygiene practices like inventory, you can have egress controls, you can have all that, and you can sort of control your environment to a T [to perfection], but that's just completely something out of your control.

Dan McInerney 07:51

Yeah. I feel like what gets the most press is, you know, injection attacks and attacks on the LLMs, because I've [inaudible] a million customer calls, and probably the majority of the time the first question is, well, what happens if we introduce an LLM in our environment? Because we're gonna try and replace our customer service agents or something. I'm like, well, A, you're probably not there yet to have an LLM replace your customer service agents. You will be soon, but probably not right now. 

Those attacks - the prompt injection stuff - are novel, and so everyone kind of talks about them a lot. But from a practical perspective, you're not going to get your network hacked through your LLM.

Marcello Salvati 08:30

Unless you really messed up somewhere. <laughing> Yeah. Unless you really messed up, I guess.

Dan McInerney 08:36

But you're probably gonna mess - so what's going to happen is you're going to implement the LLM and you're going to be concerned, oh my gosh, there's all these new attacks and the prompt injection attacks and stuff like that, but realistically you have to cover that LLM with some kind of API so that other people can reach it, and that API is where you're gonna get hit.

Because, I mean, you found a really interesting denial-of-service in FastAPI.

Marcello Salvati 08:57

Yes, yeah. And that's the kind of stuff. But again, I think there's definitely still some misconception here, especially from the security testing side - and I think this leads into another conversation - but you really don't need to be an AI or ML engineer to actually security-test these systems, to perform a pen test on these systems. As long as you know basic web hacking skills, you can just divert your attention to these systems.

Dan McInerney 09:29

Yeah, I kind of consider like the API…API security is AI security essentially at this point.

Marcello Salvati 09:33

Yeah. To an extent.

Dan McInerney 09:34

And so like ChatGPT, they did a bug bounty program and they actually had like prompt injection and jailbreaks and stuff just out of scope. 

Dan McInerney 09:43

So like, what, what are you supposed to attack? You're gonna attack the API and like the web, you know, the web components there.

Marcello Salvati 09:46

Yeah, your standard, like, web hacking skills apply here. Like this is not, this is not something completely out of the usual for, like, from a security testing perspective.

Charlie McCarthy 09:55

So we're talking about organizational risks related to use of AI powered technologies. What are, what are the defenses or, you know, what types of roles within an organization, is it pen testers that should get involved or red teaming exercises? Like what's, what's kind of the first line of defense, would you say - through your lens - for an organization looking to bridge the knowledge gap between their security teams and their engineering teams?

Dan McInerney 10:27

We may be biased here, but I think that the pen testers are extremely valuable in an organization that deploys LLMs and they wanna say, oh my gosh, did we open up a new risk in our environment by doing this? 

Dan McInerney 10:35

Well, pen testers are gonna be able to figure that out because you can just give them a network foothold and then say, “hey, go do whatever you can from this AI engineering network segment. You know, we'll just, we'll give you the ML engineer standard laptop now. Go do whatever you can.” 

And you're gonna find like that maybe the AI engineers are standing up these web servers that make their life easier, and turns out those web servers haven't been security tested, which we've seen multiple times in the past. 

Dan McInerney 11:03

And they end up getting, you know, local file includes, they get further network penetration, and now they're in your accounting department.

So that sort of thing's useful. But there is also a utility in distinguishing that, which is essentially just a regular network pen test or web application pen test, and an actual like model security test. So there's a few different open source tools you can use that actually test the inputs of the models. And they test for things like jailbreaks and prompt injections and things of that nature. 

And I think that's useful, but I also think that's probably not where you're gonna get hacked - you know, if your model is slightly biased in some way, shape, or form. That's what these kinds of model attacks - and "model pen tests" is what they're being called now - look for. I just don't really see that as all that practical and impactful to your security. It's useful and should probably be done, I mean, security testing is always good, but yeah, a regular network pen test in your ML engineer's network segment is gonna be probably way more impactful than you think.

Marcello Salvati 12:05

Yeah, I agree. And again, a lot of this comes down to just the standard, basic cyber hygiene, you know - that's the new-fangled word for it. But, you know, inventory: just knowing what, when, and where - like, the software stack that you run. Some of these are just basic principles.

Dan McInerney 12:27

Yeah, and where it gets confusing in AI is that a lot of these ML engineers, they need to stand up these small open source projects in order to make their lives easier. And then you start to lose control over your own network because they, they're like, listen, you want me to go fast, right? Yes. You want me to go fast. Okay. I need this, this, this, this installed in a bunch of cloud networks and on our internal network and stuff. And suddenly you just opened yourself up with a, you know, a giant backdoor.

Marcello Salvati 12:52

Yeah. The zero trust architecture principles really actually help a lot here. The problem is, you know, having been on the defensive side of things, organizations are rarely in a position to even know what's running on their laptops, let alone apply an entire rearchitecture of their network. So it's the same old problems, honestly.

To an extent, it's the same problems that we've been having for the last 20 years in terms of cybersecurity. Only there's a fundamental aspect here that's a little bit different, which is that these systems are intended to be publicly exposed, to an extent, on a wide scale. So there might be some additional things down the line that you might have to worry about. But a lot of these problems come back to the core tenets of just security.

Dan McInerney 13:43

Yeah. Knowledge of what is installed in your environment.

Marcello Salvati 13:46

Yeah, exactly.

Dan McInerney 13:47

But it's like, what are you gonna do? Shut down the ML engineers from using their tools? So at that point, that's where the pen test is really useful. You go, okay, listen, you have a week to install everything you need, and then we're gonna do security testing on that, and then we'll figure it out from there - what you're gonna keep and what you're gonna lose. So that seems like the reasonable path to go down: yeah, give 'em all the speed of development they want, but then check it out, you know, with a security test or a pen test or something.

Charlie McCarthy 14:12

Got it. Awesome. Okay. So, organizational risk that we know exists; it sounds like there are a couple different pools. There's the use of AI-powered technologies or applications that employees might be using, which has its own set of risks. And then also, if you are actually building and deploying AI, you have engineering teams, ML engineers, that are creating systems - that's kind of another category. So for the organizations that are building and deploying, many of them are using open source machine learning assets, and these assets can contain vulnerabilities, right. And "enter huntr" here. So if we're transitioning over to that group - what it's there for and how it participates in this ecosystem - who are the participants within the huntr community, and how do they contribute to identifying vulnerabilities or bugs in AI/ML?

Madison Vorbrich 15:11

So I can speak to part of that. I feel like most of our hunters are either bug bounty hunters and hackers themselves, security researchers, or pen testers. We've also seen software developers and engineers come in, hacking enthusiasts, some students that want to learn. As far as how they can contribute to identifying vulnerabilities, I'll pass that over to you two to provide more detail.

Marcello Salvati 15:38

I think the biggest part about this is the scale. If you're able to get 10,000 people to have eyeballs on a specific project that's open source, you're guaranteed to speed up the whole security life cycle of your product almost immediately. It's just a scale that really helps. If you never had, or don't have, any sort of security background whatsoever, huntr is actually really valuable in terms of finding security vulnerabilities in your open source project because - 

Dan McInerney 16:16

Yeah. And it's not like, you know, we're charging for these open source projects.

Marcello Salvati 16:19

Yeah, exactly.

Dan McInerney 16:20

We add these open source projects because they're open source. And then we just help the maintainers find the security issues. And I mean, there's no legal reason that they have to like, fix the vulnerabilities or anything, but we find pretty much all of them get fixed. There's been a lot of really good maintainer interest. 

But as far as like getting started and involved, it's pretty simple. I mean, you just kind of - 

Charlie McCarthy 16:41

Yeah, like what, what's the journey for like a new community member from signup through, say, submitting a report or something? What does that kind of look like for someone like me or a business leader who's not at all familiar with that space?

Dan McInerney 16:56

You could just sign up for GitHub and get a GitHub account, log in with your GitHub username to huntr, and then pick out one of the projects that's in our bounties. I don't know how many there are. A hundred and something.

Marcello Salvati 17:10

A hundred-something. Yeah, for sure.

Dan McInerney 17:12

Pick out one of them. Especially ones with, like, web components - those tend to be the easier ones to attack. And then just start running - it's good to do an automated scan first. Something like Snyk is good for code review, so you can download all the code, then run Snyk. But a warning: 99% of the findings that Snyk reports are just warnings. A lot of people report, you know, cross-site scripting or command injection that doesn't really exist, but Snyk is like, it might exist here. 

But it's good for an overview, just to understand the code and where the vulnerabilities might be. If there's a whole bunch of warnings in one section of code, it's probably good to check that out. Then you scan it with an automated scanner like Burp Suite. And that'll also give you a kind of good overview of where in the web application there might be vulnerabilities.

And then you just gotta kind of dive deep. Now, the beauty of this bug bounty program is that because all the bounties are open source, you can just read the code. So you can just go to GitHub and be like, something weird is happening here, I wanna know what it is. Okay, great. Go read the code. This is not like a lot of other bug bounties where it's closed source products and something weird's happening, so you just have to black-box it. No, this is fully white-box and open. And that's why I think it's actually a lot easier to get started in this bounty system than somebody else's.

Marcello Salvati 18:26

Yeah, for sure. A hundred percent. And the fact that you're contributing to like the security of an open source project is also really nice. Yeah. It's honestly fantastic.

Dan McInerney 18:34

I mean, you're doing something morally right. 

Marcello Salvati 18:37

Yeah <laugh>, there you go.

Dan McInerney 18:38

You can feel good about yourself. <laugh>

Marcello Salvati 18:40

And there's a system in huntr where you can potentially submit the patches to the vulnerabilities you find as well. So you could wind up being an open source contributor too, you know - a resume-building event. A good one, you know.

Dan McInerney 18:55

Yeah, it's actually quite significant for resumes. CVEs. Because we publish CVEs if the finding is good, and CVEs on your resume look phenomenal if you're trying to get into the security field, or even just the tech field as a whole - that's really useful. And then they see that you also contributed the patch to it? It's like, “hire this person.”

Marcello Salvati 19:12

Exactly. Yeah. Especially if you're just starting out in security and trying to get your first gig in cybersecurity.

Dan McInerney 19:17

Yeah, even software dev.

Marcello Salvati 19:18

Yeah. Anything. It's honestly really good.

Charlie McCarthy 19:21

For a new member to go sign up for the first time, like say I want to get involved, I go to huntr.com, are there certain prerequisite skills that would be helpful for me to have before doing that? Or, I guess that's question number one. And then question number two, you know, if you have been bug hunting in the traditional software space for a while, like is there a big difference? What, what is helpful to bring along if you're gonna go join the huntr community for the first time?

Marcello Salvati 19:48

To be perfectly honest, there is almost no difference. If you have any sort of bug bounty background or a web application pen testing background, those skills apply here. Again, 99% of the stuff that's on huntr is traditional network services and web applications.

Charlie McCarthy 20:11

That's awesome. 

Marcello Salvati 20:12

And I think that's part of why there's a little bit less interest in this kind of stuff in the security community, at least from what I see.

Just because I think people have the assumption that you need to have some sort of hardcore ML background or something. But there's not - it's literally just a web application, and all of your standard hacking skills apply there.

Dan McInerney 20:39

Yeah, and we don't currently have bug bounties against models themselves. Like, we don't have any bug bounties against Claude - you know, Claude 3 Opus just came out. So you really don't need to know that much machine learning. 

I think a little bit of the basics is really useful. So it's useful to know, basically, the lifecycle. In 30 seconds, that's: you open up a library like TensorFlow, you create a model object inside that library, then you feed it a bunch of data, and then you hit “train.” Now you have a trained model, and you can feed that model new data and it'll make inference, as it's called. It'll make predictions based on the data you just fed it - say, housing prices. 

Now, because we don't have the bounties on the models themselves, that's basically all you have to know about machine learning. But the utility of that knowledge is that now you understand that machine learning involves a lot of file manipulation. So when you're going through these libraries on our bounty system, the file uploads and the file downloads tend to be a very hot spot in all of these projects. Can you overwrite files, can you upload malicious files and backdoors, and stuff like that? So I would say that's really all the background knowledge you need: just the knowledge that there's a lot of file manipulation. 
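[Editor's note: the 30-second lifecycle Dan describes - create a model, train it on data, then run inference on new data - can be sketched in a few lines. This is an illustrative sketch only, using NumPy least squares in place of a TensorFlow model; the housing-price numbers are fabricated.]

```python
import numpy as np

# Toy "housing" data: square footage -> price (fabricated for illustration).
X = np.array([[800.0], [1200.0], [1500.0], [2000.0]])
y = np.array([160_000.0, 240_000.0, 300_000.0, 400_000.0])

# "Train": fit weights by least squares. This stands in for building a
# model object in a library like TensorFlow and hitting "train".
X_b = np.hstack([X, np.ones((len(X), 1))])  # add a bias column
w, *_ = np.linalg.lstsq(X_b, y, rcond=None)

# "Inference": feed the trained model new data and get a prediction.
def predict(sqft: float) -> float:
    return float(np.array([sqft, 1.0]) @ w)

print(predict(1000.0))  # close to 200000.0 for this toy data
```

The security-relevant takeaway is the shape of the workflow rather than the math: every stage consumes externally supplied data or files, which is why the tooling around it handles so many uploads and downloads.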

Marcello Salvati 21:53

Yeah, and a lot of user input - I mean, user input everywhere. Most of these services are designed to accept user input almost everywhere. Just because of the nature -

Dan McInerney 22:03

Yeah, because you feed data. 

Marcello Salvati 22:04

You need to feed it data for inference, for the model, for any, all things data.

Dan McInerney 22:08

It's very, yeah - it's very different from regular web apps, because regular web apps don't really do a lot of file upload or file download. They'll do, like, you know, you upload an image and they just store it on an AWS server somewhere safe, and that's the end of that. But here, you have to upload massive amounts of data to a lot of these web applications in order to feed the model, remotely - you know, from Kazakhstan or something. You can upload some data and then have a model be trained and then get the inference. But that is very different from a regular web application, which opens up a lot of holes, which we've seen on the huntr Hacktivity page. 
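[Editor's note: as a concrete sketch of why those upload paths are a hot spot - a hypothetical handler that joins a client-supplied filename into its storage directory lets `../` sequences escape it and overwrite arbitrary files. The directory and filenames here are invented for illustration, not taken from any specific project.]

```python
import os

UPLOAD_DIR = "/srv/ml/datasets"

def save_path_vulnerable(filename: str) -> str:
    # Trusts the client-supplied name, so "../" sequences walk out of
    # UPLOAD_DIR; writing to the result can overwrite arbitrary files.
    return os.path.join(UPLOAD_DIR, filename)

def save_path_safer(filename: str) -> str:
    # Normalize, then confirm the result is still inside UPLOAD_DIR.
    path = os.path.normpath(os.path.join(UPLOAD_DIR, filename))
    if os.path.commonpath([UPLOAD_DIR, path]) != UPLOAD_DIR:
        raise ValueError(f"path traversal attempt: {filename!r}")
    return path

evil = "../../../home/mluser/.ssh/authorized_keys"
print(save_path_vulnerable(evil))  # resolves outside the dataset directory
```

The `commonpath` check (rather than a string prefix check) also rejects tricks like a sibling directory named `datasets2`.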

Last tip, 'cause I just remembered this too - 

Madison Vorbrich 22:41

Dan with the tips <laugh>

Dan McInerney 22:45

The Hacktivity page. 

You said if you're just getting started, what do you do? You could pick a project and scan it and whatever. Honestly, the first step I would do is go to the Hacktivity page and see the valid CVEs that have been published.

That is like golden. Because now you're looking at real world practical attacks. A lot of times new hackers and stuff, they'll go to Wikipedia or OWASP and be like, okay, I'm just gonna read about cross-site scripting. And they get overwhelmed with 700 cross-site scripting injection attacks. And it's like, well, but which one of these actually works in the real world? 

Go to the Hacktivity page, you'll see what attacks really work in the real world. Start with that. Just run through as many as you can until you pass out and do it again tomorrow. And you'll have a great background knowledge to know where to start looking. 

Madison Vorbrich 23:28

Yeah. Even reading the proof of concepts on what they submit, you can learn not only why it's important, but also their process - their thought process in going through this and finding it. 

Madison Vorbrich 23:40

And it helps you learn; you get a better understanding. And it's right there on the huntr website, easy to go check out.

I think so too. With being a beginner, you know, you can also join other AI/ML communities on Reddit - bug bounty threads on Reddit, you know. There's also people like NahamSec, who is a well of knowledge. He's a big bug bounty hunter. You can go check out all of his stuff. He's great. Amazing. There's a lot of great people out there to follow. 

Dan as well - he has some good content on the huntr YouTube channel where he gives tips and tricks. We're constantly gonna be pushing out new stuff too; that's in the works. 

So yeah, if you're a beginner, I would definitely recommend checking those things out. Or if you just wanna completely submerge yourself into it, go to Black Hat, go to, you know, DEF CON. Meet other hackers like yourself. Meet the team. We will also be there if anyone wants to stop and say hi. And yeah, just submerge yourself into that environment. I think that's also helpful.

Marcello Salvati 24:43

I think everybody's path is a little bit different, honestly. But open source definitely helped a lot - having any sort of background in open source definitely helps a lot with this stuff.

Dan McInerney 24:55

Yeah. It was useful to just encode the knowledge. So you learn a little bit of Python, you learn a little bit of web application security, and then whatever you learn, just write a script that does it for you. That's a good way to get started. 

In terms of finding communities and stuff, we have a fantastic huntr Discord channel.

Madison Vorbrich 25:11

Yeah, we do.

Dan McInerney 25:12

There's lots of people there that are high value - high, [I don’t mean] high value - but good at what they do and willing to help. 

Charlie McCarthy 25:21

Yeah. I imagine you could just go jump into that space, I mean, there are a bunch of different channels and you could say, oh yeah, hey, I'm working on this thing. I’m interested in this. Or like, there's this bounty and see what folks have to say.

Dan McInerney 25:31

Yeah, ask questions, ask for help, and you know - hey, does everybody wanna work together on this project or something? 

Madison Vorbrich 25:37

Exactly. Yeah. And we always encourage that too, for them to be collaborative and to, to hit us up. You know, everyone has access to the huntr team too on Discord, which is great. So definitely check it out if you haven't yet.

Charlie McCarthy 25:50

Awesome. There was one other resource that I wanted to bring up that I thought I saw that I think exists on the huntr website, but didn't y'all put together like a Beginner's Guide to AI/ML bug hunting?

Dan McInerney 26:02

Yes, yeah. I wrote a big long post on the basics of web application security.

Charlie McCarthy 26:08

It’s pretty much step by step. 

Dan McInerney 26:10

Yeah. That's what we're talking about here. It's the basics of web application security - and API security, which is like the basis of AI security - and then the common vulnerabilities that we're seeing on the platform, which you can also see through the Hacktivity page. But, you know, it's stuff like local file include, arbitrary file overwrite, remote code execution - these kinds of things in the networked components of these libraries. 
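[Editor's note: on the remote code execution side, one root cause that shows up repeatedly in ML tooling is deserializing untrusted model or artifact files with pickle. A minimal sketch, where the payload just runs a harmless print; the class name is invented for illustration.]

```python
import pickle

class Backdoor:
    # pickle stores the callable and arguments returned by __reduce__;
    # loading the resulting bytes executes that call.
    def __reduce__(self):
        return (eval, ("print('payload ran during model load') or {}",))

malicious_model = pickle.dumps(Backdoor())  # the attacker's "model" file bytes

# A service that runs pickle.loads() on an uploaded "model" executes the
# attacker's payload before any inference ever happens.
loaded = pickle.loads(malicious_model)  # prints: payload ran during model load
```

This is why "never unpickle untrusted input" is standard advice, and why safer tensor-only serialization formats exist for sharing model weights.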

It gets a lot more difficult if you're looking for, say, buffer overflows in TensorFlow. That's definitely the next step up in advanced knowledge. So if you're just getting started, I would probably recommend not running a fuzzer on TensorFlow or something - it's gonna take you longer to find a bug that way than just looking at the HTTP requests in Burp.
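As an illustration of what probing those HTTP requests for the bugs Dan mentions can look like, here is a small, hypothetical sketch that generates directory-traversal payloads to paste into a request parameter. The endpoint and parameter names in the usage comment are made up for illustration.

```python
from urllib.parse import quote

def traversal_payloads(target_file: str = "etc/passwd", max_depth: int = 4) -> list:
    """Build '../' traversal strings at increasing depths, plus a
    URL-encoded variant of each (slashes become %2F), since some
    servers only decode the path once."""
    payloads = []
    for depth in range(1, max_depth + 1):
        raw = "../" * depth + target_file
        payloads.append(raw)
        payloads.append(quote(raw, safe=""))  # encoded variant
    return payloads

if __name__ == "__main__":
    # Hypothetical usage: try each payload in a request parameter in Burp
    # Repeater, e.g. GET /artifacts?path=<payload>, and watch the response
    # for file contents.
    for p in traversal_payloads(max_depth=2):
        print(p)
```

Generating variants programmatically like this is far quicker for a beginner than fuzzing a native library, which matches Dan's recommendation to start with the HTTP surface.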

Charlie McCarthy 26:53

Okay. So Madi, I wanted to ask you, what would you say are the benefits of joining a community like huntr in Discord, getting involved with that group when you're embarking on this type of journey?

Madison Vorbrich 27:07

Well, like I said, you have access to the whole huntr team on Discord, and you have access to other great hackers. The maintainers - everyone - are in our Discord community, so you have direct access to them and can reach out. We're also always willing to chat.

Something else that's really great is that we're always willing to put you in the spotlight. So if you found a really great vulnerability or bug and you wanna showcase your thought process - how you found it, even just walking through your proof of concept - we'd be more than happy to highlight that and show it to the rest of the community, which I think is great.

Charlie McCarthy 27:59

That’s fun. 

Madison Vorbrich 27:50

Yeah. So we always wanna bring people into the limelight when it comes to their success. And when it comes to content collaboration, we're always willing to get on a call or a meeting, what have you, and walk through some great finds you discovered and how we can bring them to life on the huntr platform and website. So I feel like that's definitely a plus that we try to communicate to our hunters all the time.

Charlie McCarthy 28:16

Okay. Well, I guess I would wrap with this - it's kind of a common question that we ask all of our esteemed guests at the end of the show. If you're speaking to the MLSecOps listeners - a pretty diverse group of threat researchers, people who might already be on huntr, but also CISOs, security professionals, ML practitioners, engineers, data scientists - through the lens of the huntr community, what is one call to action or one piece of info that you would want listeners to take from this episode in particular?

Madison Vorbrich 28:53

I would say, from the community standpoint of huntr, something that I just wanna remind people is that we're all in this same new, exciting space, experiencing this all at the same time. So it might feel daunting to come in and feel like you might not know what you're doing, but just know we're right there with you. You can talk to us on Discord or Twitter - you know, X - sign up, reach out to us, leverage the community, leverage other people. And again, I know it feels daunting, but everyone's in the same boat as you.

Marcello Salvati 29:26

Yeah, I think one of the big issues is the lack of knowledge crossover between people with different areas of specialty or expertise, right?

So if you're from an ML engineering background, go talk to the security folks in your organization. Try to understand their concerns and just start those conversations, because that's the only… and if your organization pays for training, you might wanna take a couple of secure coding classes.

Marcello Salvati 30:08

That's the kind of stuff that I think is going to accelerate secure-by-design principles, both in your organization and in these open source projects.

Dan McInerney 30:19

Yeah. I feel like from a CISO perspective, where huntr is really, really valuable is the Hacktivity page.

You can just follow along and see, "Oh, they added a tool that we use. Is it getting a lot of hits? Is it getting a lot of CVEs or issues?" Then you can have that instant knowledge that there might be an insecure thing in your environment before a scanner or something finds it.

Charlie McCarthy 30:43

Awesome. Well, thanks, y'all. And I guess my call to action for our MLSecOps listeners might be to listen to another fantastic show that Dan and Marcello are in, called “Between Two Vulns.” There are two - three? - episodes out right now?

Marcello Salvati 31:01

Two right now.

Charlie McCarthy 31:02

Two right now, with more to come. They're much shorter than this show - great for a 10-minute coffee break - and you get to dive a little deeper into vulnerabilities that the huntr community has found, in a very entertaining and engaging way.

So yeah, I think that wraps it. Thanks y'all for being here today. Thank you to our listeners for joining us. We will have links to all of the huntr resources that we spoke about and contact details for these lovely folks in our show notes. Until next time, that's a wrap.


Additional tools and resources to check out:

Protect AI Radar

Protect AI’s ML Security-Focused Open Source Tools

LLM Guard - The Security Toolkit for LLM Interactions

Huntr - The World's First AI/Machine Learning Bug Bounty Platform

Thanks for listening! Find more episodes and transcripts at https://mlsecops.com/podcast.