MLSecOps | Podcast

ReDoS Vulnerability Reports: Relevant or Noisy Nuisance?

Written by Guest | Mar 20, 2024 10:20:26 PM

Audio-only version also available on Apple Podcasts, Google Podcasts, Spotify, iHeart Podcasts, and many more.

Episode Summary:

In this episode, we delve into a hot topic in the bug bounty world: ReDoS (Regular Expression Denial of Service) reports. Inspired by reports submitted by the huntr AI/ML bug bounty community and an insightful blog piece by open source expert William Woodruff (Engineering Director, Trail of Bits), this conversation explores:

  • Are there examples of ReDoS vulnerabilities that are actually worth fixing?
  • Triaging and the impact of ReDoS reports on software maintainers.
  • The challenges of addressing ReDoS vulnerabilities amidst developer fatigue and resource constraints.
  • Analyzing the evolving trends and incentives shaping the rise of ReDoS reports in bug bounty programs, and their implications for severity assessment.
  • Can LLMs be used to help with code analysis?

Tune in as we dissect the dynamics of ReDoS, offering insights into its implications for the bug hunting community and the evolving landscape of security for AI/ML.

Transcription:

[Intro] 00:00

Dan McInerney 00:07

Hi, I am Dan McInerney. I'm the lead threat researcher for Protect AI, and this is my colleague Adam Nygate.

Adam Nygate 00:13

Hi, everyone. Adam here from Protect AI also, yeah.

Dan McInerney 00:17

We have William Woodruff on with us, who wrote a really interesting blog post about ReDoS vulnerabilities - and, more generally, vulnerabilities that really shouldn't even be submitted to maintainers. I think he's got a pretty interesting perspective on this whole process, and it's quite near and dear to our hearts, so let's get an introduction from William.

William Woodruff 00:39

Hi, everybody. My name is William. I am an Engineering Director at Trail of Bits. I'm on the open source team there, and yeah, I've been told I have a lot of opinions on open source and incentives in open source.

Dan McInerney 00:50

So how long have you been in the industry?

William Woodruff 00:52

Yeah, so I've been working as an engineer for about six years, but I've been an open source contributor for somewhere between 10 and 12 years.

Dan McInerney 01:00

Like a security engineer or regular software developer?

William Woodruff 01:03

Yeah. Security engineer. The company I work for is a security consultancy, and so I primarily do security consulting.

Dan McInerney 01:09

Oh, that's great. So where do you see these ReDoS vulnerabilities? To give some background, real quick, on the blog post we're talking about - I'm sure we'll have a link in the show notes - it argues that developers should push back against all of these really low-quality CVE submissions that people send to them.

And a ReDoS is a “regular expression denial of service.” It's taking advantage of the regex (regular expression) engine: when a match fails partway through, the engine backtracks and retries other ways to split the input, so you can craft input that puts it into this effectively permanent loop that ends up exhausting CPU and that sort of thing.
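
[Editor's note: a minimal sketch of catastrophic backtracking, the mechanism behind ReDoS. The pattern and input here are the textbook example, not one from a real report; the nested quantifier in `(a+)+` gives a backtracking engine exponentially many ways to split the input before it can conclude there is no match.]

```python
import re
import time

# Classic pathological pattern: a quantified group that is itself quantified.
pattern = re.compile(r"^(a+)+$")

# A near-miss input: all 'a's followed by a 'b' can never match, so the
# engine backtracks through every possible split of the 'a's before failing.
evil_input = "a" * 25 + "b"

start = time.perf_counter()
pattern.match(evil_input)  # hangs for seconds; each extra 'a' roughly doubles it
print(f"match attempt took {time.perf_counter() - start:.1f}s")
```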

And so, just to get started, I'm kind of curious: what was the trigger for this blog post? I know there was one - somebody sent you something and you were like, that's it, I have to go tell the entire world that this is not how this should be.

William Woodruff 02:08

Yeah, there definitely was a trigger. The trigger, at least in my case, was I have a lot of other friends in the open source community. I often sort of get tagged on issues just to pop in and have sort of opinions on, on what's going on. 

And for a while we were seeing these murmurs in the JavaScript community of these reports where people would file a regular expression denial of service report and receive either a CVE or some other kind of identifier. And at least in the JavaScript ecosystem, these made a little bit of sense, because it would be a service-facing thing on a website, for example, where you could hang the website.

But two or three years ago, we also began to see them in the Python ecosystem, where oftentimes the domain doesn't make any sense for user reachable input or user controllable input.

And so the one that really set me off was a friend of mine has a library, and he received a ReDoS report that was not only in completely unreachable code but seemingly had never, ever been used by anyone.

So not only was the code unreachable in the actual library it was reported to, but also nobody had ever called this API function anywhere, ever. He did an entire survey using GitHub's code search and couldn't find a single use of it. And so it was sort of doubly useless.

Not only is it this sort of low-value vulnerability class, but also there was absolutely no evidence that it was ever even remotely triggerable.

Dan McInerney 03:38

Hmm. So we run a bug bounty program too, on huntr.com, and I'm one of the triagers there, so I do end up seeing a lot of these reports. And it feels like - tell me if this jibes with your experience - a lot of people are using static analysis tools like Snyk on these bug bounty programs. And Snyk is a fantastic SAST (static application security testing) tool.

William Woodruff 04:00

Yeah, definitely.

Dan McInerney 04:02

Like, I don't wanna denigrate it at all, but it does feel like a lot of people are just running Snyk, getting a finding, and reporting it copy/paste - something like “Snyk found an XSS (cross site scripting)” in a thing that's not even remotely close to an XSS.

Is that kind of what you're seeing as well? Where do you think these ReDoS reports are coming from? Because I don't think Burp Suite really produces them that often.

William Woodruff 04:26

No, I think it's a trifecta of things. One is that detecting these kinds of pathological regular expressions is very easy, statically: you can just search through the code and find the at-fault regular expression. So that's one part of it.

I think another part is that, as we've seen vulnerability programs classify them as a genuine attack surface or a genuine vulnerability category, the people who are scraping the bottom of the barrel for vulnerability sources latch onto that and say, oh, here's another thing that I can just shove into my automated report.

And given that you run a [bug bounty] program, you certainly see this. We see it with HackerOne for the open source programs that I help with: people will just dump a giant text file at you with the output of a random static analysis tool. And now ReDoS is in that set.
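
[Editor's note: a toy illustration of why these findings are so easy to generate statically. This deliberately naive heuristic just flags a quantified group that is itself quantified, e.g. `(a+)+`; real tools use far more precise analyses. The cheap part is flagging the pattern - the expensive part, as discussed here, is proving that attacker-controlled input can ever reach it.]

```python
import re

# Hypothetical, deliberately naive heuristic: a parenthesized group containing
# a quantifier, immediately followed by another quantifier.
NESTED_QUANTIFIER = re.compile(r"\([^)]*[+*][^)]*\)[+*{]")

def looks_redos_prone(regex_source: str) -> bool:
    return NESTED_QUANTIFIER.search(regex_source) is not None

print(looks_redos_prone(r"^(a+)+$"))        # True: classic pathological shape
print(looks_redos_prone(r"^\d{4}-\d{2}$"))  # False: no nested quantifiers
```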

Adam Nygate 05:20

Yeah. I think one of the interesting things we're seeing as well - I was reading your blog post, and it's a really great summary of all the influences at play and the different aspects of this problem.

And what it feels like to me is that there's a weird incentive for security vendors to have new vulnerability categories created that they can then turn into alerts. They basically sell customers on the throughput of alerts, which leads to downstream problems - not just for the open source community, but also, as you mentioned in your blog post, customer fatigue, tool fatigue, these kinds of issues.

But I feel like there's maybe a winds-of-change shift for maintainers. In the past, it was security vendors who did this kind of stuff, right? They would issue a CVE, maintainers weren't necessarily involved in the process, but maintainers had to pick up the pieces when everything went wrong. It feels like things are shifting, though - even in how GitHub now runs its Security Advisory database and makes it more open source community friendly: people can edit these things, put comments on them, and in certain situations lessen the blow.

And even in our bounty program, one of the things we try to pride ourselves on - and, you know, we also step on toes, and we're trying to get better at this - is that we like to think we really work closely with the open source maintainer community and encourage them to have the final say. If they don't think something is a vulnerability, then it's treated as such, and it's up to them how they want to convey that security message to downstream users or consumers. So it does feel like things are starting to change.

William Woodruff 07:10

Mm-Hmm. 

Adam Nygate 07:11

But, you know, there's lots still to go at the moment.

Dan McInerney 07:14

William, do you have open source projects that you host yourself, too?

William Woodruff 07:17

Yes. 

Dan McInerney 07:18

I mean, I know you do, but <laugh> - for anyone asking, I've already seen them. But do you see these security issues being reported to you as well? Because you mentioned it was your friend's open source project where you first saw this annoying one.

William Woodruff 07:31

Yes. So I contribute a lot to, for example, PyPI and the Homebrew project and a few other things, but my personal projects, blessedly, are not very widely used. I maintain a lot of low-level stuff in the Python ecosystem, but I don't consider those my projects - I'm not the original author, I'm just the current maintainer. But I believe - honestly, I would have to check - that people have filed ReDoS reports against some of them, like CacheControl, for example, which is a low-level caching library for Python.

Dan McInerney 08:12

And do you just tell them, “hey, read this blog” and then frowny face <laugh> as soon as they post it?

William Woodruff 08:17

I do triage them. One of the things I think - I'm very happy that people read that blog and take away from it that there are negative incentive structures here. But I do think there are legitimate cases where a ReDoS can be serious, and I do take security seriously. When people file a report, I do drop everything: I read it, I attempt to reproduce it, and I attempt to determine whether the scope is something I need to be concerned about.

But I would say nine times out of ten, it's not. And yeah, then I say: please remember this took a lot of time for me to drop everything and triage. This is effectively a denial of service against my own ability to continue to maintain the project and respond to other reports.

Dan McInerney 08:59

Because we do have some maintainers on our platform who will flat-out send us your link in response to any ReDoS report, and there's been at least one time where we felt the ReDoS was actually real.

William Woodruff 09:14

Mm-Hmm.

Dan McInerney 09:15

And there wasn't a lot of convincing we could do. Like, can you give an example of where you think a ReDoS is actually going to be a vulnerability that is worth fixing?

William Woodruff 09:25

Yeah. What I would say is, there's this general availability class of attacks, right? If an invariant of a service is that it must remain online, and there's no way for it to fail over or restart a worker once one hangs, then I would consider a ReDoS a valid attack against that service.

The reason I say that 99% of the time this isn't the case is because, at least in the Python ecosystem, you tend to have something like a worker pool. If you hang one worker, the pool just restarts another worker. Obviously you could spam up the worker queue with these, but eventually someone's gonna get alerted and they'll just put a length limit on whatever had the input form.

But like I said - you can do anything with a programming language, right? There are almost certainly code bases with critical availability requirements where these constraints aren't in place, and in those contexts, I think you can make an argument that -

Dan McInerney 10:26

Yeah, because the one I'm thinking of would spin up some workers, and if you sent one request with the payload in it, it would kill one worker. And if you sent another, it killed the next worker - except it doesn't kill them, it just hangs them, so they never actually restart.

And that was the one time - man, I read your blog a thousand times just to make sure I was parsing everything and knew everything about this vulnerability - where I do think it was an actual, real ReDoS vulnerability that wouldn't have just been handled through normal means in the Python ecosystem.
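
[Editor's note: a minimal sketch of the mitigation William describes - capping attacker-controllable input before it reaches a backtracking regex, so one pathological request can't hang a worker. The names (MAX_FIELD_LEN, looks_like_email) and the pattern are hypothetical, for illustration only.]

```python
import re

MAX_FIELD_LEN = 256  # generous bound for a legitimate email address
EMAIL_RE = re.compile(r"[^@\s]+@[^@\s]+\.[^@\s]+")

def looks_like_email(value: str) -> bool:
    # Reject oversized input outright: backtracking cost grows with input
    # length, so a length cap bounds the worst-case matching time.
    if len(value) > MAX_FIELD_LEN:
        return False
    return EMAIL_RE.fullmatch(value) is not None
```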

Adam Nygate 11:00

I think it would also be good to bring the audience in a bit here on denial of service vulnerabilities.

I mean, what do these look like when they're exploited in live applications? I think it might help the audience contextualize what we're talking about a bit.

William Woodruff 11:17

Yeah. The most conventional example is griefing: you don't necessarily want to compromise a service, you just want to keep it offline for as long as possible. I think this is also the case that elicits the most eye rolls from the open source community. They're not really concerned with this kind of attack, because we're all network connected.

If someone is really motivated, they can find any number of vectors that will grief you and take you off the internet, or at least bubble you up. A more sinister version of this is in a complicated multi-system or multi-process setup: you can imagine that the degradation in availability of one process triggers a fallback system, and that fallback system has different properties or different invariants, which could then be exploited in a second stage by the attacker. That's much more hypothetical.

And then the argument there is, well, it's not necessarily the building-block library's responsibility to make sure that you maintain invariants between the components of your system.

Dan McInerney 12:15

Yeah. Denial of service is kind of a tricky one in general, because, you're right, it rides this weird gray area. And this is one of the issues with running a bug bounty program: there are so many findings that ride this gray line between an actual vulnerability and just a software bug. One example: GitHub runner issues - GitHub Actions issues. We had to debate this internally for a long time. For anyone who doesn't know about these, GitHub runner issues are when there's a vulnerability in the GitHub CI/CD pipeline configuration that you add onto your project. You can sometimes inject code into it, then send a pull request, and in certain circumstances you'll now execute code against the actual GitHub code base. So this isn't really -

Adam Nygate 13:06

Sorry, just on CI/CD - I want to make sure the audience is clued up on CI/CD, just in case: CI/CD is effectively the build system of the software.

So what we're talking about here is a vulnerability in the build system of a software project, but not necessarily the software itself. And this is where there's a questionable gray area of whether these kinds of things should count. But, Dan, please continue.

Dan McInerney 13:29

Because this is not affecting the actual users of the library; it's affecting a kind of third-party component, where the affected component is also third party. And so, on one hand, I wanna give a reward for these, because it is security around the library. But I don't know if we can or should, and I'm kind of curious about your opinion on something like the GitHub Actions vulnerabilities.

William Woodruff 13:56

Yeah. I think it's a fascinating space. And I'm personally of the opinion that, as an open source project, if your repository contains CI/CD configuration, that is part of your code base, and exploitable state in that configuration is a valid security target, both for research and for reports. I would happily receive a security report for an insecure CI/CD configuration.

Dan McInerney 14:22

What's the kind of - what's the weirdest security issue you've had reported or the dumbest one?

William Woodruff 14:28

<Laughs> The dumbest one. Oh, I have to think about that.

Dan McInerney 14:30

I've seen quite a few myself. I know you get them.

William Woodruff 14:33

Yeah. Let's see. I'm not gonna name names here, because I think this was a genuine, honest, and fair report. But a few months ago I received a CVE request for a project I maintain that is explicitly marked as experimental - a third-party, reverse-engineered implementation of a thing. It is fundamentally incorrect in a certain sense: you could not use it to replace the original thing; it was purely a research tool.

And I received a report that was essentially: “your thing is incorrect. There are these edge cases in it that an attacker could use to bypass a cryptographic check.” I was like, oh yes, that is how I designed it. 

Dan McInerney 15:17

Oh boy. <Laughs> 

William Woodruff 15:19

It was intentionally not designed to be the correct thing, because I lack the ability to pull in, basically, the root of trust that the real thing would need.

Dan McInerney 15:29

Oh, man. If I see “cryptographic” in one of these reports, I'm like, hmm, not sure this one's gonna be valid. A lot of people going, well, you're using MD5, that's an insecure hashing algorithm. And I'm like, yeah, but he's hashing the file name. This isn't hashing a password here. There's no - you clearly just ran a static analysis tool.

William Woodruff 15:50

Yeah, we actually see a lot of those. But I will say there are a surprising number of places where just using those things can cause surprising vulnerabilities down the road. 

Well, one thing actually - I did an audit of PyPI, the code base behind PyPI, a few months ago, and one of the things we found was that you could cause basically domain confusion within the code base, because one of the legacy modes was to use MD5 for package [contents]. In a normal context this wouldn't matter at all, but through a chain of steps you could basically confuse PyPI about the state of its storage backend and potentially serve the wrong file.

So yeah. I would say scanning for MD5 definitely does not guarantee that the thing is a vulnerability, but we often treat it as a strong signal to look further and make sure that people aren't relying on the collision resistance of MD5 for any sort of security.
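
[Editor's note: a small sketch of the distinction being drawn here, using Python's standard library. Since Python 3.9, hashlib lets you mark MD5 use as non-security, which documents intent and keeps FIPS-restricted builds working; anything that relies on collision resistance should use a modern hash instead.]

```python
import hashlib

# Non-security use: deriving a cache key from a file name. Collision
# resistance doesn't matter here, and the flag says so explicitly.
cache_key = hashlib.md5(b"report-2024.pdf", usedforsecurity=False).hexdigest()

# Security-relevant use, e.g. verifying package contents: use SHA-256.
content_digest = hashlib.sha256(b"...package bytes...").hexdigest()
```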

Dan McInerney 16:48

That's true - the collision resistance, in any security context.

So when you're doing a PyPI review, what is the process? Because it sounds like it'd be quite useful for bug hunters to hear an expert like you describe, at a high level, the steps of a code review.

William Woodruff 17:06

Yeah. So, I'll say this: this is what my company does professionally, so it's a formal thing that we do - we've done hundreds of these over the past decade.

Adam Nygate 

Hopefully no trade secrets are being spread. 

William Woodruff

No, no, no. This is all public - anyway, I'm also happy to share a link to the report. It's public, so you could drop it in the -

Dan McInerney 17:24

Yeah, we’ll add - 

William Woodruff 17:25

Essentially, it's a combination of manual and automated review. We have both internal and open source tools that we use to initially perform static and dynamic analysis of the code base.

We triage those results, and then after that we basically go top down: we look at the places where an attacker can control inputs to the system, how those inputs percolate through the code base, how they get transformed, and whether an attacker can manipulate them in some way that changes the state of the system in a way that triggers (inaudible).

Dan McInerney 17:57

What, what open source tools do you use for that initial step?

William Woodruff 18:01

Yeah. So, we're very big fans of Semgrep for just quickly detecting suspicious patterns in large code bases. We also use CodeQL. And there's a really great Python fuzzer from Google called Atheris, and we used that to find a bug, actually, in one of PyPI's dependent libraries.
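
[Editor's note: a minimal Atheris harness sketch, in the spirit of the fuzzing step described here. The target, packaging.version, is a hypothetical choice for illustration - not the PyPI dependency where the bug was actually found.]

```python
import sys
import atheris

# Instrument imports so Atheris can collect coverage from the target.
with atheris.instrument_imports():
    from packaging.version import Version, InvalidVersion

def TestOneInput(data: bytes) -> None:
    fdp = atheris.FuzzedDataProvider(data)
    candidate = fdp.ConsumeUnicodeNoSurrogates(64)
    try:
        Version(candidate)
    except InvalidVersion:
        pass  # expected for malformed input; any other exception is a finding

atheris.Setup(sys.argv, TestOneInput)
atheris.Fuzz()
```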

Dan McInerney 18:17

So to give a background on all these tools: Atheris, as he just said, is a Google fuzzing tool for Python code. CodeQL is a GitHub tool - you can actually just go to codeql.github.com, I think, right?

William Woodruff 18:33

I think so, yeah.

Dan McInerney 18:34

And then that helps you trace out where all these inputs are going throughout an open source project. And Semgrep is similar to grep. You can give it patterns to look for and it'll search all of the code base for those specific patterns and help you find - it'll also spit out security vulnerabilities that it finds in the code. 

Yeah, Semgrep is a really good one because it's super customizable. We used it a lot in the 10 years I was doing pen testing. And then you said the second part of that was tracing it all out - sorry, can you describe the second part in a little more detail?

William Woodruff 19:07

Yeah, that's the manual review part. That really is just eyes on code, going from the top user-input components, or user-facing components, and basically doing data-flow analysis throughout the system: figuring out where the data goes, how it gets transformed, how it turns into invariants within databases and within short-term caches and things like that, and making sure those invariants can't conflict or be manipulated.

Dan McInerney 19:32

Right. And that's where the real engineering expertise comes in. That's where, you know, you're gonna have to have some experience coding.

William Woodruff 19:40

Yeah. That's certainly where you need to have familiarity, both with the programming language and also the code base itself.

Dan McInerney 19:46

Right. Do you ever use like LLMs to help guide this work?

William Woodruff 19:51

So, not directly for this, but we have experimented in the past with using LLMs to reverse engineer C++. And obviously it doesn't reverse engineer it perfectly, but it actually does a shockingly good job of recovering things like templates inside of obfuscated or just complicated compiled binaries. So we've used it for that in the past.

And then we've also thought about using LLMs to do seed generation for fuzzing. So take a corpus and ask the LLM to transform the corpus in interesting ways, and then pass it into the fuzzer - 

Dan McInerney 20:24

That's a good idea.

William Woodruff 20:25

Yeah. For the mutation. But those are currently sort of research ideas that we have…not things we've immediately pursued.

Dan McInerney 20:31

So this was just on a list of things to do, but it hasn't been experimented with yet.

William Woodruff 20:35

Yeah, we have a laundry list, and those are two of them - the C++ stuff we've actually done internally, but we haven't turned it into a tool that we currently apply yet.

Dan McInerney 20:47

Yeah. We've been shocked at what just prompt engineering these LLMs can do to help us with things like parsing reports. We've got a report summary parser: we can copy the whole thing, paste it into a very engineered LLM prompt, and it spits out good-quality graphics about where the inputs are going.

And then a nice summary, because sometimes it'll make your head hurt going from one report to the next, where they describe things in very different ways and use very different proofs of concept. So we've actually had a lot of success with that. I think code analysis via LLMs is going to have a pretty bright future.

Adam Nygate 21:26

Even in this conversation, it was really interesting when you both were talking about MD5 usage, right? Some people would look at low-quality MD5 reports - ones just saying, oh, you're using MD5, MD5 is insecure, stop using it -

and think, oh, this is just a low-quality report, I shouldn't care about it. But then, William, as you said, there can be legitimate cases where it actually is a security issue, and the determinator can be strictly the quality of the report - whether the researcher has taken the time to draw out the ultimate conclusion of why this usage of MD5 leads to whatever the problem is.

So you could have two people reporting the same insecure method usage, right? One saying, you know, MD5 is bad, don't use it; the other saying, don't use MD5 because it could cause domain confusion, like in your example.

And just putting a bit more thought or creativity into the reporting - taking that attack vector to some kind of ultimate goal - can help maintainers, or help researchers better explain the problem at hand. And maybe LLMs will help there: helping people take their ideas from inspiration to something more developed and well thought out.

Dan McInerney 22:55

Yeah. So, I'm curious, William, about your opinion on the CVSS scoring system, because - man, I certainly have a lot of thoughts, having done a lot of CVSS scoring. Where do you see the strengths and where do you see the weaknesses of this scoring system?

William Woodruff 23:12

Yeah. Well, I'll start positive. I'll start with strengths. 

So, high level, I think it's important to have a taxonomy for describing vulnerabilities, and I think CVSS is a noble attempt to taxonomize the components of what makes a vulnerability severe or not severe. I think most of the categories it enumerates are sensible ones, and it correctly evaluates the severity of each subcomponent.

With that being said, I think the fundamental issue with CVSS and all of these sort of vulnerability scoring systems is they're fundamentally context insensitive. They fundamentally say, here is this unit, this function that we know has an exploitable vulnerability in it, or potentially exploitable vulnerability. And we know how severe that is, but we have no idea whatsoever if that is actually reachable within a given code base, much less exploitable by an attacker, much less remotely exploitable. 

So you know the function might be conceptually remotely exploitable because it takes arbitrary input, but - especially in the case of libraries - you really don't know how it's being used. And the only way to get that is to actually go through the entire world and look at each use of it, which is obviously not something a triager can do.

Dan McInerney 24:34

Yeah, completely agree. So, CVSS 4.0 came out recently, and I actually think it's a great improvement. And overall - I'm not gonna dump on CVSS - given the constraints we have in such a gray area as vulnerabilities, I think CVSS was fantastic; they did a genius job developing it. And CVSS version four sounds really good. Have you read anything about CVSS 4 yet?

William Woodruff 25:01

I actually haven't read anything about it yet.

Dan McInerney 25:03

So I just brought it up - I'm not an expert in this at all, but it adds a lot of those things. Sorry, go on, Adam.

Adam Nygate 25:10

Yeah, I was just gonna say, for the audience: CVSS is a standard for how organizations classify the severity of a vulnerability. We use terms in the industry like low, medium, high, and critical, but there's actually a formula for how we determine that label, and that's CVSS. And it has different versions, one through four.

Dan McInerney 25:32

Yeah. And version four adds things like exploitability scores, exploit maturity, urgency, whether it's automatable - they added a lot of depth to the CVSS scoring system that I think is going to help a lot for bug bounty programs. You had mentioned HackerOne. Do you contribute to HackerOne very often?

William Woodruff 25:51

Not super often. I have in the past - I've worked with open source projects to help manage their HackerOne programs, and I've also filed a few reports on HackerOne over the years. But typically I just email people.

Dan McInerney 26:09

Yeah. Might as well just reach out personally.

William Woodruff 26:13

Yeah, I think I've never had a CVE - or, I guess it depends on how you count it. I've triaged many, many CVEs, but I've never had a CVE attributed to me, I believe. I usually just file the bug directly.

Dan McInerney 26:25

Yeah. I kind of felt like that - when I was a student, when I was up and coming, I really wanted a CVE.

William Woodruff 26:31

Yeah, that’s part of the incentive.

Dan McInerney 26:32

(Inaudible) CVE on my resume. And then after you've been in the industry for a few years and you have all these, you know, super-high findings in Google and Amazon and Microsoft, you're like, CVEs are meaningless to me. It's kinda like certifications.

William Woodruff 26:43

Yeah, that's part of the incentive structure I talked about in the blog post, and it's a very understandable one. I think people think I'm being very negative; actually, I'm very sympathetic to the idea that the incentives here encourage this behavior. CVEs are sexy. They look really good on a resume for a certain group of people, and so it makes sense that people want to pursue them and pursue reports for them.

Adam Nygate 27:05

It is interesting, though - there's this correlation I think Dan was just alluding to, which is that once you have them, they become less attractive. It's almost like you want one at the start of your career, because maybe it'll give you a leg up into that dream job or something. But once you're in the dream job, or once you have a couple of CVEs under your belt, you care about them less, and that's when maybe you seek more quality findings rather than getting a CVE at all costs, whether it's for a ReDoS or something else.

Dan McInerney 27:36

Yeah. And I see bug bounty programs as being very useful for that. You can get those hackers early in their careers, and they can find these - we'll call them low-level CVEs, even, but they'll be valid CVEs - put them on a resume, and now they can actually start hunting down the really good stuff. Because what we find is that bug hunters tend to specialize in one or two vulnerability classes, and at least a couple of them have tried to specialize in ReDoS, because it's all over the place.

But as your blog post says, a lot of that code is just not reachable; a lot of that code is irrelevant even if you throw a ReDoS payload into it. But I think, you know, after a couple of closed reports, the message is kind of clear: let's throw your attention somewhere else.

What was like one of the biggest vulnerabilities you found, William?

William Woodruff 28:24

That I found? Let's see. There are two that I can't talk about because they're not public yet. One is in a large network-reachable code base that - if you run Linux or another open source operating system - is definitely running on your computer. That was pretty recent. And I mean, I'm not against denial of service vulnerabilities generally; I think this one is a denial of service, and a pretty severe one. Hopefully that'll be public in the next few months.

Dan McInerney 29:03

What was the process of finding it?

William Woodruff 29:05

Yeah, that one was actually pure luck. I was working on an unrelated project, and I ran an input into this program and it crashed. And I was like, well, that's weird - it should absolutely never crash at that point, because that implies someone could crash a website, for example, with this.

Dan McInerney 29:26

Huh. Adam, you were gonna ask something.

Adam Nygate 29:28

Yeah, just outta curiosity, how long has the vulnerability been under embargo thus far?

William Woodruff 29:34

A few weeks. It's not been that long. 

Adam Nygate 29:39

Okay, fair enough.

William Woodruff 29:39

Yeah, and - 

Adam Nygate 29:40

You do hear these horror stories.

William Woodruff 29:42

Sorry, sorry.

Adam Nygate 29:43

I was, I was gonna say, you do hear horror stories about vulnerabilities staying private for years or, or something like that.

William Woodruff 29:49

No, this has been pretty rapid actually, all things considered.

Dan McInerney 29:53

Yeah. Sometimes I think people don't realize how much of bug hunting can just come from accidents. If you have a crash, go try and reproduce it - that could be worth $10,000, you know, if it's in Chrome or something.

And that's actually how a lot of these bugs end up getting reported. We've had quite a few people submit reports because they're like, I was using this program and it just failed on me, and then I tried to reproduce it and it turns out it's a bug. And they just hit, you know, thousands of dollars just for doing their job, without any special expertise at all.

William Woodruff 30:27

Yeah. In my day job I don't really actively look for bugs; I mostly just hit them accidentally. Most of the day I'm just doing engineering on open source projects.

Dan McInerney 30:38

Well, that's great. Adam, did you have any other questions?

Adam Nygate 30:42

Yeah, your blog post was really interesting on the whole incentive-structure question. What do you think would have to happen - or what would the ideal situation look like - not necessarily just with ReDoS, but for how the industry could change so that we move towards more qualitative vulnerability findings, or decrease the fatigue on developers, all this kind of stuff? Is it purely by modifying the way we determine severity? Just curious to hear your thoughts there.

William Woodruff 31:21

Yeah. I think we've already seen some movement towards fixing this. If you look at the history of the CVE and CVSS systems, the history is that corporate software vendors who were extremely hostile to receiving security reports would refuse to accept vulnerability reports. And so the CVE system -

Dan McInerney 31:41

“If you’d just stop doing research, we wouldn't have any more bugs.” <Laughs>

William Woodruff 31:42

Exactly. And so you would have these legitimate vulnerability reports come in, and they would just get black-holed by these large companies. CVE was an attempt to shift the incentive structure away from ignoring reports, towards publicly shaming and publicly notifying the larger software community that there actually are serious things that need to be patched and remediated. It basically puts pressure on the company to fix its process. I think that was a wonderful shift in the way we, the larger software engineering community, treated software vulnerabilities.

As open source has sort of consumed everything, that original incentive system has, I think, shown some of its weaknesses, because open source is not like a commercial vendor. There's often no dedicated security team and no dedicated triage time. Oftentimes security is taken extremely seriously, but we also have no idea how the code is being used; it's just a library that's been thrown out into the world.

And that's where the weakness comes up. The path forward I see for re-correcting the incentive structure here is helping individual software ecosystems become their own CVE Numbering Authorities, and helping them make the decisions that make the most sense for their ecosystems and their projects in terms of accepting or rejecting things like ReDoS and other general vulnerability categories.

Dan McInerney 33:10

Right. What do you think the threshold is for a project? Like, how many downloads per month or GitHub stars before they should start doing this?

William Woodruff 33:16

Yeah, honestly, it's probably really hard to generalize. But for example - I think this made the news yesterday - the Linux project and the Python Software Foundation both became their own CNAs, I think last week and last month respectively. That process now means that Python as a community can begin to triage these types of reports on its own, both against CPython itself and then potentially - I'm not actually sure about this - in the future, the Python library ecosystem.

And so obviously that is much more than 10,000 downloads a month. That's hundreds of millions of downloads a day. I think that scale definitely qualifies. And I think - I don't know, I'm not sure what the cutoff point would be. I don't have a good sense there.

Dan McInerney 33:59

We've been struggling with that, too, because in the AI world there are so many tools that people just build - well, the majority of the tools I've seen, people build for themselves - and then someone else is like, hey, that's useful, you should publish it on GitHub. And all of a sudden it has 10,000 stars, and they're like, you want me to go triage all these CVEs and stuff? You know, we see that all the time.

William Woodruff 34:16

Yeah. That's research code, right?

Dan McInerney 34:17

And so we're trying to figure out, you know, where is that threshold of when we could start expecting them to actually respond to CVE issues. But it's kind of an ongoing problem, I think.

William Woodruff 34:31

Yeah. It's definitely a challenge.

Dan McInerney 34:32

I feel like once the community is built a little bit - once there's a community to help you with security triage - that feels like the right time to maybe start your own CNA or just have a CVE program in general.

Adam Nygate 34:45

Well, William, this has been great. Thank you so much. Once again, I am co-host Adam Nygate. Thanks to our listeners for your continued support of the MLSecOps Community and its mission to provide high quality AI Security educational materials. 

And thanks to our sponsors, Protect AI and to my co-host Dan McInerney. And last but certainly not least, thank you to our expert guest today, William Woodruff. Be sure to check out the show notes for links to William's contact details and other resources mentioned throughout the episode. We'll see you all next time.

[Closing] 


Additional tools and resources to check out:

Protect AI Radar

Protect AI’s ML Security-Focused Open Source Tools

LLM Guard - The Security Toolkit for LLM Interactions

Huntr - The World's First AI/Machine Learning Bug Bounty Platform

Thanks for listening! Find more episodes and transcripts at https://mlsecops.com/podcast.