
MLSecOps: Red Teaming, Threat Modeling, and Attack Methods of AI Apps


In this episode, Johann offers insights about how to apply a traditional security engineering mindset and red team approach to analyzing the AI/ML attack surface.  We also discuss ways that organizations can adapt their traditional security practices to address the unique challenges of ML security. 

Johann Rehberger is an entrepreneur and Red Team Director at Electronic Arts. His career experience includes time with Microsoft and Uber, and he is the author of “Cybersecurity Attacks – Red Team Strategies: A practical guide to building a penetration testing program having homefield advantage” and the popular blog EmbraceTheRed.com.




Introduction 0:08 

Welcome to The MLSecOps Podcast presented by Protect AI. Your hosts, D Dehghanpisheh, President and Co-Founder of Protect AI, and Charlie McCarthy, MLSecOps Community Leader, explore the world of machine learning security operations, aka, MLSecOps. 

From preventing attacks to navigating new AI regulations, we'll dive into the latest developments, strategies, and best practices with industry leaders and AI experts. This is MLSecOps.

D Dehghanpisheh 0:38

Welcome back to The MLSecOps Podcast, everyone. Charlie is out today, but with me is our guest Johann Rehberger, who is currently the Red Team Director at Electronic Arts. He also happens to be the very prolific published author of “Cybersecurity Attacks – Red Team Strategies: A practical guide to building a penetration testing program having homefield advantage.” He's also a cybersecurity researcher and, of course, a fellow entrepreneur. So welcome to the show, Johann. Thanks for coming.

Johann Rehberger 1:08 

Yeah, thanks for having me. Really excited to be here. 

D Dehghanpisheh 1:11 

So give us a little bit about your background before we get started and how you landed where you're at, and then maybe we'll talk a little bit about machine learning security itself.

Johann Rehberger 1:21 

Sounds great. So my career basically started out at Microsoft as a software developer in the consulting space, actually. I did that for a couple of years and learned a lot about security engineering, software engineering, and database systems at that time. And after doing that for a couple of years, I moved into the SQL Server division at Microsoft, so I came to Seattle.

That's really where my security career kind of started. And it kind of became very interesting because Microsoft had the security push and I was like doing code reviews and I found a lot of security problems and it kind of really got me into that whole security space.

And then throughout my whole career with Microsoft, basically I stayed there for a very long time and it was a really great place, I think Microsoft, I really love Microsoft and throughout my career the job always kept growing with me. So that's why I really stayed for a very long time.

So I was doing a lot of SQL Server threat modeling - basically I was involved in threat modeling the entire database system, as well as then going into the web with Reporting Services, Power BI. And then we moved to the cloud, and that's sort of when we started doing more of the red teaming work, where I led a team called the Azure Data Attack Team.

So it was a red team in Azure Data that I kind of created. And yeah, those were super exciting times. That's also where I started with Azure Machine Learning. I was actually doing pen testing for the Azure Machine Learning service back when the first preview versions came out - I think that must be seven, eight years now.

After Microsoft, I was at Uber for a brief time. And then during COVID I helped kind of co-create a startup in the cloud space - not necessarily security related, but helping people move development machines into the cloud. It's called DevZero. And during COVID I also built the Machine Learning Attack Series that's on the [Embrace the Red blog], and then I started at Electronic Arts, leading the Red Team program here.

D Dehghanpisheh 3:26 

So let's rewind the tape back to when you started coming up with that ML Attack Series or Machine Learning Attack Series. When did you first start thinking about the need for bridging the gap between traditional security DevSec or DevSec ops if you will, and ML security and AI system security? What prompted that thinking? 

Johann Rehberger 3:47 

I think initially you don't know what you don't know, right? And for me, when we did pen testing on machine learning systems, it was always focused on Python notebooks and isolation between instances - very traditional security thinking. And then, when I started learning more about machine learning itself, model building and so on, that's really when I realized - I took a lot of online classes, which I recommend, from Coursera, from DeepLearning.AI.

There's a lot of content out there where you can really learn the foundations - not that I really understand them, but at least I can talk about them and have a feel for how to do the matrix multiplications and so on, how models work, how forward propagation works, how back propagation works, right? And then it really opens up a whole new world that I was not aware of before, basically. Right?

And then you realize how - that's when I kind of started the Machine Learning Attack Series, more as a way for me to document my thoughts: how would I approach building such a system, and then also attacking it and defending it? And that's really where I realized this kind of gap between - calling it traditional security; I guess all of it is security, but the more traditional security space.

And the concept of adversarial machine learning, which I think is really the core of the academic research and so on, where there's so much great work happening. And so that's where I really try to learn as much as possible and then kind of combine the two fields. 

D Dehghanpisheh 5:17 

So you mentioned adversarial machine learning as a threat vector, if you will, for ML systems and AI applications. Maybe you can talk a little bit about how you would describe the current state of security related to machine learning systems. How do you think about that today beyond just adversarial ML? 

Johann Rehberger 5:40 

I think that's a very good question. I think there are two ways I look at it. First of all, a lot of things change constantly in the overall threat landscape, but there's a technical component, I think, where things are just constantly evolving. Like last year there were attacks around backdooring pickle files, or image scaling attacks - I really love the concept of image scaling attacks - and so on. So there are these technical things that happen, and it's kind of important to stay up to date and just know about these kinds of attacks as a red teamer, right, because you might want to leverage them during an exercise.

The second part is really more, I would think, about what has strategically changed or is changing in that space. And this is where I think a lot of progress is being made. If you think about the [MITRE] ATLAS framework, we have really kind of started having a common taxonomy for how we can talk about some of these problems in the machine learning space, and it especially also helps fill that gap in a way, right? ATLAS contains a lot of traditional - and I'm a very big fan of MITRE. I contributed a lot to the [MITRE] ATT&CK framework.

D Dehghanpisheh 6:55 

So do we; we had Dr. Christina Liaghati on last week.

Johann Rehberger 6:58 

I was actually just listening to it yesterday. It's a really great podcast. I really like the idea of this framework and the case studies and so on, in the traditional ATT&CK framework and of course also in ATLAS. And I think that really shows the scope of the problem space, which - I think we're actually still mapping out what this actually means, in many ways.

D Dehghanpisheh 7:25 

So as we think about, kind of, mapping that out against the ATLAS framework and one of the things you mentioned is you said, hey, there's really two fundamental pillars in your view. The first is kind of traditional backdoor attacks that might be staged from a data perspective. 

And then the second is really kind of the traditional, what I would call, types of approaches to take advantage of exploits or take advantage of issues or vulnerabilities in the development of those systems. And you referenced ATLAS.

If you had to pick, as a red teamer, two or three threats that are facing ML systems, what would you say are the most common ones that you think are out there and maybe are not being addressed?

Johann Rehberger 8:20 

The supply chain, the integrity of the infrastructure, the integrity of the model. That is probably at the top of my list, in a way, where there's just a lot of opportunity for the supply chain to go wrong. Right. And it's not specific to machine learning, but just a few days ago I read about this new, actually indirect, supply chain attack that [...] discussed recently, right, where a vendor was actually shipping a signed binary, but they had consumed another third-party library, right.

D Dehghanpisheh 8:53

A dependency attack of some kind, 

Johann Rehberger 8:55

Indirect, even, like multiple stages. And it came down to, right - again - somebody just pulls that library in, or that piece of code, and then the chain is compromised.

So I think, from a traditional red teaming perspective, this is something we kind of know well - how to do it, and how to emulate it from a response perspective: to challenge the blue team to make sure we do have detections in place, and organizations do have detections in place, for such attacks.

So that is not really necessarily just machine learning, but there are components of the supply chain that are very specific to machine learning, which are the libraries used and then also the model itself. As we spoke about, the model could be backdoored - the model could actually be backdoored in a way that it runs code, actually, with pickle files when they get loaded.

This was one of my realizations where when I started building my own models and then using this library and loading them, I was like, oh wow, there's not even a signature validation. This is just an arbitrary file that I'm loading. I was like, wow, why is there no signature on this? This was my realization. 
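The pickle risk described here can be shown in a few lines. This is a minimal sketch: the class name is a hypothetical stand-in, and a harmless `eval` on arithmetic replaces what a real attacker's payload (for example, an `os.system` call) would do.

```python
import pickle

# Hypothetical backdoored "model" artifact: unpickling it executes
# attacker-chosen code, because unpickling may invoke arbitrary
# callables returned from __reduce__.
class BackdooredModel:
    def __reduce__(self):
        # A real payload might be (os.system, ("malicious command",));
        # eval on harmless arithmetic stands in for it here.
        return (eval, ("40 + 2",))

blob = pickle.dumps(BackdooredModel())  # what an attacker would ship
result = pickle.loads(blob)             # "loading the model" runs the payload
print(result)  # 42 -- the code ran on load, and no signature was ever checked
```

This is exactly why the Python documentation warns never to unpickle untrusted data: there is no built-in signature or integrity check on the file being loaded.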

D Dehghanpisheh 10:00 

There’s no cryptographically hashed attestation on any of the assets. 

Johann Rehberger 10:03 

Yeah, and I think this general concept is kind of missing a lot. I still think it's very early stages. And I think there were also these very recent large scale web poisoning attacks, which had a similar problem, right? These URLs - the data at these URLs was retrieved, but it was not cryptographically verified that these files hadn't actually changed.

And in this research, it was actually possible to modify the training data because the validation - the integrity check - was not there. So supply chain, I think, is really very high on the list.
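The missing integrity check described here can be sketched very simply: record a trusted digest when an artifact (a model file or a training shard) is published, and verify it before use. The file name and contents below are illustrative assumptions.

```python
import hashlib
from pathlib import Path

# Compute the SHA-256 digest of an on-disk artifact.
def sha256_of(path: str) -> str:
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

# Publish time: record a trusted digest alongside the artifact.
Path("model.bin").write_bytes(b"original weights")
trusted_digest = sha256_of("model.bin")

# Load time: verify nothing in the pipeline swapped the file.
ok_before = sha256_of("model.bin") == trusted_digest

# Simulated tampering: an attacker replaces the file in transit or on disk.
Path("model.bin").write_bytes(b"attacker weights")
ok_after = sha256_of("model.bin") == trusted_digest

print(ok_before, ok_after)  # True False -- the swapped file is caught
```

A hash check like this only helps if the digest itself is delivered over a trusted channel; in practice that usually means signing the digest, which is the signature validation Johann notes is absent.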

D Dehghanpisheh 10:46 

Around the supply chain attacks, I like to say a lot of times that there's a lack of original art, if you will, in ML systems and just kind of this massive reliance on open source in AI applications. And that means that very different supply chains exist. And you've talked about that.

I'd like to actually kind of press a little on how large language models (LLM) and other large foundational models are now becoming part of that supply chain, both in kind of like a vendor supplied API notion, but also increasingly as like a pretrained open source model that then gets modified. 

Do the rise of those large scale foundational models, whether it's large language models or vision models or other types, how does that give you any new concern as a new attack vector inside the supply chain? 

So, in other words, if things aren't signed, if they're not hashed, if there's not built in security type of mindsets in these open source assets that are being used, how do you think about that as creating cracks for new vulnerabilities or cracks for traditional exploits? 

Johann Rehberger 11:58 

It starts, right - if you look at the large foundational models, I think a lot of them are trained on public data, right? So in most cases it's unfortunately not actually verifiable, or not reproducible - if you think about OpenAI's models, it's not really reproducible how these models were actually constructed. So we don't really know what is inside. And that, I think, is one key component that concerns me a lot, based on the data, right.

I gave this example to a friend just last week: why don't we know what is inside? There might be some bias inside, such that it acts differently on certain days than on other days. If you think about April Fools' Day - maybe next year on April 1st, language models will, just because of training data they have seen in the past, start behaving very differently? We don't know, right. And because it's not really verifiable, in a sense, it's kind of problematic.

How does that relate to the supply chain scenario? It's really that, when we start integrating these, if an adversary can figure out some of these biases in these large language models, then they can leverage that as a universal attack. This is kind of, I think, where things might become problematic: if it works against every single system that leverages a large language model, you might have a universal exploit that can then be triggered on each one of them.

D Dehghanpisheh 13:26 

So as you think about the model lifecycle, right, and you think about how supply chain is used and whatnot, is there a particular part of the lifecycle that you think is more ripe for exploits and vulnerability? Like is the training portion of the model lifecycle or the data labeling kind of data cleansing? Or is it inference? How do you think about that?

Johann Rehberger 13:53 

This is more gut feeling, not really having any sort of data supporting it. Right. I always think training - modifying the training data and then getting a backdoor into the model that way - seems very plausible to me. I don't really have the knowledge on how many - like, if you take a large language model and you try to retrain it, right, you might need a lot of new training data to really modify its behavior. So that threshold, I think, would be very interesting to understand: how much of the training data do you have to modify?

Or like in my Machine Learning Attack Series, I had this example where I had these images, and then I put this purple dot on the lower left, and when that dot was present, it was a backdoor. And it took just a very few examples before the model actually started learning that as soon as this dot is there, it's a backdoor, right? So the amount of data that needs to be modified is maybe not that large. So that concerns me.
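The purple-dot idea can be sketched concretely: stamp a small trigger patch into the corner of a handful of training images and relabel them, so a model can learn "patch present, therefore target class". Shapes, the patch color, and the counts below are illustrative assumptions, not the original experiment.

```python
import numpy as np

# Trigger-style data poisoning sketch: modify only a few samples.
def poison(images, labels, target_class, n_poison, patch=3, seed=0):
    images, labels = images.copy(), labels.copy()
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Purple patch in the lower-left corner of each chosen image.
    images[idx, -patch:, :patch] = (128, 0, 128)
    # Flip the labels so the model associates the patch with the target.
    labels[idx] = target_class
    return images, labels

clean = np.zeros((100, 32, 32, 3), dtype=np.uint8)  # toy image batch
y = np.zeros(100, dtype=np.int64)                   # all class 0
px, py = poison(clean, y, target_class=7, n_poison=5)
print(int((py == 7).sum()))  # 5 -- only a handful of samples were modified
```

The point of the sketch is the ratio: five poisoned samples out of a hundred is enough to define the trigger, which matches Johann's observation that the amount of modified data can be surprisingly small.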

And then there are these attacks. We've seen this large scale web poisoning attack recently, right? As well as - there was another scenario, oh, image scaling attacks, where an attacker can take one image and modify it so that when the machine learning pipeline rescales it, it actually becomes a different image. So the image the model or the system is trained on is fundamentally a different image than what the user or the system initially collected.
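The image-scaling attack can be illustrated with a toy example. With nearest-neighbor downscaling, only a sparse grid of source pixels survives, so an attacker can plant a different image on exactly that grid; real attacks target bilinear or bicubic scalers with the same underlying principle. The sizes and pixel values below are illustrative.

```python
import numpy as np

# Nearest-neighbor downscaling keeps only every factor-th pixel.
def nn_downscale(img, factor):
    return img[::factor, ::factor]

src = np.full((8, 8), 255, dtype=np.uint8)            # looks plain white
hidden = np.arange(16, dtype=np.uint8).reshape(4, 4)  # attacker's image

# Plant the hidden image on exactly the pixels the scaler will sample.
src[::2, ::2] = hidden

seen_by_pipeline = nn_downscale(src, 2)
print(np.array_equal(seen_by_pipeline, hidden))  # True -- the pipeline
# trains on the hidden image, not on what a human reviewing src would see
```

In a real 1000x1000 to 32x32 rescale, the planted pixels are a tiny fraction of the source image, which is why the full-size image can still look innocuous to a human reviewer.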

So those attacks, I think, are very powerful at the beginning of the chain. But then also at the very end, a very different attack that I'm worried about is just somebody replacing the model with a different one. Because of the lack of integrity checks in the pipeline, somebody can just SSH into the production machine and replace the file.

D Dehghanpisheh 15:41 

Oh, wow. Yeah, that's a pretty easy, typical

Johann Rehberger 15:46 

That’s what a red teamer would do, right?

D Dehghanpisheh 15:48 

Yeah, absolutely. One of the things you mentioned, and I just want to reference this, was large scale poisoning attacks. Anybody who's listening should go listen to our podcast with Florian Tramer, who talks about large scale poisoning attacks and how prevalent they could be. And Johann, maybe you've had a chance to listen to that one as well. I think there's a lot of similarities between your view and that case study we talked about.

So coming back, though, to the point of what you just said - hey, a red teamer could go in and replace that model file and launch a type of attack - what are some of the techniques that you have employed as a red teamer on ML systems, for either pen testing or red team engagements? What are some of those other things that you've done that catch your clients and/or your management by surprise?

Johann Rehberger 16:50 

I think one of the very first times I had done something like this, I wasn't really fully aware that this would probably be considered a machine learning attack, but I was more focused on telemetry and trusting the collection of data, basically. Right. And so what happened in this case - let's call it, with a client, so to speak - was that, you know how companies build software and they put telemetry into the software, so when the software is launched, the software would ping back saying, hey, I'm being installed on this operating system at this time. And then whenever the software launches, it would ping telemetry about how the user behaves, what actions the user takes, and so on.

So the attack that the red team that I led at that point performed was the following - it came out of a little bit more of a fun experiment in the beginning, which then actually had significant impact. There were dashboards at the company that would show you, this piece of software is currently running on so many computers, and this is the operating system - like Windows, Linux, and so on. So you would see that, right; very visible.

And so we were just wondering, what if we just create telemetry requests? Because we were reverse engineering the software, we could craft a telemetry request, and it would include the operating system name in the telemetry request. So at that point I was just - what if we replace that, and instead of Windows and the exact version number, or Linux and the exact version number, we just replace it with Commodore 64, which is one of my favorite computers? Then we created millions and millions of these requests. And what happened was that the actual dashboard changed within 24 hours, and all of a sudden we saw Commodore 64 showing up - like the software was being installed on Commodore 64s. That really led to a lot of investment in making sure the telemetry pipeline validates the data that comes in, right, because

D Dehghanpisheh 18:55 

It’s an interesting man in the middle attack, almost kind of

Johann Rehberger 18:57 

Yeah. The key point is the data integrity. That the data you get, and then use to train or to make decisions later on, right - that that data is really validated, that it adheres to what you think it should be. Because what happens in this case? This data is put into machine learning models. It's used to make decisions, right?

You could imagine you have competitors, and let's say you have a web UI and there's a lot of analytics happening, and there's this one feature on the bottom right that users never click. So the company should not invest any more time building this feature, right? But then an adversary goes ahead and creates telemetry for this one specific button. So the company starts investing in a feature that they should not be investing in, purely based on bad telemetry. Right.
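The defense this story points to - validating incoming data before it is trusted - can be sketched as a simple allow-list check. The field names and the allow-list here are hypothetical.

```python
# Validate telemetry events against an allow-list before they feed
# dashboards or training data. Field names are illustrative.
ALLOWED_OS = {"Windows", "Linux", "macOS"}

def is_valid_event(event: dict) -> bool:
    return (
        event.get("os") in ALLOWED_OS                 # reject unknown OS names
        and isinstance(event.get("version"), str)     # reject missing/odd types
        and 0 < len(event["version"]) <= 32           # reject absurd values
    )

print(is_valid_event({"os": "Windows", "version": "11"}))      # True
print(is_valid_event({"os": "Commodore 64", "version": "1"}))  # False
```

Note that an allow-list only rejects malformed values; forged but well-formed telemetry, like the fake button clicks, additionally needs authentication, rate limiting, or anomaly detection, which is the harder part of the integrity problem being discussed.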

So this is kind of the thinking where, when you think about red teaming machine learning, it often just becomes very different. It's more blurry, it's not so discrete, I think.

D Dehghanpisheh 20:00 

What you're describing are a lot of novel procedures or novel techniques in terms of rethinking what the attack surface might look like, right? Johann, are there any novel approaches or unique ML security attacks, exploits, vulnerabilities, or pen tests that you have done that you'd like to share with us?

Johann Rehberger 20:24 

There's one particular thing that I haven't done a lot of research on, but I think it probably needs a lot more attention, so I wanted to share that area where I think problems might lie that are not being focused on: it's actually the GPU. If you think about how training works and how inference works, a lot of these libraries communicate with the GPU. They offload the model, they offload the data into the GPU.

And we know GPUs are built and designed to be fast, so there's basically no security on the GPU. All the protection you have on the CPU, with NX and all these kinds of non-writable, non-executable data protections and so on - a lot of these attacks are probably possible on a GPU. The first time I learned about this was actually 2019 or 2020, a research paper that talked about having data overflow into the model and overwriting the model on the GPU. And I think these kinds of attacks, with the libraries and the C code that is involved, could be real problems that are just very deep down technically in the infrastructure, or in the hardware, actually.

D Dehghanpisheh 21:30 

Do you see any real world examples of those types of attacks that you've been called upon to kind of diagnose or where you're informing blue teams of those types of issues? 

Johann Rehberger 21:42 

This is where things are just - I mean, there are a couple of examples if you look in ATLAS, right; there are case studies for those real world attacks. But for some reason, and this is sort of interesting, either it is not discussed a lot, or it's just not well known, or it just doesn't happen as much. I'm pretty convinced it's not the last one - it's not that it's not happening.

I think what could be related - and this is where my policy knowledge is not so good, but it could also be related - is that when a company has an ML related breach, or the model is stolen, or some attack like that happens, there's no - and this is where I don't know, I'm not a lawyer - but I don't know if there's any sort of data breach notification required in that case. If the model is stolen, technically it's not PII that was stolen, right? If the training data is stolen, maybe that's very different.

D Dehghanpisheh 22:36 

Yeah. If the training data set is hijacked, then it has PII in it.

Johann Rehberger 22:42 

Then of course the question is, if the model - like, what are the policies around when a model is stolen? Should there be a data breach notification required or not? Right. I don't know. But it could be just for reasons like that that we don't know so much about when certain attacks happen.

D Dehghanpisheh 22:58 

Interesting. What are some of the lessons that you've learned that you'd like to convey to those who are listening? We talked about that notion of the blurry threat surface and thinking about new, creative ways as a red teamer or pen tester to go after a target. What are some of the things that you've learned that you'd want to pass on?

Johann Rehberger 23:28 

From a purely technical point of view or just generally, would you say? 

D Dehghanpisheh 23:33 

I think both, actually. So the technical point of view is one, but there's another, right? If you think about the red teaming and pen testing of, say, financial institutions, almost every red team or pen test team is asked to come back with personally identifiable information: credit cards, user accounts, whatever the case may be.

While that's valuable, do you see red teams coming back saying, hey, guys, I stole your fraud detection system? I knocked over the fraud detection model so I can get around it.

How should companies think about that from a tasking perspective as well as what are the technical elements that red teamers or pen testers could do? Because you sit at this interesting nexus where you're kind of advising on both. 

Johann Rehberger 24:16 

Yeah, I think a red team usually thinks about impact the most. Like, what is it that you can do that has the most impact? And I think ideally you could correlate that, at least for many organizations, like, what does it mean financially? Right? What is the most impact an adversary could have financially on an organization? That's I think one way a lot of red teams would look - like what is the worst that you can do that has the biggest amount of impact? 

So I think from that you start off right, and that's where I think you have these traditional attacks that's just on the very top - compromising the domain controller, active directory, all these traditional attacks, right. [But] then very quickly you reach the data, very quickly come to the data, and then the data is sort of what leads you to machine learning, because it is all based - like this is the foundation of machine learning. It's the data.  

Right. So I think the focus is on data security and compromising data lakes, right, in large organizations - even, like, GDPR testing. When I was at Microsoft, we did privacy red teaming. My team was focused on GDPR findings, making sure we were good in that space. Right. So I think for me, data security is very foundational, in the sense of focusing on data, on PII, making sure that is protected, because that then is what actually goes into the model - not the PII necessarily, but into the training to establish a good machine learning model that is useful.

D Dehghanpisheh 25:51 

What about the intellectual - you talked about economics, right? Where is the biggest value? What can you do? So many companies are now investing in artificial intelligence and machine learning intellectual property. They've hired lots of people, they're doing lots of things. They're putting it in products, they're putting it in business processes. That itself is a big value creation. 

Do you think organizations where red teams and pen test teams are utilized to kind of highlight where vulnerabilities are? Do you think that organizations think about the economic value of the model itself, of the model intellectual property, of that model registry, of that model feature store? Are they connecting those dots between the amount of money they're investing and the need to protect those things? 

Johann Rehberger 26:41 

I think this is one of the jobs of red teams - why red teams need to start looking at this more: to raise awareness and to actually help analyze this better. I think it's just not well understood what the actual value, in a discrete dollar amount, really would be, right? If you do Monte Carlo simulations, like, where does that fall into the bucket if this thing is compromised?

I think there is probably not such an in-depth understanding of what it means. And again, coming back to it, some companies do know: if a million user accounts are compromised, this is the amount of money it will cost us legally, right? Yeah.

This is data some companies have used to do Monte Carlo simulations about what the threshold is for what we need to invest in, and so on. But the model itself - again, what is its value? Maybe it's just not so discrete; it's very difficult to put a number on. For instance, think about how OpenAI is right now thinking about protecting its model, right? How important is that to them? I don't know, it's a very good question.

D Dehghanpisheh 27:49 

Or how much is the fraud detection model at, say, a major commercial bank worth? Or how much is the personalization and recommendation engine model of an online retailer worth? I can see lots of areas - trade decisioning, right, sentiment analysis for stock trading, those types of things.

And I guess that leads me to this notion of specific ML red teaming and specific ML red teams within companies. We see the technology companies doing this. Nvidia has a pretty prolific team, we know some members there, Electronic Arts has people like you. And we have some indication that some of these major financial institutions are just now starting to do it. Why do you think it's not more prevalent? Why isn't it more widely adopted? We say, hey, we're pen testing all of our web facing elements, we're pen testing our database components, we're pen testing all these other things - why not specifically pen test and red team the ML systems? What do you think is stopping that?

Johann Rehberger 28:51 

That is a really good question, and I think about it a lot. I think what might be one of the main reasons is that there hasn't been this moment where everybody realizes how bad it actually can be. There's sort of this wake-up call maybe that is just not

D Dehghanpisheh 29:11 

There's no, like, buffer overflow, circa 2003, problem.

Johann Rehberger 29:16 

Yeah, there are some machine learning attacks that you can exploit during red teaming that are very easy to understand, right? Like authentication bypass or something, right? If a system would just use ML for authentication, this would be a very easy to understand vulnerability. But I think a lot of the vulnerabilities that are in machine learning models are less discrete; they are more blurry.

They are maybe not so easy to understand, and a lot of the research happens in the academic space. And I think this is what I struggled with in the beginning. Wow, there are all these documents - I was like [...], there's so many, right? And then you start reading them, and there's so much value in these research documents. But I think the value in these documents is often not really translated into the product world.

And I think this is really where just reading some of these research papers is just mind blowing in my opinion. But that transition, this gap, I think this is really the gap that we need to keep exploring to kind of bridge these worlds and have red teaming machine learning systems just become a normal task. It's very specialized knowledge, especially if you think about adversarial machine learning. But I think red teaming is much, much broader, right? 

Red teaming is always about an end to end scenario that you want to exploit and then the question of can you actually detect it? And so that big realm - and there's probably a lot of research that is needed.

If you compromise an ML system, how would you know? Can you build a detection inside the model, so you know at which layer - if this node in a layer triggers, that should actually be flagged somewhere, because this node should never have been touched, right? Or like, this area in the neural network - when it gets triggered, there should be an alert somewhere, because somebody injected something that triggered an area of the neural network we never want to have been triggered. Right, these are explainability problems in machine learning.

I think this is all very early, at least that's my understanding. Again, I'm not an adversarial machine learning expert. I think that's just these thoughts about you can apply all these red team thinking inside the model as well, right, and that really leads you down that explainability route - robustness and explainability, I think, which is a big challenge.

D Dehghanpisheh 31:35 

And that's how we think about it at MLSecOps as a community, right? We think there are basically five key domains, if you will. There's Trustworthy AI, which includes things like robustness and explainability. There's supply chain vulnerability, which you talked about at the start of this conversation. There's governance, compliance, and reporting. There's incident response and management. And then there's adversarial machine learning, right?

And while there are lots of ways to, say, take adversarial ML and extract a model, or extract data, or evade it, or figure out where the IP weakness is, you could very easily just apply traditional attack methods. You could assume the rights, roles, and privileges of a data scientist, traverse the system, knock over the model registry, knock over the feature store, and pull the model out - at that point you're just taking advantage of a traditional exploit. And there's some research around...

Johann Rehberger 32:36 

A very good example: I think a lot of red teams typically try to check in code as a developer. This is a common task - you try to inject malicious code as a developer, compromise the developer, then try to check in code, and see whether that is detected. There are a lot of similar exercises, right? The question is: what happens if you compromise a Python script or a Jupyter Notebook? What if you insert a backdoor in the Jupyter Notebook, and things like this?

D Dehghanpisheh 33:05 

That’s happening. We know that's happening. Hence the NB Defense product.

Johann Rehberger 33:09 

Yeah, and these are the kinds of exercises that I think red teams should be exploring a lot more: moving away from the traditional software engineering pipeline and applying the same attack ideas to the machine learning pipeline.

D Dehghanpisheh 33:23 

And the machine learning assets in those pipelines.

Johann Rehberger 33:25 

And the assets. Yeah, the assets. So I think a really good way for a red teamer to get more involved in that space is to just install Jupyter Notebook and see what it does. I remember the very first time I used Google Colab - I was doing the deep learning classes and so on - and all of a sudden Google Colab wants me to connect my Google Drive.

And then I was like, this is strange. Then I realized that this is very common - people who use Colab mount their Google Drive. So if you backdoor the Python notebook, you get access to the Google Drive. And because Google Drive is often synced to the PC or workstation of a data scientist, that then allows you to drop or modify a macro in a Word document, for instance, or drop a binary, which then shows up on the desktop of the user or wherever the drive is synced on the PC. And then the user opens the Word document, and you have taken over the computer. Right?

So there's a whole lot of pivoting that can be done that is kind of novel, but it's just not so well explored, because we need to know the tools and use the tools to see how it can be done.
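One reason the notebook-backdoor scenario Johann describes is so cheap for an attacker is that a `.ipynb` file is just JSON with no built-in integrity protection. The sketch below builds a minimal notebook in memory and injects a (harmless, commented-out) extra cell to show how little a format-level edit takes; the cell contents are illustrative, not a real payload.

```python
# A .ipynb file is plain JSON: injecting a code cell is a three-line edit
# for anyone with write access to the file or the repo it lives in.
import json

notebook = {
    "nbformat": 4,
    "nbformat_minor": 5,
    "metadata": {},
    "cells": [
        {"cell_type": "code", "metadata": {}, "outputs": [],
         "execution_count": None, "source": ["print('training model')"]},
    ],
}

# The injected cell runs on the victim's next "Run all":
injected = {"cell_type": "code", "metadata": {}, "outputs": [],
            "execution_count": None,
            "source": ["# pretend payload: enumerate a mounted drive, etc."]}
notebook["cells"].append(injected)

serialized = json.dumps(notebook)  # what would be written back to disk
print(len(notebook["cells"]))      # 2 - the backdoor rides along silently
```

This is exactly why scanning and signing notebook content (the kind of thing tools like NB Defense aim at) matters: nothing in the format itself distinguishes the injected cell from the original one.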

D Dehghanpisheh 34:38 

In that case, you could see a lot of enterprises who, in the example you're giving, might have said: okay, in my Jupyter Notebook, I'm going to allow these permissions, I'm going to allow these tags and settings - and their existing SCA tools do not pick them up. They don't scan for it because they don't know how to scan for it. And even if they did, "line 1041 error of XYZ" doesn't mean anything to a data scientist, right? That context is often lacking. So it's really fascinating to be talking about this and looking at some of the tools like NB Defense.

Johann Rehberger 35:11 

I actually have a second very good example of that.

So when I started using Visual Studio Code - there's a Python plugin from Microsoft that allows you to work with Jupyter Notebooks, so you can develop them in Visual Studio Code. And there I had found a cross-site scripting vulnerability, which allowed you to do very similar things. When you read remote data, you could actually insert and run code within the Jupyter Notebook - a cross-site scripting attack - which Microsoft fixed very quickly. I think that's when I started looking at these tools, and I was like, wow, this is a very interesting problem space.

D Dehghanpisheh 35:46 

So just to close out here, a couple of last fundamental questions. And this has been a great conversation. I'm so honored that you came on. Thank you, Johann.

What are some basic things that organizations need to do today to adapt their traditional security postures to address the unique challenges of ML security? If you had to tell the audience two or three things they should be doing today, what would they be?

Johann Rehberger 36:11 

I think a good way to start is just applying the regular security engineering mindset - that's what I learned, how I think - but applying it to the ML world. Threat modeling is, in my opinion, where everything starts. And it doesn't have to be official; write it on a napkin. Talk with other people about what it is that you're building. Where is the data coming from? Who is touching the data? What are the access and security controls around the data? Where does the data flow? Which is exactly what threat modeling is about. Where does it go? Who has access, at which stages? What happens when it goes into production? How is it decommissioned?

All these questions can be answered in a discussion, in a threat model. And as I mentioned, it can be on a piece of paper, but you can also do it on a Miro board and have the various stakeholders participate in a conversation about what could go wrong and what you would do about it when it goes wrong. To me it really starts with threat modeling; it's just super critical.

The next stage - and this might sound a little strange - is doing some form of static analysis, like searching for credentials. I see a lot of clear-text credentials in machine learning pipelines and in Jupyter Notebooks - there are AWS credentials, right? And it really is about applying more of a zero trust mindset to the whole machine learning pipeline. Machine learning systems - the components, tools, and libraries being built - often don't even have authentication capabilities.

Actually - I think it was from your company, Protect AI, very recently - the MLflow remote code execution. Right. And isn't it mind-boggling that these systems, these libraries, don't even have authentication? I was looking at YouTube videos on how to install it, and random YouTube videos show you to just install it hosted on the public internet - no authentication, no authorization. With the vulnerability, which is a local file include, an attacker can just move right onto the machine. These things, I think, are really critical to get right, and static analysis can help with that. Looking for keys can help with that.
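The credential-hunting step Johann describes can be started with a very small scanner. The sketch below checks notebook code cells for the well-known `AKIA`/`ASIA` AWS access key ID format; real scanners (NB Defense, truffleHog, and similar tools) cover many more secret patterns, so treat this as a minimal illustration only. The sample key is AWS's published documentation example, not a real credential.

```python
# Minimal static-analysis sketch: scan notebook code cells for
# clear-text AWS access key IDs (the AKIA.../ASIA... format).
import re

AWS_KEY_ID = re.compile(r"\b(?:AKIA|ASIA)[0-9A-Z]{16}\b")

def scan_cells(cells):
    """Return (cell_index, matched_string) for every hit."""
    hits = []
    for i, source in enumerate(cells):
        for match in AWS_KEY_ID.finditer(source):
            hits.append((i, match.group(0)))
    return hits

cells = [
    "import boto3",
    "s3 = boto3.client('s3', aws_access_key_id='AKIAIOSFODNN7EXAMPLE')",
]
print(scan_cells(cells))  # [(1, 'AKIAIOSFODNN7EXAMPLE')]
```

Run over the `source` fields of every cell in a `.ipynb` file (they are plain JSON, as noted earlier), this catches the most common clear-text-credential mistake before a notebook is shared or checked in.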

The third thing is applying some form of dynamic testing, where you have a pen tester or security engineer evaluate what this actually means - especially now with large language models. If you start integrating them, there are prompt injections and indirect prompt injections, and the question of whether you can trust the data that comes back. You should never really trust data that comes back from a large language model; you need to correctly encode it and filter it for the context it will be used in.

So those three things, I think.

D Dehghanpisheh 39:07 

So deploy more SCA on things like Jupyter Notebooks and assets, which we're big fans of. That's why we released the NB Defense product and open-sourced it. So if you want to perform SCA on your Jupyter Notebooks, make sure you check out NBDefense.ai.

Second is to employ some zero trust architecture principles in the system and constantly re-verify those assumptions, permissions, and elements. And third is to perform some dynamic testing.

But above all else, I hear you saying, get creative in applying your traditional elements on these ML systems and you might find some interesting vulnerabilities in the ML supply chain. 

Our guest today is Johann Rehberger. Pick up a copy of his book, and we will be giving one away as part of this promotion. So I just want to say thank you again, Johann. It has been an absolute pleasure and we look forward to talking again soon. 

Johann Rehberger 40:06 

Yeah, thank you very much. It was a pleasure to be here. 

Closing 40:10

Thanks for listening to The MLSecOps Podcast brought to you by Protect AI. Be sure to subscribe to get the latest episodes and visit MLSecOps.com to join the conversation, ask questions, or suggest future topics. We're excited to bring you more in depth MLSecOps discussions. Until next time, thanks for joining.

Additional tools and resources to check out:

Protect AI Radar

Protect AI’s ML Security-Focused Open Source Tools

LLM Guard - The Security Toolkit for LLM Interactions

Huntr - The World's First AI/Machine Learning Bug Bounty Platform

Thanks for listening! Find more episodes and transcripts at https://mlsecops.com/podcast.