Audio-only version also available on your favorite podcast streaming service including Apple Podcasts, Spotify, and iHeart Podcasts.
Episode Summary:
What does it really take to secure AI systems in defense, intelligence, and federal environments?
In this episode, Leidos’ Rob Linger joins Protect AI’s Jessica Souder and Charlie McCarthy to unpack the practical and political challenges of deploying mission-ready AI. From ATO hurdles to securing agentic workflows, this is a must-listen for anyone working in or around government tech.
Transcript:
[Intro]
Charlie McCarthy (00:07):
Welcome back to the MLSecOps Podcast. Thrilled to be with you here today. My name is Charlie McCarthy. I'm one of your MLSecOps Community leaders and your co-host for today's episode, where I am delighted to be joined by one of my colleagues at Protect AI, Jessica Souder. Jess, welcome to the show.
Jessica Souder (00:25):
Thanks Charlie.
Charlie McCarthy (00:26):
Yeah, and then our very special guest today is Rob Linger from Leidos, but I won't dive too deep into intros. Jess, why don't I pass it over to you for a quick introduction about yourself, and then Rob, we'll circle around to you.
Jessica Souder (00:39):
Sure. So, I'm the Director of Government and Defense for Protect AI. I actually started with the company in 2022 as an advisor, and I've been thrilled to see how the company's grown. My career is in national security. I started with the CIA as an Operations Officer in 2008. I've got over 20 years of experience in the space, and now I focus on emerging technologies like securing AI.
Charlie McCarthy (00:59):
Fabulous. And Rob, for our listeners, if you don't mind, what is it that you do at Leidos? Let them know a little bit about that and kind of what brought your journey around to Leidos today.
Rob Linger (01:12):
Sure thing. So, you know, Leidos is an approximately 50,000-employee company. And we work across a wide range of customer sets, from DOD, IC, Energy, Health, you name it. Right? And Leidos is there. We look to really solve our customers' most vexing problems using, you know, best-of-breed technologies and solutions, and obviously our very talented engineering and professional services staff.
Within Leidos, I am the Information Advantage Practice Lead. And to us, information advantage really encompasses everything from your data to your artificial intelligence, machine learning, and automation, that entire set of capabilities, right? So it is very broad. It's obviously a very exciting space to be in right now. And it's a space where, just like with software development, the security of how you implement these things, how you build them, and how you deploy and secure them across the entire life cycle is very important to us.
Charlie McCarthy (02:34):
Yeah, paramount for sure.
Jessica Souder (02:38):
Just looking at your background a little bit, Rob, it's a little different from your average AI leader. I mean, a lot of AI companies are scrappy. They come from the startup community in the West Coast, but you've been in the Marines, you've served on a city council, even ran a tech company. Tell us a little bit about that journey and how you ended up working on AI at Leidos.
Rob Linger (02:56):
Sure thing. So, you know, I've always been very passionate about technology from a very young age. You know, not to date myself too much, but I was in high school when the high school got their first computers, right? And I was fortunate enough that I had a computer at home, and I know that's weird to say, right? In this day and age, like, yeah, I had a computer at home. You know what I mean? But, you know, I had a big old Compaq, you know, big monitor, the speakers on the side, "Where in the World Is Carmen Sandiego?" and all that. So I was very passionate about that from day one, you know, since the dial-up days. But from there, you know, I decided first to, you know, go into the Marines and serve in the infantry.
Rob Linger (03:47):
So I was an enlisted infantryman. So, you know, never give up on your dreams, you know what I mean? And, you know, while I was there, I still had my sort of baseline passion for technology, but also business. I'm very interested in how businesses work and what makes things tick. So after leaving the Marines, I went to college and got a degree with the goal of starting my own business right out of the gate. That was what I wanted to do. So of course, I started a business in the federal contracting world and worked that for a while, and did fairly well, you know, as a small business. Ended up selling that business and going on to become a Chief Information Officer for a college.
Rob Linger (04:40):
And doing that work is where I really started to dig into first cybersecurity, and then I started digging into data science, right? Because I got interested in, you know, gathering important information about the college and about the students and their outcomes, you know, how they interact and interplay, to help make data-driven decisions at the institution. So, sort of bringing that confluence of skills together, from there I went on and got my master's degree in data science. And that's how I sort of broke into the data science world. And actually, my first job in data science was here at Leidos.
Charlie McCarthy (05:29):
Wow. That's incredible. Are there any principles you would say, Rob, that you brought along from your time in the Marines? Thank you for your service, by the way. Or, you know, even as an entrepreneur, how did all of those experiences shape the way you're approaching AI today at Leidos, or even over the course of your career back when you started in data science?
Rob Linger (05:55):
Sure. So there's a few core principles that I like to follow. One of 'em is to always provide value, right? You know, if you're not providing value, then you need to really reassess, you know, what you're doing. So getting time to value to be very low is extremely important to me. It's extremely important to Leidos, right? That's one of the reasons I really enjoy working here.
But also, you know, you have to keep your eye on the holistic picture. By my nature, I'm a fast mover. I like to, you know, kick in doors. I like to get things done. But that doesn't mean that you can ignore the holistic picture that includes, you know, security, governance, and all of those items that come along, you know, with providing a really, you know, production-grade solution for your customers, right?
Rob Linger (06:57):
And that starts at the beginning. Just like with software development, you know, you want to introduce cybersecurity through that entire lifecycle of developing capabilities. And in my mind, working with AI and delivering AI-based solutions is that very same thing. It is, you know, software engineering, and it follows all of those same best practices.
So just like when you're developing, say, microservices and you toss those up on Kubernetes, hey, you know, what's an agent, really, right? And if you're not injecting the cybersecurity principles and best practices from the beginning, and you're treating it as an afterthought, you're gonna have problems. So you really need to be able to bring those in across the entire lifecycle.
Charlie McCarthy (07:49):
Absolutely.
Jessica Souder (07:51):
You know, it's funny you bring up all the use cases for the university. When I think about you, I think a lot about the service background. I know you spent time in Iraq, I spent time in Afghanistan. We talk about mission a lot when we are around the intelligence community or DOD, and I think that that's an unusual word when you're dealing with Silicon Valley; they're not always talking about mission, and they feel funny saying it. But it makes me wonder, how do you think about DOD's approach to AI and the capabilities that you bring to the table to enable that mission?
Rob Linger (08:21):
Yeah. So, you know, every little section of DOD is slightly different. You know, they have a holistic view. And I keep saying holistic a lot. Why do I keep using that word? But they do have a view...
Charlie McCarthy (08:36):
That's the best word for that.
Rob Linger (08:38):
They have a really good view, and, you know, they have produced artifacts and guidance on how to do things. But it depends on the customer, and it really depends on what their mission is, right, and on how quickly they're gonna move forward. You know, we have some customers that are just phenomenal at leaning forward. They're out there doing things I would never even have dreamed of, especially back in the time that I was in the Marines, right? You know, we have people throwing stuff on laptops and taking them out for deployment to really rapidly find out, you know, what the actual value proposition is for some of these AI solutions.
So, you know, meeting the customer where they're at, helping them identify gaps or any concerns that they have, or, you know, if they need to start from ground zero and build up an entire solution, we like to do that and really ensure that we're bringing that best-of-breed solution to the customers.
Jessica Souder (09:46):
I appreciate that. Yeah. I think a lot about what we used to do in the battle space and how we used to look for targets and parse through information. And it's similar to you and me both remarking that we had a computer back in the day. You know, things have changed so much.
Rob Linger (10:00):
Yeah. And you know, it's interesting, because the types of work that you did and the types of work that I did were actually two sides of the same coin, right? You were out there gathering the information, distilling it, and getting that important information down to what matters to the warfighter. I was the one out there receiving the information and depending on the veracity of that information to, you know, really ensure that we were safe and that we were making the right decisions out there on the battlefield.
Charlie McCarthy (10:39):
That's a really good point, the complementary nature of your time spent in service. And I was just gonna take the question you just asked, Rob, and flip it back to you, Jess. Your time, you know, in the CIA and the ways that you served, did that influence at all how you're thinking about governance and secure AI adoption now in defense settings, or even your work with our Leidos counterparts?
Jessica Souder (11:02):
Absolutely. I mean, we were talking about mission earlier, and we really believe in that concept. When you're putting yourself at risk to do things for our country, or when you're away from your family, in situations where you're not seeing them or you're doing things for long periods of time, you want your work to be meaningful. And so one thing that's become very clear, especially over the last 10 years, is that AI has the ability to make us not only more efficient but more competent, and that rapid adoption needs to happen, and it needs to happen as soon as possible.
I think about the tools that AI brings to the table and what we could do with those tools against compartmentalized and sensitive information to stay competitive with our adversaries and just to stay current with all the data that we have. And we very much need to support that adoption. So when I think about Protect AI's mission and our ability to mitigate risk and to work with partners like Leidos to do that and to help that happen faster with their bigger footprint and know-how, as well as access to the community, to me, it's just a win-win. And it's exactly where I think we need to be. It's why I came here to work.
Charlie McCarthy (12:04):
Yeah. You know, both of you, Jess and Rob, coming from service backgrounds, you clearly understand how difficult it can be to responsibly deploy tech in government. Can you tell us a little bit about what brought Protect AI and Leidos into alignment, and some of these shared core principles (I won't use the word "values," 'cause that's a very specific set of things) related to AI security? How did you recognize that this relationship was going to be important?
Rob Linger (12:36):
First of all, you know, I have to give a shout-out to, you know, the Leidos external tech teams, right? Like, we have some really wonderful people that do a lot of work, you know, sort of scanning the disruptors out there and, you know, the startups that are out in these emerging fields, to bring 'em to the table, to find out if there's a there there. And, you know, we happen to have a team member that brought Protect AI to the table. And obviously, you know, Protect [AI] was very willing to, you know, get a bunch of people and bring 'em down, you know, to our office. And I remember the first meeting we had, where, you know, we had the big whiteboard in the back, and we just got up and started scribbling stuff out.
Rob Linger (13:26):
You know, we started saying, you know, okay, here's the products. And then I say, well, this is a representative environment, right? And then we started plugging in where all the different offerings fit in. And right away, the value and the importance of it were very clear to me. Because, you know, as I alluded to earlier, and I didn't get into this, but in my previous role before this current role, I was the AI Software Architect within our office of technology. So I was looking at that whole picture up on the wall and thinking to myself, this, you know, this is just like securing, you know, a large-scale software application end to end. And I think about the types of things that we do as a part of our software development life cycle here at Leidos.
Rob Linger (14:17):
Hey, now, you know, now we're working with AI, and there's different types of artifacts that you have to pull in. There's different places you're pulling those in from. So instead of, say, npm or the PyPI registry or, you know, Maven, wherever you may be pulling your software artifacts from, now we're looking at things like Hugging Face, right? And hey, you know, what does Protect AI do with Hugging Face? You know what I mean? So I saw immediately that Protect AI had gotten ahead of the crowd and started building that tooling that can treat all of these artifacts the same way that I was familiar with treating software development artifacts. And it all just clicked in my mind, and I said, this is perfect, right? This is what we need to do.
Charlie McCarthy (15:07):
Love to hear that.
Jessica Souder (15:08):
Absolutely. And the whiteboarding session was, I think, over a year ago at this point. And it's amazing how much we can come together to support different aspects of the government, not just the intelligence community and DOD, but others as well, like healthcare and veterans' healthcare. Lots of opportunities there. And I think we're pretty excited by just the sheer reach of Leidos. Like I said, we're scrappy and small, and so right now there's only about 120 of us, so it's hard to keep up with what you guys can do. Well, speaking of scrappy, Rob, I'm really curious: what are you actually building and enabling at Leidos with regard to AI? I'd love to hear more about how you hope to work together and just your day-to-day mission.
Rob Linger (15:51):
Sure thing. So, you know, obviously we have a lot of different areas that we're looking at. The scope is massive. The large driver right now is obviously around LLMs and agents: agentic workflows, chatbots, automation. All of those areas are moving really fast right now. You know, what we're aiming to do here at Leidos is build an end-to-end capability for our customers that can cover a broad range of use cases, you know, from your standard automation use cases all the way up through your anomaly detection, your LLM chatbots, and your agentic use cases.
And in order to do that, we have to look at the entire infrastructure. And as a part of that infrastructure, we look at the networking, we look at the hosting environment, you know, the cloud environment. We wanna look at the user experience, and we wanna look at the cybersecurity aspect of it as well, and how it may impact efforts in the future for our customers when it comes to gaining knowledge out of their existing data and using that information to help them identify and prioritize places where they can modernize.
Jessica Souder (17:22):
Along those lines, who do you find most reluctant, or most supportive, of this idea of adopting security for AI with your government customers? Do you feel like the cybersecurity teams are really pro? Like, how do you feel the machine learning teams and developers are responding?
Rob Linger (17:41):
Thus far, everybody's on board, right? I have not heard anyone be reluctant about adding security, especially in the way that Protect AI adds security, right? So as far as, say, the developers go, for the most part it's very unobtrusive. They don't notice that it's there unless something happens and they need to be protected.
A great example is, you know, when you're doing some forward-leaning research. Let's say, for instance, you're working on an agentic flow, and instead of having a centralized LLM that all your agents are gonna communicate with, you wanna experiment with possibly a subset of agents, or each individual agent, having its own small LLM to use right there, one that you can, you know, fine-tune and get very purpose-built, very mission-specific. And you just wanna pull down the top five, you know, say, four-gig models or 4-billion-parameter models.
Rob Linger (18:56):
A lot of times, you're not gonna sit there and do a lot of heavy research into exactly what that model is, who built it, where it came from. You're not gonna go through and look at all the layers. But Protect AI will do that for you. So behind the scenes, you know, if it's a safe model, then a researcher or developer can just pull it. If not, then they're stopped, and they get a reason why they were stopped. So, you know, it's not something that interrupts workflow; it doesn't slow down workflow. And generally, when you run into resistance to adding security to something, it's because it's gonna add, you know, time or effort on those developers' and researchers' part. But thus far, no, we haven't had any of those issues using Protect AI.
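[Editor's note: For readers who want to picture the "scan before you pull" gate Rob describes, here is a minimal, hypothetical Python sketch. Only huggingface_hub's hf_hub_download is a real library call; ScanResult, scan_artifact, and fetch_model_or_block are illustrative stand-ins, not Protect AI's actual API, and the pickle check is a deliberately crude placeholder for what a real scanner does.]

```python
from dataclasses import dataclass

from huggingface_hub import hf_hub_download  # real library call


@dataclass
class ScanResult:
    safe: bool
    reasons: list[str]  # why the artifact was blocked, if it was


def scan_artifact(local_path: str) -> ScanResult:
    """Crude stand-in for a real model scanner (hypothetical).

    Flags files that look like raw pickles, since unpickling can execute
    arbitrary code on load. A real scanner inspects far more: pickle
    opcodes, archive members, embedded payloads, and so on.
    """
    with open(local_path, "rb") as f:
        magic = f.read(1)
    if magic == b"\x80":  # pickle protocol 2+ marker
        return ScanResult(safe=False, reasons=["file appears to be a raw pickle"])
    return ScanResult(safe=True, reasons=[])


def fetch_model_or_block(repo_id: str, filename: str) -> str:
    """Download a Hugging Face artifact, scan it, and only hand it over
    if the scan passes; otherwise stop the developer with a reason,
    mirroring the workflow described above."""
    local_path = hf_hub_download(repo_id=repo_id, filename=filename)
    result = scan_artifact(local_path)
    if not result.safe:
        raise PermissionError(
            f"{repo_id}/{filename} blocked: {'; '.join(result.reasons)}"
        )
    return local_path
```

The key design point is that the gate sits behind the scenes: a safe model downloads as usual, and an unsafe one fails loudly with a reason, so the developer's workflow is only interrupted when there is actually something to protect against.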
Charlie McCarthy (19:50):
That's fantastic. And I love the way that you articulated that, Rob. In the industry, there has been a lot of debate over the last couple years about, you know, security controls versus innovation. And to your point, it doesn't have to be that way. Like, if the security is good, you know, you can almost innovate faster because you've got the assurance in your tools or, you know, models that you're pulling down. You can go even faster when you know that they're safe, and you're not gonna have to be worried about model failure in the future and having a rollback plan in place. It's just that extra reassurance. So I thought that was a really good call out.
Rob Linger (20:26):
Yeah, a hundred percent, right. And there's always a push and pull between, you know, speed and security. But one of the things that we pride ourselves on at Leidos is providing speed, security, and scale to our customers. And when we find, you know, teams and products like Protect AI that enable the speed and security, we can bring the scale to the table. Perfect.
Jessica Souder (20:57):
That's awesome. Thank you. And it's funny, when I worked at the agency, I worked a lot with different contractors, of course, and had some Leidos colleagues, and you guys were always great. But I'm curious, you've had an interesting journey, so I'm gonna zoom out a bit. Working with government customers, what's that been like, and how has it changed over time, especially with regard to machine learning and AI?
Rob Linger (21:17):
Yeah, so, you know, I like to say I've been doing machine learning since before it was cool, right? And, you know, there's people that have been doing it a lot longer than I have. And I'm fortunate that I work with a lot of great people who have a lot more experience than I do directly in the field. But just from the path that I took, you know, being a CIO and running my own business, and I also had a stint where I did offensive cybersecurity work, so that was pretty cool, I sort of had this overall view of each of those pieces of the puzzle and how they fit together. I've done it wrong enough times in my life that I have a pretty good view of what right looks like.
Rob Linger (22:11):
And so you combine that with working with, you know, government customers over the past 20-plus years, and just seeing the challenges that they have faced and how those challenges have changed and morphed over time, and consistently trying to keep track of new processes, technologies, and frameworks to help resolve those pain points for our customers. Because if you boil the issues that folks have in the technology world down to the root, the root cause is often the same. The only things that change are the technologies that they want to use, the technologies that we wanna use to help solve any of those issues.
And then, obviously, the volume and velocity of data has just grown exponentially. So, you know, that's to say that at a high level, it looks like a lot of things have changed over time as far as challenges go. But once you boil it down to the low level, it comes down to: we need to take data, and we need to turn that data into actionable insights. And how we do that, and how much we can do that, that's what changes over time.
Jessica Souder (23:50):
Absolutely. Reminds me of the targeting conversation, or the conversation we were having earlier about, you know, me collecting and you using. At the end of the day, we were doing it all; we just weren't doing it well. So, curious to see where it goes.
Rob Linger (24:04):
And I always joke, 'cause we talked a little bit about college, and I know that you just recently graduated with yet another degree, right?
Jessica Souder (24:13):
I did.
Rob Linger (24:15):
And then I think I told you, I said we could learn a lot from the folks that run the campaigns to get money from alumni, right? Like, those guys are more persistent than skip tracers. They'll track you down. You know, you can just disappear, buy a plane ticket, fly off, and leave everything behind. Wherever you end up, you'll get a piece of mail from your alma mater.
Jessica Souder (24:41):
That's funny you say that. I know. I'm a Penn State undergrad, and there's a crazy number of graduates from Penn State, and they find me every six months. So they do manage to keep track of all of us. Probably using AI to do it.
Charlie McCarthy (24:56):
Rob, from where you're sitting right now, you know, we've talked about trying to implement more security measures or security tools for AI systems, possibly with some of your government clients. But even the security piece aside, what are some of the biggest challenges that you're seeing your government customers face in trying to adopt AI in general? You know, I know that you mentioned, as far as the security piece goes, it's been well enough received among developers and cybersecurity teams.
Are there other challenges being faced? You know, is it a matter of, like, going through the procurement process, or maybe trying to figure out if you even have a use case for AI? What roadblocks are you seeing there, if any?
Rob Linger (25:40):
So, there's a number of things that are always sort of in motion behind the scenes. And in some of our spaces, especially in defense, IC, or even some of our other customers such as the IRS and SSA, you know, where you're holding PII or PHI depending on the customer, you know, the security is a big piece of it. But sort of on the heels of that is also policy. You know, policy has to continue to evolve.
The difficult part there is that the technology evolves so fast, it's difficult to evolve policy at the pace that technology evolves. And policy really has to do with not only, you know, policy for using the technology, but your procurement process, your procurement cycles, right? The policies that go behind all of that. Another big one that we see right now is if you start talking about ATOs, right? There's a push right now to do a lot more automation in the ATO process within the DOD.
Jessica Souder (26:59):
You mean the Authority to Operate process, Rob?
Rob Linger (27:02):
Yep. The Authority to Operate process.
Jessica Souder (27:03):
Explain that a little bit. Some of our audience isn't quite familiar with it, so.
Rob Linger (27:06):
Ah, okay. Sorry. Yeah, so in order to gain authority to operate, there's a lot of work that has to be done to ensure the security of the things that you're bringing into a government system. And that follows the Risk Management Framework. And there's a bunch of other pieces that go together to make all of this work. So as technology evolves quickly, and we want to get these newest capabilities into our customers' hands, we have to, you know, work with our customers to ensure that we're getting it to them in a way that they're comfortable with. We also wanna give them solutions where they can stand behind the results of that AI, like the explainability and the observability. Each of those pieces needs to be there as well.
Rob Linger (27:57):
So it is a very complex situation that the whole world is in right now as far as these technologies go. But that's where, you know, Leidos comes in, and that's where we shine. You know, we have so many amazing people that, you know, have that really deep customer knowledge. They have the relationships, so we can talk through and help guide our customers down the path to get the solutions that they need and ensure that they meet all the requirements of the customer.
Jessica Souder (28:37):
Thanks for that, Rob. I mean, I've also been working through some of the challenges with regard to ATO, just on the Protect AI side. With regard, though, to adoption: what have you seen that works? What really moves the needle in terms of adoption?
Rob Linger (28:52):
So, with adoption, the best way to do that is to really focus on a particular use case, right? An item that is of high value to your customer, and tackle that one thing. Okay? And if you can tackle that one thing and show them the outcome immediately, right, get 'em that value as fast as possible, that is what will really help with adoption, because once you show them that you're able to solve a very specific problem, you're able to provide that value.
And then from there, you can drill down and show them, under the covers, all the complexity that's there, all the security that's there, you know, how the networking works, how the data flows, why it matters. But give them that value first. And as soon as they see that value, that eases the door open a little bit more to say, okay, let's broaden the scope of this project. Let's tackle items two through five on my top-five list, right?
And as you're able to work through those and be very transparent about what that process looks like and what you're doing to secure all of their assets, all of their data, it really, really helps with adoption. So that goes, again, all the way back to time to value: you know, if you get them value quickly, then adoption will follow.
Jessica Souder (30:22):
It's interesting. For me, with some of the customers that we don't share, the first challenge right off the bat has been model scanning, wanting a way to just mitigate the risk of bringing models from outside environments into their environment. Is there any one thing that you would say has been a common ask from your government customers with regard to AI/ML?
Rob Linger (30:47):
Yeah. So the primary thing that always comes up is, you know, is my data secure? What's happening with my data? What data is being sent, and what am I getting back? Right? So obviously you wanna have tooling where you can show them, and for us, Protect AI is that tooling.
Jessica Souder (31:09):
We're very glad it is.
Charlie McCarthy (31:10):
Okay. So on the heels of, you know, the topic of securing AI responsibly in the future, and I'll pose this question to both of you, Jess, if you wanna jump in also: what do you see as the next big thing in the AI security space over the next couple years? Or what do you hope the next big thing will be?
Jessica Souder (31:30):
I'll let Rob go first.
Rob Linger (31:32):
Sure. So, you know, I'll give maybe two years, because I think that going anything beyond that might be a fool's errand, right? To try to predict. But I think one of the next big things is gonna be that agent-to-agent communication and securing that capability. Because if you think about, sort of, you know, having an orchestrator agent and a number of agents below that orchestrator that are working, maybe agent one has access to tools and is allowed to use certain tools that agent two is not allowed to use.
How do we make sure that agent two doesn't just use the tools through agent one and pull the data back through? So we need to really start putting some thought into how we're going to secure and observe the communications, in a very rapid way, between large numbers of agents.
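[Editor's note: A minimal Python sketch of the problem Rob raises, under the assumption that every tool call carries both the immediate caller and the originating requester. All names here (TOOL_POLICY, invoke_tool, the agent and tool names) are hypothetical illustrations for this transcript, not any product's API.]

```python
from typing import Callable

# Per-agent tool allowlist: agent_two may search but not read files.
TOOL_POLICY: dict[str, set[str]] = {
    "agent_one": {"web_search", "file_read"},
    "agent_two": {"web_search"},
}


def invoke_tool(
    tool: str,
    caller: str,
    on_behalf_of: str,
    tools: dict[str, Callable[[], str]],
) -> str:
    """Authorize against the ORIGINATING requester, not just the
    immediate caller. Checking only `caller` would let agent_two
    launder a file_read through agent_one and pull the data back."""
    for principal in (caller, on_behalf_of):
        if tool not in TOOL_POLICY.get(principal, set()):
            raise PermissionError(f"{principal} is not authorized for '{tool}'")
    return tools[tool]()


tools = {
    "file_read": lambda: "sensitive file contents",
    "web_search": lambda: "search results",
}

# Direct use by agent_one: allowed.
print(invoke_tool("file_read", "agent_one", "agent_one", tools))

# agent_two routes the same request through agent_one: blocked, because
# the provenance chain includes a principal without file_read access.
try:
    invoke_tool("file_read", caller="agent_one", on_behalf_of="agent_two", tools=tools)
except PermissionError as err:
    print(err)
```

The point of the sketch is that authorization has to follow the whole provenance chain of a request, which is exactly why observing agent-to-agent communications matters: without that chain, agent one becomes a confused deputy for agent two.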
Jessica Souder (32:33):
Absolutely. And it's something that I think we're starting to think about as a team too, at Protect AI. I spend a lot of time in the defense tech space. Rob mentioned that I graduated recently; I've spent a lot of time in Boston up at MIT, and I keep running across, more and more, the focus on drone technology, on UAS, and on autonomous systems and robotics. And I think that we will someday soon reach a point where we have to start talking about securing the AI/ML that's driving those systems.
I don't know what the timeline for it looks like, and I don't know how long of a projection that would be, but I definitely think that could be the next thing after the agents discussion that Rob was just talking about. Who knows where Protect AI would be at that point, but hopefully we're ahead of the curve.
Charlie McCarthy (33:19):
I don't doubt they will be. Okay. Groovy. Well, this has been a fantastic conversation. Thank you, Jessica, and thank you, Rob, so much for being here. We're gonna wrap it up. For our MLSecOps Community members, you can find this episode and many more at mlsecops.com. We will provide some links within the show notes so that you can go read more about what Leidos and Protect AI are doing together, and we will see you next time.
Rob Linger (33:49):
Thank you very much.
Jessica Souder (33:50):
Thank you.
[Closing]
Additional tools and resources to check out:
Protect AI Guardian: Zero Trust for ML Models
Recon: Automated Red Teaming for GenAI
Protect AI’s ML Security-Focused Open Source Tools
LLM Guard: Open Source Security Toolkit for LLM Interactions
Huntr - The World's First AI/Machine Learning Bug Bounty Platform
Thanks for checking out the MLSecOps Podcast! Get involved with the MLSecOps Community and find more resources at https://community.mlsecops.com.