
Responsible AI: Defining, Implementing, and Navigating the Future

 

In this episode of The MLSecOps Podcast, Diya Wynn, Sr. Practice Manager in Responsible AI in the Machine Learning Solutions Lab at Amazon Web Services, shares her background and the motivations that led her to pursue a career in Responsible AI.


Diya shares her passion for work related to diversity, equity, and inclusion (DEI), and how Responsible AI offers a unique opportunity to merge her passion for DEI with what her core focus has always been: technology. She explores the definition of Responsible AI as an operating approach focused on minimizing unintended impact and maximizing benefits. The group also spends some time in this episode discussing Generative AI and its potential to perpetuate biases and raise ethical concerns.

Introduction 0:07 

Welcome to The MLSecOps Podcast, presented by Protect AI. Your hosts, D Dehghanpisheh, President and Co-founder of Protect AI, and Charlie McCarthy, MLSecOps Community Leader. Explore the world of machine learning security operations, a.k.a. MLSecOps. From preventing attacks to navigating new AI regulations, we'll dive into the latest developments, strategies and best practices with industry leaders and AI experts. This is MLSecOps.

Charlie McCarthy 0:34 

Hello MLSecOps community, and welcome back to The MLSecOps Podcast. Thanks for listening. I'm Charlie, and I'm here with my co-host, D. And today we have the pleasure of speaking with guest Diya Wynn. Diya is Senior Practice Manager in Responsible AI within the Machine Learning Solutions Lab at Amazon Web Services.

Diya, we are so honored and excited to be talking with you today. Welcome to the show. 

Diya Wynn 1:03 

Thank you so much for having me. I always love talking about this topic, so I'm glad to be here. 

Charlie McCarthy 1:08

Absolutely! 

D Dehghanpisheh 1:09 

And it's a pleasure to be back with a friend and colleague, Diya, so super excited that you're supporting us. Thank you. 

Diya Wynn 1:15 

Excellent.

Charlie McCarthy 1:16 

Yeah, speaking of which, will you tell us a bit about your background and what brought you to the machine learning space and Responsible AI, and how you got interested in it?

Diya Wynn 1:25 

Sure. So, I can share two things to give you some context. One is, I have to say that I stepped into Responsible AI as a little bit of a career pivot for me. My background is in computer science and technology. I've been in technology all of my professional career and a little bit in my college career too. 

But I have two boys, and so for me, I was looking and trying to think about, what do we make sure that they know in order to be prepared for what they're going to encounter in the world, right? In the workforce tomorrow? And I really didn't feel like education was doing that for them. So, as the mom that I am, I'm exploring resources. And in my research, I came across three trends that were shaping the workforce of the future. One was data. Two, artificial intelligence, machine learning, and robots. And three was the virtual AR/VR world.

And as I'm doing all of this exploration - this is roughly about five years ago - of course, most people are not seeing me on the screen, so maybe they don't know that I'm an African-American woman. But what became apparent to me was that there was an absence of voices and/or perspective and inclusion of people that look like me and would look like my son. 

So that was one of the reasons why I thought this is great. I love and have always done things in inclusion, diversity, and equity from a passion project perspective, something that we say we do on the side of our desk at AWS or Amazon. But I've always been centered and core in technology. And this notion of Responsible AI, Responsible Tech is at the intersection of the two. So, I thought it was a great way for me to revive my next career, if you will, and make a little bit of a pivot. And so that's what brought me into Responsible AI. Now, interestingly enough, what brought me into the work specifically at AWS was I wrote a narrative, that thing that we get to do, everybody–

D Dehghanpisheh 3:28 

Classic six pager! [Laughs]

Diya Wynn 3:29 

[Laughs] Everybody gets to participate in innovating and thinking big and I felt like we could be doing something more to assist our customers, very much like we do with Well-Architected. And that narrative received support and we started a practice focused on Responsible AI and helping our customers. And we were doing other things in other parts of the company, but this was specifically about giving our customers support and assistance as they built on top of our AI and those services so that they could build more inclusively and responsibly. 

Charlie McCarthy 4:02 

Awesome. 

D Dehghanpisheh 4:03 

So, let's talk about that, Diya. Your title has the term Responsible AI in it. And here on MLSecOps Podcast, we've talked with many guests who use terms like “Trusted AI,” “Robust AI,” “Ethical AI.” Can you talk to us a little bit about how you define Responsible AI? And is it the same thing as Robust AI, or Trusted AI, or Ethical AI? Are they all the same? Are they all different? 

Diya Wynn 4:29 

I think they largely are used interchangeably, although I feel like Responsible AI gives us a little bit more in terms of the full breadth, or areas of focus, that we can have. So, it's not just about ethics, although you could say what happens in the environment and all that kind of stuff might be considered that, but it really is broader than that. 

So my definition of Responsible AI, and this is core to the way in which we engage with our customers, is thinking about Responsible AI as an operating approach that encompasses people, process, and technology so that you can minimize the unintended impact and harm, or unintended impact and risks, and maximize the benefit. Right? And so that gives us some things to think about or a structure in order to be able to bring a lot of things into the picture.

So our value alignment, considerations around privacy, security, and sustainability, that all becomes part of the conversation. I think by and large, I like the notion of trustworthy as well, because the intent is that we want to do these things, or put in place these practices and have this governance structure in a way so that we can engender trust. 

You all know people fear what they don't understand, and that generates a lack of trust. And we ultimately want to have trust in the technology because that's going to have an impact on our ability to be able to use it, and use it for the good things that it's actually capable of doing. 

I think that the foundation of trust is having Responsible AI. But by and large, I think all of these terms are used interchangeably with some of the same core tenets or areas of focus, like privacy, security, transparency, explainability. We add in there value alignment, inclusion, training and education, along with accountability. 

D Dehghanpisheh 6:32 

So you add in extras beyond just the technology, or extras beyond just that, and go to the heart of the matter, if you will, pun intended, in terms of making sure that you're aligning to the values of a company. That if you consider yourself to be ethical in a certain dimension, or have values of diversity in a certain dimension, that flows through from technology selection all the way to data curation, to bringing in the people who determine whether or not the use case is appropriate.

Is that a fair articulation? 

Diya Wynn 7:02 

Absolutely. You got it. So we're in an interesting position, of course, right? Because we are providing services for people in all sorts of places, and jurisdictions, and countries, and different industries, and we don't have full visibility into all the ways in which they might be using them and we're certainly not in a position of trying to police them. 

But I think that we get to come in, partnering with our customers to help them unpack, in whatever use cases they might be exploring, one, what matters to them and make sure that's being infused into the way in which they are looking at leveraging the technology. But also thinking about the things that they should be concerned about, or consider, to make sure that they're building on a solid foundation that will yield inclusive and responsible technology on the tail end. 

And when they do that, then they're going to have the trust of their customers and that also is going to help support the trust that the marketplace or industry will have in the technology and the products that are similar to those. 

Charlie McCarthy 8:09 

Diya, as we're thinking about the trustworthiness of AI and machine learning systems, and some of these terms that we're starting to hear more and more often in the industry, in addition to Responsible AI, another term that we're hearing a lot is “generative AI.”

Through the trustworthiness lens, does generative AI have potential to perpetuate biases and [raise] ethical concerns? Are there concerns about trustworthiness, more perhaps, than other forms of AI, and if so, how?

Diya Wynn 8:40 

I would say first, generative AI is AI. So, some of the same considerations or concerns that we have when we're thinking about artificial intelligence, machine learning–and I know we're using this word interchangeably, so in this context, I know the difference between having strong AI and what we do predominantly in machine learning–but we have the same considerations, right? The same things to be concerned about, but there are some exacerbations or opportunities for that to be increased. 

Let's take what everybody has been hyped about in the generative AI, or creative AI space with ChatGPT. And when we think about the source of that content being largely the Internet, where is that mostly represented in terms of the people who create the content, the language that the content is created in, its sources, who's contributing to that? It may not be entirely representative of all of the consumers and/or populations that the product might serve. And so this notion of inclusion or representativeness, fairness in terms of bias, all of that becomes, perhaps, a bigger consideration.

And then we have the things that everybody has been talking about a lot as well. Hallucinations and concerns around copyright and privacy. All of those are, I think, exacerbated. And then there's an element, of course, where how much is the inclusion or proliferation of this technology going to have an impact on how and what we do for work? And so, we've got to be very intentional about how we start to employ these systems. 

I think there's some great use cases and people are still trying to suss that out for their organization or for their company. But I think irrespective of whether it's generative AI, or if it's regular machine learning, we need to be much more intentional about how we're approaching products and projects around this so that we can actually make sure that we are considering people, process, and technology elements to be able to handle those potential risks. 

D Dehghanpisheh 10:59 

So, Diya, you mentioned there's a distinction–and we agree–between machine learning systems/machine learning environments/machine learning pipelines, if you will, and AI applications. 

And I think the thing that I have picked up in listening to you is that Responsible AI is a framework that encompasses not only the technical system components that a machine learning environment may have, not only an AI application in terms of how it's used and the people that it touches–customers, consumers, whatever the case may be, but also the people who are developing on both sides of those engagements. 

How does AWS employ Responsible AI, as the framework that you are responsible for? How does AWS consume that or practice that? 

Diya Wynn 11:47 

I think the first thing to think about is we've defined a strategy around Responsible AI that starts with having a people-centered approach. And that really connects directly with what I was just saying about Responsible AI being this operating model. It goes beyond the technology to also considering the people, process, and culture. And to be honest, there's a bit of a culture change that is necessary when organizations are starting to adopt AI and wanting to make it an integral part of their strategy.

The other is taking a holistic approach to how we are building our services so that we're integrating Responsible AI in the entire lifecycle with our engineering, product, and development teams. That it's not just something centered in or focused on one team or one set of resources, but a part that everyone has to play in ensuring that we're building systems that are inclusive and responsible. 

And I think the other is a focus on helping our customers take what has been defined as best practices and theory, and actually operationalizing that. And that's where my team sits at the core of that. Organizationally, we provide resources and tools in order to help support our customers in realizing what needs to be done and doing that so that Responsible AI can be operationalized. And the key there for us is that we ultimately want Responsible AI to be the way that we do AI, right? Almost to say there's no AI without Responsible AI. 

The last bit of the strategy is to continue to focus on how we advance the science around Responsible AI. There are still areas where research is active. This is a nascent area. We weren't talking about Responsible AI five, seven years ago. And there is still a lot that is evolving and changing. And certainly generative AI, as we were talking about, has increased some of the areas of complexity.

So, we have a commitment to continue to advance research, whether that's in some of the partnerships that we have or collaborations with institutions, or the NSF Fund in order to be able to finance research in this space. But we have a deep commitment there so those are the kinds of opportunities and/or knowledge that we get to bring into how we're building and ultimately get to bring into what we share with our customers. 

Charlie McCarthy 14:30

Right. 

Touching again on the generative AI piece, there's been this explosion of adoption and hype around large language models and other types of generative AI, like text-to-image tools, and the potential for some possibly malicious happenings - deepfakes - and people who maybe weren't interested in this technology before or exposed to it are starting to dive into it and get really excited about it. And there's some interesting stuff that we're seeing as a result of that. And it's a lot!

So, I guess my question would be; has generative AI changed anything at AWS in regards to responsible AI given its scale and dataset size?

Diya Wynn 15:09 

No, I don't think it's changed anything. I think for us and so many others, it's been a great testament or reminder to us that we need to employ the best practices and the structure, the processes that we've defined using the tools in order to be able to address some of the challenges, or some of the potential risks that we're talking about.

We're building our foundation models with Responsible AI in mind at each stage of the development process, right? Just like I was describing the way in which we want our customers to engage and thinking about that throughout the entire lifecycle, we're doing the same. And again, that hasn't changed because of generative AI. That's still a consistent part of the process. 

We do have to or have had to consider some other areas because there have been things that have circulated, like additional questions around intellectual property or copyright considerations. We've had appropriate use or acceptable use policies, and we need to be looking at that in the context of the data, and being able to filter out content or certain kinds of requests and understanding where that occurs. So there's certainly some additional dimensions that are being considered a little bit more, perhaps in the area of toxicity and how we handle that. 

It challenges us as well because what we're seeing with some of the generative AI is that it has broad applicability, and what we've been focused on is looking at specific application use cases and being able to define guardrails around that. And so again, I think the creative, generative AI space is adding some complexity. 

But change? I wouldn't say it's changing. It's required us to do some additional things in order to make sure that the systems are safe. And of course, like I was saying, with the research part of it, we're continuing to do research in this area too so that we can address the places of unknown or still requiring more science. 

D Dehghanpisheh 17:18 

So AWS has that storied leadership principle of customer obsession. And you're talking to customers all the time about Responsible AI, and you just talked about how Amazon and AWS have, maybe, adapted their Responsible AI capabilities to accommodate for the differences of generative AI. 

I'm curious though, as you talk to customers big, small, mature in their ML practice, immature in their ML practice, and you start talking to customers and helping guide customers in the Responsible AI journey:

What's the most common thing you see customers starting to do on their journey to Responsible AI? And then what's the one thing that you just are having trouble getting them to do?

Diya Wynn 18:01 

I'm going to start with the latter part first. So, I think that what we're probably having customers be a little bit slower to do is actually doing the work. So, let me put this in context. There was a study from 2022 that Deloitte did, and they talked about the percentage of executives in their study who had awareness and knowledge of some of the potential areas of risk and bias. And Gartner has a similar study as well. But one of the things that Deloitte calls out is something that they refer to as the preparedness gap.

The idea that organizations have increased awareness of the challenges or potential risks, and even have set in mind a strategy around AI and sometimes Responsible AI, but that hasn't actually materialized in the organization. They talked about this as perhaps not having the skill or the awareness in terms of knowing where to start or the resources. So, this preparedness gap is one of the places that has an impact, or an effect, on customers doing some of the work around Responsible AI. 

D Dehghanpisheh 19:19 

Just to pause on that or maybe dive a little deeper, is it getting customers to actually do the preparedness components or is it just acknowledging that they have a gap in that preparedness and that they need to start filling in that gap? 

Diya Wynn 19:33 

No, I think it's more about the doing. And in some ways, I believe it's related to this next point that I was going to say. Some people and organizations need a compelling event. Something that is going to drive them to making some of these adjustments. And to be honest, this space right now, in terms of what to do, could be somewhat unclear, especially given the sheer number of standards and regulations being proposed. And they don't all agree, though they may have some similar elements.

So perhaps for some it's hard to figure out what to do. But I think that some organizations need something to motivate them to move. Not everyone is saying let me figure out how to handle Responsible AI because it's the right thing to do. I wish we had more of those, but that's not happening in large part. Some companies may have had some actual challenges, right? Uncovered something, or had some exposure that runs the risk of reputational damage and they've been moved to doing something. 

And then like I said, that latter group is those that need a little bit more motivation and I believe that regulation is going to be part of what takes them there. 

D Dehghanpisheh 20:53 

Yeah, I think an interesting thing that we always ask our customers is: What's your Incident Response Management Plan for when your robot goes astray or your chatbot goes astray? And they don't really have one, and they're like, uhh… 

And often they don't understand the brand and reputational damage from an AI agent that can possibly go haywire and set things in motion, rather than, say, a data breach. So a lot of people will take a data breach type of incident response management framework and apply it to an AI system, where we're like, that's probably necessary but insufficient. So it's interesting to hear that that compulsion to act may come from, “Oh, the house burned down. Maybe we should put in some fire alarms!”

So it's kind of thinking through that…

Diya Wynn 21:38 

I think it's a great question that you're asking folks, right? Because even with all that we're doing, there is a potential for there still to be some challenges. One of the things I think is important is that folks don't just look at the technology alone. We can deploy a product to be able to detect and mitigate areas of bias, and you still may have unintended impact and risk with that system, right? 

All of the pieces that we talk about in the framework, in terms of value alignment, and inclusion, and training and education, accountability, privacy– All of those together actually create an environment or a platform to minimize the risk and reduce some of those areas. And we say minimize and reduce, right? A colleague in this space actually says, “Do less harm.” Because there is a reality that, when these systems are functioning in production, sometimes there is drift. Right? How we expect them to behave, or the results we expect, may not be what we see.

And so, what happens then? You've got to have something in place to be able to manage that particular occurrence. I come from the old world of disaster recovery and business continuity planning. That is a necessity. You've got to think about what happens if, and how do we handle those circumstances. And the best way that some companies do this is to connect into their other security, risk, and compliance areas to be able to handle that. But I think they absolutely need to think about that, and think about that before some things happen. It's insurance.

D Dehghanpisheh 23:13 

Yeah, it's insurance to make sure that the AI agents don't take human error or human biases into the speed and scale of machine-oriented biases. Charlie, over to you. 

Charlie McCarthy 23:24 

Yeah, thanks. Y'all are touching on a really important topic I was hoping we could dive more into, which is AI bias. Will you talk to us a little bit about how machine learning models and systems become biased in the first place?

Diya Wynn 23:34 

Yeah, so I think it's worth defining bias for a moment. Bias in general refers to a leaning, or a favoring of one thing over and against something else, right? When we apply that to machine learning or data, it's that leaning or skewing of the data in a way that isn't complete or accurate. And again, it's going to favor one thing, or one demographic, or one group over another. 

At a high level, I want to mention that when I think about this notion of bias, sometimes we can start off with bias being in the data, in the algorithms, and in the people that are responsible for those systems, or for the data that's training the model. And I like to start off sometimes with talking about this in the context of the biases, because we all have biases. So, this notion of our products having some degree of bias probably makes sense, or is understandable, because we are building those systems and potentially are infusing those biases into our systems.

We have what we often talk about, unconscious bias, right? So these unconscious errors in our thinking that come up from things that we might have experienced, information that we heard, mental shortcuts that we make to simplify and speed up how we engage and interact with the world. But the challenge is that those, as I mentioned, become embedded in our data, and in our products, and ultimately in the outcomes, which can create opportunities for unfairness for different groups or for particular individuals.

And just like we have to do intentional work to interrupt our biases when we're interacting with human beings, we have to interrupt the potential for bias in our systems and foster more inclusive systems. And there's work to be done around that, right? How do we remove or interrupt our biases so that we don't lean into the things that, perhaps, come easy because of a stereotype or a prior experience, but actually give space to interact in ways that foster inclusion and fairness? 

D Dehghanpisheh 25:52 

So on that, what is the upstream step beyond data inspection and data selection– What's the upstream step that you would guide teams to do to help mitigate AI bias? 

Diya Wynn 26:03 

So, one of those foundational things is training and education. And that's both in us as individuals, having this education around how we interrupt the biases ourselves and become more conscious of when and where they occur and how we lean into those, but also to understand how that occurs in our own systems.

Again, we were just talking about that in the context of data, the algorithmic choice that we might make. We've got to train our teams appropriately so that they are aware of where those come in. And they certainly can take specific action, whether that be technical or looking for other data sources, et cetera. 

I think the other bit of that is inviting and having people with diverse perspectives involved, because they're going to expose and uncover things that we may not, right? So, a great example of this for me is not technology related at all, but I've been walking around in a walking boot for a number of months because of an injury to my foot. So, I essentially have been operating as someone who is disabled temporarily, not permanently. But the things that I've experienced and seen, and become much more aware of, are different than when I walk through the world as an able-bodied individual.

And so that sensitivity that comes from folks that have had different experiences, different ethnic backgrounds, different ages, different cultural contexts, different work experience, those things actually make a difference to be able to elevate and see where we might be driving towards certain biased outcomes or less inclusive outcomes. And so the teams, the people, resources matter in that equation as well. 

Charlie McCarthy 27:57 

Diya, walking back to your comment about that first upstream step you mentioned, which is training and education, and in thinking about the future of Responsible AI, how does something like public AI literacy make an impact as AI matures and kind of seeps into our everyday lives? 

How do you think about when we should begin that literacy journey? Is it in primary school? Secondary school? Later? 

Diya Wynn 28:25 

The “when” question is interesting, right? I think that we probably are going to need to engage in earlier education around AI because our children are growing up now, interacting with these systems, with this technology all the time, right? In earlier years we watched TV or watched certain shows, and we thought that those things were never going to come to fruition. And now we're seeing an age where we have robots in our homes, et cetera.

So we probably have to adapt to earlier education and awareness, so that they have some consciousness around that. But I absolutely believe that general awareness and broad understanding of AI is necessary because it's impacting us all, right? In terms of the way in which we live, engage, interact, the jobs we do, the way in which we do our jobs. 

Absolutely we need to have broad public understanding, and I think it also helps us as everyday consumers to hold companies and our legislators responsible, or accountable, for ensuring that there are the right structures for responsible technology, Responsible AI.

Charlie McCarthy 29:38 

Right. 

Speaking of responsibility, how do you think the widespread availability of large foundation models will affect the progress of future Responsible AI practices, and what needs to be happening now to ensure that AI and machine learning systems are developed and used responsibly?

Diya Wynn 29:57 

More of what we were talking about earlier. The excitement around large language models, generative AI, and what we are seeing play out in the public, hopefully is a clarion call to folks that Responsible AI is a necessity, and we need to do more of it. Right? We need to make sure that we are taking intentional steps in our design processes, in our thought process around whether or not this is technology that we should be using and employing in this particular case. 

It has to be considered at the beginning. It's much harder to regain trust that is lost, and to go back, or walk back, a product that you've deployed into production when it's having issues, than it is to spend some early investment in these areas, or these questions, to ensure you are employing, again, more fair, equitable, inclusive systems.

And I hope that folks are doing more, or would be doing more; that this is the impetus to do more. Now, I will say that there's a lot of excitement, so folks are running to explore, but I hope their exploration is tempered with the right process and steps to make sure that we can enjoy the fruits of this innovation in ways that aren't creating harm.

D Dehghanpisheh 31:12 

So for those listeners and readers who don't have the luxury of going to an AWS executive briefing in Seattle, we've talked about how you help customers get started. What is the one thing you want to leave listeners here with? The business leaders, the machine learning directors, the security professionals who are listening to you.

When it comes to getting started on their Responsible AI journey, what's the first thing you want them to go do? 

Diya Wynn 31:40 

The first thing? There's so many things I want them to do, but I think the first thing is probably… slow down to speed up. This idea, this notion of the intentionality that's required means that while they're eager to see the operational efficiencies, or the advancement in the area of innovation, it requires–I can't say this word enough–intentional thought processes and a holistic look with a people focus.

Perhaps it's slow down, just a little, to put these things in place so that they could speed up without the landmines and/or roadblocks that might come from a public or reputation-affecting event, or trust lost from the customers that they intend to serve.

D Dehghanpisheh 32:30

That's great. So it sounds to me like the first step to take on the journey of Responsible AI is to be intentional. 

And with that, Diya, thank you so much for joining us on the MLSecOps Podcast. Thanks to my co-host Charlie, and our new producer, Brendan. So, thanks for coming on the show and we will chat soon.

Diya Wynn 32:48

Thank you for having me. 

D Dehghanpisheh 32:49 

Thanks everybody. 

Charlie McCarthy 32:49

Thanks everybody. 

Diya Wynn 32:50

Bye, y’all!

Closing 32:59 

Thanks for listening to The MLSecOps Podcast brought to you by Protect AI. Be sure to subscribe to get the latest episodes and visit MLSecOps.com to join the conversation, ask questions, or suggest future topics. We're excited to bring you more in-depth MLSecOps discussions. Until next time, thanks for joining. 

Additional tools and resources to check out:

Protect AI Radar

Protect AI’s ML Security-Focused Open Source Tools

LLM Guard - The Security Toolkit for LLM Interactions

Huntr - The World's First AI/Machine Learning Bug Bounty Platform

Thanks for listening! Find more episodes and transcripts at https://mlsecops.com/podcast.
