
Ethics by Design: Responsible AI is Blueprint, not a Band-aid

March 12th, 2024

“This AI piece and all the ethics and governance and everything that goes around that… it really warrants a dedicated role and some specific communities focused on AI ethics and risks.”

Bob Pulver: Founder of Cognitive Path

Summary:

My guest for this episode is Bob Pulver, a seasoned expert at the intersection of artificial intelligence and talent acquisition, bringing over two decades of experience from his tenure at IBM to the forefront of AI ethics and responsible implementation.

This episode not only provides valuable insights into the mechanics of implementing responsible AI, but also frames a narrative that reveals the complexity and necessity of ethical AI practices in today’s technology-driven hiring landscape.

Bob underscores the importance of ethical AI development, emphasizing responsibility by design, speaking to the need for a proactive stance in integrating AI into people practices.  We both agree that compliance should not be a band-aid, or afterthought, but a foundational principle that begins with data acquisition and continues through to the implementation of AI-powered tools.

A big part of our conversation revolves around legislation related to the use of AI hiring tools, including New York City’s Local Law 144. Bob provides advice to organizations on navigating its anti-bias legislation and the broader implications for global regulatory landscapes.

In sum, Bob and I both agree that responsible AI is not a game of short sighted interventions, but rather a transformative shift that affects every aspect of talent acquisition. We provide our ideas on how to navigate through this period of intense change, focusing on the practical challenges companies face, from internal up-skilling to grappling with legislation that struggles to keep pace with technological advancements.

Takeaways:

  • Start with a Foundation of Ethics and Responsibility: Implementing responsible AI requires building your technology on a foundation of ethical considerations. This involves considering the impact on protected groups, ensuring accessibility, and integrating privacy and cybersecurity measures from the beginning.

  • Understand and Comply with Relevant Legislation: Staying informed about and compliant with anti-bias legislation, like New York City’s Local Law 144, is crucial. This law requires annual independent audits for automated employment decision tools, ensuring they don’t adversely impact protected classes.

  • Adopt a Holistic Approach to AI Implementation: Responsible AI transcends legal compliance to include a broader ethical framework. It encompasses fairness, privacy, cybersecurity, and the mitigation of various risks, including reputational, financial, and legal.

  • Engage in Continuous Education and Upskilling: All stakeholders, regardless of their role, need to be educated about the ethical implications of AI. This includes understanding how to acquire and test data to mitigate bias and ensure the responsible use of AI technologies.

  • Foster a Multi-Stakeholder, Cross-Disciplinary Dialogue: Creating solutions that are both innovative and responsible requires input from a diverse group of stakeholders. This includes technical experts, ethicists, legal teams, and end-users to ensure cognitive diversity and address the ethical, cultural, and practical aspects of AI.

  • Prepare for an AI-Driven Transformation: Recognizing that AI transformation affects every aspect of an organization is essential. This realization should drive a commitment to responsible AI practices throughout the organization, from product development to deployment.

Full Transcript


Announcer: Welcome to Science 4-Hire, with your host, Dr. Charles Handler. Science 4-Hire provides thirty minutes of enlightenment on best practices and news from the front lines of the employment testing universe.

Speaker 1: Hello, and welcome to the latest edition of Science 4-Hire. I am your host, Dr. Charles Handler, and I have a great guest today: Bob Pulver of CognitivePath.io. And Bob is an example of, I'm sure I've had somebody else like this on the show, but he's somebody that I pretty much straight up met on LinkedIn just from seeing his posts and and what he's doing, and and reached out. And, you know, there you start to see the the value of LinkedIn for being able to share information and and build some good connections, because Bob's got a a really nice perspective on some stuff.

You know, having been trained and/or certified in the New York City law, I think we'll talk about that a good bit today. But but the goal of the show is really to to help create an understanding based on the view of a real professional, and myself as well, who's who's focusing on these things. You know, in these uncertain times with all this stuff, what are some good takeaways or recommendations for folks who are interested in not only acquiring ethical AI products, but also providing them. So we'll have a good a good talk about that, and Bob, welcome.

And please introduce yourself.

Speaker 2: Absolutely. Thanks for having me, Charles. So my name is Bob Pulver, and I have an advisory practice called Cognitive Path, which is really focused on responsible AI within the talent, what I call the talent transformation or talent technology space. But certainly, responsible

AI, when we when we think about responsibility, certainly, as you just alluded to, Charles, I  mean, this is about being responsible by design. So it’s not just who’s who’s using it, but how are  you, you know, acquiring the data, making sure that you’re mitigating bias, you know, from the  beginning.

And so there's a lot of, you know, sort of upskilling, if you will, that needs to be done for everyone regardless of what your role is, but it really starts at the inception of your products and product ideas and, you know, where are we gonna get data from? Can we trust the source? And then testing it for, you know, any potential biases against, you know, protected groups or, you know, people with disabilities or or whatever it is. And so I think a lot of it is really an extension of what we've already seen, right, making sure that your your products, your websites, etcetera, are accessible, and then the aspects of privacy, cybersecurity, and things like that come into play as well. So that's the responsibility when I say responsible AI. It's really the umbrella term that includes ethics, fairness, privacy, cybersecurity, and obviously, you know, complying with with laws and and things like that. But it's really about mitigating risk, whether that's reputational risk, financial risk, you know, regulatory, you know, legal risk and and things like that. But but we all have a role to play, and whatever role you're in, whether you're using this technology, which is soon gonna be pretty much everybody, or or you're

building it or anywhere in between. These are principles that, you know, you really need to be  familiar with.

Speaker 1: Yeah. For sure. And, you know, one thing, I feel like you have well, I know you have  a a really interesting background. I mean, a lot of folks on this show are either from the IO psych  realm or from the talent acquisition realm, you know, kind of for their whole whole career. I try  to have a nice you know, mix of of folks that don’t have those backgrounds because we need to  learn from that too.

But, you know, and just kinda scanning that background seems like you spent a lot of time at  IBM, you know, doing some interesting stuff. So just kinda, you know, tell us a little bit about  your journey. How did you end up doing what you’re doing now, you know, because it’s not  always something that people are gonna just fall right into. Right? It’s kinda specialized.

Speaker 2: Yeah. For sure. So yeah, I was in IBM for for over twenty years, so I did a lot of different jobs. I mean, it's one of the advantages of working for a big company like that. They're always doing interesting and innovative things. And, obviously, just through through my networks, through my experiences, and, you know, at the intersection of some of these, I guess, some of the baseline sort of trends that that led me in this direction. I got a taste for, you know, wearing those different hats and understanding perspectives of everyone from frontline employees. Like, my first job at IBM was doing essentially customer service before I got into projects related to software compliance and being an early adopter and and team lead as we deployed, definitely dating myself if I didn't already, but deploying Lotus Notes, when IBM had just acquired Lotus in the mid nineties?

Speaker 1: Yes.

Speaker 2: And that was our first real exposure to anything that resembled sort of digital, you  know, communication with it.

Speaker 1: That was a big one.

Speaker 2: Yeah. For sure. And so I just got involved from there. I got involved in some SAP projects, you know, modernizing, you know, some of the supply chain, it's basically part of supply chain transformation, and then it was process transformation. So I've done everything. I mean, I've done software testing, I've done software compliance, I've done market intelligence, I've done consulting, business analysis, I've been the chief of staff to senior leaders. So I've been in the room. Even if I didn't have the, you know, the title and the paycheck, I was, you know, sort of a fly on the wall for a lot of, you know, the discussions that related to professional development, that related to, you know, leadership pipeline, and, you know, saw every every angle of this. And then when I got to NBC, I really focused in the CIO's office on, you know, how do we help them sort of modernize some of their legacy infrastructure. So moving to cloud, working with engineering teams, working with program and portfolio management teams, driving automation strategies. So a lot of these experiences at IBM. It was watching IBM Watson come out of the labs and seeing the impact that that had. So I've gone through that first wave of, you know, we might call it, like, predictive AI or analytical AI before this wave of of generative AI. So so I've seen a lot of the struggles that companies have when it comes to transformation, whether that's, you know, business and process

transformation or technology transformation, digital transformation. And so there's a lot of things that can go wrong. And really, it comes down to, how do you enable change? How do you change people's behavior? How do you get them to think differently about how they approach technology and and the impact that technology has on on human beings? And so, obviously, there's some interplay with some of the user experience, you know, work that that we're all doing. What is the future of the interface with with AI? Right?

You just have a verbal you know, command and then it goes and talks to all all its other sort of  sub agents and,

Speaker 1: yeah,

Speaker 2: there are copilots that go and execute specific, you know, commands so that you just have one interaction point for the user. But but ultimately, you know, the talent space really attracted me because I know what it's like. As I've just described, it was not a linear path, by any stretch.

I use the word lattice.

Speaker 1: Yeah. Yeah.

Speaker 2: Uh-huh. But that has given me a lot of exposure to a lot of different areas that I can  come into a problem and sort of anticipate and empathize with a lot of people wearing a lot of  different hats. And so, like I said before, I mean, we say, responsible AI. It’s really everyone’s  responsibility. So I think that helps me sort of align, you know, some of the some of the  messaging and some of the potential inhibitors or roadblocks you might encounter as you drive  this significant change.

And I would say, this AI driven transformation that a lot of organizations are going through now is, you know, sort of the mother of all of them, you know?

Speaker 1: Mhmm.

Speaker 2: Because, oh my god, no one is no one is spared, I would say.

Speaker 1: No. But let's go back. This might be a little aside, but it's it's something I really want to talk about. So you mentioned Lotus Notes. So I've heard there's still small corners of the world where that is still being used, but it was pretty much, I'm just gonna admit it, you know, I think I was in the workforce then, but I've never used Lotus Notes. I know it exists and kind of about it, but just tell us, there may be younger listeners here who have no idea what it is, but just give me a nutshell. What did that do? Just a communication pathway between people and organizations?

Speaker 2: Yeah. I mean, you could think of it as before there was Microsoft, you know, three  sixty five, before there was, you know, Facebook and Yammer and and Mhmm. The tools, even  LinkedIn as you mentioned. Yeah. Before there were these, you know, sort of cloud based, you  know, software as a service, you know, applications.

You did have network communication, but it was basically the first modern email, you know, platform that had, connected to it, you know, a calendar or whatever. So

Speaker 1: Right. Right. Right.

Speaker 2: You know, most organizations in, say, say the late nineties, you know, were just starting to use, like, either Microsoft Outlook Yeah. Or we went with the Lotus suite, which was Lotus Notes, and then they had an instant messaging and and web conferencing tool called Sametime, which connected to it. So it was more than email, because you could plug all these things into it. So imagine

Speaker 1: Right.

Speaker 2: You know, on your navigation pane wasn’t just you know, a bunch of menu choices.  It was, here’s my instant message embedded. Here’s my, you know, single click to, you know,  sort of, like, browser extensions.

Speaker 1: Yeah.

Speaker 2: Yeah. Yeah. Right. Right. That panel you may have in Google where you can see  your, you know, your Zoom.

Single click to Zoom. Single click to Google Maps and other utilities like that. But one one of the things that was great about that was that, inside IBM, it was like a big sandbox. Right?

Speaker 1: So Right.

Speaker 2: Developers could basically take the the developer toolkit and create sort of their own, you know, widgets and plugins that you could plug into the sidebar. So you join a meeting, you immediately see, you know, who's joined the meeting and what time zone they're in and, you know, things like that. Or you can see who's online and who's who's away. So your full list of, if I need to go reach an expert, here's all my

Speaker 1: my goal.

Speaker 2: Here's all my contacts and who's who's available right now. So you had a concept of presence, like a visual indicator of their presence and and awareness, and it just helped to facilitate, you know, conversations, and you you literally didn't have to leave your own sort of email, you know, client to

Speaker 1: Right.

Speaker 2: See who might be available for a quick chat or whatever. But honestly, it enabled me to to be a remote employee, you know, twenty, almost twenty five years ago. I was able to be a remote employee because I had all those tools and that that connectivity. Now, obviously, we didn't have, you know, broadband access. So most of it was audio only.

Speaker 1: Yeah. Remember those days? Yeah.

Speaker 2: Yeah. So very early days of of broadband. You know, I was living in New York City at the time. I remember having Road Runner cable, oh, yeah, being installed. Right?

But, yeah, I mean, I think it was a precursor to a lot of the platforms and the design principles  that that were developed you know, after that, but but certainly things got more complex.

Speaker 1: Yeah. So, you know, my reflection is that that was kind of ahead of its time. I was an Outlook guy just by virtue of what my company set me up with. That's why I didn't really have, you know, much exposure in the early days, you know, the late nineties, early two thousands, you know.

But the first time we come in touch with a technology like that, it's almost binary. Like, we just didn't have this before. We had no experience with it. All of a sudden, we have it. You know, my example is a little a little different, but I was starting, I guess, from the very first version, I had one of those T-Mobile Sidekicks.

Right? So it had a it had a QWERTY keyboard and you could do your email on it. So it's probably two thousand one, two thousand two. And I just remember sitting at my friend's house watching football going, I could work from here. Like, I never had a BlackBerry because those were more corporate.

Right? But but this was pretty much the same thing. It had a little App Store, but they controlled  that. So there was there was only a few lame apps on it, but, you know, it was the iPhone  basically, but before that happened. And, you know, I just watched the documentary about the  history of BlackBerry.

I don’t know if you’ve seen that, but it it was pretty cool.

Speaker 2: I heard about it. I heard it was very good. And yeah, I'll have to watch that.

Speaker 1: Yeah. So they went head to head, you know, to try and beat the iPhone and and and lost, and and the iPhone wasn't even on their radar, and then boom, there it is. So, you know, that's my, that was just, I couldn't even believe it. I felt like my whole life had changed because I didn't have to be chained to my desk. I could be doing emails and I could be super responsive, which is a big thing for me, as I like to respond right away. And now I think generative AI gave me that same feeling in some sense. Right? I I still marvel, and I use it a lot, as a lot of us do. And I'm finding more ways to do it, and it's getting more integrated, more multimodal, all those fun things, which we can talk about in a minute.

But but that's how I feel still. I have this giddy feeling. And then, you know, I break it down and learn about it. And I'm like, it's a prediction engine. That's all it is.

It's predicting the next word, the next token. But it just seems like, oh, how does it do that so fast? How does it know how to connect all these things and help all these people? And we're living in a in a time where we've got another one of these first touch, you know, experiences. And, you know, as they get more complicated, well, Lotus Notes probably didn't have a lot of ethical, you know, considerations or whatever.

But as it gets more intelligent and more just functional in terms of the things it can do, we begin to to start having this baggage that comes with it, this other reality, the other side of the coin. You know, let's let's kinda dig in then. So what do you do in these days? I mean, again, the goal here is, people listening, I'd like them to kind of leave with the idea of, you know, what what is this responsible AI all about? And how can I help navigate through my company with it? You know? And we won't have all the answers, because the answers aren't available fully yet, you know. But but tell us a little bit about kind of stuff you're doing now.

I know you just were you a sponsor or a host of a of a pretty cool little symposium. I think that  happened, I think, last week or two weeks ago in New York. And so that’s one thing. But but  spend a little time getting us oriented to your world.
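An aside for readers: the "prediction engine" Charles describes a few lines up can be sketched in miniature. The probability table and tokens below are invented for illustration; a real language model scores an entire vocabulary with a neural network rather than looking up a hand-written table, but the loop, pick the most likely next token and repeat, is the same idea.

```python
# Toy illustration of next-token prediction: repeatedly pick the most
# probable continuation given the last token. The probabilities below
# are made up for the example; "end" is a stand-in stop token.
NEXT_TOKEN_PROBS = {
    "the": {"cat": 0.5, "dog": 0.3, "end": 0.2},
    "cat": {"sat": 0.6, "ran": 0.3, "end": 0.1},
    "sat": {"down": 0.7, "end": 0.3},
    "down": {"end": 1.0},
}

def generate(prompt_token: str, max_tokens: int = 10) -> list[str]:
    """Greedily extend a sequence one token at a time."""
    tokens = [prompt_token]
    for _ in range(max_tokens):
        probs = NEXT_TOKEN_PROBS.get(tokens[-1], {})
        if not probs:
            break
        # Greedy decoding: take the single most likely next token.
        next_token = max(probs, key=probs.get)
        if next_token == "end":
            break
        tokens.append(next_token)
    return tokens

print(generate("the"))  # ['the', 'cat', 'sat', 'down']
```

Real models sample from these distributions rather than always taking the maximum, which is why the same prompt can produce different completions.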

Speaker 2: Yeah. Yeah. Happy to. So where where I’ve been spending a lot of my time, you  know, as you sort of alluded to Charles in terms of what the audience wants to take away. I mean,  I do spend a considerable amount of time sort of educating and evangelizing, you know,  responsible AI.

I talk to vendors all the time to understand, you know, are they thinking about this, because it's in their best interest to think about responsible AI by by design as they're developing solutions, if they're coming out with new products, new features that are, you know, using AI or, you know, algorithms to, you know, come up with, you know, scoring and, you know, stack ranking and and things like that, and advising them about the legislation that that's that's coming. So you mentioned New York City. New York City has an anti-bias, you know, legislation called Local Law one forty four. So if you are doing any hiring of potential employees who live in New York City, and that is the five boroughs of New York City.

Speaker 1: Yeah. That’s New York City. All of them.

Speaker 2: And so if you if you have an office based there, headquarters is there, or you're hiring people who are New York City residents, and you are using a a hiring tool that is considered an automated employment decision tool, or AEDT, then you are subject to this anti-bias legislation, which basically says you have to have an independent audit to make sure that there's no adverse impact against, basically, it checks for gender and ethnicity. Yeah. Yeah. I'd say that there's no adverse impact against either of those protected classes or the intersection of those two classes. And so you've got to basically hire someone like me or you, Charles, or, you know, an AI software vendor who who does those kinds of engagements or, you know, another audit firm. So I I partner with with a couple of those. But you basically gotta go in, and this took effect on July fifth of twenty twenty three. It's supposed to be an annual audit. You're supposed to, by, you know, July fourth essentially, you're supposed to have this audit done, and then you would have the results of that audit made available publicly, and then you also have to, on your career page somewhere, give candidates essentially an alternative route to get their application reviewed if they object to the use of AI in the hiring process.

Oh. Yes. The requirements are to do the, the requirements are really to do the audit. This is an important clarifying point, Charles. The the the requirement is to do the audit. Whether that audit shows adverse impact or not is actually not the point. Right? The legislation really just says, do the audit, which is a little weird. Right? I mean,
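For readers who want to see what the audit math amounts to, here is a minimal sketch of the impact-ratio calculation Bob describes: compare each category's selection rate to the rate of the most-selected category, including intersectional categories. The numbers are invented for illustration; an actual Local Law 144 audit follows the NYC DCWP rules and uses the employer's real historical data.

```python
# Simplified sketch of the core bias-audit computation: selection rate
# per category, then impact ratio relative to the highest rate.
# (selected, total) applicants per category -- hypothetical data.
outcomes = {
    "male": (60, 100),
    "female": (45, 100),
    "male_hispanic": (25, 50),
    "female_hispanic": (15, 50),
}

selection_rates = {g: sel / total for g, (sel, total) in outcomes.items()}
best_rate = max(selection_rates.values())

# Impact ratio: each category's selection rate divided by the highest rate.
impact_ratios = {g: rate / best_rate for g, rate in selection_rates.items()}

for group, ratio in impact_ratios.items():
    print(f"{group}: selection rate {selection_rates[group]:.2f}, "
          f"impact ratio {ratio:.2f}")
```

A low ratio for a category (by analogy with the EEOC's four-fifths rule of thumb, anything well under 0.8) is what flags possible adverse impact, though, as Bob notes, the law requires publishing the numbers rather than hitting a threshold.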

Speaker 1: it is only our

Speaker 2: I don’t need you to study for this test because you don’t have to pass or fail. You just  have to take the test, and you only get in trouble if you don’t take the test.

Speaker 1: And you don't have to do any remedial work, really. If if you fail, you don't have to come back and study again and take it again, really. The whole thing's fascinating. I mean, I've talked to a lot of people about it. I've studied it. I've written about it. It's interesting. One of the things, though, there's there's not a lot of precedent. But the most

interesting thing about it, and there's a lot of places folks can learn about it in-depth. But if we start thinking about the application of it for, you know, end users, people who are hiring, it sure doesn't look to me, from the sources I've seen. I think you might have shared the GitHub link, the one that has kind of a a posting of some of the audits that have been done. And in my experience, I've talked to a lot of companies about it as an adviser.

Nobody seems to actually be doing it. Like, I would wager that there is a truckload of companies that have hiring happening in New York who are not looking at this or or doing these yearly audits yet and kinda waiting to see what's gonna happen, which I feel like is really interesting. It's not really striking fear in the hearts of people very much, I think. And yeah, this is an isolated thing. This is one thing.

But this kind of stuff's gonna start happening globally. And I don't think people are gonna be able to ignore it as much, but it's an interesting phenomenon. And why do you think, you know, what's your experience been? Like, why do you think companies who definitely need to be doing this are are just saying, hey, you know what? Not right now.

You know?

Speaker 2: Yeah. Yeah. It's it's an interesting situation. I think there's there are multiple reasons, I think. One is, even the HR and legal teams at these organizations, let's assume for the moment that they're well aware of this situation.

They are not exactly sure how enforceable it is, because, and I mentioned that on the panel I was on last week at the AI for Talent event, a virtual event, and I was fortunate to be on a panel with Keith Sonderling, the EEOC Commissioner, among other very bright analysts. So he and I have talked a couple times about this, but it just seems like the language in and even the final legislation is not a hundred percent clear, because categorically, a piece of technology isn't necessarily by itself categorized as an AEDT. Yeah. You know, yes or no, it it also depends on how you use it and the way, yep, that its output factors in your decision.

Absolutely. And so you can imagine, between the ambiguity of the language and the subjectivity of the, you know, the target, you know, the client, that could always say, well, you know, we combined, you know, this, you know, score or the the results of this, you know, algorithm with, you know, five other things, plus our own team's, you know, human judgment, you know, so who's New York City to say that this had, you know, material weight in, yeah, the decision. So the language, I think, says, like, if it significantly influences or makes a decision?

Speaker 1: Yeah. Which we've we've kinda taken as fifty one percent of the decision or something like that, but that is the hardest part of all of it. Right? Because it's kind of binary, do you qualify or not. And I advise folks on this all the time.

I mean, it doesn't have to be an AI based tool. There's a thing in there about how the algorithm works. And I think that the intention was, if it's kind of an automated, self updating, you know, algorithm that you don't have full control of. But the reality is that the actual criteria are are simply, like, is a human involved? Can a human override it? You know? And is it a substantial part of the decision? So you could have a plain old, you know, multiple choice test that's as old as the stone age, but you have a cut score. And if you don't pass that score, you know, at high volume, nobody ever sees it, you just get dispositioned out. Well, in my mind, that's an AEDT.

And in my mind, you have to, you know, comply. I don't know that everybody sees it that way. You

know, the AI part of it is is clearly what's what's driven this, because of the, you know, the unpredictability. But what advice then, and and there's really two factors as well. It seems like the people, in my experience, that are more concerned are the vendors who are selling these tools and who want some kind of an audit done. Keep in mind, everybody, and I'm sure you would you would echo this, but there's no mandate that the vendor at this point have any kind of audit. So there's kind of this preemptive, you know, let's let's provide this document to our prospects and clients so they could feel good about the fact that we're doing the right thing, but it doesn't satisfy any kind of New York City audit. So that's that's one flavor. But let's let's focus on, how would you advise, like, give us give us a little bit of of a snapshot of, you know, I'm a multinational corporation.

I'm global, but I have offices in New York. I contact you. I've been, I've had some of these conversations too, but I'm curious how they go for you. And they say, look, we're we're hiring in New York. We don't know how to comply with this law, and in general, what other laws are happening that we need to be aware of? We need some input, because we we don't know here. And you're probably talking to legal, you know, and and and other folks too. So so give us a quick sketch of, you know, how how do people fly in, and and what's your process a little bit with this?
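Before Bob's answer, the qualification criteria Charles lists above (automated scoring, human override, substantial weight in the decision) can be sketched as a rough rule of thumb. This is only an illustration of the discussion, not legal advice or the statute's actual test; the 0.5 weight threshold stands in for the "fifty one percent" heuristic mentioned earlier, and the function name is invented.

```python
# Rough decision-aid mirroring the AEDT criteria discussed above.
# These rules paraphrase the conversation, not the law itself.
def looks_like_aedt(uses_automated_score: bool,
                    human_can_override: bool,
                    score_weight: float) -> bool:
    """score_weight: rough share of the decision driven by the tool (0-1)."""
    if not uses_automated_score:
        return False
    # A hard cut score with no human review dispositions candidates
    # automatically -- the plain multiple-choice test example above.
    if not human_can_override:
        return True
    # Otherwise, treat a majority weight as "substantially assisting".
    return score_weight > 0.5

# The stone-age test with an automatic cut score and no human review:
print(looks_like_aedt(True, False, 0.0))  # True
```

The point of the sketch is Charles's argument: a tool need not use machine learning at all to fall under the law once it automatically filters people out.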

Speaker 2: I think it really depends on, you know, what sort of triggered the the conversation. So, you know, I guess there's different starting points depending on who who the client is and and where we're coming from. Because, as I said, responsible AI is is broader than complying with any particular, you know, piece of legislation within a a jurisdiction, whether it's a, you know, a major municipality like like New York City, or it's an entire, you know, well, not a full continent, but a region, you know, a collection of of countries like the EU, which is, you know, they're they're finalizing legislation around the EU AI Act, which builds upon a lot of, you know, human rights legislation, data privacy, GDPR, and things like that. So so really, what I want anyone to think about, whether it's a, you know, vendor building AI powered, you know, solutions, it doesn't even have to be, you know, full blown AI, to your point, Charles.

It could be any, you know, algorithm or autonomous process that's coming up with some kind of, you know, score or rank that allows you to, allows a human to essentially filter out, you know, a set of candidates. And so so that's one of the things that I have people thinking about. Like, you're gonna be playing whack a mole if you try to keep up with these things one at a time

and keep up with even the patchwork of legislation that’s coming across the United States.  Right? So New York City, it’s hiring and promotions as it relates to local law one hundred and  forty four, but Illinois or Colorado may have something related to, you know, facial recognition  with with video interviewing or even going beyond HR use cases.

I mean, you've got, you know, facial recognition, or if you think about more regulated industries in financial services, discrimination and potential bias in loans from from a bank, or a life insurance policy, or charging people higher rates based on, you know, historical data that has nothing to do with your own driving behavior or your own, you know, health and wellness or whatever it is. Right? So what I try to do is narrow my scope to things that not only affect people, but people and their livelihoods and their employment, which is why I spend most of my time with HR vendors and talent acquisition teams at these clients. The vendors, to your point before about the vendors, they're not on the hook for New York City, and that's partly because, as we talked about, part of this is, who is impacted? Where's the adverse impact?

And who is impacted? And so how it's used affects that adverse impact, but the vendors should definitely not consider themselves off the hook, because people like you are going to hold them accountable for for the algorithms that they've that they've built. Right? And so when it comes to complying with the EU AI Act, or maybe whatever the UK comes up with. And, you know, there's legislation, you know, propagating around the globe.

There's stuff in in South America, Latin America, and Southeast Asia. So the guidance that I typically give is, you need to be paying attention to the legislation that's coming and basically move to where you're mitigating the most risk and where you're building systems that comply with the strictest guidelines, almost like finding, you know, sort of the common denominator over all these these pieces of independent legislation. Because if you can comply with the EU AI Act, for example, you're probably going to comply with most other legislation. But the tricky part is, how do you keep track? How do you try to anticipate all this? Because obviously, you know, some of these legislative documents and proposed bills are incredibly,

Speaker 1: you

Speaker 2: know, detailed and long. I mean, you actually have to would be you either need a big  team or you actually have to use some of these AI tools to summarize, you know, the key points  or you follow, you know, some of the influential, you know, voices who keep up with it. Charles

Charles, I know you've put out a frequently asked questions document around New York City, and one of my AI governance platform partners, FairNow, has put out a bunch of resources, and they try to keep up. It's a "we've read all this and interrogated it so you don't have to" kind of approach, which is great. But yeah, I think eventually we'll see this just embedded as the normal course of business, just like we've seen with data privacy and cybersecurity. I mean, you wouldn't do business with people if you didn't think your data was gonna be secure, your password wasn't secure

Speaker 1: Yeah.

Speaker 2: Things like that. So, eventually, I think this is gonna be one of these must-haves. So the vendors should think about it, not just in terms of complying with any particular legislation or who's on the hook; they have a responsibility to build tools that use data that is trustworthy, and to build algorithms that are trustworthy and can be audited at any time. And just like, before you move to production, you check it against privacy and cyber and all these other checkboxes, I think this becomes another one of those things.
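Bob's earlier advice to comply with the strictest guidelines, so that the weaker rule sets follow automatically, can be sketched in a few lines of Python. This is a hypothetical illustration only: the jurisdiction names and numeric floors below are placeholders, not the actual legal tests in any statute.

```python
# Sketch of the "comply with the strictest rule set" idea from the conversation.
# Jurisdiction names and thresholds are illustrative placeholders only.

RULES = {
    "jurisdiction_a": 0.80,  # e.g. a four-fifths-style impact-ratio floor
    "jurisdiction_b": 0.85,
    "jurisdiction_c": 0.90,  # the strictest floor in this registry
}

def audit_against_all(min_impact_ratio):
    """Check one measured worst-group impact ratio against every rule set."""
    return {name: min_impact_ratio >= floor for name, floor in RULES.items()}

def strictest_floor():
    """The 'common denominator': meeting this floor implies meeting all others."""
    return max(RULES.values())

print(audit_against_all(0.87))
# {'jurisdiction_a': True, 'jurisdiction_b': True, 'jurisdiction_c': False}
print(all(audit_against_all(strictest_floor()).values()))  # True
```

A system that clears the highest floor in the registry clears every other floor by construction, which is the practical payoff of targeting the strictest legislation first.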

It may even be that we get a new sort of onboarding or annual compliance training module that goes into all kinds of things, basically an extension of bias and other types of discrimination training for employees, so that they understand we've got a weakest-link kind of situation here, where anybody that misuses an AI tool could expose the company. But you've gotta think about your reputation. You've gotta think about

doing the right thing, of course. And you've gotta think about the fact that customers are gonna start to get more and more informed, and they're gonna start asking you these questions in their RFI and RFP process. All else being equal, wouldn't you rather go with a vendor, even if it was a little bit more expensive or implementation was gonna take a little bit longer or whatever?

Wouldn't you rather know that your vendor is truly a trusted data source and a trusted partner in a responsible AI sort of context? Yeah. And without that, the vendors are gonna be in trouble. Yeah.

Speaker 1: It's a much broader thing. Right? I think we tend to focus on the glaring thing that we're all talking about, which, you know, would be New York City here in the US. But it's a much broader look than that. And I think if you're flying in at that narrow view, you're missing the bigger picture.

You know, I'm starting to think, and I'm sure this is already happening, that companies are gonna need an AI compliance officer, a chief AI compliance officer or something like that; I'm sure they'll come up with something. Someone that's really responsible for that within the organization. And we're probably seeing some of that, and, you know, every vertical touches it. It could be supply chain, or outside of HR, let's say marketing, sales, whatever it is. There are all these same governance things happening. Right? Hiring has been flagged by the EU, justifiably so, as a high-risk use case.

Right? It has a direct impact on people's lives, you know. One thing as an aside, though, I made a note here: the Illinois law has been around for a while, and you never hear Jack about it. Like, nobody says anything. It's a whole state.

There's a lot of people getting hired there; Chicago's probably not exactly as big as New York City, but people didn't talk about it. It's been around since, what, twenty nineteen or something. And, you know, I've never had one person inquire to me about it. And maybe it's more narrow because it covers automated video interviews; I don't know for sure if it's just facial recognition. But at any rate, what gives with that? Why don't people care about that? I like Chicago just as much as I like New York.

Speaker 2: I think it's getting pulled in when people talk about the sort of patchwork of legislation. We see this a lot when people are trying to either prove their point or demonstrate the scope of where we've got some challenges. So depending on the context, people are pulling in legislation that relates to any type of discriminatory practice, even if it's not the cleanest line to connect to the model. But you're right; some of that technology predates a lot of this current wave of concern and legislation. But, you know, I was not sure how I felt about OpenAI just releasing GPT to the world. There was a part of me that thought, just knowing how long it takes for people to embrace and adopt technology, having gone through that wave of automation technology research and putting a business case together at NBCUniversal, I'd already seen this potentially contentious situation where people are worried about job displacement. And it's a slippery slope, just like people push back on any legislation that might take away any rights whatsoever, because they think it's a bellwether of what's

to come. If they take this away, then next time they're gonna take that away, and then all of a sudden our rights are taken away. So I think people are starting to think more deeply about that. And I guess one of the benefits of releasing this to the world is that it woke everyone up to what's possible, and to the risks. And so in that sense, it was good.

But now, unfortunately, we still have to sort of play catch-up, and legislation may never keep up with advancements in technology, and probably never will. I mean, that's been the situation since the dawn of time. Right?

Speaker 1: It'd be interesting if ChatGPT starts writing its own legislation. Right? "Oh, hey. Write me some legislation." But I was just reading that it's the fastest-adopted technology, you know, globally.

I mean, maybe that's because it doesn't cost anything, and, you know, even at twenty bucks a month, I marvel when people waver on whether to spend twenty bucks a month on this thing, because you get a lot for your money there. And it's interesting policy-wise. I'm doing a series of interviews right now for a project with I-O psychologists overseeing assessment programs at global companies, and asking as part of that: do you have policies about ChatGPT or not? And there's a wide range. Some companies will still let people use it, some buy their own copy where they can control it a little bit more, and some completely ban it and people use it anyway. I think that's going to be like many other drugs and things, because I almost think about it like that; I'm addicted to it, right? People are gonna use it anyway. They might have their own computer they're using it on, which may help.

But anyway, that's a whole other million conversations, and it may factor into this, but I'm gonna bring it up anyway. What's your typical use case? I'm more interested almost on the consumer side. Is it really just kind of breaking down what a company is doing now, helping identify the risks they have, and maybe how to mitigate those risks? Is that commonly the type of thing that companies think about, and why they would need someone like yourself or me, you know?

Speaker 2: Yeah. I think that for adoption to take off, people, and when I say people I mean both individuals and organizations, are gonna have to get more comfortable and set some parameters. So the analog I think of is social media. Yeah. Yeah. Right. Right. And, you know, everyone said, whoa. Whoa. Like, no.

We're gonna put the brakes on this. I don't know anything about this. I don't know where that data is going. You should not be going on to Facebook, or at the time it was probably

Speaker 1: Yeah.

Speaker 2: MySpace.

Speaker 1: MySpace. Or Friendster. There you go. There's one from the past.

Speaker 2: And, like, I think Twitter was Mhmm. Was blocked in a lot of places. I forget if LinkedIn was lumped in with them or not. But the point is, organizations had to institute some guidelines. Right?

At IBM, we basically had a set of business conduct guidelines, you know, expecting you to be an adult, not discriminate, and just be a good IBMer. And then when social media came out, people came up with what they called the social computing guidelines. Yeah. So: this is how we need to trust you. Right? We're gonna trust that you're gonna behave like an adult.

We're gonna talk about that. You are not gonna disclose anything. You know, if you work in research or M&A or product development, you should know this already, but don't even think about sharing any proprietary information. But otherwise, you know, have at it. It helped us grow, and this was the start of brand ambassador programs and critical stuff.

Speaker 1: Critical stuff.

Speaker 2: Yeah. So really, really important stuff. And, you know, it turned out that this wasn't just a B2C or a C2C thing. This had a lot of value on the B2B side too.

And so, you know, social marketing; that's when I went into market intelligence, actually, and I was focused purely on social insights. So looking at all the conversations and helping brands understand that their customers are talking about them whether they like it or not. You don't have to be part of the conversation, but this is happening, and it's public, and you can either ignore it or you can join the conversation and try to guide people to learn more about all the great things that you're doing and building. So I see it as analogous to that, in the sense that you can't stop it; they're gonna use it.

They're already using it whether you've sanctioned it or not. You have to treat them like adults, but trust is a two-way street. So when some people say that they're fearful of AI, well, let's unpack that. Right?

Are you fearful of what AI can do in general? Are you fearful of how your company is going to use it to replace you, or monitor you, or not give you that promotion, or not give you the pay equity that you deserve? There's all kinds of things that you need to understand. And so organizations need to communicate very clearly what their expectations are, and to have some level of transparency, so that you know exactly how your company is using AI across people, process, and technology; so that you know what the impacts are; and so that you are aligned with the organization and your team in terms of the value that AI can bring, while still understanding this may mean some adjustments for you as more things get taken off your plate and automated. What does that mean? Does my job role change?

And if so, does that mean I have to upskill and learn some new things? And if so, are you going to support that? So how are you investing in your workforce, not just by investing in the technology itself, but how do you retain and keep your employee base engaged such that you still have this mutually beneficial relationship? Because otherwise, things start to break down. Right? So a lot of my conversations are around: what does this mean?

How are we using it? Let's make sure we're experimenting, but in a way that aligns to specific use cases, and understand the ramifications for any people that are impacted by that,

whether you're on the team or you're a customer of that solution or whatever it is. We really need to think deeply and broadly about the implications. So, the nonprofit called ForHumanity is the one that issued me my certification, and it wasn't just, you know, here's how to audit New York City. There's a foundational course and certification that talks about what it means to be an independent auditor, what AI, algorithmic, and autonomous systems are, how do I do this, and how do I coach organizations on how to set this up. You can try to incorporate this into some cross-functional groups that might be preexisting. But really, as you alluded to before, Charles, this AI piece and all the ethics and governance and everything that goes around it really warrants a dedicated role and some specific communities focused on AI ethics and AI risks. And this isn't just top-level people; it's people that have the knowledge and understanding across lines of business, across disciplines, so that you've got the cognitive diversity and what ForHumanity calls multi-stakeholder input for all these things. You need those diverse perspectives, especially if you're a global company; you need to really understand some of the ethical considerations, some of the fairness considerations, that people in different cultures and different countries and regions think about. Without bringing all of that to the table to make sure that you're using AI in a constructive and fair way, then, again, things can fall apart, and you've got a weak link in your chain. Yeah. There's a lot for people to be concerned about, but there's definitely some practical things that they can do to stay on top of it, and it's not just the technical

Speaker 1: Right.

Speaker 2: This is not just for the CIO to worry about, or your chief data officer; everyone's involved. Right? The head of communications, the head of marketing. I mean, everybody's gotta be aligned and aware of what this all means, and they've gotta be singing the same tune.

Speaker 1: Yeah.

Speaker 2: Yeah. So that's part of my objective, not just as an independent adviser; I'm also launching a partnership with HR.com to build a learning community around all of these topics that are critical to moving the ball forward and making sure we're all thinking about these types of things before we start building.

Speaker 1: Yeah. It's philosophical as much as it is empirical. Right? I mean, the philosophical part is what leads you to the empirical part, because if you're not philosophically interested in this stuff, you might not use it. And if you're not philosophically interested in the safety aspect of it, you might ignore that. And it's becoming a really critical thing. As far as the certification goes, that's pretty interesting. I first found out that existed from you. But I've had two, maybe three different instances where people not from the US have contacted me to talk about the New York law and compliance, and they said, well, we have to find a certified auditor. I'm like, that's absolutely not a requirement. I think I might have even checked with you about it.

There's no such thing in the requirements, and I didn't even know there was such a thing. So it's cool to hear that. Is it FairNow? No? Who is it that you're working with?

Speaker 2: ForHumanity is the one that gave me the certification. I mean, there are other certifications; I think they even have a partnership with one of these AI audit and advisory firms called BABL.

Speaker 1: Yeah. I've heard of BABL before. Yeah.

Speaker 2: I mean, they offer a program; I don't know if it's just some kind of training to do it. So you're right, there's no requirement for certification, because there's no certification standard. But I liked the approach ForHumanity was taking, just because they took a more comprehensive view than just what the law provides, and they do keep up and get their curriculum accredited by the different governing bodies so that you can get certified.

So I have my NYC Local Law 144 certification. I can also get one in GDPR, and I'll be able to get one in the EU AI Act. So even if I'm not acting as an auditor, which I can do, when I provide pre-audit advisory services to do an assessment of how a client would fare if they did go through a formal audit,

I know that I'm checking the right criteria. And some of those platforms, like FairNow, would allow you to basically take the same data set, algorithm, and outcome and check it against a variety of different pieces of legislation. They keep up with that; they have to. And as new legislation passes, they can just add that to their platform.

So just to make sure you're passing the audit, you could do it on an annual basis. Or, again, these vendors will also let you use it as an assessment solution to do continuous monitoring, because one of the things about this is your algorithms might change as they learn and ingest more data, and

Speaker 1: They can learn, whether that goes well or not. Right?

Speaker 2: Sometimes the vendor, as they push new releases, may release an update to the algorithm, and then you may have concept drift, and the new model may create adverse impact where there wasn't any before. So that's why you gotta

continue; the vendors should be continually checking against these kinds of criteria. And that's why this isn't a one-and-done kind of thing for clients, necessarily, because some of these solutions you can, well, not fine-tune, but people are familiar with, like, an ATS where you can go in. Or maybe a better example is an AI-powered sourcing solution, which may not be an AEDT, but it's still telling you who to go after, who to do outreach to.
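The "continually checking against these kinds of criteria" idea has a simple computational core. A bias audit in the NYC Local Law 144 style computes each group's selection rate, divides it by the highest group's rate to get an impact ratio, and flags ratios below a floor such as the classic four-fifths guideline. Here is a minimal sketch with made-up numbers; it is not any vendor's actual methodology.

```python
# Minimal adverse-impact check. Group labels and counts are hypothetical.

def selection_rates(outcomes):
    """outcomes: dict of group -> (selected, total). Returns rate per group."""
    return {g: sel / tot for g, (sel, tot) in outcomes.items()}

def impact_ratios(outcomes):
    """Impact ratio = group selection rate / highest group selection rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

def flag_adverse_impact(outcomes, floor=0.8):
    """Flag groups falling below the classic four-fifths (80%) guideline."""
    return [g for g, ratio in impact_ratios(outcomes).items() if ratio < floor]

outcomes = {
    "group_a": (48, 100),  # 48% selected
    "group_b": (30, 100),  # 30% selected; ratio is roughly 0.30 / 0.48 = 0.625
}
print(flag_adverse_impact(outcomes))  # ['group_b']
```

Re-running a check like this on a schedule, instead of once before launch, is exactly what catches the drift described here: a vendor pushes a new model, the ratios shift, and adverse impact appears where there wasn't any before.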

Speaker 1: Bigger problem than the other side.

Speaker 2: Yeah. Like, each recruiter might be able to fine-tune it. Say, show me more like this, and not like this.

Speaker 1: Recommendation engines, they call that. Right?

Speaker 2: It might have looked like a simple thing, but you just made an adjustment to the algorithm. So

Speaker 1: Totally. You just biased it based on your input, when it may not have been biased to begin with, so that's pretty significant. So it seems to me I would give the advice to either side, vendor side or consumer side, that maybe they should start having internal people certified on these things as well, so they can really be up on what the requirements are, as an internal advocate, advisor, and team member. That seems to be something one would want to think about. One thing you just opened up, which in the few minutes we have left we can't get into too much, but I think it's important for people to think about.

I think probably the biggest Satan in all this stuff is what you talked about: the who-sees-a-job-ad stuff, you know, the algorithms that do that, the algorithms that allow a recruiter or someone to say thumbs up, thumbs down, and then train the algorithm to do that, which could perpetuate bias. Neither one of these things falls within the actual funnel of the proper hiring process. Yet from a pure statistical probability standpoint, if you're not putting plaid marbles in the top, you're not getting plaid marbles out the bottom at the ratio that you want, and that is a big, big, big problem, and a lot harder to check, because it's just out there, and people may not even know where to look for that kind of stuff.

So that will be an interesting frontier. Within hiring, we've got it locked down, at least to: here's a process, here's a workflow, here's a beginning, here's an end, and it's got some brackets around it so you can box it. But as you go more free-range, you lose that, and it probably gets even harder. So at least we're focusing now on a small part of it that's important. And who's holding the titans of the industry who show job ads responsible? You know, who knows? They should be holding themselves responsible, but who knows what's even going on inside their algorithms?
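The plaid-marbles point is conservation of proportions: a downstream step that selects every group at the same rate cannot change the mix it was handed. A tiny arithmetic sketch, with hypothetical group names and counts:

```python
# A "fair" funnel applies the same selection rate to every group,
# so the output composition exactly mirrors the input composition.

def fair_funnel(counts, select_rate=0.2):
    """Expected number selected per group under a uniform selection rate."""
    return {group: n * select_rate for group, n in counts.items()}

def share(counts, group):
    """Fraction of the pool belonging to one group."""
    return counts[group] / sum(counts.values())

pool = {"solid": 900, "plaid": 100}  # sourcing already skewed the pool
hired = fair_funnel(pool)

print(share(pool, "plaid"))   # 0.1 going in
print(share(hired, "plaid"))  # 0.1 coming out: the skew passes straight through
```

A perfectly fair bottom of the funnel preserves whatever skew sourcing introduced at the top, which is why audit scope arguably has to reach the tools that run before anyone applies.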

Speaker 2: No, I think, I mean, I guess that's where I wanna go. It's important for people to understand the New York City legislation, and that's important because you are showing where there might be some adverse impact. But you're right.

I mean, one side of the argument is that all the tools used in the hiring process before a candidate actually applies are not gonna be considered in scope for a law like New York City's, the way it's defined. And some might say, well, who's being harmed by some of these tools earlier in the process? That's where I try to go further than any particular legislation and be responsible, because, like you said, somebody's using an AI-powered sourcing tool, or even programmatic advertising, and there's algorithms behind all of these things. There's a global talent pool, and it's opportunity that's not evenly distributed; great talent exists everywhere. So if you're not casting as wide a net as you should or could, because of that algorithm, then you've already, in some ways, tainted or skewed the talent pool, and so you're only sourcing people that fit a certain

Speaker 1: Yeah.

Speaker 2: mold, or you've inadvertently circumvented, or not notified, or not done outreach to, people who might be in one of those classes. It's sort of this hidden, like, hidden

Speaker 1: landmine.

Speaker 2: It doesn't have an immediate, you know, affected party or class. But

Speaker 1: Yeah.

Speaker 2: But it still comes down to responsible AI practices. I would want the people building those solutions to think about these things.

Speaker 1: A hundred percent. And everything that comes after is the fruit of the poisonous tree, in some sense, if you go to the legal doctrine. Well, so my takeaway, and we gotta wrap it up here, is you gotta think broadly. Even if you're worried about one thing, this has gotta be a holistic thing where you look across the board and you think forward, not just tactically in one situation. The bigger strategy can help you deal with the local situations better, but this is not just a little fly in the ointment.

You know, this is really our new reality, and we're just in the infancy of it. Right? So we're gonna see, hopefully, some things level out, and some good. I think we're gonna see some bad too.

I think it's inherent in the nature of what we're dealing with here that there's gonna be good and bad: bad operators, people who accidentally use things wrongly, and people who try their hardest to use it right and still don't get it right. So it's an ongoing thing.

People like yourself are super important to a lot of folks right now, including job seekers and people who are on the other end of this stuff. So I really appreciate it. Let everybody know what's the best way to follow you and get in touch with you. I know I always say I should just say "find this guy on LinkedIn" every show, but there's probably other stuff you're doing. So close this out here with a quick promo.

Speaker 2: Yeah. I appreciate it, Charles. This has been a great conversation. My email's bob at cognitivepath dot io. You can go to cognitivepath.io and check out my website with the different service offerings that I can provide. And I would say LinkedIn is easy; I think I'm the only Bob Pulver on LinkedIn. And the other thing is just to quickly promote the community that I'm launching with HR.com.

So if you go to AIX, as in AI Exchange, on HR.com, you'll see we've got a landing page there. We'll have a press release shortly, but we're gonna be kicking this off with a pavilion at HR West, which is a conference the first week of March in Oakland, California. We're gonna have a half-day summit there with some very informative sessions and a keynote speaker, and there'll be a lot of AI experts and folks there. But it's gonna be an online community that kicks off from there. It'll be leveraging some of the HR.com resources and some of the community that preexists there. But it's really a place where people across HR and beyond can learn more about responsible AI, AI-driven transformation, and what this all means for hiring for skills and potential.

Speaker 1: Awesome. Congrats on that. There’s a lot. Yeah. Yeah.

Well, they've got a lot of visibility and a lot of content over many, many years. I know Debbie pretty well, so they're good folks. So thank you very much. Appreciate your time today and all the good wisdom that you shared with us.

Speaker 2: Absolutely. Glad to be here, Charles.

Speaker 1: As we wind down today's episode, dear listeners, I want to remind you to check out our website, rockethire.com, and learn more about our latest line of business, which is auditing and advising on AI-based hiring tools and talent assessment tools. Take a look at the site. There's a really awesome FAQ document around New York City Local Law 144 that should answer all your questions about that complex and untested piece of legislation. And guess what?

There’s gonna be more to come. So check us out. We’re here to help.

The post Ethics by Design: Responsible AI is Blueprint, not a Band-aid appeared first on Rocket-Hire.
