
Crafting the Future of Ethical AI in the Workplace

March 26th, 2024

“Legislation… has not kept up with the pace of technological advancements, posing significant challenges for ensuring fairness in AI-driven hiring processes.”

-Matt Scherer

Summary:

In this episode of “Science 4-Hire,” I welcome Matt Scherer, Senior Policy Counsel for Workers’ Rights at the Center for Democracy & Technology (CDT), a non-profit based in Washington, D.C. CDT champions the advancement of civil rights in the digital age, striving to ensure technology respects and enhances individuals’ rights and democratic values.

Matt and I have an enthusiastic conversation about the importance of understanding and navigating the evolving landscape of AI and automation in hiring processes. Matt brings his expertise to the table, dissecting the intersection of emerging technologies with workplace rights, the nuances of AI legislation, and the vital role of public policy in safeguarding fairness and privacy. We dive right into a great dialogue about the challenges posed by electronic surveillance, automated management systems, and the quest to elevate worker voices through technology.

We spend a good deal of time on the critical evaluation of AI hiring tools, highlighting New York City’s Local Law 144 and its implications for a broader regulatory framework. Matt makes some really interesting and important points about using a design-first mentality in developing AI tools as a critical part of ensuring they enhance worker empowerment rather than diminish it.

Insightful Moments:

  • Matt discusses the Center for Democracy & Technology’s (CDT) mission to advance civil rights in the digital age, focusing on the workplace implications of emerging technologies such as AI and automated management systems. CDT’s commitment to this cause is grounded in ensuring technology serves to enhance, not undermine, workers’ rights and privacy.
  • The conversation highlights New York City’s Local Law 144, examining its strengths and weaknesses in regulating AI hiring tools. Despite being a pioneering piece of legislation, Matt suggests that the law is riddled with loopholes that many companies exploit to avoid compliance, demonstrating the challenges in crafting effective regulatory frameworks.
  • Matt emphasizes the importance of design-first thinking in developing AI technologies for hiring. He argues that most challenges associated with AI and automated hiring tools stem from design issues, advocating for a holistic approach that integrates ethical considerations from the outset.
  • The dialogue touches on the role of transparency in AI-driven hiring processes.  Current practices often leave candidates in the dark about when and how they are being evaluated by AI, stressing the need for legislation that mandates clear disclosure to candidates.
  • An exploration of upcoming legislation reveals a split between stronger regulatory regimes advocated by civil rights groups and more loophole-ridden proposals pushed by tech companies. This tension underscores the ongoing debate over how to effectively govern AI in hiring while protecting workers’ rights.
  • Matt shares insights into the Civil Rights Standards for 21st Century Employment Selection Procedures, a document aimed at modernizing and expanding upon the 50-year-old Uniform Guidelines on Employee Selection Procedures. This initiative reflects a broader effort to update legal and ethical standards for employment assessments in light of advancements in AI and technology.

————————————–

Full Transcript


Speaker 0: Welcome to Science 4-Hire, with your host, Dr. Charles Handler. Science 4-Hire provides thirty minutes of enlightenment on best practices and news from the front lines of the employment testing universe.

Speaker 1: Hello, hello, and welcome to the latest edition of Science 4-Hire. I am your host, Dr. Charles Handler. And with me today, I have Mr. Matt Scherer. Matt is gonna be an amazing resource for listeners and myself, because I’m here to learn more than anything. That’s why I’m doing this.

Right? He’s gonna be a great resource about legislation related to AI hiring tools, hiring tools in general, etcetera. He’s got a great background and experience, and he’s somebody I’ve been really happy to have met over the last year. He’s provided me with a lot of good substance and a lot of good food for thought. So as I always do, I let my guest introduce themselves, because who knows them better than them.

And, Matt, welcome to the show today. Please introduce yourself and let us know where you’re  coming from. 

Speaker 2: Yeah. Great to be here. Thanks for having me on. So my name is Matt Scherer. I am Senior Policy Counsel for Workers’ Rights at the Center for Democracy & Technology. And CDT is a nonprofit based in Washington, D.C. that focuses on advancing civil rights in the digital age. I’m actually based in Portland, Oregon. I sometimes refer to myself as the Pacific time zone representative of CDT. Yeah.

So my work focuses on a few different things. Pretty much, it covers everything where there’s an intersection between emerging technologies and the workplace. But my particular areas of focus are the use of artificial intelligence and automation in hiring, the use of electronic surveillance and automated management systems to monitor and collect data on workers, and then looking for ways to elevate workers’ voices and help workers empower themselves in the use of data and technology.

Speaker 1: Super cool. So have you ever had, like, a five AM conference call or anything? Because you’re bicoastal?

Speaker 2: Yeah. Not often, but I’d say probably once a week I have a call that starts at seven or seven thirty. And then about once every other month, I’ll have something that starts at six AM.

Speaker 1: Yeah. 

Speaker 2: That just goes with the territory. Fortunately, the nice thing about being on the West Coast when everybody else is on the East Coast is that if I do have one of those early days, for the most part everybody’s offline by two PM, so I can take a nap in the afternoon if I had to get up too early and I’m operating on three hours of sleep.

Speaker 1: Yeah. It’s always trade-offs, you know. I’ve found, because I’ve worked globally, I lived on the West Coast and had clients on the East Coast too. But the triangle, whatever it is, when you’re getting US, Europe, and Asia on a call, somebody loses in that one. Somebody’s got a really ridiculous time that you have

Speaker 2: to know. 

Speaker 1: But in general, not so bad, you know. So you get off early, basically. Right? Everybody signs off, which is nice. So cool.

Well, we’re only two hours apart. The other thing is, two versus three hours, you wouldn’t think it makes that big a difference, but for me it really does. I feel like Pacific to Central is not that big a deal, but

Speaker 2: No. But much easier to deal with Chicago and New Orleans than

Speaker 1: Yeah. Yeah. 

Speaker 2: Yeah. New York and DC. So 

Speaker 1: Yeah. Yeah. Well, you know, the Center for Democracy & Technology, what a cool thing. You know, I was going back to find a paper that you guys put out a couple of years ago. I think it’s a bill of rights of some sort, not for candidates exactly, but you can tell us.

But 

Speaker 2: The civil rights standards, is that what you 

Speaker 1: Yeah. Yeah. Yeah. I think that’s it. Yeah.  

I’m pretty sure that’s it. Right? So you can tell us a bit about that. But what I thought was so cool is just the scope of the things that CDT works on. I mean, it’s pretty large coverage across all areas of our society, and we need stuff like this. We really do. So

Speaker 2: Yeah. Happy to. I mean, CDT is a fantastic organization, and, you know, I’m on the privacy and data team. We’ve got people just on my team who work on civil rights in the context of housing, and also kind of general consumer privacy stuff and health privacy. We’ve also got teams that work on freedom of expression and security and surveillance; you name it, if it touches on digital technologies and how they impact society, we’ve got somebody working on it. We have an elections team as well. I was actually the first person that they brought in with a background in labor and employment issues. So I wouldn’t quite call myself a one-man band on those issues at CDT, but I’m usually flying solo.

Speaker 1: Yeah. Let me see. What’s the craziest thing that you guys are tracking, where you’re just like, oh my gosh, this is nuts, we’d better regulate? It could be hiring, but is there something else that’s just so bizarre or surreal that you can’t believe it sometimes?

Speaker 2: If we’re going out across everything that CDT covers, probably election-related disinformation stuff, with the rise of deepfakes. It’s one of those things where everybody realizes there’s a need for regulation, but it’s very hard to think of what regulation looks like that captures all the things that everybody agrees shouldn’t happen, which is, like, deceptive uses of political candidates’ faces and voices to make them appear to say things that they didn’t in order to hurt their campaign, versus, you know, creating satire of candidates.

Speaker 1: And, yeah, yeah, 

Speaker 2: you know, where you draw the line between those things is really hard. And that’s a nice segue into what we’re gonna talk about. In the employment setting, it’s an instance where the technology has moved much faster than not just government’s ability to regulate it, but actually faster than maybe society’s understanding of what is even happening. And that creates a lot of challenges for, okay, what’s the best way to deal with this, when a lot of the time the only people who have information on the capabilities and the potential threats posed by new technologies are the people putting out the technologies that are causing those threats and creating those potential harms.

Speaker 1: Yeah. I mean, that applies to all this stuff. I was listening just this morning to a really good podcast, the Wall Street Journal’s technology podcast, and they had a guy on there, a professor who’s an expert in deepfakes. He was talking about how fast the technology has evolved, and regulation, how we’re gonna stop it. And he’s like, the only way to do it is to put it at the level where people are receiving the information, so that it can be tagged.

And he was basically saying, you know, it’s gonna be hard to get regulation to actually provide the defense that’s needed. There’s an organization, I can’t remember the name of it, that’s working on trying to rein in deepfakes by putting watermarks on them or whatever. But he’s like, it’s a coin flip whether it’s gonna go south and just get nuts, or go in the right direction.

I think hiring is a good example of all this stuff. So, yeah, let’s dig into the hiring part of it. Things are getting wacky these days, and generative AI is even more, you know, upping the ante here. But let’s go back a second. I keep calling it the bill of rights, and that’s not right.

Speaker 2: The civil rights standards. Yes. 

Speaker 1: The civil rights standards. Talk about that document a little, because I love seeing things that are coming from different places than what we I/O psychologists are necessarily putting out, but that recognize the principles and things that we know are important. It shows that we’re not a bunch of crackpots off here in the corner. You know what I’m saying? So talk to us a little bit about how CDT came about that, what your role is, and how you see it as important.

Speaker 2: So the civil rights standards, the full title is the Civil Rights Standards for 21st Century Employment Selection Procedures. As that title implies, it’s not just about artificial intelligence in hiring. It’s really meant to take a civil rights lens to all of the ways in which employee assessments work today. And there were a couple of impetuses, or impeti? I don’t know what the plural of impetus is offhand.

And one was just the fact that it’s been fifty years since the Uniform Guidelines on Employee Selection Procedures were first drafted. And those have never been updated to reflect changes that have occurred both in social science and in civil rights law since then. The uniform guidelines, for example, don’t say anything about disability discrimination. They predate the Americans with Disabilities Act by more than a decade, and all of the considerations of accessibility, accommodation, and fairness, and the ways that failing to provide accommodations to disabled workers can threaten validity, none of that’s contemplated by the uniform guidelines.

So, impetus one was, okay, what would it look like if you came up with something like the uniform guidelines, which is meant to implement civil rights laws, today? It’s a combination of a civil rights document and a scientific validation document.

Speaker 1: Right. 

Speaker 2: What would that look like if you came at it today? But the real immediate thing that led the group of organizations that worked on it to get together and do it was actually the New York City law. And specifically, what happened was, when New York City’s LL 144 came along, back then it was just a bill, civil society was kind of all over the place on how to respond to it.

Nobody liked it. But there was a disagreement: should our strategy be to improve this thing and get it to a point where it’s going to meaningfully advance civil rights in the use of these technologies? Or is the bill so broken that we should just oppose it? And there was one group that basically said, forget working on this bill or opposing it; we think that the use of AI in hiring should just be banned.

So civil society couldn’t really get on the same page. And I think that was one of the reasons that the law ended up the way it was when it was enacted, which was, unfortunately, a very, very weak piece of legislation that was riddled with loopholes. And

we’ve seen how few companies have complied with New York City’s law. And, you know, that happened down the road, but a lot of our predictions, unfortunately, about the loopholes and how companies would exploit them to avoid compliance with the law, those seem to have come true. I think that you and I talked briefly offline about a study that came out a few weeks ago showing that very few companies have posted the information that the law contemplates. Yeah. That conversation about, okay, what’s the best way to respond to this? That led to, okay, well, instead of just saying no, and instead of just using the New York bill as a starting point, what do we think a pro-civil-rights approach to employee assessment, and particularly these automated assessments, would look like? And that idea was behind developing the civil rights standards. And then we brought in a bunch of different civil rights and workers’ rights organizations. We brought in the AAPD. We brought in the ACLU and the NAACP Legal Defense and Educational Fund, and all of these different organizations had input and helped develop this set of standards that implemented these different principles regarding transparency, regarding impact assessments and tests for validity. And one of the things that we really harp on, and that may well be of interest to your listeners, is that you shouldn’t rely just on correlation as a basis for validity when you’re developing a tool in the digital age. Because it’s just too easy to capitalize on chance correlations and end up with a tool that simply recapitulates cultural aspects of the current workforce rather than measuring the ability of candidates to perform essential job functions.

So we really hammered home that if you’re using an assessment, you need to be able to demonstrate that it is measuring somebody’s ability to perform the essential functions of a job, which is what the Americans with Disabilities Act requires. And that was the North Star principle, I think, that guided the development of the standards.

Speaker 1: So how is that different? And, you know, we talked offline as well about how job-relatedness is king. I mean, that’s the litmus test. If you can’t pass that test, you’re not in compliance; you don’t have a tool that is really worth a shit, honestly. Tell me what the difference in the concept of job-relatedness would be between the work you’re talking about and the uniform guidelines.

Are there subtle differences, real differences, or is it really the same thing? I mean, the uniform guidelines look for correlations. But if you have a job analysis and content validity, typically, if you have strong evidence for that, you can fly by with that, you know. Is there something different in

Speaker 2: There is something a little bit different. You know, the uniform guidelines, it’s interesting. I’m not an I/O psychologist by trade; I just play one on TV. But I do know enough about I/O psychology to know that the way the concept of validity is discussed in the uniform guidelines does not reflect the way that validity was understood even in the late nineteen eighties,

much less today. The uniform guidelines talk about criterion-related validity, content validity, and construct validity as if they are three independent concepts, rather than as three potential sources of evidence for validity.

Speaker 1: So I’m gonna interrupt here real quickly. This is gonna be beyond the geek threshold for probably a lot of people who are listening, but have you read Landy? Frank Landy’s article from American Psychologist in nineteen eighty six?

Speaker 2: I might have. The one that I’m

Speaker 1: Stamp collecting versus science.

Speaker 2: Not that one, but, yeah, that rings a bell somehow. The article that I always turn to, that I thought did a great job of explaining validity back then, and even today, what was it?

It was Samuel Messick’s article. I wanna say it was nineteen eighty eight or nineteen eighty nine, and it was literally just called Validity. But it really explained the concept in such terms that if you read it then, and then you read the most modern rendition of the APA standards today, you know, he laid out really, really well what validity means. And the fact that that was, again, just ten years after the uniform guidelines.

Speaker 1: Yeah.

Speaker 2: And it presented it in such a starkly different way than the uniform guidelines. That just shows, you know, the uniform guidelines were obsolete almost at the time they were published. The way that tests were developed back in the nineteen sixties and seventies, you mentioned it: you would do a job analysis, have a team of experts figure out, okay, how will we measure the knowledge, the skills, and the abilities that are necessary in order to perform this job. You design the assessment. You test different items and components of the assessment on different people, and there’s this entire developmental process.

But it was very structured and it was conscious, and you knew that you were measuring for specific things as you were developing the product. Yeah. That is not at all how the vast majority of automated tools work. The way it works is, we’ve got this massive pool of data that’s available, a lot of the time.

Speaker 1: Yep. 

Speaker 2: And we are doing the equivalent of a Jackson Pollock painting. We’re just throwing everything at the wall and we’re seeing if patterns emerge. You know, and

Speaker 1: you’re making me laugh because I keep interrupting you. I’m sorry. Jackson Pollock paintings are always the ones where everybody looks at them and says, I could do that. Oh, that’s not so complicated.

It’s just paint on a wall. That’s what the people building these tools are saying too. Right? We’re just gonna throw the paint on the walls.

It’s not that hard. We look at the numbers. Everything’s cool. But it’s not. Sorry.

Speaker 2: Yeah. Well, it’s certainly not easy to develop these tools. But at the same time, Jackson Pollock didn’t just, like you said, flick paint at the wall; he knew what colors he was going to use and what patterns he was going to use. I guess I don’t wanna cast aspersions too harshly. But, you know, if Jackson Pollock had just put on a blindfold and randomly flung colors at the wall, without knowing what colors were on his palette, and then afterwards looked for patterns in what he

Speaker 1: Mhmm. 

Speaker 2: had thrown against the wall, to me, that’s in some sense a little bit closer to it. I’m being facetious; it’s not that random. But my point is, a lot of these vendors that make these tools put in a lot of stuff that they know is garbage, that has nothing to do with job performance.

Speaker 1: Exactly.

Speaker 2: And they’re looking for patterns that emerge. They hope that within that garbage there is some usable material that actually has predictive value. The problem is that if you throw in a lot of garbage, some of it is going to seem to have predictive value just by chance. If you have one hundred thousand features that you put into a system, even with a pretty strong threshold for statistical significance, the odds are that you’re going to have quite a few of those features showing statistically significant relationships just by chance. Even if you use a one percent threshold, that means there’s a good chance you will have one thousand irrelevant factors make their way into your model. Yeah.

So that fundamental difference is, I think, part of what’s so deeply problematic about a lot of the approaches that these vendors, who are developing resume screening tools and other data-driven rather than content- and skills-driven assessments, are presenting in this space.
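To put numbers on the arithmetic Matt describes, here is a minimal sketch in Python. The data is entirely synthetic noise, purely illustrative and not any vendor’s actual pipeline: with one hundred thousand junk features and a one percent significance threshold, roughly a thousand of them will look predictive by chance.

```python
import numpy as np
from scipy import stats

# Score 100,000 pure-noise features against a random "job performance"
# outcome and count how many clear p < 0.01 entirely by chance.
rng = np.random.default_rng(0)
n_candidates, n_features = 200, 100_000

X = rng.normal(size=(n_candidates, n_features))  # features: no real signal
y = rng.normal(size=n_candidates)                # outcome: independent of X

# Standardize in place, then take each feature's Pearson r with the outcome.
X -= X.mean(axis=0)
X /= X.std(axis=0)
y = (y - y.mean()) / y.std()
r = X.T @ y / n_candidates

# Two-sided p-values from the t-distribution for a correlation coefficient.
t = r * np.sqrt((n_candidates - 2) / (1 - r**2))
p = 2 * stats.t.sf(np.abs(t), df=n_candidates - 2)

print("junk features passing p < 0.01:", int((p < 0.01).sum()))  # ~1,000
```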

Speaker 1: Yeah. Yeah. For sure. Wow. So do you think a million monkeys with a million cans of paint could reproduce a Jackson Pollock?

Probably easier than they could reproduce a Shakespeare work, I think.

Speaker 2: That’s what I was gonna say. Right? What’s the old saying, that if you give a group of monkeys typewriters and leave them long enough, eventually one of them is going to write Hamlet?

Speaker 1: It’s a million monkeys. Yeah. Yeah. 

Speaker 2: You know, but my favorite example is actually, if you flip a coin a few quadrillion times, it’s statistically almost certain that you’re going to have at least one instance where you flip heads ten thousand times in a row.

Speaker 1: That’s insane. 

Speaker 2: You know? Like, if the n is large enough, the highly unlikely random event will appear as a pattern eventually. But that doesn’t mean that the pattern is stable over time. It’s just a function of the fact that when you deal with very large numbers of random events, patterns will seem to emerge at certain points.
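A quick back-of-the-envelope sketch of that intuition (the specific run lengths are illustrative): the expected number of runs of at least k heads in n fair flips is roughly n divided by 2^(k+1), so the run length you should actually expect to see grows only with log2(n), which at a few quadrillion flips works out to runs on the order of fifty heads.

```python
# Expected number of runs of at least k consecutive heads in n fair flips
# is about n / 2**(k + 1): each position starts such a run (a tail
# followed by k heads) with probability 2**-(k + 1).
n = 4e15  # "a few quadrillion" flips
for k in (20, 30, 40, 50, 60):
    print(f"runs of {k}+ heads expected: {n / 2 ** (k + 1):,.2f}")

# The expected count crosses 1 near k = log2(n): at this n, runs of about
# fifty heads become likely, and the likely run length grows only as log2(n).
```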

Speaker 1: Yep. 

Speaker 2: That’s why, again, in the civil rights standards, we try to push the focus back to: you need to demonstrate that the tool is based on somebody’s ability to perform the essential functions of the job. If you cannot explain that, if you can only say, well, we can’t tell you what essential functions of the job this test is measuring, all we know is there is some correlation between the outputs of this assessment and whatever potentially flawed measures of job performance we

Speaker 1: have. Yeah. 

Speaker 2: If that’s all you’re going on, that correlation, you can’t tell me it’s based on the essential functions of the job rather than, you know, a coin randomly coming up heads enough times because you flipped it enough times.

Speaker 1: Yes. And, you know, what you also brought up, man, there’s so much here, is what we call in our field the criterion problem. I mean, the criterion

Speaker 2: is crap. 

Speaker 1: I mean, you know, if you’re using performance ratings, those are politically motivated a lot of the time. You can get objective data in some jobs, a sales job, a contact center job, but most jobs don’t have objective, measurable data you can use, if you’re lucky enough to get it at all. My pipe dream is some kind of universal job performance measure that everybody could carry around with them, but that’s probably impossible. I’m trying to think of how a large language model could do that. I don’t think it’s possible at all.

But, you know, 

Speaker 2: I always go back to this: if there’s one class of jobs that is quantified to the nth degree, it’s athletics.

Speaker 1: Yeah. 

Speaker 2: It’s professional sports. Even in basketball and football, to use two examples, where you have the NBA draft and the NFL draft, they put these players through all of these objective measurements. They measure their speed in all these different ways. The NFL, at least for a long time, and I think they still do, had them take the Wonderlic test. And you have statistics and videotape that you can analyze of them playing the exact same sport that they will be playing at the professional level.

You have all of this objective information about how they performed. And still, the GMs miss more often than they hit on their assessments of these players.

Even with all of that great information available to them. And the vast majority of jobs are not nearly that

Speaker 1: Well, not nearly as well 

Speaker 2: a comp. You do not have yeah. And so the the notion that, you know, when you talk  about things that are ready to be automated, there is a great series of slides. And if you go to if  you Google AI snake oil 

Speaker 1: Oh, I’ve got that one. Yeah. Yeah. I got that one. That’s so good. 

Speaker 2: Yeah. Yeah. And, you know, it gives examples of things that AI does well. And one of the best examples is spam filters. Yes.

And the reason that we can do so well with spam filtering is that pretty much everybody will agree on whether or not a particular email is spam. If you ask a hundred people whether an email in a person’s inbox is spam or not, you’re gonna get agreement, if not a hundred times out of a hundred, then ninety-nine times out of a hundred. And the ones that are right on the borderline, you usually forgive the filter for getting wrong.

That’s why spam filters work so well: we have a good understanding, and there’s broad agreement in our society, about what a spam email is and what it isn’t. Very rarely is that level of agreement and recognition present when it comes to job performance. If you ask ten different managers to assess a single worker’s job performance, you may well get ten very different answers.

Speaker 1: Yeah. 

Speaker 2: If you ask ten different people, even, what are the most important factors in job performance? You may not even get agreement on that. And the point that this snake oil article made is that the former, the spam filter, where there’s broad agreement on what good and bad look like, that’s when you’re ready to automate something. When you have that broad level of agreement.

Speaker 1: Right? And 

Speaker 2: When you don’t have that broad level of agreement, and when something is based on complex social and cultural factors that are difficult to quantify and difficult to reach consensus on, then that is not something that is ready to be automated. That is not something that AI is good at.
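One way to make that readiness test concrete is inter-rater agreement. Here is a toy sketch with invented labels (not real data): near-unanimous spam judgments versus scattered job performance ratings.

```python
from itertools import combinations

# Average pairwise percent agreement among raters labeling the same items.
def pairwise_agreement(ratings: list[list[int]]) -> float:
    pairs = list(combinations(range(len(ratings)), 2))
    total = sum(
        sum(a == b for a, b in zip(ratings[i], ratings[j])) / len(ratings[i])
        for i, j in pairs
    )
    return total / len(pairs)

# Ten raters judging "spam or not" for ten emails: near-unanimous.
spam_votes = [[1, 1, 0, 1, 0, 0, 1, 1, 0, 1]] * 9 + [[1, 1, 0, 1, 0, 0, 1, 1, 0, 0]]

# Ten managers judging the same worker on ten occasions: noisy.
perf_votes = [
    [1, 0, 1, 1, 0, 1, 0, 0, 1, 1],
    [0, 1, 1, 0, 1, 0, 1, 1, 0, 0],
    [1, 1, 0, 0, 0, 1, 1, 0, 1, 0],
] * 3 + [[0, 0, 1, 1, 1, 0, 0, 1, 0, 1]]

print(f"spam agreement: {pairwise_agreement(spam_votes):.2f}")         # ~0.98
print(f"performance agreement: {pairwise_agreement(perf_votes):.2f}")  # far lower
```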

Speaker 1: Interesting. 

Speaker 2: I would say that probably even most I/O psychologists would agree that for most jobs there is not broad consensus on what good job performance looks like, and that we are not good at defining and measuring good job performance for most jobs that exist today, though we are getting better at it. There are ways to shed light on good job performance. But are we capturing most of what matters for most jobs in the assessments that we give workers today in the workplace? I don’t think so.

Speaker 1: No. We’re getting closer. Again, it’s better than flipping a coin, to our earlier point. And, I mean, there are competencies and things about people that you wanna have in the workplace, and you could just take a flyer and say, okay, these things are important. Although, if you look at that, there are a lot of things that are important.

That’s why, as the job gets harder, I like to think about it as a pie, to use this analogy. Right? If it’s an entry-level job, the pie is sliced into: are you gonna be nice to people? Are you gonna steal from me? Are you gonna show up for work?

And can you add and subtract, you know. But you get to a more complicated job and there are fifteen or twenty pieces of pie, and they’re all about the same size, but they have combinations that kind of come together. So it is complicated. I can proudly now say I’m neurodiverse. My goal at the beginning, which I totally didn’t do, was to say: this is the goal of our conversation today, which is to talk about legislation.

So we’re about halfway through and we haven’t talked about legislation much. And I am really enjoying this dialogue, and I’m impressed with your knowledge of this stuff. It overlaps with mine, but where mine is not as good as yours is the legislation stuff. So let’s shift gears a little bit. You mentioned the New York law.

Right? We did talk about it a little bit, so that’s good. The adjective milquetoast kinda comes to mind with that one. I mean, it’s just really pretty ineffective. But the interesting thing about it is that Illinois had that algorithmic face recognition or whatever law for several years, at least three or four years before the New York law. I don’t know if New York is more popular than Illinois, but for whatever reason, that regulation is what everybody’s talking about and what made everybody start saying, we’d better get ready.

I know me. I’m like, okay, I can consult on this. Haven’t had a lot of takers.

Speaker 2: If you

Speaker 1: thought it was about me, but then when I talk to other people in my field and people like yourself, nobody’s really doing it. Somebody shared a list from GitHub with me of the different audits that have been done, and there’s like three or four global corporations, and the rest are vendors who are basically having audits done when they have nothing to do with complying with the New York City law anyway. What I’m most interested in, because I think a lot of folks know the deal about New York City and I don’t wanna downplay it, is what’s coming after it, and how different that’s gonna be in terms of the real actual requirements for compliance. So, two-part question real quickly: tell us why you think the New York law has gotten so much additional PR, and then tell us what’s coming down the pike that’s actually gonna work, if anything, and why?

Speaker 2: Well, on the first question, I’m guessing you’re referring to the Illinois Artificial Intelligence Video Interview Act. I think that one of the reasons that didn’t get much attention is that it was very narrow. It only applied to the use of artificial intelligence to analyze video interviews. It only required notice. It didn’t require any sort of impact assessment or audits.

And it had no enforcement provisions whatsoever. So there was no consequence if you didn’t provide the notice that the bill required. I think as a result of all of those things, even calling it regulation is a stretch. The New York bill, I still wouldn’t quite call it regulation either, because of the number of loopholes it had in it that companies have taken advantage of, like you said, to not comply with its provisions. And instead, what you’ve ended up seeing is vendors using the law as kind of a marketing opportunity for their tools: hey, look, we hired an auditor to examine our tool under the New York City law, and they found that we’re bias free, yay for us, whatever that means. Obviously, any I/O psychologist knows that if somebody is saying their tool is bias free, that’s showing you they’re not being very rigorous about the meaning of bias, if nothing else. What we’re seeing happening elsewhere now, and what we might anticipate happening next, is kind of an open question. At first, it looked like the New York City model might have legs.

And it might spread and be adopted either at the state level in New York or in other jurisdictions. Bills based on it were introduced in, I believe, four different states, but none of them appears to have advanced. What we are seeing instead is two different approaches to legislation. One is more comprehensive regulatory regimes that are being proposed by civil rights groups and civil society organizations. An example of that at the federal level is the No Robot Bosses Act.

Speaker 1: I’ve heard of that. 

Speaker 2: Yeah. Yeah. It was introduced by Senator Casey earlier this year, and similarly themed legislation is now pending in a few states, including New York, Massachusetts, Vermont, and DC. DC’s not really a state, but same difference. But then there’s

another model of legislation that’s not actually modeled on the New York City law, but that is kind of general AI risk management legislation. It would impose some requirements on AI systems, not only when they’re being used for employment decisions, but also for decisions in a wide range of other contexts, including housing, and some of them cover criminal justice and voting, and they would regulate the use of AI in this wide range of different settings. Those bills are mostly being pushed by tech companies. And, somewhat cynically, I say unsurprisingly, they also contain lots of loopholes and fine print that would make it relatively easy for companies to either avoid compliance or face minimal consequences if their tools are problematic. So those are kind of the two approaches we’re seeing develop right now. And perhaps unsurprisingly, given that I work for a civil-rights-focused organization, I’m hoping for the approach based on stronger regulation, and at a minimum greater transparency. I think problem number one in this space is that candidates often don’t even know when they’re being assessed.

You know, and that’s something that’s really unique to the rise of AI. When you take a paper-and-pencil employment assessment, or you go to an assessment center and engage in assessments there, you know that you’re being assessed. And not only do you know that you’re being assessed for a job, you usually know what specific skills, knowledge, and abilities you are being assessed for. With AI, it’s possible to assess somebody without them even knowing that an AI system is assessing them, much less knowing what exactly they’re being assessed for and how that relates to the job they’re applying for. That transparency issue, to me, is problem number one.

And if I had to make a guess, the compromise position that we might see happen in legislation is that you don’t see audit requirements come into play right away, but you do see strong disclosure and notice requirements for candidates. And in order for those to be meaningful, the big battleground is going to be: do you only apply the disclosure requirements to tools that play a dominant role in the decision-making process, which is basically the approach that the New York City law takes? Or do you apply them to any AI system or selection procedure that influences or is a factor in the decision-making process? I think the latter has to be the rule in order to achieve meaningful transparency, as the experience of the New York City law shows. Because companies can always say, well, we have a human who has final say on any employment decision, even if that human is a rubber stamp in effect. If a company has the ability to say, well, we don’t consider the tool to be the dominant factor in the decision-making process because a human is the final one who signs off on it, then, as the New York City experience shows, they’re gonna find a way to escape compliance. That’s where the battleground is gonna be, though. There’s gonna be pushback from companies saying, we think we should be able to use AI tools to give recommendations, and those shouldn’t be subject to notice or disclosure requirements. And that’s where the battle is gonna be waged. So if you want a prediction, that’s it.
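For context on what a Local Law 144-style bias audit actually computes: selection rates per demographic category, and impact ratios against the highest-selected category. A minimal sketch follows; the group names and counts are invented purely for illustration.

```python
# Selection rate per category and impact ratio relative to the
# highest-selected category (the core of an LL 144-style bias audit).
selected = {"group_a": 120, "group_b": 45, "group_c": 30}
applicants = {"group_a": 400, "group_b": 250, "group_c": 200}

rates = {g: selected[g] / applicants[g] for g in applicants}
best = max(rates.values())
impact = {g: rates[g] / best for g in rates}

for g in rates:
    print(f"{g}: selection rate {rates[g]:.2f}, impact ratio {impact[g]:.2f}")
# group_a: 0.30 / 1.00, group_b: 0.18 / 0.60, group_c: 0.15 / 0.50.
# Under the classic four-fifths rule of thumb, ratios below 0.80 flag
# potential adverse impact; LL 144 requires publishing the ratios rather
# than passing any particular threshold.
```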

Speaker 1: It’s a moving target. So tell us a little bit about two that come to mind for me. I think we know a lot more about the EU, and we’ll get to that in a second. What about California? There are two different things being proposed there. Typically, you know, the legislation that sweeps across the US comes from California and the West Coast. Actually, a lot of times it comes over the pond from Europe. Right? But I read a little bit about the California bills; I guess they’re proposals at this point. What do you think about those, within the context we’ve been talking about?

Speaker 2: So it’s interesting. What California has as an enacted law right now is the California Consumer Privacy Act. And that was the first privacy and data transparency law that applies to workers that has gone into effect in the United States. There are data privacy laws that have gone into effect in other states that apply to consumers but that explicitly don’t apply to employees. California’s is the first that doesn’t carve out employees, so workers have the same rights to data transparency and about data processing that apply to consumers. And as a result of that, the California Privacy Protection Agency, which is responsible for enforcing that law, is proposing rules that would basically impose transparency requirements on automated management systems and on automated decision systems in the workplace. So that’s still in relatively early stages, but that’s the one black-letter-law thing that’s already on the books, and it is likely to see some action that will require companies to modify their practices going forward. But there’s also a bill that’s pending in California. The original version of it last year was AB 331.

It just got reintroduced, and I can’t remember the bill number for the new one. But that falls under the category of general AI risk management regulations that I mentioned earlier. And critically, it says that it only applies to tools that, quote, are designed to be or specifically are modified to be a controlling factor in the decision. So again, that’s why I say this is where the battleground is going to be on these sorts of things. Because unless somebody can prove that an AI hiring tool is the controlling factor in a decision,

the company will be able to say that the law doesn’t apply to it. And it’s a catch-22, because if the company says, well, it’s not a controlling factor, and therefore doesn’t provide any of the disclosures the bill requires, then how would anybody else get the information they need to disprove the claim that it is not a controlling factor in a decision? So, again, that’s going to be where the battle lines get drawn on a lot of these issues.

Because as long as language like that is in there, controlling factor or anything like that that gives companies an initial right to say, this law doesn’t apply to us based on how we use the tool, then the law is not going to be meaningfully complied with.

Speaker 1: Well, and there’s no precedent for these things. You know, a lot of legal stuff is built on precedent, case law, or case results, whatever, that allow judges to understand how to apply the law in a certain situation. But without that, we’re waiting for these first things to happen. So

Speaker 2: That’s exactly right. 

Speaker 1: Does the EU have their act together better than anybody else with their new legislation? You know, that’s been quite a long time and a lot of debate, and it seems like they’ve got it pretty much in place now. What’s the impact of that? How do you feel about it? You like it?

What should we know?

Speaker 2: So, I mean, this act is, I think, literally less than two weeks old in terms of being signed. So it’s far too early to see how it actually gets implemented and what effect it has. But the big thing from the employment perspective is that it classifies employment decisions as a high-risk use of artificial intelligence. And that means that in addition to disclosure about the types of decisions being made, users of AI tools that make employment decisions are gonna have to do impact assessments and establish ongoing monitoring and governance of the use of those tools. I’m somewhat skeptical, because in some ways the legislation reminds me of the way that airliner certifications work.

Speaker 1: Okay. 

Speaker 2: For the uninitiated, the FAA does not inspect and issue certifications for planes. Instead, they issue safety guidelines and standards, and companies are basically given the ability to self-certify that their planes are in compliance. Now, there is obviously massive potential liability for companies that don’t do this process correctly, because if there’s a plane crash due to mechanical failure of some kind, the liability for the airliner, and the PR hit that they take, is beyond what anybody in most other industries can fathom. But as we’ve seen, that doesn’t always lead to great work being done on certifications. The Boeing 737 MAX series is the famous example of that. Boeing self-certified that they had done all of these tests, that they’d complied with all of these rules, but it turns out that they had fudged a little bit on their testing process, claiming that they didn’t really make meaningful changes from this generation of the 737 versus the last one, so they could just rely on their testing from the last one. And when it turned out that the differences were more substantial, and it led to a couple of crashes, that turned out to be deeply problematic. So that’s the problem with this sort of approach: you set standards, but you rely on the companies to implement them. You leave a little bit too much to companies that are gonna take shortcuts in the interest of pursuing the bottom line. So that’s what I’m worried about with the approach that the EU AI Act takes.

That said, at this stage in the process, where technical expertise on these tools is so concentrated in industry, is there an alternative? Could you have had a regulatory regime that required government certification and government inspection? I don’t think you could.

Speaker 1: Yeah. 

Speaker 2: And Europe isn’t like the United States, where there is this tradition of relying on private lawsuits to act as a check on corporate negligence and abuses. So I’m not sure that the EU AI Act model is something that I would recommend seeing transported to the United States, but it’s still very early days, and maybe it will work more effectively than I think it will.

Speaker 1: Well, if anything, it gives us something to think about; it gives us some examples to draw on. I mean, if you think of GDPR, you know, that became kind of by proxy adopted in the US, but it’s a lot simpler. There are very clear requirements with that. There aren’t a lot of gray areas about what you need to do and not do. So it’s much more simplistic.

So let me ask you this. Is the stuff we’ve been talking about, regulation, ever gonna even come close to keeping up? It’s good that we’re making the effort, but I feel like Don Quixote or something sometimes with this stuff, because technology is exploding so fast, and it takes a long time to figure this out. We’re just chasing. Are we ever gonna catch up, ever?

Speaker 2: It’s a great question. My very first foray into AI policy was an article that I wrote nine years ago called Regulating Artificial Intelligence Systems. And I dug into all of the problems that are inherent in trying to come up with a government regulatory regime for a technology like AI. The conclusion I have ultimately come to, nine years later, is that, yeah, I think regulation is eventually going to keep up with the technology, but it can’t be regulation in the way we usually think of it. In a lot of ways, it’s going to have to rely on a sort of crowd-sourced regulation, where we require greater transparency of the companies that develop artificial intelligence systems.

And we rely not just on experts in government agencies, but, kind of in a Wikipedia-like way, on the general public and people who are knowledgeable about a subject to analyze tools for potential risks and problems. My gut tells me, and this is based on theoretical thinking rather than any practical evidence that it will work, that the only way we’re ever going to keep up with the risks involved with these technologies is to create an environment of transparency, so that by virtue of that transparency, experts and concerned citizens with knowledge of the subject matter can raise concerns and flag them. Absent that, government will always be playing catch-up, because the resources in private industry are always going to dramatically outstrip whatever government enforcement agencies are tasked with keeping up with them.

Speaker 1: I mean, that’s what we’ve seen with the uniform guidelines. And even then, the EEOC doesn’t typically go after people; they wait for complaints. I’ve heard the OFCCP is a little more aggressive, but still, just think about how many hires are made, how many companies are hiring, and how many regulators are out there to do anything about it. So, last question, and then we’ve gotta wrap up: do you think there’s ever gonna be any kind of mandate that vendors go through some kind of certification? I guess it’s just what you were talking about with airlines and plane manufacturers, but I’m really interested in that, because we’ve seen the effect already of the New York law, where vendors are proactively saying, we’re gonna hire somebody to sanction us. You know, I audit tech manuals, I audit processes, but not that many companies even put those out.

If there’s a mandate, I mean, the EEOC comes back and you’ve gotta produce something, but that doesn’t mean that you have to do it beforehand, you know.

Speaker 2: What I would like to see happen, and what I think is a possible course to be taken here, is that there are audit requirements, not unlike the financial auditing requirements that publicly traded companies are subject to. Because I don’t buy the argument that we don’t know what an audit of hiring systems and employment decision systems entails. Everybody kind of agrees that you look for validity, you look for sources of discrimination; it’s really only relatively minor details where there’s disagreement. Right?

And given the broad agreement on what you need to look for, to me it makes sense to come up with a system where you say you need to have an independent auditor who comes in. Just as you have an independent accounting firm come in and look at the books, you have an independent auditor come in and look at your technology and do all of these things. And just like the independent auditors, their viability as independent auditors is based on their ability to conduct these audits well and to maintain at least some level of independence. And when that goes away, as we saw with Arthur Andersen and Enron, you’re no longer able to effectively act as an auditor anymore. I think that sort of audit regime is what makes the most sense, because the reason that audit regime came into force for public companies was exactly the same reason we’re talking about here: regulators don’t have the resources to audit the books of all these massive public corporations. So the best way to do it was to set up a market incentive where private actors do so, but with standards for those auditors that ensure they are actually independent.

Speaker 1: Gotcha. Very cool. We’ve gotta wrap up. What a great conversation; I could keep going forever. I really enjoy geeking out on some of this stuff, and hopefully our listeners will appreciate the attempt to balance the practical with the theoretical and geeky stuff.

But just real quick, let everybody know how they can follow you. Center for Democracy & Technology: look it up; we’re glad it exists. But tell us how people can track you, and then we’ll sign off.

Speaker 2: Yeah. So I’m Matt Scherer. You can just Google my name, Matt Scherer, S-C-H-E-R-E-R, and CDT, and I should pop right up on Google or Bing. And if you want to look at the Civil Rights Standards for 21st Century Employment Selection Procedures that we mentioned before, just go to cdt.org and search civil rights standards, and that’ll take you there.

Speaker 1: Very nice. Alright. Well, thank you so much. It’s been great having you. As we wind down today’s episode, dear listeners, I want to remind you to check out our website, rocket-hire.com, and learn more about our latest line of business, which is auditing and advising on AI-based hiring tools and talent assessment tools.

Take a look at the site. There’s a really awesome FAQ document around New York City Local Law 144 that should answer all your questions about that complex and untested piece of legislation. And guess what, there’s gonna be more to come. So check us out. We’re here to help.

