Man Versus Machine: NYC Law Attempts to Eliminate Bias in Artificial Intelligence

By Julie Calli
November 21st, 2022 • 4 Minutes

A New York City law will soon require businesses to conduct audits designed to assess biases in all AI-driven systems that are used for recruitment marketing and hiring. Under the new law, scheduled to take effect in January 2023, the hiring company will be liable for potential biases and subject to fines for violations.

The use of artificial intelligence has grown more prevalent in many human resources departments over recent years. Nearly one in four organizations now make use of some form of automation, AI, or both, to support their HR activities.

Regulators have begun to take notice and are seeking to ensure that companies don’t inadvertently introduce new biases through AI technologies.

What Is the Challenge?

New York’s new law requires all companies that use AI tools in their hiring processes to perform an independent audit of those tools before January 1, 2023; however, little guidance is available outlining the criteria for such an audit or who should perform it.

With just a handful of weeks left in the year, organizations are now left scrambling to comply with a new regulation they don’t fully understand. 

The current law contains minimal overall guidance, though specific stipulations are made clear. Employers who use AI tools must make their audit results available to the public on their websites. In addition, employers must describe the data obtained by the AI tool and how employers plan to use it in the recruitment marketing and hiring process. 
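
While the law doesn’t spell out what an audit must measure, one plausible starting point is the “impact ratio” behind the EEOC’s long-standing four-fifths rule: compare each demographic group’s selection rate against that of the most-selected group. The sketch below is illustrative only — the groups, counts, and 0.8 threshold are assumptions, not requirements of the law.

```python
# A minimal sketch of one metric a bias audit might compute: the impact
# ratio from the EEOC's four-fifths rule. The NYC law does not mandate
# this calculation; the groups and counts below are hypothetical.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of a group's applicants that the tool advanced."""
    return selected / applicants

# Hypothetical outcomes from an AI screening tool, by demographic group.
outcomes = {
    "group_a": {"applicants": 400, "selected": 120},
    "group_b": {"applicants": 300, "selected": 60},
    "group_c": {"applicants": 250, "selected": 80},
}

rates = {g: selection_rate(o["selected"], o["applicants"])
         for g, o in outcomes.items()}
highest_rate = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / highest_rate
    flag = "REVIEW" if impact_ratio < 0.8 else "ok"  # four-fifths threshold
    print(f"{group}: selection rate {rate:.0%}, "
          f"impact ratio {impact_ratio:.2f} ({flag})")
```

An audit published under the law would presumably pair numbers like these with a description of the data the tool collects, as noted above.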

Employers have the option to describe the analysis performed by the AI tool on their websites, or they can disclose that information when potential candidates or others inquire about it. Employers recruiting candidates who live in New York City must give at least ten days’ notice before using AI tools in a hiring decision, and they must offer alternative accommodation options if an applicant requests them.

Companies that fail to comply with the new law will face penalties. First-time violators face a fine of $500, but fines increase to $1,500 for each subsequent violation.
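
For a quick sense of how those fines accumulate, here is a back-of-the-envelope illustration using the figures above; how separate violations are counted is not detailed here, so treat this purely as arithmetic.

```python
# Fines under the law: $500 for a first violation,
# $1,500 for each subsequent one (per the figures above).

def total_fines(violations: int) -> int:
    if violations <= 0:
        return 0
    return 500 + 1500 * (violations - 1)

for n in (1, 2, 5):
    print(f"{n} violation(s): ${total_fines(n):,}")
# 1 violation(s): $500
# 2 violation(s): $2,000
# 5 violation(s): $6,500
```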

The NYC Department of Consumer and Worker Protection is currently developing clearer rules for implementing the law, but there is no timeline for when those might be published.

Examples of Potential Bias in AI

We all know that humans have inherent biases — some they may not even be aware of — but shouldn’t a software tool be objective by nature? According to the Harvard Business Review, that’s not necessarily the case.

There are numerous examples of cases in which software and machine learning can introduce new — and even amplify existing — biases. Consider a few examples:

Building the Talent Pool

Job board websites use algorithms to recommend specific roles to candidates who may be a good fit for certain positions. These algorithms can use various factors to recommend roles, including prior search history, a job seeker’s resume and previous applications.

But algorithms don’t always perform in the employer’s best interests. 

One study by HBR, conducted in tandem with Northeastern University and USC, found that Facebook’s ad delivery skewed job ads’ audiences by gender and race.

In one instance, job ads for supermarket cashier roles garnered an audience of 85% women. In another, Facebook’s ads for taxi drivers reached an audience that was 75% African American. 

Narrowing the Talent Pool

When applications begin rolling in, AI software can eliminate applicants before the hiring manager reads a single resume. Some tools automatically reject candidates who don’t meet requirements gathered through chatbot screening questions, while others scan resumes for signs of a potential bad fit.

While such tools are designed to be practical time-savers, they may unwittingly introduce biases into the hiring process. For instance, some tools automatically reject candidates who left a previous job within a short timeframe, without ever considering the reason for the departure.
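
To make that concrete, here is a minimal, hypothetical sketch of such a rule-based screen. The candidate fields and the twelve-month cutoff are invented for illustration, not drawn from any particular product; the point is that a facially neutral rule never sees the context behind the data.

```python
# A minimal sketch of the rule-based screening described above. A
# facially neutral rule (reject short tenures) ignores context such as
# layoffs, caregiving leave, or contract work, and can disparately
# affect some groups of applicants.

from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    last_tenure_months: int  # length of most recent job

MIN_TENURE_MONTHS = 12  # arbitrary cutoff a vendor might configure

def passes_screen(c: Candidate) -> bool:
    # The rule never sees *why* the tenure was short.
    return c.last_tenure_months >= MIN_TENURE_MONTHS

candidates = [
    Candidate("A. Rivera", last_tenure_months=8),   # laid off in a downturn
    Candidate("B. Chen", last_tenure_months=30),
]

for c in candidates:
    print(c.name, "advances" if passes_screen(c) else "auto-rejected")
```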

What Is the Scope of the Change?

At the end of 2021, New York City was home to over 200,000 businesses, and approximately one in four of these companies is believed to use automation or AI tools in its hiring processes. That figure rises to nearly 42% among employers with more than 5,000 employees.

Other state and local governments are looking to New York for guidance on their own impending legislation. Washington, D.C., Illinois, and California all have pending legislation that could affect the use of AI tools in hiring in the coming years.

Washington, D.C.’s pending law is very similar to the one being implemented in New York City. It will require all employers to audit their recruiting tools to ensure that they aren’t introducing bias into the hiring process. The law is currently under review by the city council, and a public meeting will occur before the end of the year.

At the federal level, the U.S. Equal Employment Opportunity Commission (EEOC) has introduced guidance concerning AI tools and applicants with disabilities, instructing employers to review their tools to ensure the software isn’t rejecting applicants because of their disabilities.

However, the federal government has yet to go any further; there are currently no penalties or audit requirements in place for using these tools.

Why Was This Law Created?

So, why are some concerned about using AI tools in the hiring process? While there’s been a lot of talk about machine learning, AI, and other predictive technologies, the reality is that these are relatively new advancements. 

We’re in the middle of a technological disruption, and some of the newest software and platforms still have a learning curve. They may be introducing unintentional biases and discrimination.

At the base level, governments are concerned that these tools may not be capable of automating hiring decisions without introducing biases. In other words, they may not truly work as their developers intended.

Until we have a complete understanding of the tools and know how to audit them to ensure they don’t introduce discriminatory practices, we may be at a disadvantage.

Where to Go From Here

Expect hiring tools that employ AI and machine learning to come under increased scrutiny in the years ahead. Federal, state, and local governments will likely continue to introduce legislation that monitors AI-driven hiring tools closely, ensuring they aren’t introducing unwanted biases that discriminate on the basis of gender, ethnicity, or disability.
