Study Shows Race and Sex Bias in AI Hiring Tools

Artificial intelligence (AI) applications are becoming common for a wide range of uses in employment. Many businesses use AI tools in hiring to increase efficiency. They can train AI tools, for example, to screen out applicants based on certain criteria or to look for certain favored criteria. The trick, as it turns out, is to make certain that the use of AI in hiring does not lead to violations of New Jersey employment law. On multiple occasions over the past few years, AI hiring tools have produced outcomes that demonstrate bias based on race, sex, or other factors. Even if a machine or algorithm makes a hiring decision, the employer may ultimately be liable for unlawful discrimination. The legal system is still catching up to these aspects of AI. A recent study shows how biases in the information that an AI system receives can lead to biased outcomes.

The New Jersey Law Against Discrimination (NJLAD) prohibits discrimination based on numerous factors, including race, sex, religion, disability, sexual orientation, gender identity, pregnancy, and national origin. Overt discrimination, such as refusing to hire someone specifically because they belong to a group listed in the NJLAD, is not the only kind of unlawful discrimination. Disparate impact discrimination occurs when a policy or practice has an outsized impact on members of a protected group, regardless of whether the employer intended to discriminate.

AI hiring tools may fall somewhere between these two types of discrimination. They can have a disparate impact on a protected group with no biased intent on the employer’s part. Studies suggest, though, that any bias AI shows is the result of bias in the information used to train the AI. Employers’ legal duty to guard against these types of bias remains an open question.

The term AI often refers to “generative AI,” which creates images, videos, or text based on prompts from users. This is only one type of AI, though, and it is a relatively new one. Traditional AI performs specific tasks based on a variety of machine learning models. AI hiring tools often use large language models (LLMs), which train AI applications to understand and produce human language using massive amounts of data. This enables the AI hiring tools to sift through resumes to find and screen out candidates. If an AI tool receives biased data, it will produce biased results.
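To illustrate how bias in training data can carry over into screening results, here is a deliberately simplified sketch (not any vendor's actual model). A naive word-frequency scorer is "trained" on hypothetical past hiring decisions; because those historical labels happen to correlate with a name token, the model learns the name itself as a predictive feature, even though the qualifications are identical:

```python
from collections import Counter

# Hypothetical training data for illustration: past resumes (as word lists)
# and whether the candidate was hired. The names and labels are invented,
# and the hiring labels correlate with the applicant's name.
past_resumes = [
    ("alex python sql", 1),
    ("alex java sql", 1),
    ("jordan python sql", 0),
    ("jordan java sql", 0),
]

# "Train" by counting how often each word appears in hired vs. rejected resumes.
hired = Counter()
rejected = Counter()
for text, label in past_resumes:
    (hired if label else rejected).update(text.split())

def score(text):
    """Sum of per-word evidence: +1 for each word seen more often in hired
    resumes, -1 for each word seen more often in rejected ones."""
    total = 0
    for word in text.split():
        total += (hired[word] > rejected[word]) - (rejected[word] > hired[word])
    return total
```

With this toy model, two resumes listing the same skills (`"python sql"`) receive different scores purely because of the name attached, which is the mechanism the studies describe: the system reproduces whatever pattern, fair or not, its training data contains.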

Researchers at the University of Washington recently performed a study that compared the results when identical resumes with different names were submitted to several resume-screening AI applications that use LLMs. The names on the resumes differed based on race or gender. They used names from a database that showed how often names are associated with White or Black people and men or women.

The researchers found that the AI tools showed a disproportionate preference for White- and male-associated names. The systems preferred White-associated names over 85% of the time, and male-associated names about 89% of the time. They preferred White male-associated names over Black male-associated names almost 100% of the time.

If you believe your employer has engaged in unlawful workplace practices in New Jersey or New York and violated your rights, you need an experienced employment lawyer to fight for you. The Resnick Law Group will work to recover the compensation owed to you under federal or state law. Schedule a confidential consultation to discuss your case with us today through our website or by calling 973-781-1204 or 646-867-7997.
