Artificial intelligence (AI) applications are becoming quite common for a wide range of uses in employment. Many businesses use AI tools in hiring as a way of increasing efficiency. They can train AI tools, for example, to screen out applicants who meet certain criteria, or to look for certain favored criteria. The trick, as it turns out, is to make certain that the use of AI in hiring does not lead to violations of New Jersey employment law. On multiple occasions over the past few years, AI hiring tools have produced outcomes that demonstrate bias based on race, sex, or other factors. Even if a machine or algorithm makes a hiring decision, the employer may ultimately be liable for unlawful discrimination. The legal system is still catching up to these aspects of AI. A recent study shows how biases in the information that an AI system receives can lead to biased outcomes.
The New Jersey Law Against Discrimination (NJLAD) prohibits discrimination based on numerous factors, including race, sex, religion, disability, sexual orientation, gender identity, pregnancy, and national origin. Overt discrimination, such as refusing to hire someone specifically because they belong to a group listed in the NJLAD, is not the only kind of unlawful discrimination. Disparate impact discrimination occurs when a policy or practice has an outsized impact on members of a protected group, regardless of whether the employer intended to discriminate.
AI hiring tools may fall somewhere between these two types of discrimination. They can have a disparate impact on a protected group with no biased intent on the employer’s part. Studies suggest, though, that any bias AI shows is the result of bias in the information used to train the AI. Employers’ legal duty to guard against these types of bias remains an open question.