New EEOC Guidance: How to Avoid Title VII Violations When Using AI  |  By: Kelly O. Scott

05.23.2023
Employment Law Reporter

On May 18, 2023, the U.S. Equal Employment Opportunity Commission (EEOC) issued new technical assistance to help employers avoid violations of Title VII of the Civil Rights Act of 1964 when using certain software, algorithms, and/or artificial intelligence (AI) in connection with employment decisions.  The guidance also includes a “Questions and Answers” section regarding the use of AI by employers.

The guidance emphasizes that employers are responsible for any violations even if they are using software, algorithms or AI developed or supplied by a vendor, or if an agent, such as a software vendor, is given authority by the employer to act on its behalf.  In this regard, employers often face a dilemma: vendors using AI may not disclose their testing methods and may also require employer indemnification in connection with any claims that might be made by employees or job applicants.

Examples of software covered by the EEOC’s new guidance include automatic resume-screening software, hiring software, chatbot software for hiring and workflow, video interviewing software, analytics software, employee monitoring software, and worker management software.  Algorithms are often used to allow employers to process data to evaluate, rate, and make other decisions about job applicants and employees.  Software or applications that include algorithmic decision-making tools are used at various stages of employment, including hiring, performance evaluation, promotion, and termination.  Artificial intelligence may be used by employers and software vendors when developing algorithms that help employers evaluate, rate, and make other decisions about job applicants and employees.

An employer could violate Title VII of the Civil Rights Act of 1964 if the employer or the employer’s vendor uses software, algorithms, and/or AI without appropriate safeguards in connection with, among other things, the employer’s selection of new employees, monitoring of employee performance, and/or determination of employee pay or promotions.  A violation would occur if the use of such tools disproportionately excludes persons based on race, color, religion, sex, national origin, or another protected characteristic, and the tests or selection procedures are not “job related for the position in question and consistent with business necessity.”  This type of violation is called “disparate impact” or “adverse impact” discrimination.

The technical assistance focuses on one of the questions that is typically raised in disparate impact cases: does the employer use a particular employment practice that has a disparate impact on the basis of race, color, religion, sex, national origin or other protected class?  For example, if an employer requires that all applicants pass a physical agility test, does the test disproportionately screen out women?

The new guidance advises that the four-fifths rule is a general rule of thumb for determining whether the selection rate for one group is “substantially” different than the selection rate of another group.  The essence of the rule is that one rate is substantially different than another if their ratio is less than four-fifths, or 80%.  However, satisfying the four-fifths rule does not guarantee compliance with Title VII, as the rule may not be appropriate in certain circumstances, such as where smaller differences in selection rates may still indicate adverse impact.  This could occur when a procedure is used to make a large number of selections, or where an employer’s actions have disproportionately discouraged individuals from applying on grounds of a Title VII-protected characteristic.  In these cases, the rule may simply prompt the employer to seek further information about the procedure in question.
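For illustration only, the following minimal sketch (in Python, using hypothetical applicant numbers, not data from the guidance) shows how the four-fifths comparison might be computed for two applicant groups:

```python
# Minimal sketch of the four-fifths rule of thumb, with hypothetical numbers.
# Assume 80 of 200 applicants in Group A and 30 of 120 applicants in Group B
# advance past an algorithmic resume screen.

def selection_rate(selected, applicants):
    """Share of a group's applicants who were selected."""
    return selected / applicants

rate_a = selection_rate(80, 200)   # 0.40
rate_b = selection_rate(30, 120)   # 0.25

# Compare the lower rate to the higher rate; a ratio below 0.80 (four-fifths)
# is the rule-of-thumb signal of a "substantially" different selection rate.
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)

print(f"Selection rate, Group A: {rate_a:.2f}")
print(f"Selection rate, Group B: {rate_b:.2f}")
print(f"Ratio: {ratio:.2f} -> "
      f"{'below' if ratio < 0.8 else 'at or above'} the four-fifths threshold")
```

In this hypothetical, the ratio is 0.63, below the 80% threshold, which would warrant closer scrutiny of the screening tool.  As the guidance cautions, a ratio at or above 80% does not by itself establish compliance with Title VII.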

In sum, using AI or an outside vendor will not protect an employer from claims of discrimination.  Employers should therefore measure the results of any AI-assisted decision-making process to avoid disparate impact claims.  In addition, employers should ask vendors what steps they have taken to ensure that any software or algorithmic decision-making tool avoids having a disparate impact on individuals with a protected characteristic and, regardless of the answer, should carefully monitor the results of any vendor efforts.

The author would like to gratefully acknowledge the assistance of Joanne Warriner.

This publication is published by the law firm of Ervin Cohen & Jessup LLP. The publication is intended to present an overview of current legal trends; no article should be construed as representing advice on specific, individual legal matters, but rather as general commentary on the subject discussed. Your questions and comments are always welcome. Articles may be reprinted with permission. Copyright 2023. All rights reserved. ECJ is a registered service mark of Ervin Cohen & Jessup LLP. For information concerning this or other publications of the firm, or to advise us of an address change, please send your request to marketingemail@ecjlaw.com. 
