Client Alert: California's Civil Rights Council Approves New Regulations On AI In Employment | By: Jared W. Slater

The California Civil Rights Council has approved new regulations that clarify how existing anti-discrimination laws under the Fair Employment and Housing Act (“FEHA”) apply to the use of artificial intelligence (AI) and automated decision systems. These regulations become effective on October 1, 2025.

The new rules are not a ban on AI. Instead, they expand and clarify existing safeguards to ensure that AI tools do not result in disparate impact on, or disparate treatment of, employees based on FEHA-protected characteristics.

This development is distinct from AB 1018, a separate legislative proposal that was not signed into law this session. While AB 1018 is on pause for now, the new AI regulations are moving forward, and these new regulations require every California employer’s attention.

The regulations are broad and cover any Automated Decision System (ADS) used in employment decisions, from screening resumes to targeted job advertisements. Here is what employers need to know:

  • Proactive Bias Testing: Employers must audit all automated systems used in employment decision making for potential discriminatory impact before using them. The absence of such efforts could be used against employers in any subsequent legal claim.
  • Expanded Record-Keeping: The regulations double the required record retention period for ADS-related data to a minimum of four years. This includes dataset descriptions, scoring outputs, and all audit findings.
  • Vendor Responsibility: Employers are responsible for the actions of their third-party vendors. Any AI tools purchased or used for employment decisions must comply with these new rules.

Employers must also ensure that the use of AI does not interfere with the obligation to provide reasonable accommodations to individuals with disabilities. For example, if an AI tool evaluates an applicant's dexterity, reaction time, or tone of voice, the employer may need to provide an alternative assessment or method for a candidate with a disability. Further, the regulations clarify that an AI-based assessment that is "likely to elicit information about a disability" can be considered an unlawful medical inquiry. The key is to ensure that automated systems do not screen out qualified candidates with disabilities without providing an opportunity for reasonable accommodation.

Compliance with these regulations is crucial to avoid costly litigation and potential penalties. We recommend that employers take a complete inventory of all AI and automated systems used for employment decisions and consult with legal counsel to ensure that these processes are in full compliance with the new rules.

This publication is published by the law firm of Ervin Cohen & Jessup LLP. The publication is intended to present an overview of current legal trends; no article should be construed as representing advice on specific, individual legal matters. Articles may be reprinted with permission and acknowledgment. ECJ is a registered service mark of Ervin Cohen & Jessup LLP. All rights reserved.

