Privacy vs. Innovation: California's Proposed Regulations on AI and Automated Decision-Making

The California Privacy Protection Agency (“CPPA”) has unveiled draft regulations governing AI and automated decision-making technologies.  These technologies encompass systems that leverage machine learning, statistics, and data processing to evaluate personal information.  Such systems not only aid human decision-making; they also enable individual profiling, which triggers a potential need to safeguard privacy rights.  The draft regulations include, among other things, provisions for providing notice of AI technology use, delineating the scenarios and mechanisms for opting out of such use, and ensuring consumer access to information utilized by AI technologies.  As always, the profiling of children under 16 and the rules governing how consumer information can be employed in training these systems are of utmost concern.

The CPPA argues that the draft regulations encourage responsible automated decision-making while ensuring privacy safeguards, particularly for employees and children.  Similar to the laws and regulations associated with the California Consumer Privacy Act (“CCPA”), these proposed AI regulations would require businesses that use personal information in automated decision-making systems to transparently outline and disclose how that information is used and to provide opt-out avenues for California consumers.  If the initial notice proves inadequate, businesses must also furnish supplementary information through hyperlinks explaining how consumer data is used by their AI systems.  Additionally, disclosures must include evaluations of the technology's reliability and fairness.

Just like the CCPA regulations, the new automated decision-making regulations include exemptions that may apply to certain businesses using automated decision-making for security, fraud detection, consumer safety, and core service provision.  However, businesses leveraging such systems for behavioral advertising must, as under the CCPA, provide opt-out mechanisms to consumers, underscoring the ever-expanding importance of consumer choice and privacy as these technologies evolve and become more powerful.

The draft regulations also identify various scenarios in which consumers have the right to opt out, including legal determinations, student or employment evaluations, and monitoring in public places.  Notably, workplace monitoring tools for productivity tracking and public venue technologies like Wi-Fi tracking or facial recognition fall within this purview.  An essential point of contention is the handling of children’s information: like the CCPA, the draft regulations mandate parental consent mechanisms for monitoring individuals under 13 and require informing those between 13 and 16 of their opt-out rights.  Moreover, consumers would continue to possess the right to inquire about the use of AI-based automated decision-making technology affecting them, demanding insight into system logic, decision-making processes, and human influence on outcomes.  Once again, verifiable identity validation is a prerequisite for making such requests, with limited exceptions for cases involving consumer safety, security, or fraud prevention.

The CPPA's initiative is pivotal amid global discussions on regulating AI, echoing similar calls for regulation at the federal level in the U.S.  President Biden's executive order on AI and the EU's proposed AI Act underline the urgency of AI governance, and California's proactive stance in proposing these regulations sets a precedent for other states and may shape future federal legislation.  These draft CPPA regulations signal a significant stride in balancing the ethical use of AI with safeguarding individual privacy rights.  As California spearheads these groundbreaking AI-oriented regulations, the world looks on, hoping for a reasonable and balanced benchmark for responsible AI governance.

This publication is published by the law firm of Ervin Cohen & Jessup LLP. The publication is intended to present an overview of current legal trends; no article should be construed as representing advice on specific, individual legal matters. Articles may be reprinted with permission and acknowledgment. ECJ is a registered service mark of Ervin Cohen & Jessup LLP. All rights reserved.

