A Roadmap for Companies Developing, Deploying or Implementing Generative AI | By: Jeffrey R. Glassman
Posted in IP Insights

Generative artificial intelligence is moving from experimental pilot projects into enterprise-wide deployment at an unprecedented pace. Yet as companies accelerate adoption, regulatory bodies in the United States and abroad have issued new rulemaking, enforcement guidance and governance frameworks that create legal exposure. The evolution from pilot programs to production and deployment now requires structured compliance, contractual protection and board-level oversight.

California enacted Senate Bill 53 (“SB 53”) in 2025, establishing transparency obligations for developers and deployers of advanced foundation models. The law requires detailed public reporting regarding model capabilities, intended uses, risk assessments, third-party evaluations and safeguards. It also includes whistleblower protections and mandatory incident reporting to California’s Office of Emergency Services. Federal agencies have also signaled increased oversight, including: (a) Federal Trade Commission (“FTC”) guidance on AI in advertising and automated decisions, warning that deceptive or opaque AI use may violate Section 5 of the FTC Act; (b) Equal Employment Opportunity Commission (“EEOC”) guidance, clarifying that employers using automated hiring tools remain responsible for discriminatory outcomes; and (c) Consumer Financial Protection Bureau (“CFPB”) Circular 2024-03, noting that companies must provide “specific and accurate reasons” when AI contributes to adverse credit decisions. In addition, the European Union AI Act, expected to take effect in phases beginning in 2026, includes risk-tier classifications, transparency requirements for generative models, and obligations for U.S. companies offering AI-enabled services to EU residents. Accordingly, companies with an overseas presence must begin preparing for compliance regardless of California state and/or U.S. federal action.

Moreover, generative AI models can inadvertently reveal training data through model inversion or extraction attacks. If the model processes personal information, this may create violations under the California Consumer Privacy Act (“CCPA”) and California Privacy Rights Act (“CPRA”), other state privacy statutes, including those of Virginia, Colorado, Connecticut and Texas, and the General Data Protection Regulation (“GDPR”). Therefore, covered businesses should begin evaluating whether training datasets include copyrighted material, whether outputs could infringe third-party intellectual property, and whether client or employee prompts might become part of a model’s training corpus. Contractual controls and internal protocols are essential to avoid derivative-content issues, trade-secret exposure and copyright claims.

Enterprise-level adoption of generative AI is frequently built on third-party foundation models. Before implementing such models, companies should: allocate indemnity obligations to generative AI service providers; examine those providers’ data-processing and data-retention terms and conditions; secure audit rights concerning model behavior and logs; clarify which party owns customized models and derivative works; and revise service level agreements with generative AI service providers to address model availability, hallucination rates and timeframes for incident response.

New rules and regulations increasingly target automated decisions affecting employment, lending, housing and access to services. Therefore, it is imperative that companies not simply rely on generative AI service provider terms of service. Representations and warranties, covenants and agreements, and indemnification and limitation-of-liability provisions must be drafted in a way that reduces the risk of integrating generative AI into a company’s tech stack. Companies should adopt AI governance frameworks that include risk heat mapping, incident-escalation protocols, a complete inventory of AI tools deployed across the IT network, regular recurring audits of generative AI model performance, and periodic review of the transparency, accuracy and completeness of reports and risk assessments of generative AI tools.

In essence, AI risk now parallels cybersecurity risk and requires similar enterprise-level oversight. As a result, companies should negotiate provisions into their agreements with third-party generative AI service providers that: prohibit vendor training on client data; require minimum encryption standards, access restrictions and log retention; include performance guarantees related to AI model accuracy, error thresholds and fallback mechanisms; clarify the company’s ownership of model outputs, fine-tunes and embeddings; ensure access to model testing results and redacted risk reports; and provide indemnification covering IP infringement, data exposure and regulatory penalties. Simultaneously, companies should internally create AI acceptable-use policies, prompt-handling and confidentiality rules, model-audit procedures, record-keeping protocols for ongoing risk assessments, and disclosure requirements when AI is used in the company’s client- or consumer-facing interactions.

While generative AI offers significant operational and competitive benefits to businesses large and small, the transition from experimentation to company-wide deployment brings wide-ranging responsibilities that cannot be ignored. Between California’s new frontier AI requirements, expanding federal oversight, and global regulatory action, companies must adopt a proactive and defensible AI-compliance position. Businesses building or deploying generative AI systems must engage in responsible innovation and careful planning, balancing the need to take advantage of evolving AI opportunities with the importance of mitigating risk and maintaining regulatory compliance.

This publication is published by the law firm of Ervin Cohen & Jessup LLP. The publication is intended to present an overview of current legal trends; no article should be construed as representing advice on specific, individual legal matters. Articles may be reprinted with permission and acknowledgment. ECJ is a registered service mark of Ervin Cohen & Jessup LLP. All rights reserved.
