Essential Considerations for Making Your DPA AI-Ready

As AI-powered features become a critical part of modern business tools, ensuring your Data Processing Agreement (DPA) aligns with these advancements is more important than ever. Whether it's automated workflows, intelligent document review, or predictive analytics, AI is transforming the way we handle data—but it also brings new responsibilities to maintain trust and compliance.

Recent regulatory actions, such as Italy’s €15 million fine on OpenAI for privacy violations, underscore the importance of aligning AI tools with data privacy laws. Missteps in transparency, data handling, or compliance can result in significant financial and reputational risks. To mitigate these challenges, businesses should add an AI Addendum to their DPA. Here are seven key clauses to make it comprehensive and effective.

1. Clear Definition of AI Features

Avoid broad, sweeping definitions of AI that encompass all software, as this can create compliance issues and unrealistic expectations. Instead, define AI features based on their specific functionality and intended use, aligning with market norms and technological realities.

Parties should ensure the scope of these features complies with data minimization principles, avoiding unnecessary collection or processing of personal data.

  • Example Clause: "The AI Features include [specific features, e.g., automated clause redlining or key term identification] provided in connection with [Vendor’s product or service] (the “AI Features”)."

2. Deactivation of AI Features

Provide customers with the ability to disable AI Features at any time, so they retain control over how their data is processed. A realistic approach avoids terms that overcomplicate deactivation or create operational challenges.

  • Example Clause: "The Customer may deactivate the AI Features at any time. Upon deactivation, [Vendor] will cease processing data using AI Features, and Sub-processors will no longer engage in data processing."

3. Data Ownership

Reinforce customer ownership of data inputs and AI-generated outputs to build trust and meet legal obligations.

  • Example Clause: "All rights to Customer Content, including data inputs and outputs generated by the AI Features, remain the exclusive property of the Customer."

4. Data Transfers

Ensure compliance with cross-border data transfer regulations such as the GDPR by specifying the safeguards relied upon for such transfers. Missteps in this area, such as insufficient data protection measures, can lead to regulatory scrutiny.

  • Example Clause: "Where the use of AI Features involves the transfer of Personal Data to third countries, such transfers are subject to Standard Contractual Clauses as outlined in [Vendor’s] Data Processing Addendum."

5. Sub-Processor Disclosure

Customers should know which third-party sub-processors (e.g., OpenAI) are involved in processing their data and what their roles are. Sub-processors must be disclosed to comply with GDPR’s accountability and transparency requirements.

  • Example Clause: "[Vendor] uses third-party sub-processors, including [Subprocessor], for the provision of AI Features. Sub-processors are engaged solely to process data as necessary for delivering the AI Features."

6. Pass-Through of Sub-Processor Policies

Ensuring customers comply with the terms and policies of sub-processors, such as OpenAI, protects the vendor from liability and aligns customer usage with third-party requirements. However, vendors should avoid overly burdensome reporting or transparency requirements, especially for black-box systems like foundational AI models.

  • Example Clause: "The use of AI Features is subject to compliance with applicable data protection laws and the terms and policies of [Subprocessor], including [Subprocessor’s] Terms of Use and Service Terms, as updated from time to time. Failure to comply with these terms constitutes a material breach of this Addendum."

7. Indemnification for Third-Party Claims

Customers should avoid accepting broad liability for third-party claims; any indemnity should be limited to the customer's own misuse of the AI Features in violation of a sub-processor's policies. Vendors are better positioned to communicate AI policies clearly and to enforce compliance through their tools, and they should take responsibility for ensuring sub-processor compliance to meet legal accountability requirements. It is standard, however, for vendors to disclaim that they cannot guarantee flawless outputs or full compliance with all intellectual property frameworks, given the current limitations of AI.

  • Example Clause: "The Customer shall indemnify [Vendor] against claims from [Subprocessor] or its affiliates resulting from the Customer’s misuse of the AI Features or violations of [Subprocessor’s] terms and policies."

Final Thoughts

Aligning AI terms with real-world capabilities and risks is essential for creating effective agreements. Many vendors face challenges due to overly broad or unrealistic clauses in contracts that attempt to address every conceivable AI risk.

Some final considerations:

  • Avoid one-size-fits-all terms in favor of agreements tailored to the specific use case and risk profile.

  • Recognize that AI providers, particularly those relying on foundational models, cannot guarantee error-free or unbiased outputs due to the inherent nature of machine learning systems.

  • Consider non-contractual risk mitigation strategies, such as internal policies, employee training, and emerging AI-specific insurance products.

By combining clear, pragmatic terms with proactive risk management, businesses can ensure their DPAs are both flexible and resilient in the face of evolving AI challenges.