Services contracts for AI vendors, explained

Are you updating your terms because your company just incorporated AI into your offerings? If so, you might want to know what terms your fellow AI vendors are adding to their contracts.

We compared 50 vendor contracts before and after they incorporated AI into their offerings. Here are our primary takeaways.

Flow-through third-party terms

TLDR Vendors must review third-party AI providers' terms closely and pass down any necessary obligations to their customers. Vendors should ask customers to acknowledge and manage the risks associated with AI use.

Building foundation AI models requires time, money, and data. This means only a limited number of companies worldwide build these models. As a result, most vendors use APIs to integrate these models into their products and services, then fine-tune the models for specific use cases.
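
To make the pattern concrete, here is a minimal sketch of that integration, assuming the vendor builds on OpenAI's Python SDK; the feature, prompt, and model name are illustrative, not drawn from any vendor's actual product:

```python
# Minimal sketch: a vendor feature wrapping a third-party foundation model.
# Assumes the `openai` SDK (v1+) and an OPENAI_API_KEY in the environment;
# the function, prompt, and model choice are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize_meeting(transcript: str) -> str:
    """A vendor's product feature built on an upstream model the vendor does not own."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # the upstream provider's model, not the vendor's
        messages=[
            {"role": "system", "content": "Summarize this meeting transcript in three bullet points."},
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content
```

Every call like this inherits the upstream provider's terms, which is why the flow-down analysis below matters.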

Given this reliance on third-party technology, vendors need to carefully check their contracts with AI providers and flow down any use restrictions or obligations to their users. For example, a vendor using OpenAI may require customers to comply with OpenAI’s Usage Policies or Sharing and Publication Policy. Vendors must also avoid providing broader rights, representations, warranties, or indemnities to their customers than those imposed by these external providers.

OpenAI prohibits vendors from using its output to develop competing models.1 Vendors, in turn, must pass down this restriction to their customers to maintain compliance across the ecosystem. For example, Loom includes a similar clause in its customer agreements, prohibiting the use of Loom AI for creating foundation models or large-scale models that compete with Loom AI: “Customer may not use Loom AI to: (i) develop foundation models or other large scale models that compete with Loom AI. . . .” This approach has become standard practice among vendors.

Some vendors simply require customers not to “use [vendor services] in a way which would cause [vendor] to breach terms that [vendor] has agreed with OpenAI for this third party AI functionality.”2

These layered contracts pose compliance issues for customers and monitoring difficulties for vendors. Some vendors address this by implementing Acceptable Use Policies.

Enforcement of Acceptable Use Policies

TLDR Vendors should have clear AUPs and incorporate them into their service agreements. Vendors may consider adding measures for proactive enforcement of such policies.

In response to the potential compliance issues and to clarify use restrictions, AI vendor contracts almost always feature Acceptable Use Policies (AUPs). Companies like Airtable, Amplitude, Benchling, and Freshworks have adopted or updated their AUPs to formalize the do's and don'ts of AI usage. AUPs aren’t just static clauses in contracts but are evolving into dynamic frameworks for managing the relationship between AI vendors and their users. The shift towards active enforcement, like content scanning and usage monitoring3, indicates a proactive stance among vendors in ensuring compliance. In the future, content scanning may become the standard for enforcing AUPs, as it already is for Shopify's email services.4
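
As a sketch of what proactive enforcement might look like in practice, the snippet below screens content with OpenAI's moderation endpoint and records usage before serving a request; the policy decision and logging hook are assumptions for illustration, not any vendor's actual pipeline:

```python
# Sketch of proactive AUP enforcement: scan content and record usage before
# serving the request. The policy decision and logging are illustrative.
from openai import OpenAI

client = OpenAI()

def log_usage(user_id: str, flagged: bool) -> None:
    # Hypothetical usage-monitoring hook; a real vendor might track seats,
    # quotas, and violations here (cf. Alation's usage-review clause).
    print(f"user={user_id} flagged={flagged}")

def allowed_under_aup(user_id: str, prompt: str) -> bool:
    """Return True if the prompt may proceed under the acceptable use policy."""
    result = client.moderations.create(
        model="omni-moderation-latest",
        input=prompt,
    ).results[0]
    log_usage(user_id, flagged=result.flagged)
    return not result.flagged
```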

Disclaimers for AI Output

TLDR Vendors should include strong disclaimers regarding AI outputs. Transparency mechanisms like click-through agreements and safeguards mandating customer acknowledgment of terms can also help mitigate liability risks effectively.

AI outputs can misleadingly present inaccuracies with confidence, unlike traditional software that “fails loudly,” i.e., clearly indicates errors. This issue has become more noticeable, highlighted by incidents such as the ChatGPT lawyer who submitted a legal brief rife with made-up cases and a string of wrongful arrests in Detroit caused by faulty facial-recognition matches.

AI providers must proactively include comprehensive disclaimers in customer-facing contracts for several critical reasons:

  • AI technology cannot assure absolute factual accuracy and compliance with prevailing laws and industry standards. Thus, disclaimers act as a protective shield.
  • AI output quality depends on the training data and prompts from customers, making results unpredictable.
  • Liability risks increase when customers use AI outputs in ways not intended by the provider.
  • Many products offer AI features, like chatbots, for free or as an optional add-on to enhance customer experiences. Providers usually avoid liability for this non-core functionality and may emphasize that using AI is "optional."5

Users bear the final responsibility for interpreting and acting on AI outputs. Common AI-related disclaimers do the following:

  • Require customers to acknowledge the risks and limitations of AI, including potential inaccuracies, biases, and offensive outputs.6
  • Disclaim responsibility for decisions made based on AI outputs.7
  • Decline to guarantee that AI output suits the customer’s use case.8
  • Encourage human review to verify outputs.9

Vendors should disclaim both the accuracy and uniqueness of AI-generated outputs, stating that all outputs are provided "as-is" without accuracy guarantees. These disclaimers protect against the inherent unpredictability of outputs and the possibility of the AI producing similar outputs for different users, a characteristic of how machine learning models operate.10

At a more general level, vendors should adopt the following practices:

  • Make sure customers know when they're dealing with AI instead of a human. Click-through agreements or clear pop-up disclaimers can help with transparency.
  • Use technology safeguards to ensure customers read and agree to terms and disclaimers before using AI, like click-through agreements or explicit pop-up notifications (see the gating sketch after this list).
  • Consider requiring customers to develop internal policies and manage risks associated with AI outputs, as some vendors do.11
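
A minimal sketch of such a safeguard, with the version string, storage, and backend all invented for illustration: the AI feature refuses to run until the customer has accepted the current AI terms, and each acceptance is recorded as evidence.

```python
# Sketch of a click-through gate: the AI feature is blocked until the user
# accepts the current AI terms. The version string, in-memory store, and
# backend stub are all illustrative assumptions.
from datetime import datetime, timezone

AI_TERMS_VERSION = "2024-03"  # bump when disclaimers change to force re-acceptance
acceptances: dict[str, str] = {}  # user_id -> accepted version (stand-in for a database)

def record_acceptance(user_id: str) -> None:
    """Called only after the user clicks "I agree" on the displayed terms."""
    acceptances[user_id] = AI_TERMS_VERSION
    print(f"{user_id} accepted v{AI_TERMS_VERSION} at {datetime.now(timezone.utc).isoformat()}")

def generate(prompt: str) -> str:
    return f"[model output for: {prompt}]"  # stand-in for the real AI backend

def run_ai_feature(user_id: str, prompt: str) -> str:
    if acceptances.get(user_id) != AI_TERMS_VERSION:
        raise PermissionError("The current AI terms must be accepted before using this feature.")
    return generate(prompt)
```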

Protecting Data Ownership and Rights

TLDR Vendors should clarify ownership over AI output, distinguish between non-confidential usage data and confidential customer data, and assert ownership of improved AI models. Non-competitive contractual restrictions can safeguard proprietary technology.

Intellectual property rights may not cover AI outputs, but vendors can meet customer ownership requests by adding a clause that transfers output ownership to the customer.12 However, vendors should clearly state that they may not hold any rights to the outputs and that similar outputs may be produced for other customers.13

If vendors plan to use AI outputs for their own purposes, they should explicitly define these rights in the contract. Vendors typically retain ownership of "aggregated and anonymized" customer data, clearly specifying the data’s use for improving products and services or for monetizing insights through benchmarking and other means. For instance, Lucid14 reserves the right to use "Statistical Data" to improve its services and train machine learning algorithms, and Lilt15 claims exclusive ownership of the machine learning it trains on customer materials. Vendors must also clarify what "anonymization" means: whether the anonymity applies to individual users, the customer organization, or both.16
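
The distinction shows up in code as well as in contracts. In the sketch below (with invented data shapes), per-organization aggregation removes individual users but still identifies the customer, while cross-customer aggregation identifies neither:

```python
# Sketch: two readings of "aggregated and anonymized" usage data.
from collections import Counter

usage_events = [  # illustrative records, not a real schema
    {"org": "acme", "user": "u1", "feature": "ai_summarize"},
    {"org": "acme", "user": "u2", "feature": "ai_summarize"},
    {"org": "globex", "user": "u9", "feature": "ai_summarize"},
]

# Anonymous as to users, but still identifies the customer organization.
per_org = Counter(event["org"] for event in usage_events)

# Anonymous as to both users and organizations.
overall = Counter(event["feature"] for event in usage_events)

print(per_org)   # Counter({'acme': 2, 'globex': 1}) -- names the customer
print(overall)   # Counter({'ai_summarize': 3})      -- identifies no one
```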

Vendors should claim intellectual property rights for enhanced AI models resulting from data processing. Contracts should specify that vendors can use these improved models for any business purpose, clarifying that these models aren't considered part of the customer's confidential information.

In fact, confidentiality and use restriction terms are often more effective in controlling use of AI output, ensuring that customer expectations align with contractual terms. Some vendors classify usage data differently from "Confidential Information," emphasizing its critical role in product development and seeking to protect resulting intellectual property rights. For example, Alation has excluded "Customer Data" from its confidentiality clauses.17 Datadog makes a distinction between confidential "Customer Data" and non-confidential "Customer Operational Data." Databricks has similarly revised its terms so that usage data is not classified as "Customer Confidential Information" in its non-disclosure commitments.18 As with feedback, vendors aim to protect intellectual property rights in enhancements derived from using this data.19

To protect any product enhancements and improved models, vendors should also update their contracts to add or highlight non-competitive terms. These terms can help prevent customers from developing competing products using the vendor’s services.20 For instance, Databricks removed its permitted benchmarking provision,21 Datadog22 and Freshworks23 prohibit the use of their services for competitive intelligence or performance benchmarking purposes, and Loom strictly prohibits developing competing products. Evidently, AI vendors are taking contractual measures to prevent competitors from using their technologies for market insights or rival product development, showing their focus on safeguarding proprietary tech in a competitive market.

Vendors that don't develop or enhance their own AI models might opt for stronger protective measures to set themselves apart from competitors. For instance, Notion commits to not using Customer Data for training machine learning models, offering a clear distinction in its approach to data use and privacy compared to others in the industry.24

Exclude Liability for AI Output

TLDR Contracts should be updated to exclude AI output from indemnification clauses, shifting responsibility to customers for claims arising from their actions. Insurance terms are seldom included in vendor agreements.

Vendor liability and indemnification practices have remained relatively stable, typically limiting liability to 12 months of service fees. However, vendors dealing with data may offer a higher cap—up to twice the 12 months' fees—for breaches related to privacy, security, or confidentiality.
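
As a toy illustration of how these caps compose (the fee amount is invented):

```python
# Toy illustration of the liability caps described above; amounts are invented.
monthly_fee = 10_000  # USD, illustrative

general_cap = 12 * monthly_fee   # 12 months of fees -> $120,000
enhanced_cap = 2 * general_cap   # 2x cap for privacy/security/confidentiality breaches -> $240,000

print(general_cap, enhanced_cap)  # 120000 240000
```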

AI introduces complexities to intellectual property rights, such as copyright concerns with AI-generated content. Vendors typically assure customers that their technology doesn't infringe on third-party IP rights and offer indemnification. Yet this could unintentionally extend to AI outputs, raising liability risks. Vendors should consider revising existing contracts to exempt AI outputs from such clauses and clearly state in new agreements that risk allocation terms do not apply to AI-generated content.

Customers should indemnify vendors against claims arising from customer content, software misuse, or breaches of contract, including legal infractions, third-party IP conflicts, and misconduct.25

  • The customer controls how they use the AI tool, including content creation and software application. If the customer misuses the AI or breaches the terms of service, the vendor should not bear sole responsibility.
  • Customers must follow all relevant laws and regulations while using AI tools. If they use the AI for illegal activities or breach data privacy laws, they, not the vendor, should face the legal repercussions.
  • If a customer's behavior leads to a breach of agreements with third-party providers, resulting in legal action against the AI vendor, the customer should indemnify the AI vendor. The vendor often has limited influence over these third-party interactions.
  • Customers are accountable for the data they input or use with AI tools. Mismanaging personal data or failing to comply with privacy laws can result in legal liabilities for both the customer and the AI vendor. The customer should indemnify the vendor to reflect the customer's obligation to ensure data privacy.

Insurance terms rarely appear in vendor agreements.

Compliance with laws

While it may be challenging to entirely remove the requirement to comply with applicable laws from customer contracts, vendors should acknowledge that AI is subject to evolving regulations. Legal frameworks affecting AI range from President Biden's Executive Order on AI to data privacy regulations like the GDPR and sector-specific legislation like HIPAA.

AI providers can reduce contractual risks by specifying compliance with laws and regulations as they stand at the contract's signing or maintaining the option to terminate should legal changes substantially impact the provision of AI solutions or associated costs. Vendors must monitor legal changes closely to ensure continued compliance and prevent breach of contract claims. For instance, Zendesk's contract explicitly mentions the absence of obligation or liability for the company if governmental or regulatory actions limit service access.26

AI force majeure event

TLDR Events beyond a party's reasonable control, like outages at third-party AI providers, may constitute a force majeure event. Vendors should define force majeure events specifically to avoid unintentional termination rights and consider the impact of third-party service provider outages in service level terms.

The dependence on third-party AI providers introduces scenarios "beyond a party's reasonable control," such as outages in upstream AI providers affecting service availability. These incidents may qualify as force majeure events, raising the question of whether they activate termination rights with potential revenue recognition consequences for the vendor.

Buyers often want the right to terminate contracts after prolonged service outages. To prevent accidental termination rights, vendors should narrowly define force majeure events, excluding general scenarios "beyond a party's reasonable control".

Vendors frequently exclude downtime caused by third-party service providers from their service level agreements. For example, Notion specifies that downtime caused by third-party service failures won't count towards its Availability and Downtime calculations.27 This distinction recognizes the reliance on external services and their possible effect on service performance.
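
A toy calculation shows the effect of such a carve-out; the incident format and attribution field are assumptions for illustration:

```python
# Sketch of an SLA availability calculation with a third-party carve-out:
# downtime attributed to an upstream provider is excluded before computing
# availability. Incident shape and cause labels are illustrative.
def availability_pct(total_minutes: int, incidents: list[dict]) -> float:
    counted = sum(i["minutes"] for i in incidents if i["cause"] != "third_party_provider")
    return 100.0 * (total_minutes - counted) / total_minutes

incidents = [
    {"minutes": 90, "cause": "third_party_provider"},  # upstream model outage: excluded
    {"minutes": 30, "cause": "vendor"},                # vendor's own bug: counted
]

# 43,200 minutes in a 30-day month; only the vendor-caused 30 minutes count.
print(f"{availability_pct(43_200, incidents):.3f}%")  # -> 99.931%
```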

Iterable adopts a similar stance, noting that their AI service is not subject to standard service level agreements and may experience performance downgrades.28 These clauses emphasize the uncertain behavior of AI services, particularly those dependent on third-party providers.

Final thoughts

As companies integrate AI into their offerings, it's crucial to update contractual terms accordingly. At Pincites, we’ve crafted our services agreement to align with these key trends. Our agreement covers essential areas such as customer cooperation, service usage restrictions, and disclaimers for warranties and liabilities, in line with industry standards. Clear specifications on ownership of customer and usage data reflect transparency, with Pincites retaining rights over usage data for insights but not AI training. Overall, our services agreement demonstrates a comprehensive approach to addressing the complexities of AI integration and ensuring compliance and transparency in customer relationships.

Footnotes

  1. “You may not use our Services for any illegal, harmful, or abusive activity. For example, you may not: . . . Use Output to develop models that compete with OpenAI.” https://openai.com/policies/terms-of-use

  2. Notion's terms require compliance with OpenAI's policies, showing how third-party terms affect user behavior and extend OpenAI's reach into Notion's customer base. “Third Party Provider Policies. If you choose to use the Notion AI feature(s), you may not use the Notion AI features in a manner that violates any OpenAI Policy, including their Content Policy; Sharing and Publication Policy; and Community Guidelines.” https://www.notion.so/Notion-AI-Supplementary-Terms-fa9034c8b5a04818a6baf3eac2adddbb. Lucid's terms require customers to follow subprocessor terms, like Microsoft Azure OpenAI Service. “Third-Party Terms. Lucid uses the third parties listed in our subprocessor list to host and provide the Lucid AI. You understand and agree that each service provider may have their own AI services terms and conditions. For instance, Microsoft’s Service Specific Terms related to the Microsoft Azure OpenAI Service and Code of conduct for Azure OpenAI Service apply to your Use of the Lucid AI. We may use additional third-parties to provide other features in the future and will update our subprocessor list accordingly. Agreeing to abide by third-party terms and conditions is required to access Lucid AI.” https://lucid.co/legal

  3. Alation reserves rights for usage monitoring, including tracking users and limits, reflecting a trend in active user engagement management and consequences for exceeding boundaries. “Usage Monitoring. Alation reserves the right to periodically review the number of Named Users, number of connectors, apps, objects, and Customer usage.” https://www.alation.com/msa/

  4. Shopify's email content scanning signals a potential industry shift toward proactive AUP enforcement. “Shopify employs certain controls to scan the content of emails you deliver using the Email Services prior to delivery (“Content Scanning”). . . . By using the Email Services, you explicitly grant Shopify the right to employ such Content Scanning.” https://www.shopify.com/legal/terms

  5. “You acknowledge that the Lucid AI is an optional feature of the Subscription Service(s) overall and that you are free to stop using the Lucid AI at any time.” https://lucid.co/tos

  6. Lucid's terms highlight that customers must acknowledge the risks and limitations of AI. “By using the Lucid AI, you acknowledge and agree that (a) Lucid is not responsible for any inaccuracies or errors in the Output, (b) Lucid is not responsible for any biases or limitations of the underlying algorithms or data, and (c) Lucid is not responsible for any Output that you may find harmful or offensive.” https://lucid.co/tos

  7. Scale's agreement highlights the probabilistic nature of AI and machine learning outcomes, reminding customers that they are responsible for all decisions made based on the AI's output. “Decisions. Results and outcomes generated by machine learning algorithms and artificial intelligence are probabilistic and Customer should evaluate such results and outcomes for accuracy as appropriate for Customer’s use case, including by employing human review. Customer is solely responsible, and Scale will have no liability, for all decisions made, advice given, actions taken, and failures to take action based on Customer’s use of the Services or Output, including whether the Output is suitable for use in the Customer Application.” https://scale.com/legal/msa

  8. Loom's terms emphasize that they do not guarantee the accuracy or suitability of AI-generated information for specific use cases. “Loom does not make any warranty as to Loom AI, output, the results that may be obtained from the use of Loom AI or the accuracy of any information obtained through Loom AI, including with respect to the factual accuracy of any output or suitability for Customer’s use case. . . . Customer should not rely on factual assertions in output without independently fact checking their accuracy. no information or advice, whether oral or written, obtained by Customer from loom or through Loom AI shall create any warranty.” https://www.loom.com/loom-ai-supplementary-terms

  9. Scale's agreement emphasizes AI's probabilistic nature, making customers responsible for AI-based decisions. This underscores that users must interpret and verify AI-generated data. “Decisions. Results and outcomes generated by machine learning algorithms and artificial intelligence are probabilistic and Customer should evaluate such results and outcomes for accuracy as appropriate for Customer’s use case, including by employing human review.” https://scale.com/legal/msa

  10. “You acknowledge that due to the nature of machine learning and the technology powering Notion AI features, Output may not be unique and Notion AI may generate the same or similar output to Notion or a third party.” https://www.notion.so/Notion-AI-Supplementary-Terms-fa9034c8b5a04818a6baf3eac2adddbb

  11. “Risks and Limitations. Artificial intelligence and machine learning technologies have known and unknown risks and limitations. You acknowledge that you are solely responsible for developing your own internal policies regarding the appropriate use of these technologies and training other Users on your account on such policies.” https://www.bamboohr.com/legal/bamboohr-artificial-intelligence-addendum

  12. “The Customer owns the Input it provides through the AI Feature and is hereby granted rights, title and interests in and to Output.” https://www.panaya.com/ai-terms-and-conditions/

  13. “Customer acknowledges that, due to the nature of generative AI and the technology powering BambooHR’s AI features, BambooHR AI Output may not be unique and BambooHR AI may generate the same or similar output to BambooHR or a third party.” https://www.bamboohr.com/legal/bamboohr-artificial-intelligence-addendum

  14. “Statistical Data and Analyses. Lucid owns all rights to the Statistical Data and may perform analyses on Statistical Data and your Content (“Analyses”). Content utilized as part of Analyses will be anonymized and aggregated. Lucid may use Statistical Data and Analyses for its own business purposes (such as improving, testing, and maintaining a Subscription Service, training machine learning algorithms, identifying trends, and developing additional products and services). Provided that Lucid does not reveal any of your Confidential Information or the identity, directly or indirectly, of any User or entity, Lucid may publish Feedback and anonymized aggregated Statistical Data and Analyses. “Statistical Data” means data generated or related to the provision, operation or use of a Subscription Service, including measurement and usage statistics, configurations, survey responses, and performance results.” https://lucid.co/tos-mobile

  15. “Machine Learning. Customer acknowledges that a fundamental component of the Services provided through the Platform (as defined below), whether directly or indirectly, includes a method of optimization that uses computer programming to analyze data taught and trained from Customer Materials, creating a set of algorithms that extract knowledge from such data through statistical learning (“Machine Learning”). Therefore, Customer hereby grants to Lilt a royalty-free, worldwide, perpetual, irrevocable, fully transferable and sublicenseable right and license to use, disclose, reproduce, modify, create derivative works from, distribute, and display any Customer Materials incorporated into the Machine Learning, without obligation or restriction, for purposes of creating and using the Machine Learning. Excluding Customer Materials, such Machine Learning, including the method of optimization and the algorithms, are the exclusive property of Lilt and Lilt owns all right, title, and interest to the Machine Learning.” https://lilt.com/lilt-technologies-master-services-agreement

  16. Alation updated its terms to remove “Customer”. “Alation may access and utilize log files and metadata derived from Customer’s use of the Alation Cloud, to maintain and improve the Alation Technology, provided that such data is aggregated or otherwise anonymized and the Customer or the Named Users will not be identified.” https://www.alation.com/msa/

  17. Alation removed the clause “and Customer Data shall be deemed Confidential Information of Customer.” https://www.alation.com/msa/

  18. Databricks' revised terms read: “Usage Data. Notwithstanding anything to the contrary in the Agreement, Databricks may collect and use Usage Data to develop, improve, operate, and support its products and services. Databricks will not disclose any Usage Data that includes Customer Confidential Information to any third-parties unless (a) it is anonymized and aggregated such that it does not identify Customer or Customer Confidential Information; or (b) in accordance with Section 2 (Confidentiality) of this Agreement to perform the Databricks Services.” https://www.databricks.com/legal/mcsa

  19. “Neither this Agreement nor Customer’s use of the Service grants Customer or its End Users ownership in the Service, including any enhancements, modifications or derivatives of the Service. Amplitude may use techniques such as machine learning in order to improve the Services, and Customer instructs Amplitude to process its Customer Data for such purpose; provided that no Customer Data will ever be shared with any other customer and any such use by Amplitude shall comply with applicable law. For the avoidance of doubt, Customer retains all ownership of its Customer Data submitted to the Services and Amplitude retains all ownership in and to all System Data and machine learning algorithms.” https://amplitude.com/msa

  20. “Customer will not and will ensure Authorized Users do not: (a) reproduce, modify, adapt, or create derivative works of the Services; . . . use the Services to build competitive products. . . . Any use of data mining, robots, or similar data gathering and extraction tools or framing all or any portion of the Services without Loom’s permission is strictly prohibited. . . . Restrictions. Customer may not use Loom AI to: (i) develop foundation models or other large scale models that compete with Loom AI. . .” https://www.loom.com/terms

  21. Databricks removed this provision: “Permitted Benchmarking. You may perform benchmarks or comparative tests or evaluations (each, a “Benchmark”) of the Platform Services and may disclose the results of the Benchmark other than for Beta Services. If you perform or disclose, or direct or permit any third party to perform or disclose, any Benchmark of any of the Platform Services, you (i) will include in any disclosure, and will disclose to us, all information necessary to replicate such Benchmark, and (ii) agree that we may perform and disclose the results of Benchmarks of your products or services, irrespective of any restrictions on Benchmarks in the terms governing your products or services.” https://www.databricks.com/legal/mcsa

  22. “. . . use or permit others to use the Services other than for Customer’s operations and as described in the applicable Order, Documentation and this Agreement.” https://www.datadoghq.com/legal/msa/

  23. “Customer agrees not to use the Freshworks Technology . . . for competitive intelligence or performance benchmarking purposes.” https://www.freshworks.com/terms/

  24. “Improving Notion AI. Notion does not use your Customer Data or permit others to use your Customer Data to train the machine learning models used to provide the Notion AI Writing Suite. Your use of the Notion AI Writing Suite does not grant Notion any right or license to your Customer Data to train our machine learning models. Artificial intelligence and machine learning models can improve over time to better address specific use cases. We may use data we collect from your use of Notion AI to improve our models when you (i) voluntarily provide Feedback to us such as by labeling Output with a thumbs up or thumbs down; or (ii) give us your permission.” https://www.notion.so/Notion-AI-Supplementary-Terms-fa9034c8b5a04818a6baf3eac2adddbb

  25. “Customer will defend Company from and against all claims brought against Company arising or resulting from Customer’s misuse of the Software or Service, Customer’s breach of the terms of this Agreement, or any claims that Customer content violates any third-party rights.” https://www.responsive.io/msa-02-dec-19

  26. “Zendesk shall have no obligation or liability to Subscriber if a governmental or regulatory action restricts access to the Services, and Subscriber agrees that this Agreement and any Service Order expressly exclude any right to access the Services from a jurisdiction where such governmental or regulatory restriction is in effect.” https://www.zendesk.com/company/agreements-and-terms/main-services-agreement/

  27. “Service Level Terms Are Not Applicable. Notwithstanding anything to the contrary in your Agreement or the Service Level Terms, downtime of Notion AI that results from a failure of a third party service will not be included in the Availability and Downtime calculations.” https://www.notion.so/Notion-AI-Supplementary-Terms-fa9034c8b5a04818a6baf3eac2adddbb

  28. “Except where expressly stated, Iterable AI is not supported or subject to any Service Legal Agreements that have been (or may be) agreed between the parties and Iterable reserves the right to downgrade performance of Iterable AI at any time.” https://iterable.com/trust/additional-ai-terms-of-use