Managing AI Risks When Onboarding Vendors
Even if your vendors do not offer a specific artificial intelligence (AI) product or service, there is a good chance they license AI tools from another party. The proliferation of this technology has prompted new and modified terms and conditions in many vendor contracts. In this session at RISKWORLD® 2026, experts identified the evolving legal risks presented by a vendor’s use of AI and explained how to enhance vendor due diligence by asking the right AI risk-related questions.
Speakers included:
- John Koch, General Counsel, Insurance Counseling and Recovery at Flaster Greenberg PC
- Peter Wakiyama, Chair, Artificial Intelligence, Privacy, and Cyber Security Practice Group at Flaster Greenberg PC
- Steven Kuemmerle, Senior Risk Management, Global Risk Management at Jacobs
AI use is escalating, spreading across operations and creating a variety of new risks with real consequences. These risks can become landmines when your organization works with external companies that either provide AI tools for your business or embed AI features in other resources you use as part of your operations.
These risks can originate from:
- SaaS AI platforms, such as ChatGPT and image generation tools
- Application programming interfaces (APIs) for machine learning, such as data sharing to provide a service
- Enterprise AI integrations built organically within the organization
- AI development partnerships or licensing models
Emerging AI Risks
New risks of AI use include inadvertent disclosure of confidential information, trade secrets, and intellectual property. The use of third-party AI tools also raises privacy and regulatory compliance concerns.
One example of a privacy concern is the use of AI for transportation tracking, such as capturing license plates and vehicle makes and models. It is important to verify that any such tool is used in compliance with applicable privacy laws.
An example of a regulatory concern is the use of synthetic performers in marketing and advertising. New York recently passed a law requiring users of this technology to disclose that these performers are not real, helping protect consumers.
On the intellectual property side, patent infringement can become a significant risk.
Due Diligence & Contracting
Approach contract considerations by deciding what to cover, when, and how. To achieve this, it is important to determine how to integrate vendor AI risk management into your organization’s broader onboarding playbook.
The risk manager’s goal is to identify risks and avoid surprises. It is critically important to be clear on the nature of the relationship between your organization and your vendors and to define the roles of the provider (licensor) versus the user (licensee). Note that many algorithms produce invisible risks; uncovering these intangible exposures can be a mindset shift for many risk managers.
Risk managers should work hand in hand with their legal and IT departments to identify risks during the contract process. Be prepared to recognize the evolving risks presented by the vendor’s use of AI or your own company’s use of AI. Assist legal in contract negotiation to ensure that AI-related risks are addressed. Typical risk-shifting and insurance provisions may not be suitable for the risks posed by AI use, so confirm whether such language is included in the contract.
The following checklist of topics includes important contractual provisions to evaluate:
- License grants and intellectual property
- Restrictions
- Data rights
- Privacy and security
- Confidentiality / sensitive business information
- Liability and associated limitations
- Indemnification
- Insurance requirements (Will typical coverage apply to AI-related loss? Do you have necessary endorsements? Are special inclusions detailed?)
- Warranties or representations
- Processes related to termination and/or transition to a new vendor (What happens to your data?)
Auditing
Risks arise whenever two parties enter into a contractual relationship, because one or both parties is likely using AI, whether on your data or to produce deliverables.
The big takeaway is to know your vendor’s AI use. Ask the right questions: What AI is the vendor using? What data does it use, and how? How will it work? What risks may it pose?
This should also include identifying the AI usage of your vendor’s subcontractors and suppliers. There can be many layers, and if they are not identified, a risk manager can be blindsided later in the process.
Dig deep to identify the downstream risk. The tools risk managers have historically used for operations will not necessarily protect against these layers of AI providers. How far you need to go depends on the situation. It is becoming a best practice to map those layers, and to create risk categories by vendor, as part of the vendor onboarding process.
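The practice of creating risk categories by vendor could be sketched as a simple scoring routine applied to a due-diligence questionnaire. Everything below is a hypothetical illustration: the question names, weights, and tier thresholds are assumptions for the sake of example, not a prescribed framework from the session.

```python
# Hypothetical sketch: tiering vendors by AI risk during onboarding.
# Question names, weights, and thresholds are illustrative assumptions.

AI_RISK_QUESTIONS = {
    "uses_ai_on_our_data": 3,      # vendor runs or trains AI on our data
    "ai_in_deliverables": 2,       # AI-generated output is delivered to us
    "subcontractor_ai_use": 2,     # downstream AI providers are involved
    "no_ai_contract_language": 3,  # contract lacks AI-specific provisions
    "handles_personal_data": 3,    # privacy/regulatory exposure
}

def risk_tier(answers: dict) -> str:
    """Map yes/no questionnaire answers to a coarse risk category."""
    score = sum(w for q, w in AI_RISK_QUESTIONS.items() if answers.get(q))
    if score >= 8:
        return "high"
    if score >= 4:
        return "medium"
    return "low"

vendor = {
    "uses_ai_on_our_data": True,
    "ai_in_deliverables": True,
    "no_ai_contract_language": True,
}
print(risk_tier(vendor))  # 3 + 2 + 3 = 8, so this vendor tiers "high"
```

A tiering step like this gives each onboarded vendor a documented category that can drive how deep the audit goes, rather than leaving the depth of due diligence to ad hoc judgment.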
These audits should not stop at the contracting stage. AI capabilities and use are changing daily. Appreciate the importance of auditing AI use and recordkeeping during the relationship to proactively manage these potential risks to your organization.
