The Bridge From Current Tech to AI
Artificial intelligence (AI) is rapidly reshaping the risk management landscape, but understanding its capabilities and constraints is essential for responsible adoption. In this RIMS 2025 session, Steve Kappel, Corporate Senior Vice President and Chief Information Officer of Safety National, and James Benham, CEO & Co-Founder of JBK, Terra, and The InsurTech Geek Podcast, explored the foundational technologies behind AI integration, including APIs, workflow automation, and data warehousing. The discussion emphasized the importance of AI literacy across organizations and examined how poor planning, overhyped expectations, and overlooked ethical and legal issues can undermine success.
Why AI Matters and the Risks of an Uninformed Approach
AI is the most transformative technology available for driving efficiencies in our industry, but it also presents significant risks. Blindly trusting any technology is dangerous, so it is best to prepare and arm yourself with information. The consequences of not understanding AI include misplaced trust, bias, blind spots, and missed opportunities.
Being Informed
The two extremes are paralysis by analysis and blind trust. Moving cautiously is critical to data security and stability, and there should not be any “black boxes”: you should understand how the software reaches its conclusions. When an answer can be traced back to supporting facts, model bias is easier to detect and prevent.
Gen AI Is Not the Only AI
While current attention centers on new Gen AI models, older forms of AI have been around for decades. The insurance industry has leveraged machine learning and predictive analytics for quite a while. Predictive analytics is the use of traditional mathematical techniques, such as regression and scatter-plot analysis, to calculate and predict outcomes. Although AI has been in use for some time, many of the ethics and usage conversations have not yet occurred with industry executives. The goal of any AI is to find correlation and causality.
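To make the regression technique mentioned above concrete, the sketch below fits an ordinary-least-squares line to a handful of invented claim figures. All numbers, variable names, and the scenario (projecting claim cost from days open) are hypothetical, chosen only to illustrate the mechanics:

```python
# Illustrative only: ordinary least-squares regression on made-up claim data.
def fit_line(xs, ys):
    """Return (slope, intercept) of the least-squares line through the points."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
            sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Hypothetical data: days a claim stays open vs. total incurred cost ($k).
days_open = [10, 20, 30, 40, 50]
cost_k = [12, 19, 33, 41, 48]

slope, intercept = fit_line(days_open, cost_k)
predicted = slope * 60 + intercept  # projected cost of a 60-day claim
```

This is the kind of transparent, traditional model the speakers contrasted with Gen AI: every coefficient can be inspected and explained, so there is no black box.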
Building the Tech Foundation for Successful AI Adoption
Before any organization dives into AI, it should have the right core technologies in place to ensure success. This includes competence in fundamental areas like machine learning, programming, and data science, as well as an understanding of core concepts such as supervised, unsupervised, and reinforcement learning. This foundation is critical because it fuels AI models; establishes data infrastructure, governance, trust, compliance, and traceability; and enhances cybersecurity.
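To make “supervised learning” concrete: a model learns from labeled examples and then predicts labels for new, unseen cases. The toy sketch below applies a one-nearest-neighbor rule to invented claim records; all field names, values, and severity labels are hypothetical, and a production system would use a proper library and far more data:

```python
# Minimal supervised-learning sketch: 1-nearest-neighbor classification
# over hypothetical, pre-labeled claim records.
def classify(features, labeled_examples):
    """Return the label of the closest labeled example (Euclidean distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    nearest = min(labeled_examples, key=lambda ex: dist(ex[0], features))
    return nearest[1]

# (claim_amount_$k, days_open) -> severity label, invented for illustration.
training = [
    ((5, 10), "low"),
    ((8, 15), "low"),
    ((60, 120), "high"),
    ((75, 200), "high"),
]

label = classify((55, 90), training)  # closest training point is (60, 120)
```

Unsupervised learning, by contrast, would group these claims without any severity labels, and reinforcement learning would improve a policy through trial-and-error feedback rather than labeled data.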
The Importance of Cloud
Many cloud platforms, such as Microsoft Azure, Google Cloud, and Amazon Web Services (AWS), along with other Software-as-a-Service (SaaS) offerings, are available. If your organization wants to take advantage of Gen AI solutions through the cloud, it needs a solid cloud operating model. It is not just about the elasticity of the cloud, which allows rapid system development for testing and proofs of concept, but also about developing a security pattern to manage those interactions. If your company is not already in the cloud, it should first establish a sound operating model, including financial operations management.
Bridge Technologies
Application programming interfaces (APIs), robotic process automation (RPA), workflow automation, and data warehousing are all options for bridge technologies. RPA can repeat trained work, but it will not operate independently or develop anything novel: once trained on a process, it can repeat the actions for efficiency. RPA serves as an early entrance to AI, familiarizing companies with the technology, though it is still essential for a human to stay involved. RPA and APIs can work together to automate menial tasks so employees can focus on higher-value analysis and better decision-making.
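The API-plus-automation pattern described above can be sketched as follows. Everything here is hypothetical: the claim IDs, field names, and dollar threshold are invented, and `fetch_claim` stands in for a real API call (e.g., an HTTP GET against a claims system) so the example is self-contained. Note the human-in-the-loop routing for high-value claims:

```python
# Sketch of an API + workflow-automation bridge pattern.
# All endpoints, fields, and thresholds are illustrative assumptions.
def fetch_claim(claim_id):
    # Stand-in for an API call such as GET /claims/{id}.
    fake_claims_db = {
        "C-100": {"id": "C-100", "amount": 2500},
        "C-200": {"id": "C-200", "amount": 80000},
    }
    return fake_claims_db[claim_id]

def triage(claim, auto_approve_limit=10_000):
    """Route small claims automatically; flag large ones for human review."""
    if claim["amount"] <= auto_approve_limit:
        return "auto-approved"
    return "needs human review"

# Automated workflow: pull each claim via the API layer, then triage it.
results = {cid: triage(fetch_claim(cid)) for cid in ["C-100", "C-200"]}
```

The automation handles the menial fetch-and-route work, while anything above the threshold is escalated to a person, mirroring the session's point that a human must remain involved.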
Educating Your Team & Managing Risks
AI is not just another tool in risk management; it must be managed as a risk. The following strategies can help manage those risks.
- Conduct bias and fairness audits regularly.
- Implement model governance frameworks (e.g., model documentation, version control, and explainability).
- Build AI ethics boards or review panels.
- Ensure human-in-the-loop systems for all high-impact decisions.
- Train decision-makers in AI literacy to interpret outputs wisely.