Predictive Claims Models
At the 2016 Lockton Complex Risk Symposium in St. Louis, a panel discussed the use of predictive claims models. The panelists were:
- Mark Moitoso – SVP Analytics Practice Leader, Lockton
- Gary Anderberg – SVP Claim Analytics, Gallagher Bassett
- JJ Schmidt – SVP Managed Care, York Risk Services
- Melissa Dunn – VP and Managing Director, Helmsman Management Services
- Christopher Makuc – Liability Estimation and Insurance Coverage Analysis, Navigant
What does big data really mean?
Big data refers to a real phenomenon: we can now examine more data points than we have ever been able to in the past. Modeling is built on the volume of data points, and the larger the database, the more accurate the models become. Analyzing more variables allows for a more reliable determination of the outcome you are trying to identify.
The challenge today is to incorporate the narrative into the data analysis. The volume of adjuster notes, medical records, and similar documents is the greatest source of information in a claim. In addition, adjuster notes are available for analysis long before medical records, so analyzing this information allows for more timely development of the model. Text mining is being used for this, and it shows tremendous potential.
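As a rough illustration of the kind of text mining the panel described, the sketch below scores adjuster notes for escalation risk with a simple bag-of-words classifier. The sample notes, labels, and model choice are assumptions for illustration only, not a description of any panelist's actual system.

```python
# Minimal text-mining sketch: score adjuster notes for escalation risk.
# The example notes, labels, and model choice are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: note text plus a label marking claims that
# later developed adversely (1) or resolved as expected (0).
notes = [
    "claimant reports persistent back pain, requesting specialist referral",
    "minor laceration, returned to full duty, claim closing",
    "opioid prescription renewed, no return-to-work date discussed",
    "physical therapy completed, released without restrictions",
]
labels = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(notes, labels)

# Score a new note; a high probability would raise an alert for the adjuster.
new_note = ["second surgical opinion requested, attorney now involved"]
print(model.predict_proba(new_note)[0][1])
```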
At the end of the day, big data and analytics are being used to provide a tool that assists the adjuster in handling claims.
Where can we be more effective with modeling?
One of the biggest challenges we face is that our industry tends to be a slow adopter of change and new technology. Other industries have been using predictive analytics much longer than the insurance industry.
How do you set up and monitor predictive modeling?
First you need to know your goals. What is your objective? Is it to identify claims that need medical case management, assignment of a different adjuster, or something else? Then you start at the end and work backwards: look at claims that had adverse and unexpected development and try to identify common elements in those claims. Finally, there needs to be a level of accountability. The data needs to be gathered consistently, and the model applied consistently.
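A minimal sketch of that "work backwards" step is below, assuming a hypothetical extract of closed claims with initial reserves and final incurred amounts; the sample rows, column names, and the 2x threshold are illustrative only.

```python
# Sketch of "starting at the end": label closed claims whose final incurred
# cost far exceeded the initial reserve, then see which attributes are
# over-represented among them. Rows, column names, and threshold are hypothetical.
import pandas as pd

claims = pd.DataFrame({
    "injury_type": ["back strain", "laceration", "back strain", "fracture"],
    "initial_reserve": [5_000, 1_500, 6_000, 12_000],
    "final_incurred": [42_000, 1_200, 7_000, 55_000],
})

claims["adverse"] = claims["final_incurred"] > 2.0 * claims["initial_reserve"]

# Compare adverse-development rates across a candidate element such as injury
# type; large gaps point to common elements worth building into the model.
print(claims.groupby("injury_type")["adverse"].mean().sort_values(ascending=False))
```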
You must have a sizable database that is embedded in your claims platform. Most organizations do not have the volume of data needed to develop an accurate analytic tool. It is important to automate as much of the process as possible, including the data mining.
No predictive model is effective in a vacuum. There has to be collaboration to change the behavior in order to change the trajectory of the claim. Identifying potential problem claims has no value if you do not take action as a result of that identification.
It is also important to be objective and open to new possibilities. If you go in with preconceived notions about what you are going to find, you will stop your analysis as soon as you find what you are looking for. However, that may not be the proper answer.
What things are emerging in this area?
Companies will be compared on their decision support systems, and those with better systems should be able to achieve better results. Investing in this area will become a necessity, not an option. Voice analytics, for example, is a technology that can help identify fraud by analyzing the claimant’s interview with the adjuster and determining the likelihood that they are telling the truth.
Something to keep in mind is that newly emerging risks can significantly impact your model. Fifteen years ago, opioid medications became predominant in workers’ compensation claims and significantly increased costs on the claims tail. The model needs to evolve to address these emerging risks.
What type of output should clients expect to see?
The alerts issued by the models can be shared with the insured. Dashboards can provide a variety of information to clients, including reserve trajectory. These models can be a good tool for showing anticipated additional reserve development.
The big question is whether these models are actually working. Some evidence that they are includes higher claim closure rates, faster reserve development to ultimate, and better return-to-work results. However, the reality is that it is very difficult to separate the impact of these models from the other variables affecting the claims and the claims handling process. While their effectiveness makes sense in theory, in practice it is very challenging to fully attribute changes in claim outcomes to the models.
Modeling also allows an element of auto-adjudication on claims, which in turn allows the adjusters to focus more on those claims that need the additional attention.
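As a rough sketch of what that split might look like, the rule below routes claims scoring under a threshold to straight-through processing and the rest to an adjuster queue; the threshold, field names, and claim IDs are assumptions, not a panelist's actual workflow.

```python
# Hypothetical triage rule: low-scoring claims are auto-adjudicated,
# high-scoring claims go to an adjuster queue for additional attention.
AUTO_ADJUDICATE_THRESHOLD = 0.2  # assumed cutoff; tuned in practice

def route_claim(claim_id: str, model_score: float) -> str:
    """Return the handling path for a claim given its model score."""
    if model_score < AUTO_ADJUDICATE_THRESHOLD:
        return f"claim {claim_id}: auto-adjudicate"
    return f"claim {claim_id}: assign to adjuster queue"

print(route_claim("WC-1001", 0.05))
print(route_claim("WC-1002", 0.74))
```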
How do you ensure data quality?
That is always a challenge with any analytics. There has to be oversight to ensure things are being coded properly. If you see a spike in one area, you need to validate the data set to confirm it is an actual spike rather than inappropriate coding.
Adjuster notes tend to be more accurate than adjuster coding, so systems that compare data mined from different sources can help spot potential errors.
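One way such a cross-check could work is sketched below, under assumed field names and a single keyword: flag claims where the mined note text mentions surgery but the coded surgery field says none.

```python
# Sketch: cross-check adjuster coding against text mined from the notes.
# Column names, the keyword, and the sample rows are hypothetical.
import pandas as pd

claims = pd.DataFrame({
    "claim_id": [101, 102, 103],
    "surgery_flag": ["N", "Y", "N"],
    "notes": [
        "claimant scheduled for arthroscopic surgery next month",
        "post-surgical follow-up went well",
        "sprain resolving with conservative treatment",
    ],
})

# Flag rows where the notes mention surgery but the coded field says none.
mentions_surgery = claims["notes"].str.contains("surg", case=False)
mismatches = claims[mentions_surgery & (claims["surgery_flag"] == "N")]
print(mismatches[["claim_id", "surgery_flag", "notes"]])
```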
Will we see analytics around workforce characteristics used in underwriting?
This is already happening in the group health and group disability side. It is just a matter of time until workers’ compensation catches up in this area.
Final thoughts
It takes teamwork to make predictive modeling work. Information technology builds the models, but claims must be able to implement them.
As we try to shift to a more advocacy-based claims model, the people suited for those jobs are not necessarily the people best suited for analytics. Because of this, the more we can auto-adjudicate, the better.