
Improving Cycle Time With Business Intelligence-Part 3

The second, and often most wasteful, phase of cycle time is the time that elapses between a company noticing a problem or opportunity and deciding to take action.  If you’ve been an executive for very long, you will be painfully familiar with the following scenario:

  1. Your CFO (or finance analyst, or controller) announces in a figure review, “A giant radioactive lizard is crushing homes on its way to destroy Tokyo, which is causing higher than planned property insurance losses” (or insert your own business problem here).
  2. The executive team engages in a lengthy discussion of the issue.  “Maybe the lizard will die soon of radiation poisoning.”  “Do we have a problem with our underwriting process that allows us to write too many homes between Tokyo and the ocean?”  “Maybe if we increased agent compensation, the lizard would turn around and carefully tiptoe back into the sea.”
  3. Eventually, the executives agree to form a team (or task force, or committee) to investigate the issue, determine what, if anything, the company should do about it, and report back in two (or three or six) weeks.
  4. By the time the team reports, Tokyo is in ruins, and so is your property insurance P&L.

There are a number of ways in which a strong business intelligence system can prevent or mitigate this scenario.

Automatic Triggers in The Executive Dashboard

One option to address the above scenario is to incorporate into your executive dashboard automatic triggers that initiate solution preparation.  The idea of incorporating elements of statistical process control into the executive dashboard has been floating around the insurance industry for many years.  The problem is that the randomness inherent in insurance means that any tolerances set wide enough to avoid false positives will produce so many false negatives as to render the exercise meaningless.

There are two ways to mitigate this Type I/Type II error problem.  One is to increase the credibility of the automatic trigger by considering other corroborative metrics in the trigger algorithm.  For example, an increase in property damage claim frequency for non-fleet auto that is simultaneously accompanied by increases in a company’s personal auto and commercial fleet auto claim frequencies has more credibility than one that is not.  Competitor data, analogous insurance products, and sometimes analogous geographic markets can all be sources of corroborative data.
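As a minimal sketch of what such a corroborated trigger might look like (the metric names, z-score thresholds, and required number of confirming books below are all assumptions for illustration, not a prescribed design):

def corroborated_trigger(primary_deviation, corroborating_deviations,
                         primary_limit=2.0, corroborating_limit=1.0, min_corroborating=2):
    """All deviations are expressed as z-scores (standard errors above plan).

    primary_deviation: e.g., non-fleet auto property damage claim frequency vs. plan.
    corroborating_deviations: e.g., personal auto and commercial fleet auto frequencies.
    """
    if primary_deviation < primary_limit:
        return False                       # primary metric within tolerance; no action
    confirmations = sum(1 for z in corroborating_deviations if z >= corroborating_limit)
    return confirmations >= min_corroborating   # fire only with independent confirmation

# Frequency up 2.4 standard errors, confirmed by two related books of business:
print(corroborated_trigger(2.4, [1.3, 1.8, 0.2]))   # True -> begin solution preparation

The specific thresholds matter less than the structure: requiring corroboration lets the primary tolerance stay tight without flooding the executive team with false alarms.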

The second approach to dealing with the Type I/Type II error issue is to set the automatic triggers only for those metrics where a deterioration/deviation has a significant enough negative impact on the business to justify the cost of solution development even if the alarm turns out to be false.  For example, a deterioration in new business quality may be a serious enough problem to justify immediately developing a set of underwriting process/rules changes to offset the deterioration even if the trend reverses itself in the next month’s metrics 80% of the time, because the 30-day gain in cycle time (and resulting improvement in new business quality) from proceeding immediately to solution development is worth far more than 5 times the cost of solution development.
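Using the rough numbers above (which are illustrative, not actual figures), the arithmetic behind “always trigger” looks like this:

p_false_alarm = 0.80           # trend reverses itself in next month's metrics 80% of the time
cost_of_solution_dev = 1.0     # normalize solution development cost to 1 unit
value_of_30_day_gain = 6.0     # assumed: the cycle-time gain is worth more than 5x that cost

# You pay for solution development either way; the gain materializes only when the alarm is real.
ev_trigger_now = (1 - p_false_alarm) * value_of_30_day_gain - cost_of_solution_dev
print(f"Expected net value of triggering immediately: {ev_trigger_now:+.2f}")
# Break-even occurs when the gain equals cost / (1 - p_false_alarm) = 5x the cost, which is why
# a gain "worth far more than 5 times the cost" justifies triggering despite 80% false alarms.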

An Agreed-on Source of the Truth

Another way in which a robust executive dashboard can help to reduce the time it takes a company to take action on a problem is by eliminating “data wars.”  This phenomenon is particularly common in large insurers where each functional department can afford a small staff (or army) of data retrieval and analytics experts.  Each department creates its own reports and analyses, and defines its own metadata.  When one department (often Finance) reports that there is a business problem, another department (usually the one that “owns” the problem) produces its own report proving that there is no problem.  The ensuing debate consumes a significant amount of precious cycle time and in extreme cases may prevent the problem from ever being recognized.  This problem is especially pernicious at companies with “PowerPoint” corporate cultures, where looking good in the executive conference room is more important than actually achieving successful business results.

The way to eliminate data wars is to incorporate a single view of corporate truth into the executive dashboard and then to require that each functional area’s dashboards be drill-down versions of the enterprise one, using the same sources and metadata.  This requires significant upfront effort to rationalize and reconcile the competing data views currently used within the enterprise.  It also means it is critical that the dashboard and its functional components be absolutely correct representations of the actual results of the business.  If you are only going to have one source of the “truth,” it had better be the real truth.

Implementing a unified version of data for the enterprise is a difficult exercise, and the functional leaders will hate and resist it (I know I would have) as an infringement on their autonomy, but it is critical to achieving any sort of corporate nimbleness.

Improving Cycle Time With Business Intelligence-Part 2

In Part 1 we discussed the five phases included in cycle time.  The first phase is the time that elapses between a problem or opportunity arising and the point at which the company notices.  There are three business intelligence tools that have a significant impact on this phase:

  • Executive Dashboards
  • Competitor/Market Intelligence
  • Environmental Scanning

From Executive Dashboard to Executive Cockpit

Historically, executive dashboards have been the tool of choice for executives to identify problems with the performance of the business.  Each week or month, a one- or two-page (or sadly, in at least one company, a 54-page) report is produced with the key metrics of the business, usually compared to prior year or to expected.  In some cases, the report is provided in the form of a “figure review” presentation with accompanying commentary, usually by the finance or controller’s function.  There are several disadvantages to this historical approach:  1) Reporting lag: business problems have been impacting the financial results for some period of time before the report is produced and read by the key executives.  2) Random variation: business results can be impacted by simple random variation (e.g., in weather), so executive teams often decide to “wait a month or two” to see if the problem continues to occur in the data, adding 30-60 days to the eventual cycle time.

Leading-edge companies are addressing these issues by moving from static dashboards that observe current results to dynamic dashboards that anticipate results and explain the key levers driving them.  Instead of learning in the June 2014 monthly dashboard that “claim frequency is up 5%,” an executive learns in the July 2013 monthly dashboard that “overall quality of new business deteriorated by 7% last month and the defection rate among our best customer segments increased by 200 bps: unless these trends are reversed, claim frequency will rise by 5% in 12 months.”  This new approach enables executives to identify emerging problems sooner and with more confidence.
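As a toy sketch of how such an anticipatory metric might be wired into a dashboard (the indicator names and coefficients below are invented for illustration; in practice they would come from a model fitted to the company’s own history):

LEADING_INDICATOR_WEIGHTS = {
    "new_business_quality_change_pct": -0.5,       # assumed: 7% quality deterioration -> +3.5% frequency
    "best_segment_defection_change_bps": 0.0075,   # assumed: +200 bps defection -> +1.5% frequency
}

def projected_frequency_change(indicators):
    """Project the 12-month-forward change in claim frequency from this month's leading indicators."""
    return sum(LEADING_INDICATOR_WEIGHTS[name] * value for name, value in indicators.items())

print(projected_frequency_change({
    "new_business_quality_change_pct": -7.0,       # new business quality down 7%
    "best_segment_defection_change_bps": 200.0,    # best-segment defections up 200 bps
}))  # -> approximately 5.0, i.e., claim frequency projected to rise about 5% in 12 months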

Competitor/Market Intelligence

Many insurers discover much too late that a combination of close-to-quote ratios, a few standardized rate comparisons, and a formalized process for capturing producer feedback is not a sufficiently robust market intelligence system, usually after they have been blindsided by a competitor’s new product offering or a significant change in consumer behavior that renders their brilliant new strategy obsolete overnight.  Small to mid-sized carriers who have adopted a “fast follower” approach often find that their lack of a good market intelligence system has turned them into “slow followers.”

A strong market intelligence system provides nearly real-time information on new tactics (e.g., products, producer compensation, marketing spend, claims processes, etc.) implemented by key competitors and changes in consumer behaviors and preferences.    The most cost-effective competitor intelligence functions usually consist of a small core team (often one or two employees) with a larger number of correspondents (usually in Pricing, Underwriting, Claims, Distribution, and Marketing) who are responsible for reporting any competitor activity (actual or rumored) to the core team.  The core team has three responsibilities: 1) Facilitating a biennial assessment of key competitors’ strategy by senior management of the business unit; 2) Analyzing the reports from correspondents and providing an assessment of the credibility of the information as well as an interpretation of the actions in light of the competitor’s strategy;  3) Making a database with both the competitor strategies and the reported competitor activities available to key decision makers in all functions and at all levels throughout the company.

A robust market intelligence system also includes a rate comparison system, such as Choicepoint’s InsurView, that measures the insurer’s competitiveness for consumers/customers who are actually shopping in the marketplace.  The system should include the ability to identify specific customer segments and contain prices for key competitors for each segment rather than generalized competitor prices (if you seldom compete against Competitor A for customers in Segment B, why do you care about their prices there?).  Another key enhancement is the ability to incorporate algorithms that mimic the heuristic processes consumers use in their “switch/not switch” decision so that the system can forecast expected close rates rather than just provide “win” rates as an output.
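A minimal sketch of what forecasting close rates (rather than just tallying wins) might look like; the segment names, premiums, and logistic parameters are hypothetical, and a real implementation would fit the switching heuristic to observed shopping behavior:

import math

def close_probability(our_price, best_competitor_price, price_sensitivity=8.0, inertia=0.5):
    """Simple shopper heuristic: likelihood of closing grows with the relative savings we offer,
    damped by switching inertia (hassle, agent relationship, billing setup, etc.)."""
    relative_savings = (best_competitor_price - our_price) / best_competitor_price
    return 1.0 / (1.0 + math.exp(-(price_sensitivity * relative_savings + inertia)))

segments = {
    # segment: (our quoted premium, best competing premium actually offered to that segment)
    "young_urban_renters": (1450.0, 1300.0),
    "suburban_multi_car":  (2100.0, 2250.0),
}

for name, (ours, theirs) in segments.items():
    print(f"{name}: expected close rate {close_probability(ours, theirs):.0%}")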

Environmental Scanning

Often, less-than-nimble insurers will find out about an emerging opportunity when they see a competitor exploiting it or about a change in the marketplace when the problem it causes shows up in business results.  The best way to prevent these occurrences is a strong environmental scanning process, and the key to having an affordable and manageable, but strong environmental scanning process is to focus on a few important areas:

  1. Consumer Needs, Preferences, and Behavior
  2. Technology
  3. Risk Environment

Monitoring how consumers shop for, choose, and purchase insurance is the most critical element of environmental scanning.  At a minimum, an insurer should have its own personalized version of the J.D. Power U.S. Insurance Shopping Study completed and fully analyzed each year.  Insurers also need to monitor their target customers’ use of the web, mobile technology, and social media, and how that use impacts (or doesn’t) their behaviors regarding various insurance products.  This category also includes monitoring developing customer needs.  For example, homeowners insurance carriers have done a remarkably poor job of anticipating the growth of and changes in the home business market.

Another area that insurers must monitor is technology, including the “data and analytics world” (e.g., big data, business analytics, data mining, visualization, systems dynamics modeling), the “processing world” (e.g., cloud computing, mobile technology), and the “real world” (sensor technology, satellite imagery, self-driving autos).  This is one area where insurers can utilize their consultants more effectively.  PwC routinely updates advisory practice clients on future trends (including technology trends) and how they might impact the insurance industry.  I suspect other consulting firms do as well.  Finding a way to integrate, track, and augment those briefings is the first step towards a robust technology scanning strategy.

The final critical area of environmental scanning is monitoring changes in (or changes in our understanding of) the risk environment.  Some of this is defensive, simple, and straightforward: if your company sells liability insurance in the US and you are not monitoring plaintiff attorney workshops on “the next big thing to sue for,” you are an idiot. Or lunch. Or both.  However, some risk scanning is complex and will identify both opportunity and peril.  Consider the March 2011 tsunami in Japan and the (painful) lessons commercial insurers and customers have learned about the interaction of lean supply chains, international sourcing, overoptimistic infrastructure robustness estimates, and contingent business interruption.  A better scanning process should have surfaced both the risk and the opportunity.

Hockey superstar Wayne Gretzky once famously said of his success, “I skate to where the puck’s gonna be, not where it’s been.”  While a strong business intelligence system may not enable that level of precision, it will help us to see the puck coming and catch it with our glove rather than with our teeth.

Improving Cycle Time With Business Intelligence-Part 1

All useful analysis of data serves a single purpose: to enable executives and employees to make business decisions.  Decision outputs can range from the simple and binary (e.g., should we accept or reject a particular risk?) to the extraordinarily complex (e.g., what price should we charge in each of the millions of auto pricing cells in Illinois to optimize the lifetime value of our personal lines customers there?).  The underlying decision process can be intuitive or rules-driven, and in rules-driven processes the rules can range from very simple to very complex.  But in every case, the goal of business intelligence is the same: put the right information in front of the right person at the right time, so that better decisions can be made more quickly.

Traditionally, insurers have defined cycle time as the period that elapses between the development of a new product (or sales program, or new underwriting rules, etc.) and the implementation of that new product in the marketplace.  This is an incomplete and dangerously incorrect definition.  In reality, cycle time in the insurance industry is the time that elapses between the point where a problem or opportunity arises and the point where an initiative that effectively solves the problem or capitalizes on the opportunity has been implemented in the marketplace.  This definition encompasses 1) the time that elapses between a problem/opportunity arising and the company becoming aware of it, 2) the time that elapses between the company becoming aware of the problem/opportunity and deciding to take action to address it, 3) the time that elapses while a solution is developed, 4) the time to implement the solution, and 5) the time that elapses until the solution produces the desired result in the marketplace.

Although the speed and decision quality of steps in the process can be improved on a number of dimensions, including organizational design (e.g., clear decision-making authority and accountability), improved change management (e.g., effective, just-in-time training), and more flexible transaction processing systems, a robust business intelligence environment can improve the speed and quality of every step in the cycle.  A robust business analytics environment can also be surprisingly affordable: various elements of the environment can be emphasized or de-emphasized depending on the insurer’s overall market strategy.  Over the next three posts, we’ll examine the key components of a business intelligence environment and how each can improve decision speed and quality.

Creating a Property Wind Loss Score Using 3-D Modeling

Most of the major advances in the insurance industry over the past 30 years have not been the result of brilliant new ideas, but have been based on older ideas that were made feasible by advances in technology.*  Another one of these advances is occurring now and early adopters will gain significant competitive advantage in property insurance, particularly in homeowners insurance.

The merger of Eagleview Technologies and Pictometry this January probably flew under your radar unless you work in property claims, but it highlights an important new capability.  Based on aerial photography, Eagleview/Pictometry can provide 3-D modeling of both the roof and walls of a building.  It has long been a rule of thumb that (all other things being equal) single-story homes perform better in windstorms than multi-story homes, that hipped roofs perform better than gabled ones, and that homes with less window area perform better than homes with more.  The problem: it was not cost-effective to collect and digitize that information in a uniform framework for an insurer’s homeowners book of business.  The technology used by Eagleview has that capability.

The opportunity goes far beyond mere data collection, however.  This technology can place a digital 3-D model of a property in its geospatial position, which, when combined with LIDAR terrain mapping (topographic maps with two-foot contours are available) and modeled winds (hurricane, tornado, and straight-line) available from vendors like AIR and EQECAT, can be used to generate an expected loss for any property from any storm.  These models can be calibrated by modeling actual storms and comparing the actual loss to the modeled loss for each risk, much as is done with hurricane modeling today.
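A highly simplified sketch of how such an individual-risk wind loss estimate might be assembled; the damage function, factor values, and inputs below are assumptions for illustration only, and a production model would be calibrated against actual storm losses as described above:

ROOF_FACTOR = {"hip": 0.8, "gable": 1.0, "flat": 1.2}   # assumed relativities from 3-D roof geometry

def expected_wind_loss(replacement_cost, roof_type, stories, glass_ratio,
                       terrain_exposure, modeled_gust_mph):
    """Expected loss for one property under one modeled wind field."""
    # Base damage ratio grows rapidly with gust speed above an assumed damage threshold.
    excess = max(modeled_gust_mph - 70.0, 0.0)
    base_damage_ratio = min((excess / 100.0) ** 2, 1.0)
    vulnerability = (ROOF_FACTOR[roof_type]
                     * (1.0 + 0.15 * (stories - 1))      # multi-story penalty from the 3-D model
                     * (1.0 + 0.5 * glass_ratio)         # window area relative to wall area
                     * terrain_exposure)                 # openness/elevation from LIDAR terrain mapping
    return replacement_cost * min(base_damage_ratio * vulnerability, 1.0)

# Single-story hip roof vs. two-story gable on more exposed terrain, same modeled 110 mph gust:
print(expected_wind_loss(300_000, "hip", 1, 0.10, 1.0, 110))
print(expected_wind_loss(300_000, "gable", 2, 0.20, 1.1, 110))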

The uses for individual risk wind loss modeling are myriad:

  1. Pricing: most insurers do not have a full set of wind losses (even tornado or straight-line) in their ratemaking data, which often results in inaccurate pricing, both in overall rate level and by pricing cell.  This approach would enable historical loss data to be augmented with modeled losses for ratemaking purposes.
  2. An index reflecting differences in expected average wind loss by risk could be added as a new rating variable to reflect the combined impact of roof shape and composition, building height, and glass exposure.
  3. Resistance to wind damage could also become a variable in insurers’ underwriting, concentration-of-risk management, and hurricane PML management for property insurance.
  4. Windstorm claims management: insurers will be able to quickly model the number of catastrophe claims adjusters to send to a geographic location and provide those adjusters with a triaged list of customers who have sustained the most damage.

As with most things, using the new 3-D technology to improve property underwriting and pricing will require real effort to build and implement.  However, all the components currently exist and the first carrier who pulls it all together will have a remarkable competitive advantage.

*For example, I studied a paper on a specific type of GLM rating plan analysis when I took Part 5 of the CAS exams in 1983; the article was 20 years old then.  GLMs were adopted in the late 1990s/early 2000s because the computing resources available to pricing functions finally had the computational power to do the calculations required for GLM analyses.

Fixing Retention-“The Customer” is Dead, Long Live “The Customers”

Many years ago, one of my friends in marketing had a blue blanket with the words “The Customer” emblazoned on the front, which he hung on an empty chair in a corner of the conference room whenever he was in a meeting.  He would start every meeting by pointing at the “empty” chair and saying, “We need to keep ‘The Customer’s’ viewpoint in mind during this meeting.”  I was reminded of that blanket recently when the leader of a top auto insurer approached me expressing frustration with the lack of success of his company’s latest customer retention effort.  “We tried to deliver on everything the customer said was important,” he complained, “but we just didn’t get much bang for the buck.”  I replied, “I’m not surprised, because there is no such thing as the customer.”

In case you missed that,  I’ll say it again: There is no such thing as “the customer.”  There are customers.  Plural. With an “s.”  And they think about insurance in different ways,  shop and buy insurance differently, and are attracted to different product features and benefits.  They want different things from their relationship with an insurance company.   The retention improvement programs the executive had described were the traditional “one size fits most” approach that insurers use.  These programs are wasted on  customers who don’t appreciate them and may actually annoy some customers enough to increase their defection rate.

This article in Insurance and Technology, which I coauthored with Punita Gandhi, Anand Rao, and Scott Busse, outlines an approach to retention improvement that segments customers according to their shopping and switching behaviors.  This type of approach enables insurers to provide customers with the specific value proposition and service that they demonstrate they want based on how they actually behave.

Customizing your retention programs to fit individual customer segments requires strong data/predictive modeling skills and a framework to quickly and effectively run A/B test-and-control experiments.  The results, in terms of improved retention, can be well worth the effort.
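A minimal sketch of a segment-level test readout (a two-proportion z-test); the segment, counts, and retention rates below are hypothetical:

import math

def retention_lift_significant(retained_control, n_control, retained_treatment, n_treatment, z_crit=1.96):
    """Is the treatment group's retention rate different from control at roughly 95% confidence?"""
    p_c = retained_control / n_control
    p_t = retained_treatment / n_treatment
    p_pool = (retained_control + retained_treatment) / (n_control + n_treatment)
    std_err = math.sqrt(p_pool * (1 - p_pool) * (1 / n_control + 1 / n_treatment))
    z = (p_t - p_c) / std_err
    return p_t - p_c, abs(z) >= z_crit

# Hypothetical "price-sensitive shoppers" segment: did the tailored retention offer help?
lift, significant = retention_lift_significant(retained_control=4150, n_control=5000,
                                               retained_treatment=4310, n_treatment=5000)
print(f"Retention lift: {lift:+.1%}, statistically significant: {significant}")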

“The Customer” is dead.  Long live “The Customers.”

Telematics, Self-Driving Cars and Insurance Armageddon (Part 2)

Last week’s post left the auto insurance industry in dire peril, but with a minimum of 12-15 years before that peril becomes imminent.  So how can auto insurers prepare?

“I want to say one word to you. Just one word…Telematics.”  However, insurers must integrate telematics into their product design in a different and much more advanced way than the current iterations.  Today, insurers are using telematics to improve their ability to estimate the expected loss for individual customers and price accordingly.  Progressive’s Snapshot program uses six months of telematics-observed driving behavior to determine how well a vehicle is being driven and then uses that information to set prices for all subsequent renewals.  Allstate’s Drivewise program prices each renewal based on telematics-observed behavior during the preceding 12 months.  In each case the core value proposition to consumers is “try our product and get a lower price.”

Insurers who pursue this “telematics as a pricing tool” approach are hoping to simultaneously gain competitive advantage in pricing accuracy and reverse the current trend toward price transparency, which they fear will lead to a commoditized market where consumers shop and switch far more frequently (e.g., the UK auto insurance market after the rise of the online aggregators).  While this will buy the carriers who make the best pricing use of telematics another decade or so, eventually advances in information technology and insurers’ own insatiable appetite to show other companies’ customers how much money they can save by switching will result in price transparency (more on this in a future post).  Telematics as purely a pricing tool is an evolutionary dead end.

It is common knowledge that the most powerful variable in current telematics algorithms is hard braking.  This is not surprising: hard braking is generally an attempt to avoid an accident, and drivers with a higher number of “near-accidents” are more likely to have a real accident.  However, in this context, hard braking isn’t a causal variable; it is a result of the same complex interaction of driver behavior while performing specific driving maneuvers at specific locations under specific traffic and road conditions that leads to real accidents.  From a loss control standpoint, telling a driver to engage in less hard braking is about as useful as a doctor telling a patient that she will live longer if she contracts fewer life-threatening diseases.

If insurers begin to view vehicle maneuvers such as hard braking or sudden swerves as “near-accidents” and include them along with “real accidents” as dependent variables in models that include both the type and quality of other vehicle maneuvers, vehicle location, time of day, traffic conditions, road conditions, etc. as independent variables, they will develop much more powerful models that can:

1. Provide a basis for true exposure-based pricing, where a mile driven straight on a rural interstate highway at 10 a.m. at the posted speed limit is many times cheaper than a rush-hour mile driven in an urban area with 5 unprotected left turns at an average of 12 mph above the posted speed limit (a toy per-mile pricing sketch follows this list).

2. Create the opportunity for insurers to provide real loss control to customers, both in the form of “money-saving suggestions” (if you took this route to work instead of your current commute, you would save $127 annually) and real-time feedback to drivers as they operate the vehicle.  Small-scale tests have found that even a simple red/green feedback process, where drivers were notified of unsafe maneuvers as they drove, significantly reduced the number of unsafe maneuvers.

3. Enable insurers to “educate” the guidance software of self-driving cars if and when they emerge.  It is unlikely that self-driving cars will initially be as capable of defensive driving as truly good drivers.  Yes, I know the F-35 can fly itself even in combat, but the sensor suite and the software cost the GDP of a small nation, so self-driving cars will be working off something a bit less elaborate.  And navigating Chicago or NYC roads at rush hour is a lot more dangerous than aerial combat.  Insurers will have the actual driving experience of millions of drivers, each making hundreds of driving maneuvers per day, resulting in millions of accidents and near-accidents per year.  This data will become even more powerful as insurers begin to incorporate the sensor (e.g., lidar) data from the self-driving vehicles.  Insurers can then compete on the value proposition that “Our product gets you where you are going faster and more safely, and in the unlikely event that there is an accident we will protect you financially and get you back on the road.”
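Here is the toy per-mile pricing sketch promised in item 1; the base loss cost, relativities, and categories are invented for illustration, and in practice they would come from models fit on the combined accident and near-accident data described above:

BASE_CLAIM_COST_PER_MILE = 0.03   # assumed expected loss cost for a benchmark mile

RELATIVITIES = {
    "road":     {"rural_interstate": 0.4, "suburban_arterial": 1.0, "urban": 2.5},
    "time":     {"mid_morning": 0.8, "rush_hour": 1.4, "late_night": 1.6},
    "behavior": {"at_speed_limit": 1.0, "well_over_limit": 1.8},
}

def price_per_mile(road, time, behavior, unprotected_left_turns=0):
    """Expected loss cost for one mile of driving in the stated context."""
    relativity = (RELATIVITIES["road"][road]
                  * RELATIVITIES["time"][time]
                  * RELATIVITIES["behavior"][behavior]
                  * (1.0 + 0.10 * unprotected_left_turns))   # each unprotected left adds exposure
    return BASE_CLAIM_COST_PER_MILE * relativity

# A rural interstate mile at 10 a.m. at the speed limit vs. an urban rush-hour mile with
# five unprotected left turns driven well over the limit:
print(price_per_mile("rural_interstate", "mid_morning", "at_speed_limit"))
print(price_per_mile("urban", "rush_hour", "well_over_limit", unprotected_left_turns=5))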

Although in a world populated by self-driving cars the total premium size of the market will decline as accidents are significantly reduced, the surviving insurers will have larger market shares, and because they have moved from the “moving the money around” loss reimbursement business to a value-added loss prevention business, margins will likely be better as well.

So what should auto insurers be doing now? Here are a few suggestions:

1. Make sure your telematics partners/vendors are capable of growing with you as you eventually move from a pure pricing model to a loss prevention model.  Do the devices collect accelerometer data in sufficient detail that you will be able to recognize specific driving maneuvers and how well or poorly they are performed?  Do they serve enough carriers to provide a large enough database for analysis?  Do the modelers developing your pricing algorithms have the capability to build GLMs on multiple dependent variables and to model independent variable interactions at the two-way, three-way, and perhaps higher levels?  (A brief modeling sketch follows this list.)

2. Think about what your loss-control-based product would look like and how you would transition from your initial pricing-only product to that longer-term product.  Ensure that the design of your initial telematics product leaves you with a natural transition to a loss control product.  Allstate’s Drivewise, which continues to report data after the initial period and provides some driver feedback, leaves a much more natural transition to loss control than Progressive’s Snapshot.

3. Above all, pay attention to what is going on in the market.  Watch how the auto manufacturers are progressing with their self-driving car efforts (Google gets all the press, but they’re not the only ones working on it).  Watch for new driver-assist safety features for steering or braking.  Monitor your competitors’ telematics products and their modifications.
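As a brief sketch of the kind of modeling capability item 1 asks about, here is a frequency GLM fit on synthetic data, where accidents and near-accidents are combined into a single event count and a two-way interaction between context variables is included (statsmodels is assumed to be available; a fuller treatment might model accidents and near-accidents jointly):

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000
trips = pd.DataFrame({
    "road": rng.choice(["rural", "urban"], size=n),
    "time": rng.choice(["daytime", "rush_hour"], size=n),
    "miles": rng.uniform(5, 50, size=n),
})
# Synthetic ground truth: urban rush-hour miles generate disproportionately more events (an interaction).
rate_per_mile = (0.002
                 * np.where(trips["road"] == "urban", 2.0, 1.0)
                 * np.where(trips["time"] == "rush_hour", 1.5, 1.0)
                 * np.where((trips["road"] == "urban") & (trips["time"] == "rush_hour"), 1.8, 1.0))
trips["events"] = rng.poisson(rate_per_mile * trips["miles"])  # accidents + near-accidents per trip

# Poisson GLM with miles as exposure (log offset) and a road x time interaction term.
model = smf.glm("events ~ road * time", data=trips,
                family=sm.families.Poisson(),
                offset=np.log(trips["miles"])).fit()
print(model.summary())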

And for those of you who think self-driving cars will never be a reality, think about this: I was one of the last people in America to learn to use a slide rule because, explained my high school chemistry teacher, “The calculator is a fad, and will never catch on.”


Telematics, Self-Driving Cars and Insurance Armageddon (Part 1)

I recently had a conversation with author Chunka Mui about Google’s self-driving car for his Forbes column regarding the technology’s impact on the insurance industry.  To describe the ultimate impact as disruptive is a vast understatement.  In a world populated entirely by self-driving vehicles, the number of accidents will decline significantly, reducing the overall potential premium volume (by some estimates, by as much as 70-80%).  In addition, the potential for loss will be influenced primarily by the quality of the vehicle’s guidance system software and sensor suite, rather than by individual driver skill and behavior.

The impact of this change on the auto insurance industry could be devastating.  This post’s title references Armageddon, but at least one side wins that battle.  A better metaphor might be Norse mythology’s Ragnarok, where everyone dies  and a new world emerges. Let’s consider a likely scenario:

  1. An already overcapitalized industry will be pursuing a market less than half the current size.
  2. The principal basis of competition in the industry for the past two decades has been the ability to accurately predict the expected losses of individual customers, with better pricers experiencing faster growth and better profitability.  All the data and tools developed over that period will be rendered obsolete.
  3. The best knowledge about customer loss potential will no longer be the intellectual property of auto insurers, but will reside with the developers of the guidance system software and/or the auto manufacturers, significantly reducing their barriers to entry.
  4. The determination of “fault” in auto claims will migrate from a driver error model to a product liability model, further reducing barriers to entry for the software developers and/or auto manufacturers.

The result is a hypercompetitive market with the potential for powerful new entrants, where the relative competitive advantage of the current market winners has been entirely eliminated.  Bad.  Very, very bad.

However, as with most major market changes, companies will win or lose the battle during the transition, and there is time (and opportunity) if insurers begin preparing now.

  1. It will still be several years before the first self-driving cars become commercially available, even using the most optimistic estimates.
  2. Even when most new cars sold are self-driving, we will still be operating in a fleet of mixed operator-driven and self-driven vehicles for another 10-15 years.
  3. The existing highway infrastructure will need to be significantly upgraded before vehicles can be truly self-driving (e.g., construction zones, lane markings under snow, detours).

The outcome of this is that any vehicle sold as “self-driving” during the next 15-20 years will be operated in both self-driven and operator-driven modes (for those of you who doubt this, imagine driving Chicago’s outbound Eisenhower Expressway at afternoon rush hour while trying to leave a safe distance between you and the car in front of you).

The mixed-mode marketplace has several consequences for insurers.  First, for the first decade or so, accident frequency will not decline as much as predicted, and may even go up slightly, as the reduction from the self-driving aspect is offset by the deterioration of overall driver skills caused by less driving experience (especially since the operators will be taking over the driving duties during the most hazardous circumstances).  Second, for that same decade, auto insurers with better pricing skills will maintain their competitive advantage, especially relative to potential new entrants.  Third, the determination of fault in accidents will involve driver-driver, driver-software, and software-software situations, maintaining auto insurers’ current claims settlement advantage over potential new entrants for a decade or more.  The result of all this is that the earliest date my nightmare scenario could fully emerge is 12 years from now.

So what should insurers do during this 12+ year transition to ensure their survival?  Check back next week for my recommendations in Part 2.  (Hint: check out my interview on telematics in PwC’s Technology Forecast for a clue.)
