Measuring the Value of Superior Customer Service
by David M. Raab
Curtis Marketwise
January 2007
Many financial institutions—community banks and credit unions in particular—look to personal service as a way to distinguish themselves from the competition. But setting superior service as a goal is just the start. The real challenge is delivering it.

For executives, this means choosing among a near-infinite set of possible changes: in training programs, incentive plans, products and services, branch design and signage, marketing campaigns, business practices, and more. No organization can afford, or absorb, more than a few of these. How do you decide which to adopt? And how do you know after the fact whether you made the right choice?

Both questions have the same answer: you need a customer-level measurement system that lets you determine the anticipated value of any proposed change, and then measure its actual results once it is complete. Traditional measurement systems won’t work because they measure profitability by product, branch or account—anything but the customer—and report only past results. Marketing campaign analysis, which shows only single promotions, is even less useful. A service improvement program—or, indeed, any business change—must be judged by its impact on total customer behavior over a long period of time. In short, it requires measuring changes in customer lifetime value.

Organizations often shy away from lifetime value because it seems so unreliable. After all, how can anyone know what customers will be doing years from now, or what products will be offered or what business conditions will apply? Making investments on the basis of such speculative projections seems risky, if not downright foolish. And since lifetime value figures include profits from previous periods, isn’t it silly to use them to assess changes which can only affect results in the future?

These are legitimate questions, but they reflect a misunderstanding of how lifetime value is used to measure program results. When lifetime value is used for purposes such as finding the allowable acquisition cost of a new customer, the result is a single estimated value. This is often heavily discounted to allow for the uncertainty of future-year results. But a calculation to measure program results yields values for at least two scenarios: one with the program in place, and one as if the program had never been run. This latter value may be derived from historical results or, more scientifically, from a control group set aside in an actual test.

Results can still be discounted for uncertainty, but the really important measure is the difference between the two calculated values, not their absolute level. After all, it’s the change in value due to the new program that really establishes what that program is worth. Any other changes in future conditions, such as deviations from expected interest rate spreads, will affect both scenarios similarly, so the difference between the two values should remain about the same. If different assumptions do yield substantial changes in the difference between the scenario results, it’s worth exploring why this happens and calling out that particular assumption as an explicit risk to consider in assessing the proposed change.
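The two-scenario comparison can be sketched in a few lines of code. Everything below—the profit streams, the horizon, and the discount rate—is purely hypothetical, a minimal illustration of taking the difference between two discounted projections rather than any actual model:

```python
# Incremental program value: compare discounted future profit per customer
# under the "program in place" scenario against a control (no-program) scenario.

def discounted_ltv(annual_profits, rate=0.10):
    """Present value of a stream of projected annual profits."""
    return sum(p / (1 + rate) ** t for t, p in enumerate(annual_profits, start=1))

# Hypothetical five-year profit projections per customer (dollars/year).
with_program = [120, 130, 135, 140, 140]   # scenario: program in place
control      = [120, 122, 124, 125, 125]   # scenario: program never run

incremental_value = discounted_ltv(with_program) - discounted_ltv(control)
print(f"Incremental LTV per customer: ${incremental_value:.2f}")
```

Note that a change to a shared assumption, such as the discount rate, shifts both scenario values together, so the difference between them moves far less than either absolute figure.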

The objection to including past profits highlights an important issue: you have a portfolio of customers who are at varying stages in their lifecycles. Clearly the lifetime value calculations used to evaluate a change should only include projections of future behavior. But the impact of the change can vary greatly for customers at different life stages. To pick an obvious example, a change in new customer welcome procedures will have no impact on existing customers.

This simply means that the value of the change must be calculated for your actual inventory of existing customers, rather than some mythical “average” customer. Taking this a step further, you need to analyze how different groups of customers react to any change. You may well find that some groups are not affected, or even affected negatively. If so, you can seek ways to limit execution to customers for whom the change makes sense.
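As an illustration only, with invented segment-level test results, limiting rollout to the groups that actually benefit might look like this:

```python
# Hypothetical per-segment test results: incremental LTV per customer
# (treated minus control) measured separately for each customer group.
segment_lift = {
    "new customers": 45.0,
    "mid-tenure":    12.5,
    "long-tenure":   -3.0,   # the change actually hurts this group
}

# Roll the change out only where the measured lift is positive.
rollout = [segment for segment, lift in segment_lift.items() if lift > 0]
print("Roll out to:", rollout)
```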

Gathering lifetime value inputs requires tracking all the activities that impact customer value—service costs as well as account transactions—and linking these to form a complete picture of each customer’s results. These statistics must be recorded at regular intervals to track any changes in behavior, and projection models must be built to estimate their future impact. The models must also allow what-if simulation to assess proposed changes before they are even tested.
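A toy version of such a what-if simulation, assuming a simple retention-decay projection with invented inputs (profit level, retention rates, discount rate), might look like this:

```python
# What-if simulation on a toy projection model: future-only lifetime value
# as a function of annual profit and retention rate. All inputs are hypothetical.

def projected_ltv(annual_profit, retention, rate=0.10, years=5):
    """Expected present value of future profits, decayed by annual retention."""
    value = 0.0
    survival = 1.0
    for t in range(1, years + 1):
        survival *= retention          # probability the customer is still active
        value += annual_profit * survival / (1 + rate) ** t
    return value

baseline = projected_ltv(annual_profit=150, retention=0.85)
# What-if: a proposed service change is assumed to lift retention to 90%.
scenario = projected_ltv(annual_profit=150, retention=0.90)
print(f"Projected gain per customer: ${scenario - baseline:.2f}")
```

Running alternative assumptions through the same model in this way lets you rank proposed changes before committing to a live test.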

None of this is easy, but the result is worth the effort: a reliable way to translate your commitment to superior service into programs that improve the results of your business. Anything less and you’re flying blind, which is the greatest risk of all.

* * *
David M. Raab is a Principal at Raab Associates Inc., a consultancy specializing in marketing technology and analytics. He can be reached at draab@raabassociates.com.
