David M. Raab
Relationship Marketing Report
July, 2000
By all rights, service bureaus should be extinct. Burdened with high-cost, slow-moving mainframes, these dinosaurs of the marketing services world should long ago have been supplanted by more nimble, less costly in-house systems. The growing desire for real-time, company-wide integration between marketing and operational systems, which seems to require that the marketing database reside in-house, should have been the final nail in the coffin.
Yet service bureaus continue to prosper. The reason is simple enough: most firms lack the skills to build and maintain a serious marketing database themselves. Such skills have always been rare, and today’s shortage of all types of computer staff has made them still harder to find. Combine this with the risk of in-house development and the need to move quickly in an ever-more-competitive marketplace, and the bureaus’ promise to deliver a sophisticated system in a reasonable time at a controllable cost is nearly irresistible.
But even in the relatively stable world of marketing service bureaus, change does occur. One of the more interesting recent developments is a shift away from traditional volume-based pricing–where customers were charged on a per-thousand basis for every processing step–to a flat-fee model where customers buy a certain amount of hardware, software and staff capacity and are free to use it pretty much as they please. Although few firms have moved to a pure flat-fee model, many have moved in that direction. (In fact, a recent study by Raab Associates found only three of ten proposals relied exclusively on traditional unit-based pricing; in half the proposals, over 50% of the fees were not volume-related.)
Part of the reason for the change is technical. Traditional mainframe technologies involved large computers whose capacity greatly exceeded the needs of most individual service bureau clients. The databases were typically maintained through large, periodic batch updates during which one client’s work took over most of the computer’s resources for a brief period of time. In this environment, it made sense to share one large computer among many clients, and to charge those clients based on the proportion of that computer’s capacity that they consumed. Pricing was set by finding some measure of utilization such as processor cycles, figuring how many units of that measure could be processed when the machine ran at its practical capacity, calculating the cost to operate the computer (including downtime and overhead), and arriving at a cost per unit to charge clients. The effect was to translate a largely fixed cost–maintaining a large mainframe computer–into variable unit costs. This approach may sometimes result in prices that do not reflect true underlying costs: if the computer is used at less than the expected load, costs are not fully covered; if demand so exceeds capacity that a new computer must be added, the incremental cost is much higher than the price charged the customer. But avoiding these dangers forces managers to pay close attention to capacity management, which is one of the keys to success in a high-fixed-cost environment. So, somewhat paradoxically, variable-cost pricing makes sense when you are running a fixed-cost mainframe.
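The unit-pricing arithmetic described above can be sketched as a toy calculation. All of the numbers below are hypothetical, invented purely for illustration:

```python
# Toy illustration of mainframe unit pricing; all figures are hypothetical.

# Assumed annual cost to operate the mainframe, including downtime and overhead.
annual_operating_cost = 2_400_000.00  # dollars

# Assumed practical capacity: billable processing units per year
# (e.g., thousands of records processed) after allowing for downtime.
practical_capacity_units = 800_000

# The fixed cost of the machine, translated into a variable price per unit.
price_per_unit = annual_operating_cost / practical_capacity_units
print(f"Price per unit: ${price_per_unit:.2f}")  # $3.00

# The danger the article notes: if actual demand falls short of the
# expected load, revenue at that unit price no longer covers the fixed cost.
actual_units = 600_000
revenue = actual_units * price_per_unit
shortfall = annual_operating_cost - revenue
print(f"Cost not recovered at 75% load: ${shortfall:,.0f}")  # $600,000
```

The same arithmetic cuts the other way at the top of the range: once demand forces the purchase of a second machine, the incremental cost far exceeds the unit price, which is why capacity management matters so much in this model.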
But today’s service bureaus have increasingly moved away from mainframes to Unix or Windows NT-based servers. These systems cost much less per unit of capacity than mainframes, except perhaps at the highest end of the capacity scale. More important, the smallest individual systems are much cheaper than the smallest mainframes. This means it now does make sense to think of dedicating a single machine to an individual client. At the same time, clients are increasingly moving away from large periodic batch updates to smaller, more frequent updates–for example, daily instead of monthly. This removes the periodic spike in capacity demand that was the other major reason to use shared rather than dedicated machines to handle each client’s work.
But there’s more to this story than technology. At the same time that traditional marketing service bureaus are moving away from volume-based pricing, the most exciting new variation of the service bureau model is the application service provider or ASP. And guess what? Most ASP charges are based on volume.
The difference isn’t due to technology: nearly all ASPs run Unix or NT-based servers. And it isn’t due to usage patterns either: most ASP systems provide frequent if not real-time data access; few do large infrequent batch updates. Nor is there any fundamental difference in customer goals: people hire ASPs for the same reasons they hire traditional service bureaus, to get sophisticated systems running faster and more reliably than they could do it themselves.
It appears the reason is a bit more subtle. Many ASP implementations are for operational systems, such as accounting or human resources management, or for production-oriented marketing applications such as outbound email campaigns or Web site data analysis. This is a fairly sharp contrast to the marketing databases supported by traditional service bureaus: the exact use of these systems is often not understood when they are created and is expected to change over time.
In other words, the unstructured nature of a traditional marketing database makes it particularly suited to flat-fee pricing. In the quasi-operational world of ASP systems, each transaction has a fairly clear value: the marketer presumably can judge whether each piece of email is worth the incremental cost of sending it; the accounting people are comfortable with paying a little extra for each additional journal entry. But the value of a particular marketing analysis is just about impossible to predict, so there is no way for a marketer to justify the incremental cost of conducting it. This makes unit-based pricing particularly uncomfortable. Even worse, if the marketer does discover some valuable new application that involves much more intensive use of the database, volume-based pricing penalizes this success by driving up costs sharply. This is especially irksome because the marketer knows full well that the unit prices charged by the vendor are much higher than the true incremental costs of the added processing volume, so much of the cost increase is simply higher profit for the vendor.
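The cost penalty on success can be made concrete with a toy comparison of the two pricing schemes. The per-query price, flat fee, and usage figures below are all invented for illustration:

```python
# Hypothetical comparison of volume-based vs. flat-fee pricing when a
# marketer's database usage grows; every figure here is invented.

def volume_cost(queries: int, price_per_query: float = 5.00) -> float:
    """Vendor charges a unit price for each query processed."""
    return queries * price_per_query

def flat_cost(queries: int, monthly_fee: float = 20_000.00) -> float:
    """Marketer buys a fixed bucket of capacity; usage within it is free."""
    return monthly_fee

baseline = 3_000        # queries per month before the new application
after_new_app = 9_000   # queries per month after a successful new application

for label, usage in [("baseline", baseline), ("after new app", after_new_app)]:
    print(f"{label}: volume = ${volume_cost(usage):,.0f}, "
          f"flat = ${flat_cost(usage):,.0f}")

# Under volume pricing the successful new application triples the bill
# (from $15,000 to $45,000 per month); under flat pricing the cost is
# unchanged until the marketer chooses to buy more capacity.
```

The point of the sketch is not the particular numbers but the shape of the curves: volume-based cost rises in lockstep with experimentation, while the flat fee stays level until the marketer deliberately steps up to a larger bucket.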
The fixed price approach lets the marketer make judgements about how to allocate limited resources without facing the risk of sudden and unexpected changes in cost. This is a much more congenial environment for the experimentation and evolution that are the object of most conventional marketing databases. And, of course, if the marketer does find an application that significantly increases capacity requirements, there is still the ability to add hardware and support services in relatively small increments. So flexibility is retained.
Now that the distinction between structured and unstructured processing has been made, older… er, more experienced observers will also recognize that the structured processing done by ASPs resembles the tasks that service bureaus provided in the days before marketing databases: things like merge/purge and postal standardization. Of course, the service bureaus charged for these on a per unit basis, and they usually still do.
In short, while the switch from mainframe to server-based technology has something to do with service bureaus’ change from volume-based to fixed pricing, it is not the only reason. Marketers who are evaluating vendor pricing schemes, or vendors who are designing such schemes, should also consider the nature of the task at hand. Volume-based pricing makes the most sense when the task is highly structured and well defined. Fixed pricing–that is, buying a bucket of capacity to be applied as the user pleases–makes more sense when the tasks and their values are less well understood.
* * *
David M. Raab is a Principal at Raab Associates Inc., a consultancy specializing in marketing technology and analytics. He can be reached at draab@raabassociates.com.