Batch vs. Real-Time Technologies
David M. Raab
Relationship Marketing Report
December, 1999

The last two columns in this series have looked at ways to segment the universe of marketing-related systems. Although no fully satisfactory scheme has emerged, one distinction was present in nearly every attempt: batch vs. real-time systems. The general argument was that the technologies needed for these two types of systems are so radically different that they need to be treated separately.

This proposition is worth closer examination–both to understand the nature of the technical differences, and to see how some systems manage to bridge the gap.

First, let’s get the definitions straight. Batch systems execute a sequence of steps without external inputs, while real-time systems wait for user input between steps in a transaction. Batch systems typically apply the same process–such as calculating a model score or assigning a customer segment–to many records in a single job, while real-time systems typically execute a process against a single record per job.
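To make the distinction concrete, here is a minimal sketch (in Python, with hypothetical record layouts): the batch job steps through every record in a file with no external input, while the real-time handler waits for an event and touches only the single record it refers to.

    # Batch: apply the same process to many records in one job.
    def run_batch_job(records, score):
        results = []
        for record in records:                  # no external input between steps
            results.append((record["id"], score(record)))
        return results

    # Real-time: react to an external event, one record per job.
    def handle_event(event, database, score):
        record = database[event["account_id"]]  # must locate one record quickly
        record["score"] = score(record)         # and update it with minimum delay
        return record["score"]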

These differences in function result in different goals for system design. For a batch system, the key goal is to move through a large data set as efficiently as possible. The goal for a real-time system is to retrieve and update individual records with minimum delay.

Although batch systems usually process large numbers of records, they generally work with one record at a time: they read the record and its associated data, process it, store the outcome, and then repeat the process for the next record. Efficiency is determined primarily by the time it takes to assemble all the data needed to process each record. In a flat file system, this is done either by combining data from multiple sources into a single record before the process begins, or by sorting multiple files in the same sequence so the system can step through them in parallel without extensive searching. This sort of sequential processing is especially well suited to files stored on tapes rather than disk drives, since it allows the system to physically read the records in the sequence they appear on the tape. If the processing were not sequential, the system would have to search for each set of records from one end of the tape to the other. (Remember all those images of spinning tapes from TV shows and movies in the 1960’s and 70’s? That’s what was going on.)
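The sorted-file technique is worth a quick illustration. In this hypothetical sketch, a customer file and a transaction file are both sorted on the same key, so a single sequential pass assembles each customer’s related data without any searching, which is exactly what the tape-based systems were doing.

    def merge_pass(customers, transactions):
        # Both inputs are pre-sorted ascending on "id"; one pass, no lookups.
        out, t = [], 0
        for cust in customers:
            while t < len(transactions) and transactions[t]["id"] < cust["id"]:
                t += 1                          # skip past unmatched keys; never rewind
            related = []
            while t < len(transactions) and transactions[t]["id"] == cust["id"]:
                related.append(transactions[t])
                t += 1
            out.append((cust, related))         # the record plus its associated data
        return out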

In contrast, a relational database is explicitly designed not to place records in a specific sequence. Instead, relational systems rely on indexes to link the related data and typically load the data itself onto disk drives that can quickly access records that are not physically adjacent. Still, because sequential access is inherently more efficient than even the fastest random disk access, many of the largest-volume batch systems create an ordered extract that is then processed like a flat file.

Relational systems also often improve efficiency by “denormalizing” the data, which means storing the same piece of information in more than one record. This violates a cardinal rule of relational database design, which says each item should be stored only once. The rule exists to ensure data consistency and speed updates. But violating it will reduce the number of tables that must be searched and read to process a record. This can yield major performance gains.
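A small sketch shows the trade-off (table layouts are hypothetical). In the normalized version, building one customer’s profile touches three tables; in the denormalized version, everything sits on one wide record, so a batch pass reads a single record per customer, at the cost of updating every copy whenever a source value changes.

    # Normalized: each fact stored once; a profile requires three lookups.
    customers = {1: {"name": "Smith", "segment_id": 2}}
    segments  = {2: {"label": "High value"}}
    orders    = {1: [{"amount": 50.0}, {"amount": 25.0}]}

    def profile_normalized(cid):
        c = customers[cid]
        return {"name": c["name"],
                "segment": segments[c["segment_id"]]["label"],  # extra table
                "total": sum(o["amount"] for o in orders.get(cid, []))}

    # Denormalized: segment label and order total are copied onto the customer
    # record itself; reading is one step, but updates must touch every copy.
    customers_wide = {1: {"name": "Smith", "segment": "High value", "total": 75.0}}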

Batch systems can get away with denormalization and sequential processing because they are not subject to the same constraints as real-time systems. Most real-time systems don’t know which record will be needed next, because they are reacting to unpredictable events such as which customer will place an order or call for service. Therefore real-time systems need search mechanisms like indexes on account numbers, which allow them to find any particular record quickly. By contrast, a batch system will eventually process all records in its set, so it has no particular need to locate a specific record first. Real-time systems also must be kept internally consistent at all times, since two transactions relating to the same account might occur almost simultaneously, and different kinds of transactions might occur in different sequences. This makes it much more dangerous for real-time systems to violate the relational principle of “normalization”–storing each piece of information only once–than for batch systems, which exist in a much more controlled environment. Similarly, real-time systems are also more focused on the update speed that normalized designs provide.

So, to oversimplify a bit, batch systems use sequential processing and denormalized data structures (few tables with some redundant data), while real-time systems use indexes, random access and normalized structures (many tables with no redundant data). While it’s possible for one system to do both, most software is optimized for one or the other. This is why the distinction is so fundamental when attempting to classify different marketing products.

Specifically, traditional data warehouses and database marketing systems tend to use batch processing techniques–after all, most queries are looking for patterns or segments in the entire database, not picking out a single customer or account. By contrast, front-office systems for customer service, sales automation or contact management are real-time systems that must be designed to work with one customer at a time.

The problem, of course, is that today’s goal is to merge the back-end marketing database with the front-office customer contact system. This lets users define customer strategies in the back-end system–which has the rich history data and analytical capabilities–and execute the strategies in the front-office system during the real-time interactions. So designers are being asked to make one system handle both batch and real-time processing.

As with most computer processing challenges, there are two basic solutions: brute force and elegant design. Given the continued drop in hardware costs, brute force is often the best approach. But in some situations, elegant design is still worth the effort.

In dealing with real-time marketing systems, the classic application of brute force is parallel processing. This involves systems that split a single batch job into many smaller jobs and run them all simultaneously. IBM’s SP2 and NCR’s Teradata are the most common examples of massively parallel systems, although other vendors have products as well.

Massively parallel systems do have the ability to give high performance on both batch and real-time jobs. But the hardware is expensive and developers must usually tune the application software and data structure for optimum performance.

This tuning is costly and time-consuming, which is bad enough. But it also means that the resulting system may perform poorly when faced with unanticipated demands. For example, one common tactic in parallel system design is to store data from different date ranges on separate hard drives (each served by its own processor). This works great when queries look across all date ranges, since the different processors can work on the different date ranges simultaneously. But if queries suddenly focus on a single date range, the system will slow considerably because only one processor can access the necessary data. (Reality is a bit less grim, since parallel systems can usually give several processors access to the same data if necessary. But performance will still suffer.)
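The date-range example can be sketched in a few lines (a toy model, not how the SP2 or Teradata actually schedule work). When a query spans all the partitions, every worker contributes; when it touches only one, a single worker carries the whole load.

    from multiprocessing import Pool

    def process_partition(partition):
        # one worker per date range, each with its own slice of the data
        return sum(rec["amount"] for rec in partition)

    def query_all_ranges(partitions):
        # the good case: every processor works on its own date range
        with Pool(len(partitions)) as pool:
            return sum(pool.map(process_partition, partitions))

    def query_one_range(partitions, i):
        # the skew case: one processor does everything, the rest sit idle
        return process_partition(partitions[i])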

A newer brute force approach involves “main memory” databases, which essentially move the underlying data from a disk drive into high speed, random access memory. Specialized database management systems that do this include TimesTen (www.timesten.com) and Angara Data Server (www.angara.com). These systems can access records ten to twenty times faster than if the data were stored on a disk drive; they can also employ specialized indexes that reduce the performance impact of bringing together related records from many different tables. The most important current application of this technology is managing Internet interactions, where systems may need to access huge volumes of data in real time. But the fast access provided by the main memory systems allows them to complete batch processes extremely quickly as well.

For companies that are unable or unwilling to apply brute force solutions, the alternative is a system design based on conventional technology. Since the same conventional data tables generally cannot provide adequate performance for both real-time and batch tasks, this usually involves maintaining separate data tables for the two types of applications, and somehow synchronizing them. The simplest approach is to first load all data into a conventional marketing database–structured for batch processing–and periodically create extracts that are structured for access by real-time systems or feed data into the real-time systems’ own tables. The problem with this method is that batch processes are used to update the conventional database and to generate the extracts. This means the marketing system cannot feed adjusted information as a transaction occurs. So the marketing feed itself is something less than real-time.

A slightly more sophisticated approach is to update the table that supports the real-time systems at the same time that the main marketing database is updated. This avoids the lag due to batch extracts, but still must wait for the batch updates of the main database. The only way to avoid this second lag is to update the real-time table directly, rather than filtering data through the main marketing system first. Some systems–particularly those designed for Internet marketing–do maintain a profile database that is updated in real time in this fashion. In addition to simply capturing the new transaction, such a system might recalculate derived values such as cumulative purchases and model scores, and use the adjusted values in managing the interaction. The new data would be periodically added to the main marketing database during its regular batch update. This sort of synchronization is about the best that can be done with conventional technology.
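A minimal sketch of that synchronization pattern, with hypothetical table and function names: the profile record is adjusted the moment a transaction arrives, and the raw transaction is queued for the marketing database’s next batch update.

    import queue

    profile_db = {}            # real-time table: one profile record per customer
    pending = queue.Queue()    # transactions awaiting the nightly batch run

    def record_transaction(cust_id, amount, score_model):
        # update the real-time profile immediately...
        p = profile_db.setdefault(cust_id, {"cumulative": 0.0, "score": 0.0})
        p["cumulative"] += amount                  # recalculate derived values
        p["score"] = score_model(p["cumulative"])  # ...including the model score
        pending.put((cust_id, amount))             # ...and queue the raw data
        return p                 # adjusted values usable during the interaction

    def nightly_batch_update(main_db):
        # drain the queue into the batch-oriented marketing database
        while not pending.empty():
            cust_id, amount = pending.get()
            main_db.setdefault(cust_id, []).append(amount)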

As marketers continue to integrate real-time front-office systems with batch-oriented marketing databases, vendors will face increasing pressure to combine batch and real-time processing in a single system. As we’ve seen, this is a difficult task using today’s standard (relational) technologies. Buyers looking for an integrated system should look carefully at each vendor’s approach to this challenge, to ensure the system they purchase will meet both current and future needs.

* * *

David M. Raab is a Principal at Raab Associates Inc., a consultancy specializing in marketing technology and analytics. He can be reached at draab@raabassociates.com.

Segmenting the Marketing Software Market Place
David M. Raab
Relationship Marketing Report
October-November, 1999

So much software is offered today to marketers that just trying to evaluate each new product is more than a full time job. But important as it is to understand the strengths and weaknesses of individual products, it’s also necessary occasionally to step back and understand how products relate to each other. This helps to ensure whatever product you are thinking about buying today will fit into the larger structure that will evolve over time. It also helps answer the important question of whether the product’s developer is likely to survive and prosper in the future.

Most computer industry analyses use a two dimensional matrix. This limits the amount of information that can be conveyed about any individual product, but it has the virtue of being easy to understand. Let’s just accept the two dimensional limit and consider what those dimensions should be.

The answer depends on what you’re trying to accomplish. Take the standard matrix used by a well-known IT advisory firm, comparing company “vision” with ability to execute that vision. Those are pretty useful measures if you’re considering investing in a company, either financially or as a buyer of its products. The vision axis gives some idea of whether a company’s products are likely to meet the long term functional requirements of a sophisticated user, while the execution axis hints at both financial stability and resources available to help less sophisticated users with implementation. In combination, the two measures are terrific at anointing “leaders” in a given category–an item of considerable interest to certain buyers and great promotional value to the vendors themselves.

Unfortunately, both measures are also highly subjective. In particular, you may not agree with an analyst’s definition of what constitutes a quality “vision”. More dangerously, this sort of competitive ranking implies there is a single “best” product for all users. In reality, users’ needs vary widely and the right product for one user may be totally inappropriate for another.

Let’s assume you want concrete help in selecting a marketing system. Now you want dimensions that more specifically indicate the functions provided by a product and differentiate similar products from each other. Of course, no two dimensions can capture all the issues. Still, some interesting efforts have been made.

One approach distinguishes analytical vs. execution functions–based on the observation that these have been done by separate systems in the past, but today some products offer both. Purely analytical products would include model building tools like SAS and ASA ModelMax, while pure execution tools would be telemarketing and list generation systems. Hybrid products would include Recognition Systems Protagona and Unica Impact!, which have tightly integrated modeling and campaign management. This method has several advantages: it distinguishes integrated from non-integrated products, helps determine which systems would be complementary rather than overlapping, and lets users choose the quality of analytical and execution functions they require.

But this method doesn’t indicate which channels a system supports. This means that an email broadcasting system and inbound call center application could occupy the same spot on the matrix–even though the two are utterly different products. It also means that a system supporting multiple channels looks the same as one supporting a single channel. Either way, the matrix is missing a key distinction.

It’s possible to imagine a matrix where one side represents the channels served by a product, perhaps arranging the different channels in a logical sequence such as cost per contact or speed of execution. The other dimension would then indicate how well the system supported each channel. The result would be a visual profile of the strengths and weaknesses of each product–a pretty useful thing for some purposes. But this approach displays each product as an irregular blob with multiple data points, which means the simplicity of the two dimensional matrix is lost. When more than a few products are plotted, the results quickly become unwieldy. If you want this type of detail, it is better to use a table with checkmarks or scores for each product’s capabilities in each channel or function.

A simpler approach that does fit within two dimensions would arrange systems based on the number of channels (or other functions) they support. The second dimension could indicate quality–that is, how well the system supports the channels it services. Like the earlier analytical vs. execution matrix, this breadth vs. quality approach has the problem of placing very different systems next to each other. But by putting multi-purpose systems at one end of the matrix and specialized systems at the other, it does distinguish two of the most common vendor strategies: providing a large number of integrated functions vs. doing a single function better than anyone else. This makes it very helpful for buyers who prefer either an integrated package or to assemble their own system from “best of breed” components. Such a matrix would also identify any integrated vendor with high quality components–since it’s at least theoretically possible for an integrated system to be good at everything. Of course, where multiple components are involved, the quality measure would need to be some sort of average, so a more detailed analysis would still be needed to assess the quality of individual functions.

As an alternative to quality, some analyses look at the breadth of vendor offerings–specifically, indicating whether a vendor provides software only, software plus supporting services such as application hosting or implementation, or services only. Such a breadth of function vs. breadth of service matrix is very helpful in further distinguishing different vendors’ strategies and identifying vendors who match a particular buyer’s needs.

A different approach focuses on the characteristics of systems that support different marketing functions–that is, distinguishing conventional campaign management from email campaigns, customer service systems from Web-based message delivery, and so on. It is possible to array these different systems based on response cycle (from batch to real-time) as one dimension and interaction complexity (from simple rules to complex customer strategies) as the other. Such a matrix would range from simple list generators (batch processing, no rules) in the lower left to online interaction managers (real-time reaction, long term strategies) at the upper right. Other types of systems would have different combinations: for example, recommendation engines like NetPerceptions give real-time results but rarely look beyond the goals of the current interaction (upper left); conventional campaign management software supports long term strategies with batch processing (lower right).

This matrix offers some interesting insights, since very different technologies are needed for batch vs. real-time processing and for simple rules vs. long-term strategies. In particular, it suggests that vendors claiming to straddle more than one category need to be questioned closely about exactly how they do it. It also raises questions about vendors who started in the simple rule segment but are now attempting to support more complicated strategies. For example, many of today’s “customer relationship management” products started with sales automation or call center systems (simple rules, real-time interaction). Based on where this puts them on the matrix, it should be no surprise that campaign management is the weakest feature of their products. Conversely, the matrix correctly predicts that conventional campaign management vendors (batch processing, complex strategies) will have difficulty adapting their systems to handle real-time interaction.

By now it should be clear that no pair of dimensions can fully describe the relationships among different marketing software products. But it should also be clear that a carefully chosen matrix can highlight issues that are important in a particular situation. As always, the burden is on the user to understand her needs and structure an analysis that addresses them correctly.

* * *

Last month’s column described the impossibility of capturing all the significant differences among marketing software products in a single two-dimensional matrix. It’s still impossible, but the reality is that buyers and vendors do need a way to make sense of the different systems. So let’s look at yet another matrix that at least manages to distinguish the main classes of products and how they relate to each other.

The horizontal dimension of this matrix measures reaction cycle–ranging from batch processes on the left to real-time interactions on the right. Batch processes sometimes run every few minutes, but in marketing systems they usually run no more often than daily, and many times just weekly or monthly. Whatever the interval, the important point is the systems respond too slowly to influence whatever transaction is taking place. By contrast, true real-time systems react immediately, in a few seconds or less, and therefore can participate in an on-going interaction. Common real-time systems are telemarketing scripts that tell an agent what to say next and Internet servers that return a page in response to a mouse click. Between batch and real-time are systems that react promptly but not immediately, such as customer support products that reply with an email or fax within a few minutes of a customer inquiry.

Loyal readers will remember that last month’s column also proposed a matrix with a reaction cycle dimension. The other dimension of that matrix had to do with interaction complexity, which roughly corresponds to the sophistication of the contact management strategy a system can execute. That was a pretty useful matrix, but it lumped together fundamentally different systems like low-tech call centers and high-tech collaborative filtering products (which both belong to the real-time, simple strategy group). And that matrix totally excluded modeling and analysis systems, which don’t manage interactions at all.

The second dimension of the new matrix measures analytical sophistication, which ranges from automated modeling systems (high) to user-specified segmentation schemes (low). Assume high sophistication is at the top and low sophistication at the bottom. In between would be rule-based systems that can make sophisticated decisions but rely heavily on user input to specify the underlying rules.

It also turns out that analytical sophistication generally correlates inversely with execution capabilities–that is, systems built to execute marketing programs tend to have limited analytical power, while those with high analytical power rarely do much execution. There are some exceptions to this rule, but they are intriguing enough that it’s actually useful to deal with them separately.

So let’s look at how this new matrix lays out. It proposes two major distinctions: batch vs. real-time and analytical vs. execution. The four possible combinations do indeed correspond to familiar classes of systems:

– in the “batch analytical” corner (upper left) are the traditional advanced analysis tools, including conventional statistical packages like SAS and SPSS, neural network software like Trajecta and Advanced Software Applications, and multidimensional analysis tools like Hyperion Essbase and Oracle Express. In fact, sophisticated analysis has always required batch processing, which has become an increasing problem for marketers who want to reduce cycle times. The best these traditional tools can do is to build their models in batch, but score individual records in real-time or near real time.

– this leads to the “real-time analytical” corner (upper right), which today is populated by recommendation engines like Net Perceptions and Andromedia Likeminds, and by interaction managers like RightPoint, Manna FrontMind and Trivida. These products both predict a specific individual’s actions in real time and actually adjust the underlying models as new behavior is recorded. Like conventional modeling tools, the real-time systems have very little execution capability of their own–they only feed their predictions to other systems that manage the actual customer contacts.

– specifically, they feed “real-time execution” systems (lower right). These include conventional call center and contact management products like Siebel and Clarify, as well as personalized Web site systems like Broadvision and Vignette. Although there are major technical differences between conventional and Web-based execution systems, from a marketer’s standpoint they are just different ways to deliver the same contact strategy. So it does make sense for the matrix to group them together. And, regardless of the technical differences, vendors are striving to integrate the two sets of products–so there will soon be no choice but to treat them as one.

– the final corner holds “batch execution” products (lower left), which perfectly describes old-style campaign management software like Experian AnalytiX and MegaPlex FastCount. These products use proprietary database engines that are loaded in batch and used primarily for batch selections of mailing and telemarketing lists.

So far so good–the four corners of the matrix describe distinct and important classes of systems. In fact, people who care about such things might notice that the four corners correspond to the major components of a standard enterprise architecture: operational systems (real-time execution), data warehouse (batch analytical), campaign management (batch execution) and interaction management (real-time analytical). Kinda neat, huh?
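For the programmatically inclined, the correspondence is simple enough to write down (a sketch only; the labels are just this column’s vocabulary):

    # Map the two matrix dimensions onto the standard architecture components.
    QUADRANTS = {
        ("batch",     "analytical"): "data warehouse",
        ("real-time", "analytical"): "interaction management",
        ("batch",     "execution"):  "campaign management",
        ("real-time", "execution"):  "operational systems",
    }

    def classify(reaction_cycle, orientation):
        return QUADRANTS[(reaction_cycle, orientation)]

    print(classify("batch", "execution"))   # -> campaign management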

But what about the spaces between the corners? Along the execution edge of the reaction cycle dimension (the bottom of the matrix), today’s advanced campaign managers like Exchange Applications ValEx and Prime Vantage might be considered “near batch” products: they mostly use batch loads and selections, but have schedulers and other functions that let them respond to events fairly quickly. They have also been integrated to some degree with outbound email and email responses, also pulling them slightly in the real-time direction. Further along that edge are email campaign managers like Responsys and RevNet, which are used primarily to broadcast batch-selected emails but can also capture email replies and issue a predefined response. Still closer to real time execution are email customer service systems like Acuity and Brightware, which can provide unstructured responses to email inquiries in near real time. Like the call center and Web site systems mentioned earlier, these products are increasingly being expanded to handle additional media, including true real time interactions such as telephone calls and live Internet chat. Nestled between email customer service and real time interactions are the various “marketing automation” products like Imparto and MarketFirst. These can handle both near-real-time response via email and true real-time interactions via personalized Web pages.

Above the pure execution layer lies the middle ground between execution and pure analysis. This is occupied by rule-based systems that rely on people to define a set of policies, but then can combine and apply them independently. Systems including Harte-Hanks Allink Agent and NCR’s CRM trio of Marketing Agent, InterRelate+ and Relationship Optimizer can scan for significant operational transactions in near real time and apply rules to determine how to respond. Black Pearl’s Knowledge Broker, along with RightPoint, can do the same thing in true real time.

Also in this middle ground are the exceptional products that offer both analysis and execution. (I have somewhat arbitrarily placed them above the rule-based layer.) In pure batch processing, Unica Impact! offers a powerful campaign manager plus extensive model building. E.piphany and Broadbase also combine analysis and selection capabilities, although they are less capable in both areas. In the near batch group, Recognition Systems Protagona offers its own integrated modeling, an excellent campaign manager, and a respectable degree of email interaction. Web traffic analysis–a batch or near-batch pure analytical application in products like Accrue and net.Genesis–is also combined with execution by several systems including iLux, GuestTrack and Personify.

As the list of exceptions suggests, today’s relatively neat distinctions can be expected to fray over time, as vendors expand their products to encompass functions in more categories. The matrix has other flaws as well: it doesn’t indicate which channels a product supports, doesn’t identify vendor services such as application hosting, and says little about quality. But it does manage to encompass most of the systems marketers worry about today, and hopefully that is useful enough.

The matrix in summary. Columns run from batch (left) to real time (right); rows run from analysis only (top) to execution only (bottom):

– analysis only: SAS, Trajecta (predictive models) [batch]; Accrue, net.Genesis (Web traffic) [batch/near batch]; NetPerceptions, Andromedia Likeminds (recommendation) [real time]

– mostly analysis, some execution: E.piphany, Broadbase (marketing marts) [batch]; Verbind, RightPoint, Trivida, Manna FrontMind (predictive interaction management) [real time]

– both analysis and execution: Unica Impact! [batch]; Recognition Systems [near batch]; iLux, GuestTrack, Personify (Web analysis and personalization) [near batch]

– mostly execution, some analysis: Allink Agent, NCR CRM (rule-based reaction) [near real time]; Black Pearl (rule-based interaction management) [real time]

– execution only: AnalytiX, MegaPlex (old-style campaigns) [batch]; Exchange, Prime (standard campaigns) [near batch]; Responsys, RevNet (email campaigns) [near batch]; Acuity, Brightware (esupport) [near real time]; Imparto, MarketFirst (marketing automation) [near real time]; Siebel, Pivotal (CRM/contact management) [real time]; Broadvision, Vignette (Website personalization) [real time]

* * *

David M. Raab is a Principal at Raab Associates Inc., a consultancy specializing in marketing technology and analytics. He can be reached at draab@raabassociates.com.

Optimization
David M. Raab
Relationship Marketing Report
September, 1999

Some phrases have charisma and others simply don’t. Successful terms like “customer relationship management”, “knowledge management”, “data warehousing”, and “data mining” all somehow sound important, exciting and complicated enough to justify large sums of money and conferences in desirable locations. Other terms, like “cost-benefit analysis”, just don’t make the cut.

“Optimization” will never be a really hot buzzword: it sounds too dry, too limited to wringing the last bit of value from a well-worn set of options. This is emotionally unappealing: people want to blaze a new trail through the wilderness, not cut two minutes from their trip to the grocery store. It is also a dubious business strategy: with the rapid change and new opportunities of today’s environment, there truly are new wildernesses to explore. So fine-tuning an existing process just doesn’t seem all that important.

Still, while optimization will never attract stadiums of screaming fans, it does have its own followers–particularly among the analytically minded, and in industries that are relatively stable. In fact, the term is popping up with surprising frequency in vendor presentations these days. Unfortunately, different vendors use it in different ways–a common enough situation, but one that will further contribute to the term’s ultimate lack of utility.

In the hopes of salvaging some value from this soon-to-be-overused word, let’s take a closer look at what it can mean.

First stop, dictionary. My ancient one defines “optimize” as “to be optimistic”, but then gets around to today’s more common meaning of “to make as effective, perfect or useful as possible”. The key here is “as possible”: what optimization systems truly do is manage sets of constraints. The focus on constraints is inherently pessimistic, and part of why “optimization” is psychologically unappealing. But, more important, it also gives a hint of how to classify optimization systems: by looking at the type of constraints that they manage. The major distinction might be called tactical vs. strategic optimization.

Tactical optimization manages constraints related to a single decision. This kind of optimization has been around for a long time–it is as simple as finding the exact mailing quantity that will yield the highest profit on a list of names ranked by expected response rate. Today, any decent predictive modeling software provides this capability, usually in the form of a “gains chart” that shows the expected costs, revenues, profits, and response quantity from mailing to different depths in the ranked file. The better implementations–such as MarketSwitch Corporation’s Targeting Optimizer (www.marketswitch.com) and Group 1 Software/Unica Model 1 Campaign Optimizer (www.g1.com or www.unica-usa.com)–provide a slick graphical display that shows how these metrics change with different mail quantities, and even tell the user what quantity will meet specific constraints such as a fixed promotion budget or target number of new customers.
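The arithmetic behind a gains chart is straightforward. A sketch with made-up numbers (not any vendor’s algorithm): rank the file best-first, accumulate costs and revenues at each depth, and pick the depth where cumulative profit peaks.

    def gains_chart(response_rates, margin_per_response, cost_per_piece):
        # response_rates: expected rate for each name, ranked best-first
        rows, cum_profit, cum_resp = [], 0.0, 0.0
        for depth, rate in enumerate(response_rates, start=1):
            cum_resp += rate
            cum_profit += rate * margin_per_response - cost_per_piece
            rows.append({"depth": depth, "responses": cum_resp,
                         "profit": cum_profit})
        return rows

    def optimal_depth(rows):
        # the profit-maximizing mail quantity
        return max(rows, key=lambda r: r["profit"])["depth"]

    chart = gains_chart([0.10, 0.06, 0.03, 0.01],
                        margin_per_response=50.0, cost_per_piece=1.0)
    print(optimal_depth(chart))   # -> 3: the fourth name loses money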

MarketSwitch’s Cross-Selling Optimizer takes this a step further by including multiple offers, each subject to its own constraints–such as a maximum promotion quantity or minimum sales target per offer. This is in addition to customer-level constraints such as a maximum number of offers or minimum profit per name. The output is a plan that assigns treatments to each customer in a way that is expected to yield the best over-all result.
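A greedy sketch suggests the flavor of this kind of constraint management (MarketSwitch’s actual method is surely more sophisticated; true optimizers solve this as a mathematical program):

    def assign_offers(scores, offer_caps, max_offers_per_customer):
        # scores: {(customer, offer): expected profit}, all values hypothetical
        assigned, per_cust, per_offer = [], {}, {}
        # consider the most profitable customer/offer pairs first
        for (cust, offer), profit in sorted(scores.items(), key=lambda kv: -kv[1]):
            if profit <= 0:
                break                                        # profit floor per name
            if per_cust.get(cust, 0) >= max_offers_per_customer:
                continue                                     # customer-level cap
            if per_offer.get(offer, 0) >= offer_caps[offer]:
                continue                                     # offer-level cap
            assigned.append((cust, offer, profit))
            per_cust[cust] = per_cust.get(cust, 0) + 1
            per_offer[offer] = per_offer.get(offer, 0) + 1
        return assigned          # the plan: one treatment list, all caps respected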

But whether the optimization involves one offer or many, what makes these approaches “tactical” is that they consider only the results of the promotion at hand. The result is typically measured in immediate profit or return on investment, although it could also incorporate future values such as lifetime purchases from a new customer. While any sensible marketer realizes the future value is determined in part by future decisions, tactical optimization systems themselves do not attempt to measure or manage the future alternatives.

Strategic optimization does exactly this. That is, it looks at a sequence of future decisions and outcomes, and attempts to find policies that will yield the highest long-term value. This is a much more ambitious undertaking than tactical optimization, and probably needs a more exciting buzzword to capture its importance. Of course, one could argue that “customer relationship management” already does this quite nicely.

Semantics aside, the importance of strategic optimization is that it offers the ability to change the long-term value of an existing customer relationship. This involves two major tasks: figuring out what the optimal policies are, and finding ways to implement them. Today, these tasks are handled by separate systems–although there is no particular reason a single system that does both might not appear in the future.

Developing optimal policies is the greater challenge, because it involves true creativity: thinking up a new product, or type of offer, or service policy. Of course, no computer system can really do this today; the problem is simply too unstructured. (Some advocates of artificial intelligence may disagree, but that’s another discussion.) Still, a computer system can report on the results of past policies, predict what will happen if the same policies are applied in the future, and perhaps even estimate the results of combining them in new ways. This involves lots of model building and simulation, so if the number of options to consider or events to predict increases beyond a fairly limited point, the volume of work becomes overwhelming for even the largest computers. This is one reason that strategic optimization has so far been applied primarily in the credit card industry, where there are a limited number of key options (interest rate, credit limit, annual fee, grace period), relatively few key events (activation, balance maintenance, payment, renewal), and lots of customers to provide data and amplify the value of any improvements. Credit cards are also a fairly stable industry with lots of analytical people in control.

The simulation inherent in strategic optimization also lets users examine the risk posed by different sets of policies–say if interest rates rise or bankruptcies increase. While this simulation could also be run without optimization, it’s nice to have both in the same system.

But even in the credit card industry, compromises are necessary to make strategic optimization practical. Trajecta (www.trajecta.com), which seems to have the most complete approach to this problem, limits its analysis to a handful of key variables and combines detailed modeling of near-term events with simpler forecasts of long-term behavior. Both shortcuts are justifiable: a few variables do account for most differences in behavior, and detailed long-term simulations are unlikely to be more accurate than simpler forecasts. But the shortcuts also mean that other tools would be needed to deal with more complicated industries or to make optimal decisions about non-key variables.

This last point is particularly sticky. It’s easy enough to argue that a handful of key decisions account for most of your business profit, and maybe you can even prove it with statistics. But try explaining this to the CEO who just spent $20 million for a new call center precisely because it was able to personalize every customer interaction. Chances are pretty good that she’ll want to treat different people differently, whether or not the optimization system can tell her how.

In fact, the call center rules will probably be defined the old fashioned way: by human beings making their best guess about what policies make sense, and then (hopefully) watching the results to improve the rules over time. This is the realm of the other strategic optimization systems, which do implementation.

The classic rule-implementing optimization systems also originated in the credit card industry: venerable products like Fair-Isaac TRIAD (www.fairisaac.com) and AMS Strata (www.amsinc.com), and the more recent HNC Capstone Strategy Manager (www.hnc.com) and Trajecta Decision Optimizer. All let managers define strategies comprising rules for key decision points, assign customers to different strategies, execute the strategies and evaluate the results. TRIAD and Strata, with roots stretching back more than a decade, have also been adopted in other financial services and telecommunications. These systems are usually integrated with operational processes such as billing so the appropriate decisions can be made and executed during the normal course of business. Optimization evolves over time as managers set up champion/challenger tests that assign customers to alternative strategies, compare the results and pick the winners. Although these systems could also be adapted to selecting names for outbound communications, like a conventional direct mail campaign manager, this is not the usual application.
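The champion/challenger mechanics are easy to sketch (hypothetical, and far simpler than TRIAD or Strata): hold out a fixed percentage of accounts for the challenger strategy, then compare results and promote the winner.

    def assign_strategy(account_number, champion, challenger, test_pct=10):
        # deterministic split: the same account always lands in the same cell
        return challenger if account_number % 100 < test_pct else champion

    def pick_winner(results):
        # results: {strategy_name: list of observed profit per account}
        means = {name: sum(vals) / len(vals) for name, vals in results.items()}
        return max(means, key=means.get)   # the winner becomes the new champion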

Recently, however, there has been some movement toward outbound optimization. Recognition Systems Protagona (previously ideas Solution; www.recsys.com) and NCR Relationship Optimizer (www.ncr.com) include extensive features to manage constraints such as maximum number of contacts or promotion expenses per customer over a time period. Protagona even takes a stab at balancing revenue received from a customer with value provided to the customer–a particularly knotty problem that most vendors more or less ignore by assuming the user will develop a long-term measure of value that encompasses both. Both systems also accommodate limits on marketing resources such as call center capacity. Relationship Optimizer can automatically track the load on marketing resources as responses come in, and shift lower-priority messages to alternate channels when necessary. Although lead management and call center systems have provided similar cascading functions for years, they are unusual in a campaign management system.
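The cascading idea reduces to a few lines (channel names and capacities are hypothetical; Relationship Optimizer’s own logic is doubtless richer): when the preferred resource is full, lower-priority messages fall through to the next channel.

    FALLBACK = {"call_center": "email", "email": "direct_mail"}

    def route_message(channel, priority, loads, capacity):
        # shift lower-priority messages to an alternate channel when the
        # preferred resource (say, the call center) is at capacity
        while (priority == "low" and loads[channel] >= capacity[channel]
               and channel in FALLBACK):
            channel = FALLBACK[channel]
        loads[channel] += 1              # track the load as responses come in
        return channel

    loads = {"call_center": 100, "email": 0, "direct_mail": 0}
    capacity = {"call_center": 100, "email": 5000, "direct_mail": 10**6}
    print(route_message("call_center", "low", loads, capacity))   # -> email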

Or is there really a distinction between “outbound optimization” systems like Relationship Optimizer and an advanced front office system like a Siebel call center? True, both can implement customer-tailored business policies. But the ability to embed and analyze policies in campaigns and strategies is very limited in standard front office systems: anyone who wanted to develop true optimization would find it difficult at best. This may change over time as the front office vendors strive to make their products live up to the optimization claims inherent in the concept of customer relationship management. On the other hand, tools like Protagona and Relationship Optimizer most definitely do not provide the operational functions of a call center, sales automation or Internet response management product. That is, they don’t capture customer data or execute transactions. Like all strategy implementation systems, they are decision engines that tell other systems what to do–whether it is a batch job processing credit card statements, an on-line queue of messages to display at a bank teller station, or a real-time response to a customer action. Even if the front office vendors were to expand their strategy management capabilities, it seems unlikely that they would extend beyond messages delivered through their own customer interaction tools. So independent strategy implementation tools will probably remain necessary to truly coordinate–and optimize–all decisions regarding each customer.

But I still don’t think they’ll call it optimization.

* * *

David M. Raab is a Principal at Raab Associates Inc., a consultancy specializing in marketing technology and analytics. He can be reached at draab@raabassociates.com.