David M. Raab
DM News
September, 2000
The bedrock technique of software evaluation is comparing products with similar functions. But today’s marketing systems so often provide different combinations of functions that this method is increasingly ineffective. Even specialized products often straddle functional boundaries in ways that make classification difficult, if not irrelevant.
Black Pearl Knowledge Broker (Black Pearl Inc., 415-357-8300, www.blackpearl.com) is one of those products that don’t quite fit the standard categories. Its basic function is to make recommendations, but it recommends complicated things like stock portfolio allocations, not the simple book or record selections offered by a typical “recommendation engine”. Knowledge Broker could be considered an “interaction manager”–a system that coordinates decisions across touchpoints to implement customer management strategies. With sophisticated technologies for rule management and data access, Knowledge Broker provides two of the three key interaction management functions. But its approach to the third function, touchpoint integration, is relatively limited.
Or maybe the categories don’t matter. What’s important about Knowledge Broker is that it mounts a plausible assault on one of the central challenges facing interaction management: managing the sheer number of rules needed to execute a sophisticated customer relationship strategy.
This assault operates on two levels: technical and administrative. On the administrative level, which is where most marketers operate, Knowledge Broker conquers rule complexity by dividing the problem into several layers. The lowest layer handles connections with data sources; the next layer combines data to form business concepts; the third layer specifies how one business concept can change its meaning in the context of another; the fourth layer holds rules for how to respond in a given situation. In concrete terms, the first layer might create an entity of “stock price” by reading data from a legacy system, the second layer could define a concept of “volatile stock” that involves several data entities, the third layer might give different definitions of “volatile stock” for customers with different levels of risk tolerance, and the fourth layer might have a rule of “if customer is high risk then recommend buy volatile stock”.
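The four layers can be pictured as a stack of definitions, each built on the one beneath it. The sketch below is purely illustrative, in Python: every name is invented, and the real product expresses these layers through its own administration tools, not code. Layer 3, contexts, is simplified to a fixed threshold here.

```python
# Hypothetical illustration of the four-layer structure; all names invented.
LEGACY_DATA = {"ACME": {"price": 42.0, "volatility": 0.45}}  # stand-in for a legacy system

def stock_price(symbol):
    # Layer 1: an entity created by reading a data source.
    return LEGACY_DATA[symbol]["price"]

def stock_volatility(symbol):
    # Layer 1: another raw data entity.
    return LEGACY_DATA[symbol]["volatility"]

def is_volatile(symbol, threshold=0.30):
    # Layer 2: the business concept "volatile stock", combining data entities.
    # (Layer 3 would let a context replace the fixed threshold.)
    return stock_volatility(symbol) > threshold

def recommend(customer, symbol):
    # Layer 4: a business rule a marketer might author.
    if customer["risk_tolerance"] == "high" and is_volatile(symbol):
        return "buy " + symbol
    return "hold"

print(recommend({"risk_tolerance": "high"}, "ACME"))  # prints "buy ACME"
```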
Obviously each level builds on what has been created in the level before, and astute readers will notice each level requires successively less technical skill to set up. This means the business rules in the fourth level can be created by marketers with no technical background–although they may make costly errors if they don’t understand how the earlier layers were constructed. In fact, business users must be involved in setting up every level, to ensure the entities reflect actual business knowledge and objectives. Still, most of the work on the lower levels is clearly technical. Entities on all levels involve a mix of data and rules, with the proportion of rules increasing at higher levels.
While the multi-layer structure and graduated division of labor help make Knowledge Broker more workable, the main value of its approach lies in the third layer. Here, “contexts” can change the meaning of a term depending on the current conditions, allowing a single rule to cover many situations. According to Black Pearl, this lets Knowledge Broker use one quarter to one tenth as many rules as a conventional rule-based system. Since a comprehensive customer management process can involve literally thousands of business rules, reducing their number and making them easy to manage removes a critical stumbling block.
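The rule-count reduction is easiest to see side by side. In the hypothetical sketch below (invented names, not Black Pearl’s syntax), a conventional system needs one rule per customer segment, while a context that redefines “volatile” per segment lets a single rule cover all three situations:

```python
# Without contexts: one hand-written rule per segment (illustrative only).
rules_without_contexts = [
    "if risk=high   and volatility > 0.50 then recommend buy",
    "if risk=medium and volatility > 0.30 then recommend buy",
    "if risk=low    and volatility > 0.15 then recommend buy",
]

# With a context: the meaning of "volatile" shifts with risk tolerance,
# so one rule replaces all three above.
VOLATILITY_CUTOFF = {"high": 0.50, "medium": 0.30, "low": 0.15}

def is_volatile(volatility, risk):
    # Layer-3 context: same term, different definition per situation.
    return volatility > VOLATILITY_CUTOFF[risk]

def rule_with_context(volatility, risk):
    # The single layer-4 rule, valid for every segment.
    return "buy" if is_volatile(volatility, risk) else "hold"

print(rule_with_context(0.40, "medium"))  # prints "buy"
print(rule_with_context(0.40, "high"))    # prints "hold"
```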
Knowledge Broker also provides off-line tools for data mining and predictive modeling. These can build multi-layer perceptron models and decision trees whose outputs, which could be scores or predictions, can be used within rules. Values are calculated in real time for individual customers as needed. Modeling also helps to reduce rule complexity, since a concept such as “best offer” can be based on a model score rather than elaborate if/then logic.
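A rough sketch of the idea, with a stub function standing in for a trained model (everything here is invented for illustration; in the product the score would come from an offline-built perceptron or decision tree, evaluated in real time):

```python
# Hypothetical: ranking offers by a model score instead of if/then branches.
def offer_score(customer, offer):
    # Stub for a predictive model scored in real time per customer.
    weights = {"savings": 0.2, "brokerage": 0.7}
    return weights.get(offer, 0.0) * customer.get("assets", 0)

def best_offer(customer, offers):
    # The "best offer" concept collapses to a one-line rule on the score.
    return max(offers, key=lambda o: offer_score(customer, o))

print(best_offer({"assets": 100}, ["savings", "brokerage"]))  # prints "brokerage"
```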
On the technical level, Knowledge Broker employs a battery of sophisticated technologies that make heavy use of Java, XML and distributed processing. It can connect to virtually any data source in real-time, looking up data about an individual customer and translating it to an XML format as a transaction proceeds. It can also write back a history of offers made to each customer and how they have responded. This can be stored in XML, a relational database, or whatever other format the user prefers.
The recommendations themselves are transmitted back to the touchpoint system as an XML string. This can also be converted to another format such as HTML or WML (Wireless Markup Language) if needed. The string-based approach lets Knowledge Broker pass specific information, such as the name of a product to offer, without requiring extensive customization to integrate with each new touchpoint. These features are particularly appropriate for recommendation engines, and in fact other recommendation engines use similar techniques, though not usually based on XML. But this method does not provide Knowledge Broker with internal information about the touchpoint, such as what specific Web pages or telemarketing scripts are available. Some interaction managers allow for more intimate touchpoint integration.
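The shape of such a recommendation string might look like the following Python sketch. The tag names are invented, not Black Pearl’s actual schema; the point is only that a self-describing string crosses the boundary, leaving the touchpoint to parse or restyle it:

```python
# Hypothetical recommendation payload; tag names are invented.
from xml.etree.ElementTree import Element, SubElement, tostring

def recommendation_xml(customer_id, product, reason):
    rec = Element("recommendation")
    SubElement(rec, "customer").text = customer_id
    SubElement(rec, "product").text = product
    SubElement(rec, "reason").text = reason
    return tostring(rec, encoding="unicode")

xml = recommendation_xml("C123", "growth-fund", "high risk tolerance")
# The touchpoint parses this string, or a stylesheet converts it to
# HTML or WML, without custom integration code on either side.
print(xml)
```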
Knowledge Broker also resembles its recommendation engine cousins in treating each decision as more or less separate. For example, there is no flow chart to lay out a sequence of marketing contacts–a fairly common feature among interaction managers. While a clever user could probably create a sequence using rules and contexts, it would be a challenge. But the focus on individual decisions has advantages as well. It has led the vendor to include start and stop dates for each rule, automated gathering of information on rule performance, support for random testing of different rules against each other, and simulation of how proposed rules interact.
Knowledge Broker was released in February 2000 and has three customers. The vendor is focusing on complicated recommendations for financial services and telecommunications. The system is designed to scale by spreading across multiple CPUs (central processing units), both within a single computer and across clusters of computers. Pricing is set at $100,000 per CPU, with discounts as the CPU count increases. An average installation is expected to cost about $500,000 to $700,000 for a one-time license, plus annual maintenance.
* * *
David M. Raab is a Principal at Raab Associates Inc., a consultancy specializing in marketing technology and analytics. He can be reached at draab@raabassociates.com.