2007 Mar 01
Kefta, Inc. Kefta Dynamic Targeting
by David M. Raab
DM News
March, 2007
The components needed to tailor treatments to individual customers have been understood for years. Touchpoint systems send interaction data to a central customer profile and rules engine. These select the appropriate treatments and send them back to the touchpoint for execution.

But drawing the picture is one thing and making it happen is something else. Many software products have enabled such targeting or provided pieces of a solution. They differ in technical approaches, channels served, selection methods, degree of automation, user skills needed—you name it. Just among selection methods for Web site personalization, choices have included rules engines, collaborative filtering, behavioral targeting, event detection, and multivariate testing.

Kefta Dynamic Targeting (Kefta, Inc., 415-391-6881, www.kefta.com) fits somewhere within this universe. Kefta itself says it competes primarily against behavioral targeting systems like Touch Clarity (recently purchased by Omniture), Certona or [x+1] (formerly Poindexter Systems). Like those systems, it inserts code snippets into Web pages that gather visitor information, send this to a server with visitor profiles and selection rules, and receive the content to display.

But behavioral targeting systems automatically create segments based on which visitors select which content. Kefta users define the segments in advance and specify which contents are presented to each group. Multivariate testing systems including Optimost, Offermatica, Memetrics and SiteSpect use a similar approach. On the other hand, Kefta and the behavioral targeting systems can automatically direct larger portions of visitor traffic to the best-performing content. Most multivariate testing systems (Offermatica is an exception) keep the content mix steady until users intervene.

Kefta’s primary offering is a “full-service” system that extends beyond ads within Web pages to support follow-up emails, page layers, exit pop-ups, and off-site banner ads. A “self-service” system, introduced late last year, is limited to Web pages and lacks many advanced features.

Both products use the same underlying engine and both are organized around campaigns. To set up a Web campaign, users specify the pages, placeholders within each page, and content elements that can populate the placeholders. Each placeholder is defined by a “Kefta probe” which contains HTML that calls the Kefta server when the page is viewed. The server will refer to the campaign rules to determine which contents the particular site visitor should receive. The contents themselves are blocks of HTML that could refer to materials stored externally, execute Javascript or other programs, return information for analysis, or do pretty much anything else. Kefta uses cookies to identify repeat visitors.
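
To make the mechanics concrete, here is a minimal sketch, in Python, of the kind of lookup a probe request might trigger on the server. The rule structure, segment names and contents are hypothetical illustrations, not Kefta’s actual implementation:

    # Hypothetical sketch of a probe request being resolved on the server.
    # Campaign rules map a (placeholder, segment) pair to a block of HTML.

    CAMPAIGN_RULES = {
        ("home_banner", "repeat_visitor"): '<img src="/img/loyalty_offer.gif">',
        ("home_banner", "new_visitor"):    '<img src="/img/welcome_offer.gif">',
    }

    VISITOR_PROFILES = {}  # keyed by the cookie value the system sets

    def resolve_probe(placeholder: str, cookie_id: str) -> str:
        """Return the HTML a placeholder should display for this visitor."""
        profile = VISITOR_PROFILES.get(cookie_id)
        segment = "repeat_visitor" if profile else "new_visitor"
        VISITOR_PROFILES.setdefault(cookie_id, {"visits": 0})
        VISITOR_PROFILES[cookie_id]["visits"] += 1
        # Fall back to default content if no rule matches the segment.
        return CAMPAIGN_RULES.get((placeholder, segment), "<!-- default -->")

    print(resolve_probe("home_banner", "abc123"))  # first visit: welcome offer
    print(resolve_probe("home_banner", "abc123"))  # return visit: loyalty offer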

Placeholder probes also store a default content definition, which ensures that visitors see something relevant even if the connection to the Kefta server is lost. Other probes can track “actions” which are accumulated for reports and can be used as the object of a test campaign. Actions can be defined as the number of times a given probe has executed or as the sum of a value, such as order amount, which is gathered when the probe is fired. The full-service system allows any number or type of actions per campaign, while self-service is limited to ten. Kefta staff creates probes for full-service users, while self-service users can produce their own placeholder probes and one action probe. Additional action probes are built for them by Kefta.
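
A rough sketch of the two flavors of action probe described above, again hypothetical rather than Kefta’s own code:

    # Hypothetical action probes: a counter (times the probe fired) and an
    # accumulator (a value, such as order amount, gathered when it fires).

    actions = {"checkout_complete": {"count": 0, "total_value": 0.0}}

    def fire_action_probe(name: str, value: float = 0.0) -> None:
        """Record one execution of an action probe, optionally with a value."""
        a = actions[name]
        a["count"] += 1            # number of times the probe has executed
        a["total_value"] += value  # running sum of a value such as order amount

    fire_action_probe("checkout_complete", value=49.95)
    fire_action_probe("checkout_complete", value=12.50)
    print(actions)  # {'checkout_complete': {'count': 2, 'total_value': 62.45}}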

To conduct a test, self-service users attach multiple content items to one placeholder, specify the number of splits for the placeholder, and then select the content items to attach to each split. Users can assign control content for each placeholder and specify at the campaign level what percentage of visitors will be in the control group.

Kefta offers several ways to allocate test contents among visitors. Users can manually assign the percentage of visitors who will receive each item. They can specify percentages of visitors to receive the best- and worst-performing combination and let the system implement this based on actual results. Or the system can execute a “full factorial” test plan, meaning it tries all possible combinations of contents across all placeholders. Although Kefta can also test a subset of combinations and estimate results for the remainder (the “Taguchi” method), it has found the full factorial approach to be significantly more reliable.
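
A minimal Python sketch of what a full factorial plan and a manual traffic allocation might look like, with made-up placeholders and weights:

    import itertools
    import random

    # Hypothetical full factorial plan: every combination of contents across
    # all placeholders becomes a test cell.
    placeholders = {
        "banner":   ["banner_a", "banner_b"],
        "headline": ["headline_a", "headline_b", "headline_c"],
    }

    combinations = list(itertools.product(*placeholders.values()))
    print(len(combinations))  # 2 x 3 = 6 cells in the full factorial design

    # Manual allocation: a weight per combination, e.g. shifting more
    # traffic to the best performers while keeping some exposure for all.
    weights = [0.3, 0.3, 0.1, 0.1, 0.1, 0.1]

    def assign_visitor() -> tuple:
        """Pick a test cell for an arriving visitor according to the weights."""
        return random.choices(combinations, weights=weights, k=1)[0]

    print(assign_visitor())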

The simplest tests rotate the same contents among all visitors. However, Kefta argues strongly that finding the best contents for an “average” visitor is less effective than finding the best contents for different segments. Users can define visitor segments and assign test contents separately for each segment. In the self-service system, users must choose one of several segmentation factors: search engine key words; referring site URL; tracking codes or values within the referring URL; geographic location (usually state) or connection speed. The full-service system allows segmentation on combinations of these factors, visitor profiles stored on the Kefta server, and behavioral information stored in cookies deposited on the visitor’s PC. Although the self-service system uses cookies to identify repeat visitors, these do not store behavioral data.
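
The self-service style of segmentation, in which the user picks a single factor and maps its values to segments, might look something like this hypothetical sketch:

    # Hypothetical single-factor segmentation: here the factor is the
    # referring site URL, and rules are checked in order.

    SEGMENT_RULES = [
        ("google.com", "search_traffic"),
        ("partnersite.com", "partner_traffic"),
    ]

    def segment_for(referrer: str) -> str:
        """Return the first segment whose rule matches the referring URL."""
        for fragment, segment in SEGMENT_RULES:
            if fragment in referrer:
                return segment
        return "default_segment"

    print(segment_for("http://www.google.com/search?q=widgets"))  # search_traffic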

The full-service system can also apply statistical scoring systems to identify the best contents to offer individual customers, drawing on their segment, life stage, previous contents viewed, and available content. Business rules can further control the contents selected. Optimization uses logistic regression to automatically read test results and deploy the best-performing contents.
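
Kefta does not publish its optimization code, but the technique itself is standard. Here is a synthetic illustration of scoring contents with logistic regression, using the scikit-learn library and made-up data:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Synthetic illustration: predict conversion probability from which
    # content a visitor saw plus one visitor attribute, then deploy the
    # content with the higher predicted score.
    rng = np.random.default_rng(0)
    n = 1000
    content = rng.integers(0, 2, n)   # 0 = content A, 1 = content B
    repeat  = rng.integers(0, 2, n)   # 0 = new visitor, 1 = repeat visitor
    p = 0.05 + 0.04 * content + 0.03 * repeat
    converted = rng.binomial(1, p)

    X = np.column_stack([content, repeat])
    model = LogisticRegression().fit(X, converted)

    # Score both contents for a repeat visitor and pick the winner.
    scores = model.predict_proba([[0, 1], [1, 1]])[:, 1]
    best = "B" if scores[1] > scores[0] else "A"
    print(scores, "-> deploy content", best)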

Full-service also creates third-party cookies (that is, cookies set under the kefta.com domain rather than the client’s own) for visitors on external Web sites, allowing it to coordinate messages beyond the client’s site when browsers do not block such cookies.

System reports show click-through rates, actions and lift vs. control, with trends by days and details by segment. Other reports show exposures by placeholder combinations and detailed results per placeholder. The optimization system can estimate the incremental impact of individual content items on final results, even across multiple site visits.

Both versions of the Kefta solution are hosted. Reports and the self-service interface run in a Web browser. Kefta was founded in 2000 and has more than thirty users on its full-service system. Pricing is based on volume and services provided. It starts at $10,000 per month for the self-service system.

* * *
David M. Raab is a Principal at Raab Associates Inc., a consultancy specializing in marketing technology and analytics. He can be reached at draab@raabassociates.com.

2007 Feb 01

QD Technology Quick Response Database
by David M. Raab
DM News
February, 2007

There are many ways to organize data: flat files, XML tags, networks, hierarchies, cubes, columns, objects, and others still more exotic. But by far the dominant database management systems today are relational databases like Oracle, DB2, and SQL Server. These products are designed primarily for transaction processing—that is, to add, change and remove individual records. The features needed for transaction processing sometimes conflict with the features needed to analyze records in large groups. But relational databases can be used for analysis through a combination of feature extensions, clever database design and powerful hardware. Although this approach adds cost, many companies prefer it to the alternative of making their technical environment more complicated by bringing in another database engine designed specifically for analytics.

Such analytical databases do exist. Marketers in particular have frequently chosen to use them because they wanted the speed, flexibility and low cost such systems provide. The leading products in this group have changed over the years but the dominant products for marketing applications are currently Alterian and SmartFocus. Both organize data into columns (for example, all last names or all Zip codes), so only the items needed in a particular query can be loaded to resolve it. This reduces the total amount of data to be retrieved from storage, which is usually the major determinant of query response time. Both products also use compression and indexes to further reduce data volumes and increase speed. In addition, they provide specialized query languages that simplify tasks which are difficult in a conventional relational database. These languages are embedded in the systems’ own query tools.
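
A toy illustration of why columnar organization helps; the structures shown are illustrative, not either vendor’s actual format:

    # Toy column store: only the columns a query touches are scanned,
    # which is the main speed advantage of columnar organization.

    table = {
        "last_name": ["Smith", "Jones", "Lee"],
        "zip":       ["07302", "10001", "07302"],
        "balance":   [120.0, 75.5, 310.2],
    }

    # "SELECT count(*) WHERE zip = '07302'" needs only the zip column:
    matches = sum(1 for z in table["zip"] if z == "07302")
    print(matches)  # 2 rows, without ever reading last_name or balance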

Quick Response Database (QD Technology, 973-943-4137, www.qdtechnology.com) is another competitor in the analytical database category. Like other analytical systems, QRD achieves better performance than conventional relational databases by discarding the update management features needed for transaction processing. Users load data from existing sources through a batch process that compresses and indexes the inputs before storing them in the QRD format.

The system automatically analyzes the inputs and applies different compression and indexing methods based on what it finds. Once the data is loaded, it cannot be changed directly, although incremental files can be added with new and changed (but not deleted) records. These incremental files remain physically separate from the original but are automatically merged by the system during query processing.
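
A hypothetical sketch of that query-time merge, with an immutable base file and an incremental file whose records override it:

    # Hypothetical query-time merge: the compressed base is immutable;
    # an incremental file of new and changed records (keyed by id)
    # overrides it when queries run. Deletions cannot be represented,
    # which is why a periodic full rebuild is needed.

    base = {1: {"name": "Smith", "status": "active"},
            2: {"name": "Jones", "status": "active"}}
    increment = {2: {"name": "Jones", "status": "closed"},  # changed record
                 3: {"name": "Lee",   "status": "active"}}  # new record

    def merged_view():
        """Yield each record, letting incremental rows override base rows."""
        for key in sorted(base.keys() | increment.keys()):
            yield increment.get(key, base.get(key))

    for rec in merged_view():
        print(rec)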

QRD’s compression and indexing yield a file that takes one-eighth to one-tenth as much space as the original input. The actual amount of compression depends on the input: large blocks of text compress less than numbers or coded values. In addition to the compression itself, the system gains speed by using indexes to resolve queries when possible, by storing data in large blocks to reduce retrieval times, and by decompressing only the records needed to display query results. QD Technology states that queries often run ten times faster than on a conventional relational database, although again the actual improvement depends on the details.

Unlike systems that convert the inputs into columns, QRD retains the original data structures of its inputs. The system accepts queries in SQL—the language used by nearly all relational database systems—through a standard ODBC connection. Because it uses both standard SQL and the existing data structures, queries built to run against the original data source will typically run against QRD with little or no change. This is a major advantage for companies with extensive libraries of existing queries and with large investments in standard query tools such as Business Objects or Cognos.
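
Querying QRD from a standard tool might look like the following Python sketch using the pyodbc library. The data source name and tables are hypothetical; the point is that the SQL itself is unremarkable:

    import pyodbc

    # Connect through a standard ODBC data source; "QRD_CUSTOMERS" is a
    # hypothetical DSN name, not one documented by QD Technology.
    conn = pyodbc.connect("DSN=QRD_CUSTOMERS")
    cursor = conn.cursor()

    # Because QRD keeps the source's original tables and accepts standard
    # SQL, a query written for the source database should run unchanged:
    cursor.execute("""
        SELECT c.region, COUNT(*) AS orders, SUM(o.amount) AS revenue
        FROM customers c
        JOIN orders o ON o.customer_id = c.customer_id
        WHERE o.order_date >= '2006-01-01'
        GROUP BY c.region
    """)
    for region, orders, revenue in cursor.fetchall():
        print(region, orders, revenue)
    conn.close()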

QD Technology is selling QRD as a tool for desktop analysis, not a replacement for a primary marketing database. It points to applications such as providing regional analysts with subsets of an enterprise marketing database, so they can run their own selections rather than waiting for the work to be done at headquarters. Another example is providing fraud analysts with desktop copies of detailed transaction histories, so they can easily research large amounts of data.

Such applications require frequent updates so the users are working with fresh information. Database compression in QRD runs five to ten gigabytes per hour on a Windows server, so a twelve-hour overnight window accommodates roughly sixty to one hundred twenty gigabytes; this places significant limits on the amount of data that can be processed overnight or over a weekend. The system has been tested with twenty to one hundred gigabytes of input data—fairly small amounts by today’s standards—although these can be extracts from much larger databases. Because the incremental files do not include deleted records, a full rebuild is needed periodically to keep the information accurate.

In a typical configuration, compression runs on a central server and the compressed files are then distributed to analysts, who run them on their personal workstations. The system accepts relational database tables and delimited files as inputs. Relational databases must have both ODBC and JDBC connections available for the system to read the source data structures automatically.

Because QRD loads each source table independently, users define relationships among the tables when they set up individual queries. This allows the same flexibility as any standard SQL environment. Queries can create calculations and temporary data tables, but cannot write back to a database.

The system stores the decompression rules within each QRD file it distributes. This allows query results to display the data in its original, uncompressed form. It also lets users recreate the original input tables without referring to any external documentation.

QRD runs on Windows XP and Windows Server 2003, on both servers and desktops. The system includes several server components to manage compression and distribution of the QRD files. A smaller set of desktop components receives the QRD files and provides the ODBC connection to third-party query tools.

QRD has been under development since 2004 and has been tested at several large financial services companies. The first commercial release was last fall and has been sold to about a half-dozen buyers. Pricing is based on an annual subscription and ranges from $100,000 to $250,000 based on the number of users. A short-term trial license is available for much less.

* * *
David M. Raab is a Principal at Raab Associates Inc., a consultancy specializing in marketing technology and analytics. He can be reached at draab@raabassociates.com.

2007 Jan 01
Memetrics Memetrics xOs
by David M. Raab
DM News
January, 2007
It’s a safe bet that few readers of this column care deeply about the technical differences among multivariate testing methodologies. Taguchi, Optimal Design, and Discrete Choice Models each have strengths and weaknesses, but all are ways to quickly and efficiently identify optimal combinations of marketing treatments. Still, users can’t ignore the underlying technology entirely, since it can affect important practical issues such as scalability and flexibility to handle unexpected needs. But, overall, users evaluate testing systems like any other software: by looking at what it would be like to use them, without too much concern for what goes on under the hood. What ultimately matters is the result: better-performing promotions.

Memetrics xOs (Memetrics, 415.513.5120, www.memetrics.com) is a multivariate testing system based on discrete choice models. This approach measures consumer preferences by asking them to choose among simulated versions of a complete offer, each having a different value combination. (Think of product design: attributes might be size, price, color, etc.; consumers would be asked to choose among sample products with different values for each.) Discrete choice models have proven more effective at determining the actual impact of each attribute and value than asking about single attributes or values in isolation.

This is all heady stuff and there are Nobel Prizes involved. But for projects such as Web page optimization, the practical result is similar to other multivariate testing methods: each page is divided into one or more zones (attributes), such as message, image, and offer, and these are assigned multiple test values. If all possible value combinations have been tested—something called a “full factorial” design—the system will identify the best-performing combination as optimal and flag any relationships (“interactions”) among values. If only some combinations have been tested, the system ignores any interactions, identifies the best-performing value for each attribute, and proposes the combination of best-performing values as optimal. Testing only some combinations is a typical multivariate approach that yields faster results from smaller, simpler tests. It requires proper test design, which Memetrics, like other multivariate testing systems, handles automatically.
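
When only some combinations are tested, the “combine the best main effects” logic can be illustrated with a few lines of Python and made-up conversion rates:

    # Made-up cell results from a fractional test: conversion rate observed
    # for each tested (headline, image) combination.
    results = {
        ("h1", "img1"): 0.031,
        ("h2", "img2"): 0.044,
        ("h1", "img2"): 0.039,
    }

    # Ignoring interactions, average each value's performance across the
    # cells it appeared in, then combine the per-attribute winners.
    def best_value(position):
        totals = {}
        for combo, rate in results.items():
            v = combo[position]
            totals.setdefault(v, []).append(rate)
        return max(totals, key=lambda v: sum(totals[v]) / len(totals[v]))

    proposed = (best_value(0), best_value(1))
    print(proposed)  # ('h2', 'img2'): the combination of best main effects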

Either way, the system also builds a “choice model” that can estimate the results for any combination of values. This can be particularly helpful if the user is interested in multiple outcomes—say, gross revenue, profit margin and number of orders—and wants to balance them against each other rather than maximizing just one. The Enterprise version of xOs lets users track several outcomes and either model them separately or combine them into a single measure and model that. Enterprise users can also assign an offer cost and selection value to each outcome and combine these into a target measure. Outcomes can be based on information captured during an interaction or imported from external sources such as an order processing system. These features are not available in the simpler Express version of xOs, which tracks only one outcome.
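
A minimal sketch of folding several outcomes into one target measure, with purely illustrative weights:

    # Hypothetical weighted combination of outcomes into a single target
    # measure; the outcome names and weights are illustrative only.

    outcomes = {"orders": 120, "gross_revenue": 8400.0, "margin": 2100.0}
    weights  = {"orders": 5.0, "gross_revenue": 0.1, "margin": 1.0}

    target = sum(weights[k] * v for k, v in outcomes.items())
    print(target)  # a single number the choice model can optimize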

Setting up a test in Memetrics begins with defining the attributes, values, outcomes and proportion of traffic to be tested. Attribute values can be defined with a name, Internet address (URL), or by uploading the actual content to the Memetrics server. A sample size calculator helps users determine the number of attributes and values to test based on traffic volume, time available, conversion rates, expected response variations, and target confidence level.
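
Memetrics has not published its calculator’s formula, but a standard two-proportion approximation gives the flavor:

    from math import ceil, sqrt

    def sample_size(p_base, lift, z_alpha=1.96, z_beta=0.84):
        """Visitors needed per cell to detect a relative lift over a base
        conversion rate at ~95% confidence and ~80% power. This is a
        textbook two-proportion approximation, not necessarily the exact
        formula behind Memetrics' calculator."""
        p_test = p_base * (1 + lift)
        p_bar = (p_base + p_test) / 2
        num = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
               + z_beta * sqrt(p_base * (1 - p_base)
                               + p_test * (1 - p_test))) ** 2
        return ceil(num / (p_base - p_test) ** 2)

    # E.g., detecting a 20% lift on a 3% conversion rate:
    print(sample_size(0.03, 0.20))  # roughly 13,900 visitors per cell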

Once the elements are specified, xOs can generate a block of Javascript that identifies the test and its attributes. The user then embeds the Javascript in the Web page to be tested. The Javascript calls the Memetrics server each time the page is displayed, allowing Memetrics to assign each visitor to a test, control or default group and present the appropriate content. Memetrics uses persistent cookies to identify site visitors so it can ensure consistent treatment when they return.
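
Consistent assignment from a persistent cookie is commonly done by hashing the cookie value; here is a hypothetical sketch, not necessarily how Memetrics does it:

    import hashlib

    # Hashing a persistent cookie id yields the same test/control/default
    # assignment every time that visitor returns, without server-side state.

    def assign_group(cookie_id: str, test_pct=70, control_pct=15) -> str:
        bucket = int(hashlib.sha1(cookie_id.encode()).hexdigest(), 16) % 100
        if bucket < test_pct:
            return "test"
        if bucket < test_pct + control_pct:
            return "control"
        return "default"

    print(assign_group("visitor-42"))  # same answer each time this id returns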

The Memetrics Javascript can also capture information from the user’s URL, such as the search query that led them to the page. This is stored on the Memetrics server and used to analyze results by visitor segment. Javascript for the same test can be embedded in several pages, allowing consistent treatments and tracking of results such as registration or purchases.

xOs Express is limited to the Javascript approach. Enterprise can also use techniques such as .NET, PHP and JSP to communicate with Web servers or other interaction systems such as call centers. This enables Enterprise to manage tests across multiple channels. Enterprise, but not Express, can also use real-time filters to limit tests to predefined customer segments. These filters can access data provided by the interaction system or read from other sources such as a customer database.

After a test is complete, both systems let users define segments based on whatever visitor data is available. Users can test alternate segmentation approaches to find the best results. xOs can build one model against the entire test universe or build separate models for each segment. Reports for each model show the effect of each value and its statistical significance. Users can accept the system’s choice of optimal values or select their own, and then deploy this combination as a default. Either way, the system will show the expected results for the specified combination.

There is no automated adjustment of default values as customer behavior changes over time. Memetrics argues that humans should examine each test result and make conscious decisions about what to do next. A typical Web page test runs two to four weeks and evaluates five or six attributes, each with multiple values.

The default values will be shown to all visitors outside of a test sample and are also displayed if the Memetrics server is unavailable. The default contents are also what Web search engines see when they index the page.

Memetrics was founded in 1999 and has more than 30 clients. Its original product was Enterprise, which is priced at $150,000 per year plus consulting. It can run in-house or be hosted by Memetrics. Express is a hosted service that was introduced in 2006. Price begins at $40,000 per year and is based on volume.

* * *
David M. Raab is a Principal at Raab Associates Inc., a consultancy specializing in marketing technology and analytics. He can be reached at draab@raabassociates.com.