Marketing Data Quality Enters a New Stage
David M. Raab
Information Management
September / October 2011
Data quality has never been easy, but it used to be simple: marketers defined quality as the completeness and accuracy of information in their systems. This reflected the simple goal of the marketing database itself, which was to create a central location holding all information about each customer. This unified view provided the foundation for all customer treatments, ensuring a consistent, coordinated experience across all channels throughout the customer life cycle.
But marketing is no longer so simple. Companies now interact with customers and prospects in many situations where the individual is not tied to an existing marketing database record. These include anonymous Web site visits, social content consumption or creation, appearance at known geographic locations, and membership in behaviorally-targeted Web audience segments. Although individuals can sometimes be identified in those situations, the programs are valuable even when they cannot. Pragmatic marketers will, and should, continue to run them regardless of whether they fit the standard database marketing paradigm.
For the marketing database, the result is something like the transition from academic portraits, which stressed hyper-accurate portrayal of the details of each individual, to cubism, which presented a simultaneous view of loosely-connected fragments. Accuracy and completeness are reasonable criteria for evaluating a classical portrait by Ingres, but make little sense for judging a Picasso. As with modern art, modern marketing databases require a revolution in terminology to discuss them productively.
The revolution in marketing databases comes down to one word: effectiveness. An effective marketing program is worth doing even if you can’t link the resulting revenue to a customer profile in the marketing database. Of course, the desire for marketing effectiveness is no newer than the search for Truth in art. Both have been with us forever and both have seemed equally elusive. But while art remains a matter of taste, it’s now possible to assess the value of many marketing programs with enough precision to be useful.
The new, quasi-anonymous marketing programs are still data-driven. You may not know precisely who is reading your Tweets, passing by your store, visiting your Web site, in a cookie-based audience segment, or abandoning your shopping carts. But you know enough to send them a targeted message and to track the results. This knowledge is embodied in data, and where there’s data, there’s data quality.
What has changed is that the relevant data quality metrics are tied to the specific programs, while metrics for customer profiles are not. For example, the effectiveness of a marketing program based on sending messages to people passing by your store or looking for a new car depends greatly on how quickly you can react. If they’ve already walked past your door (a matter of minutes) or bought their car (could be hours, days, or weeks), then it’s largely too late, although you might influence their next purchase of the same item. For those programs, speed is a critical data quality metric. In fact, several types of speed may matter: how quickly the data is gathered, how quickly it reaches the marketing system, how quickly a message can be transmitted, and how quickly you can see any response. The unit of measure may also vary: it could be milliseconds if you’re trying to snare a pedestrian passing by your storefront, while minutes or hours may suffice to influence a considered purchase like an automobile.
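To make those speed components concrete, here is a minimal Python sketch of how the separate latencies might be measured for a single location-triggered offer. The event fields, timestamps, and the one-second storefront deadline are all hypothetical, not drawn from any particular system.

```python
from datetime import datetime, timedelta

# Hypothetical timestamps for one location-triggered offer.
event = {
    "observed_at":  datetime(2011, 9, 1, 12, 0, 0, 0),        # customer passes the store
    "received_at":  datetime(2011, 9, 1, 12, 0, 0, 400000),   # data reaches the marketing system
    "sent_at":      datetime(2011, 9, 1, 12, 0, 1, 100000),   # targeted message transmitted
    "responded_at": datetime(2011, 9, 1, 12, 1, 30, 0),       # first measurable response, if any
}

def latency_metrics(e):
    """Break total delay into the speed components a program might track."""
    return {
        "capture_to_system":   e["received_at"] - e["observed_at"],
        "system_to_message":   e["sent_at"] - e["received_at"],
        "message_to_response": e["responded_at"] - e["sent_at"],
        "total":               e["responded_at"] - e["observed_at"],
    }

# A storefront program might demand roughly sub-second delivery; a considered
# purchase such as an automobile could tolerate minutes or hours.
STOREFRONT_DEADLINE = timedelta(seconds=1)

metrics = latency_metrics(event)
delivered_in_time = (metrics["capture_to_system"] + metrics["system_to_message"]) <= STOREFRONT_DEADLINE
print(metrics)
print("delivered within the storefront deadline:", delivered_in_time)
```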
By contrast, the traditional measure of completeness will matter much less for many situation-based programs. You may reach only 2% of the people passing your store, but so long as you can profitably contact those you reach, the program is worthwhile. Things are different in a database-driven program where missing data could result in expensive messages to inappropriate recipients.
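A hypothetical back-of-the-envelope calculation (all figures invented for illustration) shows why the reach rate drops out of the decision:

```python
# Illustrative figures only.
passersby           = 10_000
reach_rate          = 0.02    # only 2% of passersby can be messaged
cost_per_message    = 0.05    # dollars
response_rate       = 0.03    # of those actually reached
margin_per_response = 25.00   # dollars

reached = passersby * reach_rate
expected_profit = reached * (response_rate * margin_per_response - cost_per_message)
print(f"contacts reached: {reached:.0f}, expected profit: ${expected_profit:.2f}")
# The 98% of passersby who are never reached do not enter the calculation;
# what matters is whether the contacts you can reach are profitable.
```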
Other program-specific quality metrics may include:
– reliability: much of the new data will reside outside of your company, either because suppliers want it that way (for example, social networks may be unwilling to share member profiles) or because it changes too quickly to import (e.g., current location data). Reliability measures, such as how often the data is unavailable, how consistent connection speeds are, and how often access methods or data formats change, are critical in assessing an external source’s value.
– consistency: external providers often consolidate information from multiple sources. These inputs may themselves change, impacting accuracy, coverage, currency, or other aspects that affect the utility of the data for a marketing program.
– specificity: many of the new marketing programs gain their effectiveness from the precision of one piece of data, such as current location, product interest, recent purchases, or attitudes. The level of precision may vary as topics change or as sources evolve.
– clarity: categories such as product interest or content topic may be hard to interpret because different terms can describe the same concept. Marketing programs that rely on matching such inputs to offers need quality measures that track the ability to map inputs to a known, stable taxonomy, or to quantify potentially vague measures such as “social influence” (see the sketch after this list).
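A minimal sketch of how two of these metrics might be tracked: reliability as the availability of an external source, and clarity as the share of incoming terms that map to a known taxonomy. The taxonomy, the incoming terms, and the polling results are all invented for illustration.

```python
from collections import Counter

# Hypothetical canonical taxonomy of product-interest categories.
TAXONOMY = {"suv": "auto", "sedan": "auto", "running shoes": "footwear", "sneakers": "footwear"}

def clarity_rate(terms):
    """Share of incoming terms that map cleanly to the known taxonomy."""
    mapped = sum(1 for t in terms if t.lower() in TAXONOMY)
    return mapped / len(terms) if terms else 0.0

def availability_rate(poll_results):
    """Share of polling attempts on an external source that returned usable data."""
    counts = Counter(poll_results)
    return counts["ok"] / len(poll_results) if poll_results else 0.0

# Illustrative inputs only.
incoming_terms = ["SUV", "crossover", "sneakers", "trail runners", "sedan"]
polls = ["ok", "ok", "timeout", "ok", "ok", "format_change", "ok"]

print(f"clarity:     {clarity_rate(incoming_terms):.0%} of terms mapped")
print(f"reliability: {availability_rate(polls):.0%} of polls succeeded")
```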
Part of the challenge in this new style of data quality is to understand which metrics apply to which programs. This requires investigatory analytics that quantify each attribute and then assess its impact on results. These analytics must also separate the impact of quality-related factors from other factors, such as offers and creative execution. Once the key quality-related metrics are identified, data quality programs can use them to monitor existing sources for changes, to assess potential new sources, and to prioritize the search for improvements.
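One possible way to perform that separation, not the only one, is to fit response against both the quality-related attributes and the program factors in a single model, so each effect is estimated net of the others. The sketch below uses simulated data and hypothetical field names; any comparable regression or uplift technique could serve.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical campaign log: one row per message, with a quality attribute
# (delivery latency), a program factor (which offer was used), and the outcome.
rng = np.random.default_rng(0)
n = 5_000
df = pd.DataFrame({
    "latency_sec": rng.exponential(scale=30, size=n),   # quality-related factor
    "offer_b":     rng.integers(0, 2, size=n),          # offer/creative factor
})
# Simulated responses purely for illustration: faster delivery and offer B both help.
log_odds = -2.0 - 0.03 * df["latency_sec"] + 0.5 * df["offer_b"]
df["responded"] = (rng.random(n) < 1 / (1 + np.exp(-log_odds))).astype(int)

# Fit both factors together so the latency effect is estimated net of the offer.
X = sm.add_constant(df[["latency_sec", "offer_b"]])
model = sm.Logit(df["responded"], X).fit(disp=False)
print(model.params)  # separate coefficients for the quality factor and the offer
```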
This change in the mechanics of data quality implies a deeper change in the relationship of quality efforts to the marketing database and to marketers themselves. A central marketing database supporting many different programs encourages generic quality measures such as accuracy and completeness. But as marketers increasingly use specific data sets for specific programs, quality measures can be tied directly to each program’s unique requirements. This means that quality managers must work more closely with marketers to understand those requirements, to develop appropriate measures, and to correlate results with quality changes. This closer relationship is more demanding but will ultimately provide greater insights into marketing results and new opportunities for improvement.
* * *
David M. Raab is a consultant specializing in marketing technology and analytics and author of the B2B Marketing Automation Vendor Selection Tool (www.raabguide.com). He can be reached at draab@raabassociates.com.