Assessing Demand Generation Usability
David M. Raab
DM Review
December 2008

Last month’s column discussed how to identify the system functions you need in a demand generation system.  But functions are only one piece of the puzzle, and many vendors argue that their real advantage lies elsewhere, in superior “ease of use”.  How do you measure that?

This is a serious challenge.  Functionality is relatively straightforward: either a system has a particular capability or it doesn’t.  But “ease of use” is more subtle.  Formal, objective measurement is difficult and expensive, so most people end up relying on subjective assessments instead.  And even subjective assessments are hard to organize.

To make any sense of the topic, we need to take a couple of steps back.  First, “ease of use” is one component of a larger and ultimately more important concept, “usability”.  Usability is defined in different terms by different people, but generally includes ease of use, ease of learning, and functionality.  Second, both ease of use and usability are highly contextual: they can only be measured for specific tasks performed by specific users in specific circumstances.  This makes perfect sense if you think about it for a moment: a system may be quite easy to use for one task, yet very difficult for another.  But it means that a single “usability score” makes much less sense than a single “functionality score” (not that functionality scores make much sense to begin with—the point of last month’s column was to measure only the functions that matter to you).

The good news is that accepting the contextual nature of usability is the first step towards a reasonable way to measure it.  Specifically, it hints that you should start by defining the contexts you need to consider.  Then you can actually assess usability within those contexts.

We’ve already listed three broad types of contexts: tasks, users and circumstances.

– The tasks you need to consider are essentially the same ones identified in the use cases that drive your functional requirements.  Usability analysis does add one new dimension to these: measuring the effort to do a task the first time compared with the effort to do it repetitively.  The difference is set-up effort, which applies to the first iteration only.  This turns out to be a significant differentiator among demand generation systems, since some are optimized for simple, one-off campaigns (not much set-up but little reuse) while others are best at creating many variations within a fixed theme (see the short worked example after this list).

– Users, of course, come in many varieties.  Some dimensions to consider are: experience with the system; frequency of system use; skill sets (marketing, analysts, technologists, etc.); and system access (end-users vs. administrators).  You will first identify the characteristics of your users, recognizing that you will probably have several sets of users who fall into different groups.  Then determine which types of users will perform which tasks, bearing in mind that some tasks may be shared across different groups.  For example, administrators who are technically skilled and frequent users may set up program templates that are then completed by infrequent end-users who are primarily marketers.

– Circumstances vary as well: will your users be focused solely on the demand generation system, or will they be in a chaotic environment with many distractions?  Will they be under extreme time pressure or able to plan ahead?  Is there some leeway for error, or must everything be perfect the first time out, perhaps for regulatory reasons?  The answers bear on system attributes such as display style, alert functions, error checking capabilities, approval workflows, versioning, and security.  Again, different tasks will likely be performed in different circumstances.
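
To make the set-up trade-off concrete, here is a minimal sketch (in Python, with purely illustrative effort figures that are assumptions, not vendor benchmarks) comparing cumulative effort in a system optimized for one-off campaigns against one optimized for reusable templates:

```python
# Hypothetical effort figures -- illustrative assumptions only, not benchmarks.
ONE_OFF = {"setup_hours": 0.0, "hours_per_campaign": 1.0}    # little set-up, little reuse
TEMPLATE = {"setup_hours": 6.0, "hours_per_campaign": 0.25}  # heavy set-up, cheap variations

def cumulative_effort(system, campaigns):
    """Total hours invested after running a given number of similar campaigns."""
    return system["setup_hours"] + system["hours_per_campaign"] * campaigns

for n in (1, 5, 10, 20):
    print(f"{n:>2} campaigns: one-off {cumulative_effort(ONE_OFF, n):5.2f}h, "
          f"template-based {cumulative_effort(TEMPLATE, n):5.2f}h")
```

Under these assumed figures, the template-oriented system only pays off after about eight similar campaigns; whether you will ever run eight similar campaigns is exactly the kind of contextual question a usability assessment has to answer.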

Once you’ve worked through these issues, your original task list will be extended to include which types of users will perform each task, and in what kinds of circumstances.  This allows you to assess the usability of each task against the right criteria.
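
One way to hold this extended list together is as a simple structured record.  The sketch below is a hypothetical illustration in Python; the field names and example entries are assumptions invented for this column, not features of any product:

```python
from dataclasses import dataclass

@dataclass
class TaskContext:
    """One task from the use-case list, extended with its usability context."""
    task: str
    user_types: list        # who performs it, e.g. "administrator (expert, frequent)"
    circumstances: list     # conditions under which it is performed
    first_time_effort: str = "unrated"   # first iteration, including set-up
    repeat_effort: str = "unrated"       # subsequent iterations

extended_task_list = [
    TaskContext(
        task="Set up program template",
        user_types=["administrator (expert, frequent)"],
        circumstances=["planned ahead", "some leeway for error"],
    ),
    TaskContext(
        task="Launch campaign from template",
        user_types=["marketer (casual, infrequent)"],
        circumstances=["many distractions", "time pressure"],
    ),
]
```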

This still hasn’t answered the question of how to do the assessment itself.  In an ideal world, and sometimes in the real world when the stakes are high enough, you would install a test version of the software, train the appropriate users, and let them work with it.  But most evaluation projects don’t have the time or resources to make such an investment.  Yet even if you’re limited to what can be accomplished in a couple of vendor demonstration sessions, you can ensure that you are looking at usability in the right context.  Here are some specific suggestions:

– Have the vendor demonstrate tasks that would be performed by expert users.  The vendor’s demonstrator, whether a salesperson or an engineer, is likely to be an expert herself.  She will surely make things look easy, but your own experts will eventually learn the same tricks, so that’s okay.

– Have the vendor guide you through tasks that would be performed by casual users.  That is, you should operate the system yourself rather than watch.  If you are already an expert, it might be better to put a typical end-user at the controls instead.  The goal is to see how well someone unfamiliar with the system can work with it, at least for tasks likely to be done by such users.

– Look for features relevant to the appropriate user groups.  Expert users can invest the time to learn system shortcuts, set up reusable templates, and make correct choices in unstructured environments.  Casual users rely more on the intuitive choice being the right choice, expect more guidance and error-checking, and may prefer graphical interfaces.  They may also need to rely on templates and other components produced by the experts.

– When different kinds of users will share the system, look for options to tailor the interface to individual needs.  These include the ability to turn help messages on and off, to limit the functions offered to casual users, and to present predefined workflows.

Be sure to prepare a specific list of these items in advance of the demonstration itself, so you know exactly what to look for and have a place to record your observations.  The more structured your process, the more you’ll be able to cover and the clearer you’ll be about what you actually saw.
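
One simple way to prepare such a list is sketched below (again in Python; the tasks, column names, and file name are hypothetical examples).  It writes a worksheet with a row per task and user type, a reminder of what to look for, and a blank column to fill in during the demonstration:

```python
import csv

# Hypothetical checklist rows -- prepare these before the session.
checklist = [
    {"task": "Set up program template", "user_type": "expert administrator",
     "look_for": "shortcuts, reusable components", "observed": ""},
    {"task": "Launch campaign from template", "user_type": "casual marketer",
     "look_for": "guidance, error-checking, graphical interface", "observed": ""},
]

# Write a worksheet to record observations during the demo.
with open("demo_checklist.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=checklist[0].keys())
    writer.writeheader()
    writer.writerows(checklist)
```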

Finally, remember to look beyond the demonstration itself.  Reference clients are a particularly important source of insight.  Anybody the vendor recommends is almost certain to be happy (although slip-ups are more common than you might think), but you still need to assess whether the reference context (tasks, users and circumstances) is similar to your context.  If the vendor can’t connect you with someone similar, you’ll have to work harder to assess the product directly.  You can also look at other elements of the customer experience, such as training, support and user forums.  These have the advantage of being objectively measurable, but bear in mind that they often have more to do with the size and maturity of the vendor than the quality of the product itself.

*                            *                           *

David M. Raab is a Principal at Raab Associates Inc., a consultancy specializing in marketing technology and analytics.  He can be reached at draab@raabassociates.com.
