2008 Feb 01
Software Selection Mistakes
David M. Raab
DM Review
February 2008

Selecting the right software cannot guarantee the success of a project, but picking the wrong system can ensure failure. Here are three common errors and how to avoid them.

1. Overly detailed requirements. This error is so common that it’s sometimes mistaken for a best practice. Selection teams list hundreds of desired system functions and embed them in a Request for Proposal. Vendors are expected to answer, often in a yes/no format, without any opportunity to understand the reasons for the requirements or to explain how their solution would meet them. The selection team then scores the results, and the software with the most ticks wins.

This approach creates much work and little information. Vendors cannot describe how their products would best meet the client’s needs, and clients gain little insight into how the products function. It’s better to help vendors build a thorough understanding of your situation and goals, and then let them propose a solution based on their product. In addition to getting the vendor’s best ideas, this process gives your team a good idea of what the vendor will be like to work with.

This approach does not do away with the need to understand your requirements. It’s perfectly possible for a vendor to propose a solution that won’t actually meet your needs. You must build your requirements list in advance so you can compare it against the vendor’s recommendation. And, yes, you should share this list with the vendors—this is a business project, not a child’s game of “gotcha”.

2. Canned demonstrations. Project teams often follow their Request for Proposal with invitations for the most promising vendors to demonstrate their software. Demonstrations play the same role as an automobile test drive: they let you discover what it’s like to actually use the product. But just as you wouldn’t be satisfied with sitting in the passenger seat while the dealer drives for you, you can’t simply watch someone else run a piece of software. You need to take control, which includes both running the system and choosing what to test. For an automobile, this might mean driving on different roads in different weather conditions. Maybe you’d even hook up a trailer if that’s your intended use. The software equivalent is running through the relevant business processes – setting up a campaign, processing an order, handling a phone call, and so on.

The first step in this type of testing is personally running the tasks on the demonstration system. This can be enlightening, because vendors often structure their planned demonstrations to avoid known weaknesses in their product. On an even more basic level, something that looks simple in the hands of an experienced demonstrator can turn out to be considerably more painful when you are pushing the buttons yourself.

But you’ll usually want to go beyond the demonstration system to see how the product would function in your own environment with your own data. If actually connecting to your own systems is not practical, the vendor can still show you the steps required to do it. This will give you a much clearer idea of the work required to deploy the software and will help identify challenges in adapting it to your data models. It’s important that your team have the right technical experts present for this discussion, so they can provide information, ask the right questions, and understand the implications of the vendor’s answers.

3. Uninformed evaluation. Many software vendors offer evaluation copies of their products. This is the exact opposite of a controlled demonstration since users can do whatever they want. But users testing a product on their own may underestimate its capabilities because they don’t understand them properly. This is why the auto salesman shows you the controls before your test drive. Software vendors often provide an evaluation guide or tutorial that illustrates key product features. Some share the complete user documentation. For complex products, assistance may extend to offering the time of sales engineers or technical support staff.

The mistake here is failing to take advantage of those resources. Evaluators often try to learn the products just by loading and running them. They sometimes rationalize this as “testing for ease of use”, but unless you actually plan to deploy the software without training your staff, that’s a poor excuse. You will eventually run your own processes on the evaluation system, but you should start by learning how it works. Remember, your ultimate goal is to gather accurate information about each product. Undervaluing a product because of a poor evaluation is as much an error as overvaluing it because of vendor hype.

Different as they are, these errors all have one thing in common: avoiding them requires creating scenarios that illustrate how the system will be used. Scenarios provide a reference point for vendor proposals, determine which features to explore during a demonstration, and structure the time spent with an evaluation copy. They ensure the evaluation is grounded in actual business needs and that it covers the key processes from start to finish. Although creating scenarios is hard work, it is the best way to avoid the ultimate selection nightmare of purchasing a product, installing it, and immediately discovering it doesn’t do what you really need.

* * *

David M. Raab is a Principal at Raab Associates Inc., a consultancy specializing in marketing technology and analytics. He can be reached at draab@raabassociates.com.
