2008 Mar 01
Revamping the Software Selection Process: A Modest Proposal
David M. Raab
DM Review
March 2008 – April 2008
Last month I wrote about the importance of scenarios in helping to select the right software. But even scenarios only help buyers understand how a product works. To make a truly sound decision, they must know how it will impact their company.

This means the software must be assessed in the context of other company systems. A list of requirements can be misleading because some of the requirements may already be met by other resources. But you can’t just ignore those requirements: if a new product is better at them than the existing tools, the value of that improvement should still count.

To place the new product in context, the evaluation team must first describe how the company functions today, and then see what would change if the new product were added. If several products are being considered, they must assess the combination of each product with the existing systems.

The mechanics of this approach are not so different from conventional assessments. It still starts with a requirements list, preferably built by analyzing company processes through scenarios. What’s different is the next step: preparing a set of scores for how well each requirement is met today. This is the “base case”. Then the team creates additional sets of scores for the base case plus each new system (less existing systems that would be removed, if any).

The practical impact is that a new system gets credit if it meets a requirement better than the existing systems, but is not penalized if it doesn’t. Instead of comparing the new systems in isolation, you’re measuring the value they add to your business. This is what you really want to know.
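This scoring rule is easy to mechanize. The sketch below, in Python, uses invented requirement names and scores purely for illustration; the article itself prescribes no particular tool or scale.

```python
# Hypothetical sketch of the scoring rule described above. Requirement
# names and 0-10 scores are invented for illustration only.

# Requirement scores for the systems already in place (the "base case").
base_scores = {"reporting": 7, "drilldown": 5, "modeling": 2}

# Scores for a candidate product, rated on the same requirements.
candidate_scores = {"reporting": 6, "drilldown": 8, "modeling": 9}

def combined_score(base, candidate):
    """Score the base case plus the candidate: the candidate gets credit
    where it beats the existing tools, but is not penalized where the
    existing tools are already better (take the max per requirement)."""
    return {req: max(base[req], candidate[req]) for req in base}

combined = combined_score(base_scores, candidate_scores)
value_added = sum(combined.values()) - sum(base_scores.values())
print(combined)     # {'reporting': 7, 'drilldown': 8, 'modeling': 9}
print(value_added)  # 10: the candidate's improvement over the base case
```

Note that the candidate’s weaker reporting score (6 vs. 7) costs it nothing, exactly as the text requires: only improvements over the base case contribute to the value added.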

Converting “requirements met” to “business value added” is not easy. With analytical systems, the business value often comes from making better decisions, not from processing transactions more quickly or at lower cost. How do you measure the contribution that different analytical systems will make to better decisions?

One approach is to measure who does the actual work. The logic is this: in most analytical systems, the primary goal is to answer managers’ questions. So the best system is the one that gets managers their answers the most quickly. This depends on how far from the manager each question must travel before reaching someone who can answer it.

In other words, analytical systems involve tiers of users, starting with the manager herself. If she can answer a question without help, she will. If not, she will ask someone on her staff who has greater technical skills and probably more powerful tools. Let’s call that person a business analyst. If the analyst can’t answer the question, he will hand it off to a specialist, such as a statistician working in a corporate decision management group. If that group can’t get an answer—usually because it requires data that hasn’t already been exposed in a data warehouse or service environment—they turn to the IT department.

Each step in this chain adds time. Therefore, the value added by an analytical system can be measured by how many answers it moves closer to the manager. For example, automated predictive modeling systems shift the ability to build models from statisticians to business analysts, speeding up the process and lowering the cost dramatically.

The table below provides a concrete illustration. It defines five levels of effort to answer a question: read an existing report, drill down in a business intelligence system, analyze data extracted from an existing BI or reporting system, analyze data in a data warehouse (but not exposed to end-users in a BI system), and add data not already in the warehouse. Numbers represent the percentage of questions of each type that can be answered by each user role (they add to 100% reading across). Column totals show the capabilities and workload for each group.

In the example, “Base” shows the current situation and “New” shows capabilities after adding a new business intelligence tool. As the “Change” row illustrates, the new tool slightly empowers Managers and greatly increases the capabilities of Business Analysts. The workloads of Statisticians and IT are reduced accordingly.

% Answers Provided by Each User Type

Case     Question Type            Manager   Business Analyst   Statistician   IT
Base     Read Report                100
         Drilldown in BI             40            40                20
         Analyze from BI             30            40                30
         Analyze from Warehouse                    30                70
         Add New Data                                                30        70
         Score Total                170           110               150        70
New      Read Report                100
         Drilldown in BI             50            50
         Analyze from BI             30            60                10
         Analyze from Warehouse      10            80                10
         Add New Data                              40                30        30
         Score Total                190           230                50        30
Change   Score Total                +20          +120              -100       -40
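The table’s arithmetic can be reproduced in a few lines. The sketch below, in Python (not part of the original article), copies the question-type percentages from the example; each role’s Score Total is simply its column sum, and the Change row is the difference between the New and Base totals.

```python
# Reproduce the score totals and Change row from the table above.
# Roles are ordered: Manager, Business Analyst, Statistician, IT.
roles = ["Manager", "Business Analyst", "Statistician", "IT"]

# % of questions of each type answered by each role (rows sum to 100).
base = {
    "Read Report":            [100,  0,  0,  0],
    "Drilldown in BI":        [40,  40, 20,  0],
    "Analyze from BI":        [30,  40, 30,  0],
    "Analyze from Warehouse": [0,   30, 70,  0],
    "Add New Data":           [0,    0, 30, 70],
}
new = {
    "Read Report":            [100,  0,  0,  0],
    "Drilldown in BI":        [50,  50,  0,  0],
    "Analyze from BI":        [30,  60, 10,  0],
    "Analyze from Warehouse": [10,  80, 10,  0],
    "Add New Data":           [0,   40, 30, 30],
}

def score_totals(case):
    """Column totals: each role's combined workload across question types."""
    return [sum(row[i] for row in case.values()) for i in range(len(roles))]

base_totals = score_totals(base)  # [170, 110, 150, 70]
new_totals = score_totals(new)    # [190, 230, 50, 30]
change = [n - b for n, b in zip(new_totals, base_totals)]
print(change)                     # [20, 120, -100, -40]
```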

So far so good. But how do you find the best choice among several systems? I’ll answer that question next month.

* * *

(Second of a two-part series)

The first part of this series showed why the traditional approach of comparing software products against each other is less useful than evaluating how they complement existing company systems. It also proposed a new measure for the value of analytical systems: the number of user groups a question must pass through before reaching someone who is able to provide an answer.

The table below repeats the example from last month. It shows five types of analytical questions, requiring different levels of technical skills and resources to answer. The “Base” case shows the percentage of questions that can be answered by four types of users with existing systems, and the “New” case shows the percentages with a new system added. The change in score totals shows that the new system allows Managers and Business Analysts to answer more questions, while shifting work away from Statisticians and IT. The bar chart illustrates the same changes even more clearly.

% Answers Provided by Each User Type

Case     Question Type            Manager   Business Analyst   Statistician   IT
Base     Read Report                100
         Drilldown in BI             40            40                20
         Analyze from BI             30            40                30
         Analyze from Warehouse                    30                70
         Add New Data                                                30        70
         Score Total                170           110               150        70
New      Read Report                100
         Drilldown in BI             50            50
         Analyze from BI             30            60                10
         Analyze from Warehouse      10            80                10
         Add New Data                              40                30        30
         Score Total                190           230                50        30
Change   Score Total                +20          +120              -100       -40

This approach gives a good picture of how a new system will affect the organization. If several new systems are being considered, comparing the charts for each would be enlightening. But it wouldn’t tell you which one to buy.

Picking a single system requires combining the workload figures into a single number that can be used to rank the alternatives. To do this, the detailed figures must be weighted on two dimensions: question type and user type.

Question types are ultimately the same as user requirements. Since weighting on user requirements is part of any evaluation methodology, this poses no new challenge. (The example simply added the five requirement scores, which implicitly weights them equally.)

But weights for user types are something completely different. These reflect the relative value of having different users answer the same question. Our original premise is that answers from users “closer” to the manager are worth more because they are received sooner. They are probably cheaper too. So we know the value weight is highest for answers from managers and lowest for answers from IT.

It’s possible to calculate precise ratios between these weights, based on factors like turnaround time, labor cost, accuracy, and opportunity cost. But in most cases an intuitive estimate will suffice. In the example below, weights of 4, 3, 2 and 1 have been applied to the original New and an alternative business intelligence product, New 2.

Summary Value Calculation – New vs New 2

Case                       Manager   Business Analyst   Statistician   IT   Combined Value
New      Score Total         190           230               50        30
         x Value Weight        4             3                2         1
         = Value             760           690              100        30        1,580
New 2    Score Total         200           140              130        30
         x Value Weight        4             3                2         1
         = Value             800           420              260        30        1,510
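The combined value is just a weighted sum of the score totals. The short sketch below, in Python (an illustration, not part of the original column), applies the 4/3/2/1 weights from the example to both candidates.

```python
# Weighted combined value, per the summary table: multiply each role's
# score total by its value weight and sum across roles.
weights = [4, 3, 2, 1]  # Manager, Business Analyst, Statistician, IT

def combined_value(score_totals, weights):
    """Collapse per-role score totals into one number for ranking."""
    return sum(s * w for s, w in zip(score_totals, weights))

new_value = combined_value([190, 230, 50, 30], weights)    # 1,580
new2_value = combined_value([200, 140, 130, 30], weights)  # 1,510
print(new_value, new2_value)
```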

Examining the user-level values, we see that New 2 gives Managers slightly more value than New (800 vs. 760), but Business Analysts gain much less (420 vs. 690). The combined value for New is higher than for New 2 (1,580 vs. 1,510), making New the better choice.

Where does this leave us?

In a better place, I think, than the traditional selection process. Instead of a horse race between product features, this approach puts focus where it should be: on value to your business. It recognizes that the value of a new tool depends on the other tools already available, and it forces evaluation teams to explicitly study the impact of different tools on different users. By creating a clearer picture of how each new tool will impact the way work actually gets done at the company, it leads to more realistic product assessments and ultimately to more productive selection choices.

* * *

David M. Raab is a Principal at Raab Associates Inc., a consultancy specializing in marketing technology and analytics. He can be reached at draab@raabassociates.com.
