Measure, Metric and Indicator: What's the Difference


The importance of measurement in software development has been highlighted this past year, largely as a result of the new Air Force policy on software metrics (see CrossTalk, April 1994). At the same time, there seems to be a good deal of confusion on the terminology involved, specifically measure, metric, and indicator. It's important to understand differences between these terms.

Many people look to the Institute of Electrical and Electronics Engineers, Inc. (IEEE) and the Software Engineering Institute (SEI) definitions for guidance. These are good sources but I will recommend some software-specific terms and provide some examples that will help to clarify the definitions.

First, let's look at the definitions of these terms:

Measure - To ascertain or appraise by comparing to a standard [1]. A standard or unit of measurement; the extent, dimensions, capacity, etc., of anything, especially as determined by a standard; an act or process of measuring; a result of measurement [3]. A related term is Measurement - The act or process of measuring. A figure, extent, or amount obtained by measuring [1]. The act or process of measuring something. Also a result, such as a figure expressing the extent or value that is obtained by measuring [3].

An example measure might be five centimeters. The centimeter is the standard, and five identifies how many multiples or fractions of the standard are being appraised. With the centimeter, someone measuring something in the United States is going to get the same measure as someone in Europe.

Let's relate this to software, such as lines of code. Currently, there really isn't a universal standard for lines of code. Someone measuring a program's lines of code in one office will probably not get the same count as someone measuring the same program in a different office. Therefore, it is imperative that each organization determine a single standard for what is meant by a line of code and ensure that everyone in the organization understands and uses that standard. Thus, a measure may be universally standard or locally standard, but it needs to be a standard.
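To make the idea of a "local standard" concrete, here is a minimal sketch of one possible counting rule: a line of code is any physical line that is neither blank nor a pure comment. The rule itself is invented for illustration; the point is that whatever rule an organization picks, everyone must apply the same one.

```python
def count_lines_of_code(source: str) -> int:
    """Count lines under one assumed local standard:
    a line of code is any non-blank line that is not a pure comment."""
    count = 0
    for line in source.splitlines():
        stripped = line.strip()
        if stripped and not stripped.startswith("#"):
            count += 1
    return count

# A small example program to measure (contents are arbitrary).
program = """\
# A small example program
def greet(name):
    return f"Hello, {name}"

print(greet("world"))
"""

print(count_lines_of_code(program))
```

Under this standard the example program measures 3 lines of code; an office that also counts comments and blank lines would report 5 for the very same program, which is exactly why the standard must be agreed on first.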

Metric - A quantitative measure of the degree to which a system, component, or process possesses a given attribute [2]. A calculated or composite indicator based upon two or more measures. A quantified measure of the degree to which a system, component, or process possesses a given attribute [3].

An example of a metric would be that there were only two user-discovered errors in the first 18 months of operation. This provides more meaningful information than a statement that the delivered system is of top quality.

Indicator - A device or variable that can be set to a prescribed state based on the results of a process or the occurrence of a specified condition. For example, a flag or semaphore [2]. A metric that provides insight into software development processes and software process improvement activities concerning goal attainment [3].

As the definition notes, a flag is one example of an indicator. An indicator is something that draws a person's attention to a particular situation. Another example of an indicator is the activation of a smoke detector in your home; it is set to a prescribed state and sounds an alarm if the number of smoke particles in the air exceeds the specified conditions for the state for which the detector is set. In software terms, an indicator may be a substantial increase in the number of defects found in the most recent release of code.
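The smoke-detector behavior can be sketched as a simple threshold flag. The baseline of 50 defects per release is an invented number for illustration; in practice the prescribed state would come from the organization's own historical data.

```python
# Assumed baseline: the expected defect count for a release.
# This value is hypothetical, chosen only to illustrate the mechanism.
DEFECT_BASELINE = 50

def defect_indicator(defects_found: int, baseline: int = DEFECT_BASELINE) -> bool:
    """Return True (raise the flag) when the defect count for a release
    exceeds the prescribed baseline, like a smoke detector sounding
    its alarm when particles exceed the set condition."""
    return defects_found > baseline

print(defect_indicator(42))  # within the expected range: flag stays off
print(defect_indicator(87))  # substantial increase: flag is raised
```

Like the smoke detector, the indicator carries no detail about *why* the condition occurred; it only draws attention to a situation that needs investigation.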

My objective is not to add more definitions or confusion but to give an example to help you understand the differences between these terms. A few charts can help clarify the differences. Let's start with a common scenario that involves a sick patient.

An individual is brought into a hospital emergency room. He is unconscious and has a temperature of 99.1 degrees Fahrenheit (see Figure 1). Other vital signs appear normal. What does the measure of 99.1 degrees Fahrenheit tell you? Very little. You may realize that it is above normal body temperature, but you don't know if the temperature is going up, down, or remaining constant. So is this individual getting better or getting worse?

Now, after many hours of regularly checking the patient's vital statistics, we are able to see a trend in the temperature readings (see Figure 2).

This trend analysis gives the doctors a lot more to work with, even though the patient is still unconscious. What does the chart in Figure 2 show us? The temperature continues to climb and even more rapidly as the second day progresses. The doctors start to worry, but other vital statistics show no problems.

Suddenly, the patient awakes and provides more information about his condition. He is Fprouktquiktzarpkx, from the planet Zorkkokkroz, and his normal body temperature is 105.6 degrees Fahrenheit (see Figure 3). He was recovering from hypothermia.

Figure 1. Body temperature measure

Figure 2. Body temperature metric

Figure 3. Body temperature compared

The above scenario helps to illustrate the difference between measures, metrics, and indicators. Figure 1 shows a measure. Without a trend to follow or an expected value to compare against, a measure gives little or no information. It especially does not provide enough information to make meaningful decisions.

Figure 2 shows a metric. A metric is a comparison of two or more measures: in this case, body temperature over time. A software example is defects per thousand source lines of code.
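The defects-per-thousand-lines example can be expressed as a short calculation. The function combines two measures, a defect count and a source-line count, into one metric; the sample numbers are invented for illustration.

```python
def defects_per_ksloc(defects: int, sloc: int) -> float:
    """Combine two measures into one metric:
    defects per thousand source lines of code (KSLOC)."""
    return defects / (sloc / 1000)

# Hypothetical measures: 12 defects found in a 48,000-line program.
print(defects_per_ksloc(12, 48000))  # 0.25 defects per KSLOC
```

Note that the metric only becomes an indicator once it is compared with a baseline or expected value, which is the step Figure 3 illustrates.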

Figure 3 illustrates an indicator. An indicator generally compares a metric with a baseline or expected result. This allows the decision makers to make a quick comparison that can provide a perspective as to the "health" of a particular aspect of the project. In this case, being able to compare the change in body temperature to the normal body temperature makes a big difference in determining what kind of treatment, if any, may be needed.

This example is obviously fictitious (I think). But it does illustrate the point that a little bit of information can be dangerous. This does not mean that no information is better; it means that the right amount of information of the right kind is needed to make the best decisions. So do we wait until we have all the information we want before we make decisions? No. But recognize that without enough of the right information, there is a risk involved in making that decision.

The example also illustrates that our frame of reference is not always the right one. We must be willing to look at situations with an objective view. If we cannot see a situation from more than one angle, we may need to request consultation from someone with a different perspective.

This article is one point of view. If you have comments or differing opinions about these examples or definitions, we would like to hear from you.

Bryce Ragland
Software Technology Support Center
Ogden ALC/TISE
7278 Fourth Street
Hill AFB, UT 84056-5205
Voice: 801-777-8057 DSN 458-8057
Fax: 801-777-8069 DSN 458-8069
Internet: raglandb@software.hill.af.mil

About the Author

Bryce Ragland has over 17 years of experience in software quality (three years as a government employee and 14 years as a government contractor). He is an expert in system and software testing, software test tools development, and application software development. For the past three years he has worked in the STSC as a software process improvement consultant to the center's Air Force customers and as a technical consultant in software quality engineering, metrics, and testing. He is currently the government lead for the software quality engineering domain.

References

  1. IEEE Standard Glossary of Software Engineering Terminology, IEEE Std 729-1983.
  2. IEEE Standard Glossary of Software Engineering Terminology, IEEE Std 610.12-1990.
  3. Engineering an Effective Measurement Program Course Notes, 1994.