The Impact of Irrelevant and Misleading Information on Software Development Effort Estimates

Reviewed by Jorge Aranda / 2011-10-18
Keywords: Estimation

Magne Jørgensen and Stein Grimstad: "The Impact of Irrelevant and Misleading Information on Software Development Effort Estimates: A Randomized Controlled Field Experiment". IEEE Transactions on Software Engineering, 37(5), 2011, doi:10.1109/TSE.2010.78.

Studies in laboratory settings report that software development effort estimates can be strongly affected by effort-irrelevant and misleading information. To increase our knowledge about the importance of these effects in field settings, we paid 46 outsourcing companies from various countries to estimate the required effort of the same five software development projects. The companies were allocated randomly to either the original requirement specification or a manipulated version of the original requirement specification. The manipulations were as follows: 1) reduced length of requirement specification with no change of content, 2) information about the low effort spent on the development of the old system to be replaced, 3) information about the client's unrealistic expectations about low cost, and 4) a restriction of a short development period with start up a few months ahead. We found that the effect sizes in the field settings were much smaller than those found for similar manipulations in laboratory settings. Our findings suggest that we should be careful about generalizing to field settings the effect sizes found in laboratory settings. While laboratory settings can be useful to demonstrate the existence of an effect and better understand it, field studies may be needed to study the size and importance of these effects.

The researchers at Simula Research Laboratory in Norway do something almost unique in our domain: they use their research funds to pay lots of software professionals to do their thing in a setting as natural as possible, under conditions as controlled as possible. As a result they avoid running toy experiments with half a dozen students and get instead, for instance, the paper on variability in software projects that Greg blogged about recently, in which several companies were paid to develop the exact same software. Or this paper. In it, the researchers contacted people at 46 companies and asked them to estimate the likely effort required to develop the same set of software projects. However, they sent slightly different descriptions of the projects to these companies, modifying them in "irrelevant and misleading" ways to see whether that would bias the estimates.

Several previous studies have tweaked project descriptions to see whether the resulting estimates varied, and the overwhelming finding is that yes, they do vary: estimators are subject to cognitive biases, like everyone else. What's interesting is that the authors here found that most of the effects were considerably weaker in their setting than those observed in previous studies "in the lab" (including, by the way, my own). In most cases the effects appear to be there, but they're often not strong enough to reach statistical significance. They conclude:
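For readers who want a concrete sense of what "effect size" means in this discussion, here is a minimal sketch of Cohen's d, a standard standardized-mean-difference measure, computed over made-up effort estimates. The numbers and group labels are purely illustrative and are not taken from the paper, and the authors' own statistical analysis may well differ.

```python
import statistics

def cohens_d(group_a, group_b):
    """Standardized mean difference (Cohen's d) using a pooled standard deviation."""
    n_a, n_b = len(group_a), len(group_b)
    mean_a, mean_b = statistics.mean(group_a), statistics.mean(group_b)
    var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
    pooled_sd = (((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)) ** 0.5
    return (mean_a - mean_b) / pooled_sd

# Hypothetical effort estimates in person-hours; NOT data from the paper.
control     = [420, 510, 380, 600, 450, 530, 470, 490]   # original specification
manipulated = [400, 480, 360, 590, 430, 500, 455, 470]   # e.g. "client expects low cost"

print(f"Cohen's d: {cohens_d(control, manipulated):.2f}")
```

By the usual convention, a d around 0.8 is read as a large effect and one around 0.2 as small; that is roughly the scale on which laboratory and field results are being contrasted here.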

While a meaningful role of laboratory experiments is to demonstrate the existence of an effect and understand its nature, we should be careful to base statements about the size, i.e., the importance of an effect on laboratory studies alone. For the purpose of establishing knowledge about the importance of an effect, we need field studies.