
The NIST study's graphic matched another from the book Code Complete illustrating the cost of defects. I did not have access to the Code Complete graph, so I used the NIST one instead.

Do you believe the larger point (about the cost of defects) is incorrect? Your G+ critique seems to take issue with the details of the study, but I didn't see an assertion that the point itself is wrong.

As to the Wikipedia timeline, I cherry-picked from it. The point was to pick the important things a student should know, not to list everything possible. It seems your WP edits didn't make it through?



I'd put the cost of defects claim in the category "not even wrong".

If someone made quantified claims about "the number of minutes of life lost to smoking one cigarette," I would refuse to take them seriously: I would argue that the health risk from smoking is more complex than that and can't be reduced to such a linear calculation.

This talk about "the cost of a defect" has the same characteristics. I don't mean the above argument by analogy to be convincing in and of itself, and I've written more extensively about my thinking e.g. here: https://plus.google.com/u/1/+LaurentBossavit/posts/8tB2RQoHQ...

But it's a large topic that quite possibly deserves a book of its own.

As for the history of software engineering, it's much the same: doing it properly would entail writing a book, and I didn't want to do it on WP unless I could do it properly.


The table and Figure 3.1 in Code Complete support the conclusion and are well cited:

Source: Adapted from “Design and Code Inspections to Reduce Errors in Program Development” (Fagan 1976), Software Defect Removal (Dunn 1984), “Software Process Improvement at Hughes Aircraft” (Humphrey, Snyder, and Willis 1991), “Calculating the Return on Investment from More Effective Requirements Management” (Leffingwell 1997), “Hughes Aircraft’s Widespread Deployment of a Continuously Improving Software Process” (Willis et al. 1998), “An Economic Release Decision Model: Insights into Software Project Management” (Grady 1999), “What We Have Learned About Fighting Defects” (Shull et al. 2002), and Balancing Agility and Discipline: A Guide for the Perplexed (Boehm and Turner 2004).

Joel on Software also has a convincing narrative on the subject. Therefore I'm in no big hurry to replace the image, though I will put it on my to-do list.


I'm all too aware of those many citations. A few years ago, I went to the trouble of chasing down most of the papers and books and evaluating how well each of them supported the claim. To put it mildly, I was underwhelmed.

For just one example, here's my treatment of Grady: http://lesswrong.com/lw/9sv/diseased_disciplines_the_strange...

It's not just me. Here's another author of a book aimed at software professionals who attempted some fact-checking, and came up short: http://www.sicpers.info/2012/09/an-apology-to-readers-of-tes...

I, too, used to argue for practices such as test-driven development, based on the supposedly firm knowledge of the "cost of defects curve". I changed my mind about the cost of defects when I saw how poor the data was. This is me in 2010: http://lesswrong.com/lw/2rc/coding_rationally_test_driven_de... and this is me two years later, recanting: http://lesswrong.com/lw/2rc/coding_rationally_test_driven_de...

However, I haven't (entirely) changed my mind about TDD and similar practices. I do still believe it pays to strive to write only excellent code that is easy to reason about. I like to think that I now have stronger, better-thought-out reasons for believing that.


It looks like you've done your homework. Yet in my experience it has held true that the farther in space and time I've been from a bug, the harder it has been to solve.

So now I'm a bit confused on how to proceed.



