Tuesday, 17 March 2015

"The Leprechauns of Software Engineering" by Laurent Bossavit


"The software engineering profession as a whole needs to feel ashamed that it will so readily repeat, as if it were a universal truth, something that is only a half-baked figure from a single project in a single business..."
- Laurent Bossavit


Laurent Bossavit's book questions how truisms in software development come into being without compelling evidence. And it specifically debunks a few of them - like the "10x" variation in productivity between developers; and the "fact" that defects are exponentially more expensive to fix the later in the development process they are identified.

Unfortunately, we accept and repeat intuitively plausible claims like these without stopping to think that we don't even have universal agreement on how to measure "productivity", or on the definition of a "defect".



Clearly decisions should be made based on evidence, not on hearsay or what "everyone else is doing". 
Organisations shouldn't adopt Agile practices, for example, just because they're fashionable. That wouldn't be rational.  And yet....

If I ask myself how this adoption of folklore happens, I can think of a couple of possible contributing factors. (Given the topic, I should stress that I have done no research, and can offer no evidence.)

How do these claims propagate?

As I read the book I was reminded how information-transfer often takes place in "punchy" presentations, documents and web pages. 

Points need to be made quickly, often visually. A few individual data points get represented as a more dramatic curve for "clarity".  If we have a point that we want to get across then the cliché (along with including a Wikipedia definition) is to Google for some - any - source that appears to back us up with statistics. 

We don't check any deeper for proof of what we already know. No-one really has time for detail anyway.

As Bossavit points out: "If you can cite a source for your claim then it is probably true" is a dangerous heuristic.

If someone tells us that "evidence shows" 71% of failed software projects suffer from poor requirements then, fair enough, we don't have the time or inclination to check that.
But that doesn't mean we have to believe it.
And why do we think that what applies to other people's projects applies cleanly to ours?

Why are these claims accepted?  

Well, personally I suspect I fall for something like the "argument from authority" fallacy.

I tend to assume that everyone (and I do mean everyone) in the software world knows more than I do, especially if they're confident enough to state those views publicly. 
But knowing about some things in software doesn't equate to being an expert on every topic. Nor does it mean that they can be bothered to fully research the claims they repeat any more than I can.

My default position is too often a trusting "ah - I didn't know that" rather than "what's the evidence?". 

(Luckily, the fact that I now have a blog myself should rid me of the belief that writing a blog automatically makes you a credible source for anything.)

Critical thinking

Bossavit wants to see a bit more of what Testers like me should already be used to applying - critical thinking.
He also believes that research into software engineering should be informed by the methods of the cognitive and social sciences, and not simply look to arrive at quotable metrics by taking shortcuts.

In leading us through exactly how he has uncovered the lack of evidence behind these "facts", Bossavit gives a run-down of how we can use Google for leprechaun hunting ourselves - and encourages us to practise on claims we find questionable.  

I might try it on some of those statements made about the miraculous properties of test automation.


"The Leprechauns of Software Engineering" by Laurent Bossavit is available now as an e-book from Leanpub.
