Software complexity

Tom Breur
4 June 2017

In software development, we distinguish between inherent (or intrinsic) and accidental complexity. Although he probably didn’t “invent” this distinction, many people point to the original “No Silver Bullet” paper by Fred Brooks (who is probably best known for his classic The Mythical Man-Month) when talking about this subject. Inherent complexity, or essential complexity as Brooks refers to it, stems from the nature of software itself, and is therefore unlikely to ever go away. In this context Brooks refers to the “… interlocking concepts: data sets, relationships among data items, algorithms and invocation of functions.” You might call inherent complexity in software a necessary cost of doing business.

Accidental complexity is the component we actively try to manage. High-level programming languages, unified programming environments, and object-oriented programming have all contributed to driving down accidental complexity. Needless to say, these are only necessary, not sufficient, conditions. As Martin Fowler has stated: “Any fool can write code that a computer can understand. Good programmers write code that humans can understand.” Much of managing accidental complexity is about software engineering practices that address this human component, to mitigate our tendency to err. Writing software is hard enough in itself; writing good software is even harder, precisely because we want to make it less error-prone!
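Fowler’s point can be made concrete with a small, hypothetical example. Both functions below compute exactly the same result; the difference is entirely in accidental complexity, namely how much work the human reader has to do:

```python
def f(x):
    # Correct, but the reader must reverse-engineer the intent.
    return [i for i in x if i % 2 == 0 and i > 0]


def positive_even_numbers(numbers):
    """Return the positive even numbers from `numbers`, preserving order."""
    return [n for n in numbers if n > 0 and n % 2 == 0]


# Identical behavior, very different readability:
print(f([3, 4, -2, 8]))                      # [4, 8]
print(positive_even_numbers([3, 4, -2, 8]))  # [4, 8]
```

The computer is equally happy with either version; only the second communicates intent to the next human who has to maintain it.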

Hassan & Holt (2003) wrote a fascinating paper, “The Chaos of Software Development,” in which they attempted to quantify the impact of accidental complexity. They measured it over the lifetime of development of complex products such as operating systems, a productivity suite, and a database. To this end, they derived metrics to quantify and monitor complexity based on Claude Shannon’s Information Theory. Hassan & Holt conclude that higher complexity is indeed related to schedule overruns, and that reducing scope is one of the effective mechanisms to combat these dynamics – conclusions that experience in the field would tend to underscore.
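To give a flavor of the idea (this is a minimal sketch in the spirit of their approach, not their actual metric): Shannon entropy can be computed over how code changes spread across files in a release period. Changes concentrated in a few files yield low entropy; changes scattered evenly across many files yield high entropy, signaling a more “chaotic” development period. The change counts below are made up for illustration:

```python
import math


def shannon_entropy(change_counts):
    """Shannon entropy (in bits) of how modifications spread across files.

    `change_counts` holds the number of changes per file in some period.
    Concentrated churn scores low; evenly scattered churn scores high.
    """
    total = sum(change_counts)
    probabilities = [c / total for c in change_counts if c > 0]
    return -sum(p * math.log2(p) for p in probabilities)


# Hypothetical change counts per file over one release period:
focused = [10, 1, 1]    # most churn confined to a single file
scattered = [4, 4, 4]   # churn spread evenly over all files

print(shannon_entropy(focused) < shannon_entropy(scattered))  # True
```

A team tracking such a number over successive releases could watch for entropy creeping upward, which in Hassan & Holt’s data accompanied overruns.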

If we buy into these notions at the heart of complexity and schedule overruns, we are led to an unavoidable conclusion: if we are going to make a conscious effort to learn from the past, we need to constantly and relentlessly drive down batch sizes. If meeting a deadline or adhering to a project schedule is challenging, there is really only one sensible thing to do: reduce scope. Making heroic attempts to do things better (“Next time will be different!” is my favorite corporate lie) is like the proverbial definition of insanity: repeating the same mistakes over and over again, and expecting different results…
