For the upcoming MICE (Mistakes, Ignorance, Contingency, and Error) Conference in Munich I have prepared a paper entitled “When Good Software Goes Bad: The Surprising Durability of an Ephemeral Technology.”

In theory, software is a technology that cannot be broken. Virtual gears do not require lubrication, and digital constructs never fall apart. Once a software-based system is working properly, it should continue to work in perpetuity — or at least as long as the underlying hardware platform it runs on remains intact. Any latent “bugs” that are subsequently revealed in the software system are considered flaws in the original design or implementation, not the result of the wear-and-tear of daily use, and ideally could be eliminated entirely by rigorous development and testing methods.

In practice, however, most software systems are in constant need of repair. Beginning in the early 1960s, large-scale computer users discovered, much to their surprise, that between 50% and 70% of all their operating expenditures were devoted to “software maintenance.” This meant that most computer programmers were (and are) spending most of their time “fixing” other people’s computer code.

You can read a draft version of the paper here.