I got curious about the fact that the cited table from the paper only contains five laws, while the blog author describes seven such laws. So I read the paper. I have to say that the blog describes these laws superficially at best. I recommend reading the paper (instead of the blog post)! It's not too surprising (the results have long since been digested into common knowledge, I guess), but it does draw attention to the fundamental relationship between software complexity, program lifecycle, and the organization in which the software lives.
The first and second laws are more or less accurately described in the blog post.
The blog post paraphrases the third law as "... software development is an ongoing process that requires continual improvement and adaptation." This does not match the description in the paper, where the third law states that, over time, system behaviour emerges from a (large) number of individual decisions made within a complex environment.
The fourth law is described by the blog post as being about feedback loops (which would actually be applicable to the third law). In the paper, it is about stability (no radical changes during a program's evolution).
The fifth law is described as being about incremental and radical changes, while the paper says quite the opposite: organizations want to maintain familiarity, so they tend to reject big changes.
The sixth law in the blog post seems to refer to the actual fourth and/or fifth in the paper.
> software evolution is constrained by organizational stability and the ability of developers to understand the system
Simple is better than magic, and code documentation is required.
Side note: "loop over items and save each" isn't a useful comment right before a for loop. That isn't what I'm talking about. However, "We have to call this API one-at-a-time synchronously because otherwise their internal WAF will block us if we issue parallel requests" is very helpful context when the obvious improvement to the code is multiple threads/async I/O.
Exactly: Comment the "why" more than the "what/how" (although I sometimes comment the what/how if it makes things easier to understand, even at the risk of the comment getting out of sync with the code)
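A minimal sketch of such a "why" comment in Python. The `save_item` function and the WAF constraint are hypothetical stand-ins for whatever real API is being called; the point is the comment's content, not the code:

```python
ITEMS = ["a", "b", "c"]

def save_item(item):
    # Hypothetical stand-in for a real remote API call.
    return f"saved:{item}"

def save_all(items):
    results = []
    for item in items:
        # WHY, not what: we have to call this API one-at-a-time
        # synchronously because the vendor's WAF blocks us if we issue
        # parallel requests. Do not "optimize" this into threads/async.
        results.append(save_item(item))
    return results

print(save_all(ITEMS))
```

A reviewer seeing the bare loop would reasonably suggest `concurrent.futures` or `asyncio`; the comment pre-empts exactly that "obvious improvement".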
It looks like the author omitted that most of these laws hold for E-type systems, meaning systems which are linked to some real-world activity, see for example https://en.wikipedia.org/wiki/Lehman%27s_laws_of_software_ev.... As a counterexample: a math library written in C can last for years without change.
Why do you call such systems E-type systems? I ask because I've been interested in an ontology of software systems for quite some time: most advice about software development, architecture, and maintenance is stated in general terms but actually only applies to some subset of software systems.
I wonder if these laws could be encapsulated by a simple principle, which is that software has both per-project and per-unit costs, just like hardware. Notably, it contradicts the simplistic view that you can write a piece of software once and sell copies indefinitely at virtually no cost.
I was peripherally involved in the maintenance of one of the systems studied for FEAST/2. The system persisted in production for many years after the end of the study; in fact I think it's still running. My job at one point was to study the system and suggest modern alternatives. Amusingly, this included using multi-agent technology, which I think found favour because the folks from Imperial really, really, really liked it.
It was believed to have reached a stable state of complexity during the FEAST study, and for some years after. The myth was that it basically couldn't be significantly changed because it was so complex. People (including me) were shown system diagrams that covered several walls of a meeting room. No one outside of the system's ownership group was allowed to touch or inspect the code; all changes were managed by a hierarchy of committees and program groups. To add an item to a menu would require many meetings, many sign-offs, many hours of design, about 2 minutes of COBOL coding, many weeks of VV&T, UAT etc etc etc. Often this process would be randomly derailed due to someone's "concerns". It was positively Soviet.
However, then a strange thing was discovered. Unknown to the custodians of the system, an extensive program of robotization had been undertaken by many independent business units that were dependent on the system. These business units had been frustrated by the pronouncement that no innovation was possible and had been compelled by competition and pressure from customers to find a way to innovate. They all knew that informing the system's owners of their work would be the end of their activities. Independently, they found a variety of mechanisms to build out functionality on top of the mainframe's user interface.
A compelling case was made by IBM that the system's front end should be retired by moving to PCs rather than terminals, for cost-saving reasons. Because of this it then became possible to rapidly and easily improve the interface. This program of work suddenly broke the unknown robots, which led to a breakdown of the business for several weeks. I should emphasise that at the time a lost business day equated to about $35m of revenue.
You will not be surprised to learn that when the executive board managed to understand what had happened the new interface was rolled back.
A program of work was promptly put in place to find all of the robots. When I say "program of work" I could alternatively use the words "holy jihad": no stone was left unturned, no apostate was tolerated, nothing was allowed to stand in the way. I think they were at it for about three years.
It was discovered that the robotized estate was now significantly larger and more complex than the core system. Another enterprise program was required to replicate the robot functionality and close them in order that the core system could be maintained in the future while the business actually operated - because without the robots, it couldn't.
None of this was known to the FEAST/2 study.
I wonder what the implications would have been for the thinking behind the laws had it been. I also wonder how much about the other systems was undiscovered or misunderstood during the academic investigation. I am sure these investigations were better than nothing - in terms of shining a light on how these systems behave. On the other hand I suspect that they revealed no more to Lehman about the true nature of software than staring at sunspots showed Galileo the mechanisms of nuclear fusion.
I had a similar experience modernising a 20-year-old custom ERP with tens of thousands of users. We found external integrations, extensions, and customisations had grown like mushrooms in the dark.
As for the seventh law described in the blog post, it just seems to be made up.