Linux was obsolete 17 years ago and is still obsolete -- unfortunately, most programmers only know how to write software in obsolete ways, so it takes a long time before obsolete code stops being written (cf. forking servers vs. event-driven servers).
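To make the forking-vs-event-driven contrast concrete, here is a minimal sketch of the event-driven style using Python's selectors module: one process multiplexes I/O and dispatches callbacks only when a socket is ready, instead of fork()ing a child per connection. The socketpair stands in for an accepted network connection; this is an illustration, not a production server.

```python
import selectors
import socket

# One selector multiplexes many connections in a single process,
# instead of fork()ing a new process for each client.
sel = selectors.DefaultSelector()

def echo(conn):
    # Invoked only when the socket is readable, so recv() never blocks.
    data = conn.recv(1024)
    if data:
        conn.sendall(data.upper())

# A socketpair stands in for an accepted client connection.
client, server_side = socket.socketpair()
sel.register(server_side, selectors.EVENT_READ, echo)

client.sendall(b"ping")
for key, _ in sel.select(timeout=1):
    key.data(key.fileobj)  # dispatch to the callback registered above

reply = client.recv(1024)
```

A forking server would instead call `os.fork()` after `accept()` and let each child block on its own socket; the event-driven loop above handles the same work with one process and no per-connection blocking.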
It's starting to look like virtualization will deliver the world of microkernels which Tanenbaum prophesied: Xen is, for all practical purposes, a microkernel with which semiprivileged processes (OS kernels) interoperate.
As one senior developer put it to me once: "legacy is the stuff that already works."
One question, though: why silicon? I understand why you'd consider all the other ones obsolete, but what modern alternative is there to silicon-based integrated circuits?
This has the ring of truth, for me -- but do you see virtualization taking the kind of extreme route it'd need to provide more microkernel-style benefits (at least, like EROS)? A different OS instance for every program? That seems like a pretty kludgy way to do it.
A different OS instance for every program? That seems like a pretty kludgy way to do it.
Kludgy or not, there have been some moves in that direction. People often distribute "VMWare appliances", i.e., a single application packaged up as a VMWare instance; and FreeBSD system administrators often run different services in separate jails.
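The FreeBSD-jails-per-service approach mentioned above can be sketched as a jail.conf entry; the jail name, path, and address below are hypothetical, and this is a sketch of the idea rather than a tested configuration:

```
# /etc/jail.conf -- one lightweight OS instance per service (hypothetical values)
www {
    path = /jails/www;                     # the jail's own root filesystem
    host.hostname = www.example.org;
    ip4.addr = 192.0.2.10;                 # address this service is confined to
    exec.start = "/bin/sh /etc/rc";        # boot the jail's userland
    exec.stop = "/bin/sh /etc/rc.shutdown";
    mount.devfs;
}
```

Each service gets its own filesystem, process table view, and network identity, which is the same isolation argument made for microkernel servers and VM appliances, at far lower cost than a full hardware VM per program.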
There are other constraints on the size of a program, though. All other things equal, larger programs tend to be harder to maintain, and proportionally buggier. This is completely independent of the cost of memory, the size of processor caches, etc. -- There's nothing like Moore's law affecting the human brain's ability to understand sprawling, tangled code bases.
Also: bugs in device drivers tend to affect stability much more than bugs in userland programs do.
An inelegant, imperfect, but simple design meant that Linux was more accessible to contributors and progressed quickly, and hence succeeded in building a community where more complex designs have largely failed.
Perhaps products such as Xen provide a "worse is better" version of microkernels.
"Most programmers only know how to program in obsolete ways"
-- This actually illustrates for me the problem of "progress" in programming approaches. Any "new and better way" that can't be grasped quickly and easily by a good fraction of programmers isn't really "better". The path to new systems flows along the lines of least resistance. A better programming system is one which flows close enough to those lines that most programmers can adopt it.
There can be a huge barrier to entry for new stuff. A new editor is likely to be ignored unless it 1) has compelling new ideas, and 2) can already do most of what e.g. Emacs can. Likewise, any new operating system has the burden of porting over numerous major programs, protocols, etc. that people expect.
Suppose someone created a completely new system which did NOTHING that the old systems did BUT could be learned quickly and easily, had tremendous power, and fulfilled a new or existing need. Then that system would be adopted in very short order. Ruby on Rails is far from ideal, but it shows the principle. Maybe the users of older approaches would pooh-pooh it, but that wouldn't matter.
Systems that attempt to be an incremental improvement on existing systems with large user bases are the least likely to be accepted.
The problem one can see with ACADEMIC research is that it has lost interest in easily grasped and easily used methods, systems, and approaches.
Technology choices for the majority of programmers are often not based on technical merit alone, but rather on historical influence (legacy systems, existing libraries, interoperability, et cetera), managerial perceptions (nobody gets fired for picking Java, it is too hard to hire good Rubyists, the Smalltalk vendors are unstable, et cetera), and cost.