Linux was obsolete 17 years ago and is still obsolete -- unfortunately, most programmers only know how to write in obsolete ways, so it takes a long time before obsolete code stops being written (cf. forking servers vs. event-driven servers).
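The forking-vs-event-driven contrast is easy to sketch. Here is a minimal illustration (using Python's `selectors` module, with a `socketpair` standing in for an accepted client connection), a toy demo rather than anyone's production server:

```python
import selectors
import socket

# One selector loop multiplexes every connection in a single process,
# instead of fork()ing a new server process per client.
sel = selectors.DefaultSelector()

def echo(conn):
    """Callback invoked when a registered socket becomes readable."""
    data = conn.recv(1024)
    if data:
        conn.sendall(data)

# A socketpair stands in for a real accepted connection.
server_side, client_side = socket.socketpair()
server_side.setblocking(False)
sel.register(server_side, selectors.EVENT_READ, echo)

client_side.sendall(b"ping")
for key, _ in sel.select(timeout=1):
    key.data(key.fileobj)        # dispatch the registered callback

reply = client_side.recv(1024)
print(reply)                     # b'ping'
```

A real event-driven server would loop over `sel.select()` forever and register each new connection as it arrives; the point is that no per-client process is ever created.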
It's starting to look like virtualization will deliver the world of microkernels which Tanenbaum prophecied: Xen is, for all practical purposes, a microkernel with which semiprivileged processes (OS kernels) interoperate.
As one senior developer put it to me once: "legacy is the stuff that already works."
One question though, why silicon? I understand why you'd consider all the other ones obsolete, but what modern alternative is there to silicon based integrated circuits?
This has the ring of truth for me -- but do you see virtualization taking the kind of extreme route it'd need to provide more microkernel (at least, like EROS) benefits? A different OS instance for every program? That seems like a pretty kludgy way to do it.
Kludgy or not, there have been some moves in that direction. People often distribute "VMWare appliances", i.e., a single application packaged up as a VMWare instance; and FreeBSD system administrators often run different services in separate jails.
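For the FreeBSD case, isolating a service is just a stanza in `jail.conf`; a minimal sketch (the jail name, path, and address below are made-up examples, not anyone's real configuration):

```
# /etc/jail.conf -- hypothetical "www" service jail
www {
    path = "/jails/www";               # jail's root filesystem (example path)
    host.hostname = "www.example.org";
    ip4.addr = "192.0.2.10";           # RFC 5737 documentation address
    exec.start = "/bin/sh /etc/rc";
    exec.stop = "/bin/sh /etc/rc.shutdown";
}
```

Each such jail gets its own filesystem root, hostname, and address, which is a much lighter-weight version of the one-OS-per-service idea than a full VMWare instance.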
There are other constraints on the size of a program, though. All other things equal, larger programs tend to be harder to maintain, and proportionally buggier. This is completely independent of the cost of memory, the size of processor caches, etc. -- There's nothing like Moore's law affecting the human brain's ability to understand sprawling, tangled code bases.
Also: Bugs in device drivers tend to affect stability much more than in userland programs.
An inelegant, imperfect, but simple design meant that Linux was more accessible to contributors and progressed quickly, and hence succeeded in building a community where more complex designs have largely failed.
Perhaps products such as Xen provide a "worse is better" version of microkernels.
"Most programmers only know how to program in obsolete ways"
-- This actually illustrates for me the problem of "progress" in programming approaches. Any "new and better way" that can't be grasped quickly and easily by a good fraction of programmers isn't really "better". The path to new systems flows along the line of least resistance. A better programming system is one which flows close enough to that line that most programmers can adopt it.
There can be a huge barrier to entry for new stuff. A new editor is likely to be ignored unless it 1) has compelling new ideas, and 2) can already do most of what e.g. Emacs can. Likewise, any new operating system has the burden of porting over numerous major programs, protocols, etc. that people expect.
Suppose someone created a completely new system, which did NOTHING that the old systems did BUT could be learned quickly and easily, had tremendous power, and fulfilled a new or existing need. Then that system would be adopted in very short order. Ruby on Rails is far from ideal, but it shows the principle. Maybe the users of older approaches would pooh-pooh it, but that wouldn't matter.
Systems that attempt to be an incremental improvement on existing systems with large user bases are the least likely to be accepted.
The problem one can see with ACADEMIC research is that it has lost interest in easily grasped and used methods, systems, and approaches.
Technology choices for the majority of programmers are often not based on technical merit alone, but rather on historical influence (legacy systems, existing libraries, interoperability, et cetera), managerial perceptions (nobody gets fired for picking Java, it is too hard to hire good Rubyists, the Smalltalk vendors are unstable, et cetera), and cost.
"If the GNU kernel had been ready last spring, I'd not have bothered to even start my project: the fact is that it wasn't and still isn't." - Linus Torvalds, January 1992
17 years later and it still isn't! Makes you wonder what the whole GNU/Linux ecosystem would look like had the GNU guys actually finished the Hurd.
Probably about the same. I think the biggest thing was splitting away from using a BSD kernel since the BSD projects are typically more, hmm, pedantic? Difficult to break into? Whatever. But the GNU led projects that have dealt with a lot of external contributors like GCC have fared quite well.
No, GNU-led projects have had plenty of drama, GCC especially -- there was a huge forking fracas in the late nineties, and the FSF's politically-motivated crippling is largely the motivation behind LLVM's clang.
My sense is that the split from BSD was necessary because of licensing wars over proprietary code going on at the time. It wasn't until after Linux got going pretty well that BSD was established as being free of proprietary code, with some formerly proprietary parts rewritten (hence, FreeBSD).
What does "finished" mean, with an operating system?
You can install and run HURD today, including X. It doesn't support as much out-of-the-box as Linux does (I've heard about half of Debian), but it's certainly a lot further along than Linux was 10 years ago.
Is Linux "finished"? What set of features makes it "finished"?
Linux is "finished" in the sense that it works at least on some computers, in a stable enough way to let you do serious work.
There is no computer in the world you could throw at the Hurd and expect your work not to get eaten by a grue. The Hurd is an unstable POS, and as if that weren't enough, they change their minds every few years and switch their choice of microkernel to the fashionable kernel-du-jour: from GNU Mach to L4, to debate about Coyotos, to someone now working on Viengoos.
In comparison, NeXT took the Mach microkernel, turned it into a somewhat monolithic architecture, and made a commercially available OS, which was bought by Apple and is now on 10% of the consumer computers in the world. OS X is not the perfect academic OS, but people are using it to do real work. Musicians, photographers, and so on are productive with it.
I expect that in 2020 people will still talk about the HURD as the Duke Nukem Forever of operating systems.
On a side note, the FTP site he mentions still functions, and still contains the minix directory. Pretty amazing that the directory structure has remained for 17 years. Anyway, reading on...
edit: Oh, it's A. Tanenbaum, my OS class used his book...
My OS class used his book, my networking class used his book, my compiler class used his book, my distributed computing class used his book ..
and by "class" I mean self-teaching :-P
AST is a legend and altered my life for the better. I can safely say that I paid more attention to AST's writings in my 18th year on earth than to anything my friends or family said. I had OSD&I and the Amoeba book on my desk, standalone Linux with a bad winmodem, and floppies upon floppies of assembly language tutorials I downloaded at an internet cafe. Life was good then. Code was good then.
Summary: 1992 usenet message from the creator of Minix, explaining that Linux is no good because it's monolithic and closely tied to x86.
In fairness, Linux evolved so that it's no longer x86-specific, and insmod makes it somewhat non-monolithic. Had it not evolved, AST might have been right and Linux might have died, particularly due to lack of portability.
No. This is a common misconception due to the overloading of the adjective "monolithic". In the context of microkernels vs. monolithic kernels, kernel modules are irrelevant, since they operate within the same (monolithic) address space as the main kernel.
And linking to groups.google.com is much different from linking to google.com, I think... I was expecting the Google OS announcement (not really, but I still expected something more than a newsgroup discussion).
"You can say the burden is on us old-timers to tell you what's missing or we shouldn't be whining. But I don't see it that way. I see the burden is on the victors, who have the resources and who claim their way is better, to show us that they won for good reason."
"What is going to happen is that they will gradually take over from the 80x86 line. They will run old MS-DOS programs by interpreting the 80386 in software. (I even wrote my own IBM PC simulator in C, which you can get by FTP from ftp.cs.vu.nl = 192.31.231.42 in dir minix/simulator.) I think it is a gross error to design an OS for any specific architecture, since that is not going to be around all that long."
Shocking how wrong he ended up being about the x86 line. It makes me wonder how the predictions we make today will turn out.
Honestly, this isn't all that far off -- modern Intel and AMD chips have essentially nothing in common with the old 8086 line, and in fact use microcode heavily to "emulate" the classic instruction set.
Of course, it took them a few years longer than he thought to get there, but the basic idea is sound.
And still we look around and all but one OS we use today are Unix-like operating systems... And the one that's not is a security nightmare, sending almost all the spam and running each and every botnet in existence.
Making software free, but only for folks with enough money
to buy first class hardware is an interesting concept.
Of course 5 years from now that will be different, but 5 years from now everyone will be running free GNU on their 200 MIPS, 64M SPARCstation-5.
I guess I should take more note when I predict things I think would be technically good ideas...
A "good idea" would be a Lisp OS running on a "commodity" multicore, super-power-efficient RISC processor, with hardware-MMU-accelerated garbage collection, programmed in SSA-assembly.
btw, Windows and Java are what actually became obsolete -- the first because people have simply had enough of it, the second because there is no such goal as 'runs everywhere' anymore.
Linux is just mainstream now, which means it's starting to fall.