I'm glad to see you all find it useful, but if so, please take a copy out of the library or (perish forbid) buy a copy. You can find new and used copies in the usual online stores.
The chapters on my web site are unedited review drafts with a lot of errors that were fixed in the printed book.
I have it on the bookshelf next to me as I type this.
I actually have quite a few books, and one of the subjects that really only two of the ones I ever bought have treated well is DLLs in Win32, OS/2, and the like. Almost every author treats them as an afterthought or a variation on Unix shared libraries, when they are not. There's all sorts of stuff peculiar to DLLs: from compression in LX format DLLs, through techniques for ensuring that as many fixups as possible are concentrated in one or a few pages, through module deduplication and search paths, to the application-mode bootstrapping in the likes of DOSCALL1.DLL and NTDLL.DLL.
They were Matt Pietrek's books on Windows 95 and Windows NT. No-one properly documented OS/2 DLLs in a book, especially the way that LIBPATHSTRICT changed them.
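To make the Win32 side concrete, here's a minimal sketch of explicit DLL loading and export lookup, which is where those search-path rules (and knobs like LIBPATHSTRICT on OS/2) come into play. The DLL name and export are made up purely for illustration:

    #include <windows.h>
    #include <stdio.h>

    /* Hypothetical exported function; any real DLL/export pair works the same way. */
    typedef int (*add_fn)(int, int);

    int main(void)
    {
        /* The loader walks the DLL search path (application directory,
           system directories, PATH, ...) to resolve this name at run time. */
        HMODULE lib = LoadLibraryA("mymath.dll");
        if (lib == NULL) {
            fprintf(stderr, "LoadLibraryA failed: %lu\n", GetLastError());
            return 1;
        }

        /* Symbols are resolved per-module by export name (or ordinal). */
        add_fn add = (add_fn)GetProcAddress(lib, "add");
        if (add == NULL) {
            fprintf(stderr, "GetProcAddress failed: %lu\n", GetLastError());
            FreeLibrary(lib);
            return 1;
        }

        printf("2 + 3 = %d\n", add(2, 3));
        FreeLibrary(lib);
        return 0;
    }

Contrast that with an ELF shared library, where the dynamic linker resolves names in a global symbol namespace rather than binding each import to a specific module.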
Maybe I started programming professionally too late, but the memory saving you get from using a DLL never seemed worth the added complexity and potential incompatibility problems that invariably pop up.
There are other decent uses for DLLs (plugins, Unix support like Cygwin), but saving memory seems terribly insignificant.
I tend to find dynamic libraries extremely useful for allowing me to customize the behavior of applications without having to modify their binaries. You can consider static linking to be a “hardcoded” implementation of a feature, while a dynamically loaded one is more flexible. This differs from plugin functionality because plugins are intended to provide new features only at certain extension points, but with dynamic linking I can change the behavior of essentially anything that’s pulled in at runtime.
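As a concrete (if simplified) sketch of what I mean, assuming Linux/glibc and the LD_PRELOAD mechanism, here's an interposer that overrides a libc call for any dynamically linked program without touching its binary:

    /* shim.c - a minimal sketch, assuming Linux/glibc.
       Build: cc -shared -fPIC -o shim.so shim.c -ldl
       Use:   LD_PRELOAD=./shim.so some_program
       Interposes on fopen() purely as an illustration; the same trick
       works for any symbol the application resolves at run time. */
    #define _GNU_SOURCE
    #include <dlfcn.h>
    #include <stdio.h>

    typedef FILE *(*fopen_fn)(const char *, const char *);

    FILE *fopen(const char *path, const char *mode)
    {
        /* Find the "real" fopen that this definition is shadowing. */
        static fopen_fn real_fopen;
        if (!real_fopen)
            real_fopen = (fopen_fn)dlsym(RTLD_NEXT, "fopen");

        /* Custom behavior goes here; this version just logs the call. */
        fprintf(stderr, "fopen(\"%s\", \"%s\")\n", path, mode);
        return real_fopen(path, mode);
    }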
I don’t really get this. If you’re already replacing shared libraries, how much more difficult is it to replace the full application? It seems that replacing shared libraries in an ad-hoc manner is exactly the behavior that leads to the infamous incompatibility problems down the road.
Of course, fixing the application is always the best solution, but that’s not always possible. If you didn’t write the full application, then this isn’t something you can do. One of my standard use cases for this behavior is loading plugins into system apps and overriding certain behavior, which is generally not intended by the application author (although, in certain cases, this has been tacitly encouraged).
I think there is a shift to not use shared libraries at all. That is sort of the selling point of Go: "Everything in one binary. Plop it on a server and run it!".
Also, more and more C++ is written like this: libraries are distributed as header-only and just compiled in.
Definitely. Although I think the C++ header-only trend is more about not having to deal with C++'s various terrible build systems than dealing with dynamic linking.
This is often a huge difference between programming for Unix-likes and Windows.
With Windows, you tend to bundle your application with all of its dependencies in your installation folder...
With traditional Unix development you often rely on your dependencies being installed on the target system by the package manager, and building against shared libraries is the norm. You don't need the complexity of Windows DLLs either; the toolchain handles most of it for you.
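For instance, a minimal sketch of that workflow, assuming zlib has already been installed by the distribution's package manager:

    /* usezlib.c - build with: cc usezlib.c -lz -o usezlib
       Nothing is bundled with the application: at run time the dynamic
       loader maps libz.so from the system library path. */
    #include <stdio.h>
    #include <zlib.h>

    int main(void)
    {
        printf("linked against zlib %s\n", zlibVersion());
        return 0;
    }

The compile line just names -lz; running ldd on the resulting binary shows where libz.so was resolved from.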
This is a good introduction to a topic that very little is generally written about, and is practically useful to few people outside operating systems and compilers - and compilers usually use someone else's linker, although they still need to know about relocations, fixups, external linkage schemes etc to generate workable machine code.
If you squint a bit, there's a continuum between image-based languages like Smalltalk and (optionally) Lisp, where allocations persist across restarts, and linkers, where the memory "allocation" happens at compile time and is only used at runtime, through to smart linkers, which closely resemble the marking and compacting phases of a simple garbage collector. The compiler needs to advertise its roots, pointers, and pointed-to targets to the linker so that relocations can work. Even fixups, the micro-language that the linker needs to evaluate when resolving symbols, have analogies with restoring an image: any non-persistable references need to be restored.
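To ground the analogy, here's a toy sketch (not any real object format) of the kind of fixup table a loader evaluates: each record says to add the final load bias to a pointer-sized slot that was emitted assuming a load address of zero, much like re-binding references when an image is restored.

    #include <stdint.h>
    #include <stdio.h>

    /* Toy fixup record: "add the load bias to the pointer-sized word at
       this offset". Real formats (ELF RELA, PE base relocations, LX
       fixups, ...) carry many more kinds, but this is the essence. */
    struct fixup {
        uint32_t offset;            /* where in the image to patch */
    };

    static void apply_fixups(uint8_t *image, uintptr_t load_bias,
                             const struct fixup *table, size_t count)
    {
        for (size_t i = 0; i < count; i++) {
            uintptr_t *slot = (uintptr_t *)(image + table[i].offset);
            *slot += load_bias;     /* rebase the stored address */
        }
    }

    int main(void)
    {
        /* A fake image: one pointer-sized slot holding a link-time address. */
        uintptr_t storage[2] = { 0, 0x1000 };
        struct fixup table[] = { { sizeof(uintptr_t) } };

        apply_fixups((uint8_t *)storage, 0x400000, table, 1);
        printf("relocated value: %#lx\n", (unsigned long)storage[1]);
        return 0;
    }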
C and C++ programmers could benefit from this. So many times I've had to explain an error message or a problem that would have been obvious if they knew how it worked.
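The classic case is an "undefined reference" at link time from a translation unit that compiled cleanly; a minimal sketch (linker message paraphrased):

    /* main.c */
    int helper(int x);          /* declared, so this file compiles fine */

    int main(void)
    {
        return helper(41);      /* but nothing ever defines helper() */
    }

    /* $ cc -c main.c            compiles cleanly
       $ cc main.o -o prog       the link step fails with something like:
         undefined reference to `helper'                            */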
Cool! I did a little review of this book back when I was the CodeWarrior for Windows compiler/linker developer at Metrowerks. It's a really useful work.
Curious how useful this would be, given that the publication date of the book is 2000 and this link is from 1999. Serious question if anyone can answer it; I'm getting into low-level work lately and would appreciate the education if it's still an accurate reflection of technique/technology.
I bought a copy back then out of curiosity (as a Linux user). It's an excellent book, and I don't think many of the techniques described in it are out of date. Things like the ELF format still form the infrastructure for Linux binaries. There's a bit of historical info too, but IMO reading that is analogous to reading about a language like C — it's of central importance if you want to understand how and why these systems developed.