
Hi. I wrote "Linkers and Loaders."

I'm glad to see you all find it useful, but if so, please take a copy out of the library or (perish forbid) buy a copy. You can find new and used copies in the usual online stores.

The chapters on my web site are unedited review drafts with a lot of errors that were fixed in the printed book.


What? Buy a second copy? (-:

I have it on the bookshelf next to me as I type this.

I actually have quite a few books, and one of the subjects that only two of them ever treated well is DLLs in Win32, OS/2, and the like. Almost every author treats them as an afterthought or a variation on Unix shared libraries, when they are not. There's all sorts of stuff peculiar to DLLs: from compression in LX format DLLs, through techniques for ensuring that as many fixups as possible are concentrated in one or a few pages, through module deduplication and search paths, to the application-mode bootstrapping in the likes of DOSCALL1.DLL and NTDLL.DLL.

They were Matt Pietrek's books on Windows 95 and Windows NT. No-one properly documented OS/2 DLLs in a book, especially the way that LIBPATHSTRICT changed them.

* https://groups.google.com/d/msg/comp.os.os2.programmer.misc/...

* https://groups.google.com/d/msg/comp.os.os2.programmer.misc/...


Maybe I started programming professionally too late, but the memory saving you get from using a DLL never seemed worth the added complexity and potential incompatibility problems that invariably pop up.

There are other decent uses for DLLs (plugins, Unix support like Cygwin), but saving memory seems terribly insignificant.


I tend to find dynamic libraries extremely useful for allowing me to customize the behavior of applications without having to modify their binary. You can consider static linking to be a “hardcoded” implementation of a feature while a dynamically loaded one to be more flexible. This differs from plugin functionality because plugins are intended to provide new features only at certain extension points, but with dynamic linking I can change the behavior of essentially anything that’s pulled in at runtime.


I don’t really get this. If you’re already replacing shared libraries, how much more difficult is it to replace the full application? It seems that replacing shared libraries in an ad-hoc manner is exactly the behavior that leads to the infamous incompatibility problems down the road.


Of course, fixing the application is always the best solution, but that’s not always possible. If you didn’t write the full application, then this isn’t something you can do. One of my standard use cases for this behavior is loading plugins into system apps and overriding certain behavior, which is generally not intended by the application author (although, in certain cases, this has been tacitly encouraged).


I think there is a shift to not use shared libraries at all. That is sort of the selling point of Go: "Everything in one binary. Plop it on a server and run it!".

Also more and more C++ is written like this. Libraries are distributed as header-only and just compiled in.


Definitely. Although I think the C++ header-only trend is more about not having to deal with C++'s various terrible build systems than dealing with dynamic linking.


This is often a huge difference between programming for Unix-likes and Windows.

With Windows, you tend to bundle your application with all of its dependencies in your installation folder...

With traditional Unix development you often rely on your dependencies being installed on the target system by the package manager, and building against shared libraries is the norm. You don't need the complexity of Windows DLLs either; the toolchain handles most of the complexity.


But that's only because Linux etc make it really stupidly difficult to distribute apps in a Windows-like way. It's not because nobody wants to do it.

Look at the failure of autopackage and the recent Snap and Flatpak efforts for proof of this.


> But that's only because Linux etc make it really stupidly difficult to distribute apps in a Windows-like way.

Actually, it does not. If that's what you want, then you are free to implement your Windows-like installer. The process goes something like this:

* Install all binaries (executable and dynamic libraries) in a target directory (say, /opt/<your_app> or /usr/local/<your_app>)

* Create a shell script that sets the LD_LIBRARY_PATH to the directory where you've installed your program files and afterwards runs your application

* Run the application by launching the script
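A rough sketch of those three steps, using a throwaway temp directory in place of /opt and a stand-in "myapp" that just reports its environment (all names are placeholders):

```shell
APPDIR=$(mktemp -d)            # stand-in for /opt/<your_app>
mkdir -p "$APPDIR/bin" "$APPDIR/lib"

# Step 1: "install" the binaries; this fake app just echoes its library path
cat > "$APPDIR/bin/myapp" <<'EOF'
#!/bin/sh
echo "myapp sees LD_LIBRARY_PATH=$LD_LIBRARY_PATH"
EOF
chmod +x "$APPDIR/bin/myapp"

# Step 2: a wrapper script that prepends the bundled lib dir and execs the app
cat > "$APPDIR/myapp.sh" <<EOF
#!/bin/sh
export LD_LIBRARY_PATH="$APPDIR/lib\${LD_LIBRARY_PATH:+:\$LD_LIBRARY_PATH}"
exec "$APPDIR/bin/myapp" "\$@"
EOF
chmod +x "$APPDIR/myapp.sh"

# Step 3: launch through the script
"$APPDIR/myapp.sh"
```

With a real application, $APPDIR/lib would hold the bundled .so files and the dynamic linker would search it before the system directories.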

I would also like to add that the Windows-like process you've mentioned is also a crude hack developed to get around Windows' DLL problem.


Yeah, some people want to. But that's not the way these systems are designed.

Having a fully statically built binary is not difficult, but it isn't the default. People still often use dynamic loading for its benefits.

People wanting to distribute apps in the Windows way are going to run into problems, but the opposite is even more true...


One use is for things like OpenGL, where there's no specific library called "opengl", since every hardware vendor provides their own implementation.

