I dearly regret that I adopted autotools for some of my projects. As TFA identifies, the primary problem autotools solved is no longer a problem: Linux, BSD, Darwin, and Cygwin are the targets today and they're more-or-less portable and compliant. Meanwhile, autotools is a constant tax on the development cycle.
1. autoconf is awful to learn.
2. autoconf can be useful in determining the platform, dealing with a few quirks, and ensuring that system dependencies/libraries are satisfied.
3. autoconf's usefulness here is completely undermined by automake, as changing Makefile.am requires rerunning autoreconf, which then requires rerunning configure. So, developers must constantly rerun the configure script, even though nothing about their system environment has materially changed. For example, this happens when a new source file is created and added to Makefile.am.
4. Perhaps the most egregious weakness missing from TFA is any acknowledgement that configure scripts are horrifically slow. It's 2022 and developers have their time wasted by configure checking whether printf() exists.
5. automake documentation discourages developers from globbing source files in source directories, and insists they list them individually. It's silly and just results in needless running of configure.
6. The complex interdependencies between generated files is nigh-impossible to understand and reason about. It's cruft.
7. libtool is completely braindead, the absolute worst of it all. It serves absolutely no purpose on contemporary systems, but the docs continue to admonish developers that they must use it to stand a chance of creating a shared library (no, just learn to specify -fPIC; a rough sketch of the non-libtool equivalent follows this list). Instead of analyzing a project once, as part of autoconf, libtool determines what to do on the fly for every source file--and all in bash. It doubles a project's compilation time.
8. Creating alternatives to libtool with new autoconf macros, deprecating libtool, and cleansing all traces of it from the planet should absolutely be the highest priority of the autotools project.
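To make point 7 concrete: on a contemporary ELF toolchain (GCC or clang), the whole dance libtool wraps is roughly the following. This is only a sketch with made-up file names, not a drop-in replacement for every platform libtool claims to support:

    # compile position-independent objects, then link them into a versioned shared library
    gcc -fPIC -c foo.c bar.c
    gcc -shared -Wl,-soname,libfoo.so.1 -o libfoo.so.1.0.0 foo.o bar.o
    ln -sf libfoo.so.1.0.0 libfoo.so.1    # runtime name
    ln -sf libfoo.so.1 libfoo.so          # link-time name

That's it. No per-file .lo/.la indirection, no thousand-line shell wrapper.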
> 4. Perhaps the most egregious weakness missing from TFA is any acknowledgement that configure scripts are horrifically slow. It's 2022 and developers have their time wasted by configure checking whether printf() exists.
I'd say on some platforms "horrifically slow" is an understatement. At my day job we were building software in Cygwin on Windows machines infected with Symantec's virus scanner which adds hundreds of milliseconds to every process launch. The configure script for Protocol Buffers took something like half an hour to run.
We eventually switched to CMake but not without a lot of pain first. For what it's worth, CMake fixes the speed issue but not much else. It still has its own totally esoteric custom language, it still discourages globbing source files and requires re-running upon adding any file or changing anything, and it has even more insane default settings than autotools (for example file(DOWNLOAD) from an https:// URL by default ignores certificates!)
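If it helps anyone, the way around that particular default (as far as I understand the file() options; URL and paths below are placeholders) is to opt in to certificate checking explicitly:

    # CMakeLists.txt sketch: TLS verification has to be requested, either globally...
    set(CMAKE_TLS_VERIFY ON)
    # ...or per download
    file(DOWNLOAD
         https://example.org/dep.tar.gz
         ${CMAKE_BINARY_DIR}/dep.tar.gz
         TLS_VERIFY ON)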
There is something wrong with all of these buildsystems. I don't know what the solution is but I feel we are very far from it.
> It still has its own totally esoteric custom language, it still discourages globbing source files and requires re-running upon adding any file
We use cmake, with Windows as our main (but not the only) target platform. We found that not globbing is slow and impractical for us.
1. We generate VS project files, which then guide the build process. When you add source files using the IDE, the generated project(s) get updated, no regenerate is triggered, and the build system does the correct thing: only the added file is (re-)built, not everything.
2. If we didn't glob, then given the nice local workflow above, people would often forget to update the cmake files and get CI failures.
3. When we injected cmake regeneration into the build process, for some strange reason we saw a bad serialization of build steps, which slowed everything down.
4. The slow part on Windows seems to be that cmake has to use some system call to figure out if source file name casing is correct, which is very slow (for us the slowest part of the generation process by far). I don't recall the details here, but I think globbing somewhat reduces this issue.
All in all, I can say: figure out what works best for you and don't trust experts that discourage globbing altogether.
I'm all for bashing autotools, but in this one case, I'd throw Symantec (SEP?) into the fire first.
And the solution to all build systems being bad is imho not to use C/C++. Its model for platform independence is fundamentally broken. It's not like this kind of snafu doesn't exist elsewhere (Python), but other ecosystems are clearly less troublesome (Go, NPM, Cargo).
> it still discourages globbing source files and requires re-running upon adding any file or changing anything
Globbing is discouraged exactly because cmake then can't discover when files should be added to or removed from the build, or when build options have changed. But if globbing isn't used, there's no reason why manually re-running cmake would be required (there have been some problems in the past with Visual Studio not automatically reloading a modified project, but I haven't seen those problems in a long time).
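For completeness, newer CMake does offer a middle ground, though it mostly just moves the cost around (a sketch; target and paths are made up):

    # A plain GLOB is evaluated only when cmake runs, so a newly added file is
    # silently missed until someone re-runs cmake. CONFIGURE_DEPENDS (CMake 3.12+)
    # makes the generated build re-check the glob, at some generator-dependent
    # cost on every build.
    file(GLOB app_sources CONFIGURE_DEPENDS src/*.c)
    add_executable(myapp ${app_sources})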
>5. automake documentation discourages developers from globbing source files in source directories, and insists they list them individually. It's silly and just results in needless running of configure.
No, it's not silly: realistically you can't pick up globbed files anywhere without re-running the configure step. Any build system based on directory trees will have this limitation. See meson's docs for more on this: https://mesonbuild.com/FAQ.html#why-cant-i-specify-target-fi...
I also came here to comment that libtool is totally useless on modern systems and I hope it gets removed or made optional, but you beat me to it :)
The other problem is that during a Git conflict resolution, other files matching that glob get dropped into your source tree. Globbing these up is…never going to go well.
The part I always struggled with regarding point 4 is that if printf() didn't exist, it's not like configure bundles its own compat library. It just errors out, with an error very similar to what a user would get if they simply tried to compile code that called a library they didn't have. This "pre-compile test" doesn't actually add anything to the picture.
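For reference, the "test" is roughly this throwaway program (a simplified sketch of what an AC_CHECK_FUNCS([printf]) check compiles and links):

    /* declared without a prototype on purpose; if this links, the symbol exists */
    char printf ();
    int main (void) { return printf (); }

If it links, configure notes that printf exists; when it doesn't, you're no better off than if the real compile had simply failed.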
Would you rather your build failed during configure or 10 minutes into the build, where parallelism will make the compile still print tons of things after the error message?
Honestly? If the question is whether I would A) make everyone everywhere spend minutes (or potentially hours on Windows) running the configure script any time anything changes, or B) not spend all that extra time on configure but maybe get a compile error a few minutes later when compiling on extremely weird systems (such as ones without printf), I think I'd take B.
Especially since configure doesn't actually make anything any faster even in most cases where it _does_ detect errors. It aborts at the first error. So when I'm missing packages for example, I will wait half a minute while `./configure` runs, then it errors with a single missing package, then I will install that package and re-run `./configure`, wait another half a minute until it prints another single missing package, etc. Repeat until I have all packages.
I hate autotools so much. Configure scripts should run their checks in parallel, cache their results, or ideally both, and they should report as many errors as they can before aborting.
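The maddening part is that a cache does exist, it's just opt-in as far as I know:

    ./configure -C    # same as --config-cache: stores check results in config.cache
    ./configure -C    # later runs reuse config.cache instead of redoing every check

That helps the repeated-run case, but does nothing for parallelism or for reporting more than one missing dependency at a time.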
I would rather the printf check be handled by the compiler than by a configure check, because the compiler will of course know that printf is present and not waste your time.
I find autoconf to be good at its job - building a standalone script which runs micro tests - and still use it, but I find that automake and libtool add very little value, for the reasons you described.
I do agree that m4 is rather unpleasant to write, and that the resulting tests are too slow (often because user-defined macros do a poor job of caching), but I also think these things could be addressed (newer macro languages, bundled or threaded micro-tests) while still preserving the good bits of autoconf.
The lack of parallelization is also a pain point. When I needed something to test platform-specific stuff and set defines etc., I ended up writing something that generates ninja and folds the define generation (which ended up in some generated headers) into the overall build process. This was very snappy.
These days I use meson mostly, but I would love to see a complete build system really get into parallelizing platform checks and merging them into the normal build process.
I once munged a configure.ac so that it would blast out its variables to a text file, i.e.,
CFLAGS="-g -O2"
LDFLAGS="-l openssl"
I then replaced automake and libtool with tup, and the Tupfile would just source these variables from configure. It worked great! It's a pity that tup hasn't caught on.
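Roughly, the setup looked like this (reconstructed from memory, so names and flags are illustrative; tup wants variables in its own VAR = value form rather than shell quoting):

    # Tupfile
    include config.vars                           # written by the munged configure
    : foreach *.c |> gcc $(CFLAGS) -c %f -o %o |> %B.o
    : *.o |> gcc %f $(LDFLAGS) -o %o |> myprog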
Hasn't WSL replaced Cygwin these days? I'm surprised that it and even MinGW are still used. I even recently fixed a bug filed about a build failure on MinGW due to it not having System V shared memory functions.
I use WSL with mingw-w64 to cross-compile things for native Windows sometimes... e.g. I keep my own builds of ffmpeg and mpv around, as well as some of my own "legacy" tools that I use both on Linux and Windows natively.
And in this space, it's often a lot easier to get something autotools-based to build than something cmake- or meson-based, at least in my experience. With autotools, more often than not all I need to do is specify the right --host triple and be done. With cmake and meson, it's often a fight to figure out how to cross-compile something like that.
This approach of cross-compiling for Windows with mingw-w64, sometimes on "proper" Linux, sometimes on WSL2, sometimes even on Cygwin, still seems to be fairly common in C/C++ land and in open source in particular, especially for projects that pull in a fair number of dependencies (such as the aforementioned ffmpeg) - you will find a lot of Windows build guides that say only mingw cross-compilation from Linux is supported and tested.
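The happy path with autotools projects is usually nothing more than this (assuming a mingw-w64 toolchain from your distro or WSL; the prefix is just an example):

    ./configure --host=x86_64-w64-mingw32 --prefix=$HOME/mingw64-prefix
    make -j$(nproc)
    make install

and the result is a native Windows binary built entirely from Linux.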
Maintaining MinGW builds of your deps must be annoying; it would be nice if some distros had a partial MinGW port/architecture people could use for cross-building other things.
There are some projects that try to be partial dependency "package" managers for mingw, but for whatever reason I never really used them, so I cannot comment on how well they work and how up-to-date they are kept.
I guess I just like fighting with the interwoven mess of build systems and dependencies sometimes (and really, I don't spend more than maybe 1 or 2 hours a month on that)... a bit like people doing linux-from-scratch, except probably less time-consuming than building an entire kernel and userland step by step.
WSL is a Linux virtual machine. I'm developing low-latency audio & video software; I'm not going to have my users run it in a VM and require them to install WSL ... So MinGW it is (which is just a port of the GNU tools to Windows plus a free reimplementation of the Windows API headers - the called code is still MS's "universal runtime" shared libraries), so it really does not change anything in my case except more sanity and final binaries faster than the MSVC ones.
> Linux, BSD, Darwin, and Cygwin are the targets today
They're the most popular targets today. They certainly weren't yesterday, and very probably won't be tomorrow. I've seen a lot of changes over the decades, and the attitude that "nobody will ever need more than 640 k" has always been popular. I also say this from the point of view of someone who makes a living porting software to a reasonably popular platform "today" that is not one of those listed. Also as someone who has spent many years doing "maintenance programming" as opposed to new development on the leading edge. "Maintenance" accounts for something like 80% of revenue in the software industry, and the old machines in maintenance mode are not always identical to the new sleek affairs on developers' desks.
> 1. autoconf is awful to learn.
Yes. Like vim, it has a learning curve. It's not actually that bad, and the internet is chock full of documentation and examples. It's about on a par with alternatives like CMake and meson in terms of learning difficulty.
> 3. autoconf's usefulness here is completely undermined by automake, as changing Makefile.am requires rerunning autoreconf, which then requires rerunning configure. So, developers must constantly rerun the configure script, even though nothing about their system environment has materially changed. For example, this happens when a new source file is created and added to Makefile.am.
Ah, you got stuck at point 1. You don't have to re-run autoconf after re-running automake (autoreconf knows this), so you don't have to re-run configure either. The makefiles generated by automake know this too: that's why, when you just type "make" after modifying a Makefile.am, it does the right thing automatically, which is to re-run the config.status script that already contains all the autodetected configuration, cached. It's seriously faster than running configure, although the printed output is identical (because configure itself generates and runs that script).
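Roughly, the generated Makefile carries rules along these lines (heavily simplified; the real ones track more dependencies), which is why a plain "make" after editing Makefile.am never needs a full configure run:

    Makefile.in: Makefile.am
    	$(AUTOMAKE) --foreign Makefile
    Makefile: Makefile.in config.status
    	./config.status $@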
> 4. Perhaps the most egregious weakness missing from TFA is any acknowledgement that configure scripts are horrifically slow. It's 2022 and developers have their time wasted by configure checking whether printf() exists.
Yes. You pay the price for portability. You have argued "but I don't port my software why should I pay the price? That only helps other people and there's nothing in it for me". OK, fair enough.
> 5. automake documentation discourages developers from globbing source files in source directories, and insists they list them individually.
The documentation also says why. It's a well-reasoned argument. Also, despite it being a pain in the butt it turns out that explicitly showing your work is a benefit in the long run.
> 6. The complex interdependencies between generated files is nigh-impossible to understand and reason about.
They're a DAG; they should be straightforward for someone with a background in programming to grasp. The relationships are there a priori but the autotools document them and make them explicit.
> 7. libtool is completely braindead
OK. Agree with you there. Libtool was a godsend in its time. It's improved massively (yes, I see the complaints about frustrations with it over a decade ago) but it's largely superfluous today.
> Yes. You pay the price for portability. You have argued "but I don't port my software why should I pay the price? That only helps other people and there's nothing in it for me". OK, fair enough.
Portability to what, though?
It's 2022, not 1982. Most of the commercial UNIXes that autotools was written to handle are long dead now. When was the last time anyone saw a (non-embedded) system with CHAR_BIT ≠ 8, for example?