One of the jobs of autotools (the configure script in particular) is to figure out which compilers to use, how to invoke them, and what capabilities they support. For example, is char signed or unsigned? Does the compiler support static_assert(expr), or does it need to fall back to the old extern void func(int arg[expr ? 1 : -1]) trick? Language standards progress, and, particularly in the case of C++, compilers can be slow to bring their implementations in line with the standard.
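(For the unfamiliar, the latter trick works because declaring an array of negative size is a constraint violation: the declaration compiles exactly when the condition is true. Here's a minimal sketch; the name compile_time_assert and the CHAR_BIT condition are just for illustration:)

#include <limits.h>

/* C11 and later have this built in: */
_Static_assert(CHAR_BIT == 8, "bytes are not 8 bits here");

/* The old fallback: if the condition is false, the parameter is an
   array of size -1, which the compiler must reject. The declaration
   emits no code, so the check is free at run time. */
extern void compile_time_assert(int arg[CHAR_BIT == 8 ? 1 : -1]);

int main(void) { return 0; }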
The reason I bring this up is that I discovered today that my configure run failed because my compiler "didn't produce an executable." Having had to deal with this error several times (in my earlier work with dxr, I spent a lot of time figuring out how to manipulate $CXX and still get a workable compiler), I immediately opened up the log, expecting to have to track down configure guessing that the wrong file was the executable (last time, it was a .sql file instead of a .o). No, instead the problem was that the compiler had crashed (the reason essentially boiling down to the fact that std::string doesn't like being assigned NULL). Anyway, this is the program that autoconf uses to check that the compiler works:
#line 1861 "configure" #include "confdefs.h" main(){return(0);}
(I literally copied it from mozilla's configure script; go look around line 1861 if you don't believe me). That is a problem because it's not legal C99 code. No, seriously: autoconf has decided to verify that my C compiler works by relying on a feature (implicit int) so old that it's been removed (not merely deprecated) in a 12-year-old specification. While I might understand that there are some incredibly broken compilers out there, I'm sure this line of code is far more likely to trip up working compilers than the correct code would be, especially considering that it is probably harder to write a compiler that accepts this code than one that rejects it. About the only way I can imagine making this "test" program more failtastic would be to use trigraphs (which are legal C that gcc does not honor by default). Hey, you could be running on systems that don't have a `#' key, right?
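(If you've never seen a trigraph: ??=, ??<, and ??> are ways of spelling #, {, and } for keyboards that lack those keys, so a sketch of the hash-key-free version of the test program would look like this; confdefs.h is the header configure generates as it goes. By default gcc leaves the ?? sequences untranslated and the program fails to compile; you need -trigraphs or a strict -std mode to get them honored:)

??=line 1861 "configure"
??=include "confdefs.h"
main()??<return(0);??>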
Addendum: Okay, yes, I'm aware that the annoying code is a result of autoconf 2.13 and that the newest autoconfs don't have this problem. In fact, after inspecting some source history (I probably have too much time on my hands), I found that the offending program was changed by a merge of the experimental branch in late 1999. But the subtler point, which I want to make clearer, is that the problem with autoconf is that it spends time worrying about arcane configurations that the projects using it probably don't even support. It also wraps the checks for these configurations in scripts that render the actual faults incomprehensible, including "helpfully" cleaning up after itself so you can't actually see the offending command lines and results. Like, for example, the fact that your compiler never produced a .o file to begin with.
7 comments:
In all fairness, the code you refer to is generated with autoconf 2.13, which was released 12 years ago. More recent autoconf generates this:
cat confdefs.h - <<_ACEOF >conftest.$ac_ext
/* end confdefs.h. */
int
main ()
{
;
return 0;
}
_ACEOF
In all fairness, C99 is also 12 years old, and it probably wasn't a big surprise that implicit int was removed from C. Even if autoconf tries to worry about old compilers, I doubt there would be any packages that would succeed on a compiler that failed to compile int main.
To state my point more clearly than I did in the post: autoconf is clearly trying to worry about problems that don't exist now and didn't exist even 12 years ago. And, on top of that, there is a thick layer of machine generation which makes problems essentially undiagnosable and difficult to fix.
Well, the question is: why are people still using such old versions of autoconf when their projects and platforms do not need the tests it uses?
http://www.cul.de/images/autotoolscg.jpg
Bashing autoconf 2.13 in 2011 is remarkably lame. You might as well bash gcc-2.95.
Bashing someone that's bashing autoconf 2.13 in 2011/2012 is beyond remarkably lame.
C'mon...I'm fighting d*mn libtool/autoconf/automake issues within Scratchbox2 trying to get cross-compilation going. In the late part of 2012. With the LATEST sets of this cr*p installed on host, target, etc.
I don't have these issues (or any other real problems) building with SCons, BJam, or CMake. If you end up trying to work with one of the narfy breakage edges (like trying to pin code to APIs from even just TWO years ago...) you will go bald pulling your hair out over it.
It's a cr*pball of nasty kludges piled one atop another all the way to the core. Defending Autohe...er...tools is a mind-bogglingly lame thing to do...something you just did...
As an example:
/bin/sh ../libtool --tag=CC --mode=compile gcc -DHAVE_CONFIG_H -I. -I.. -I../include -I../include -O20 -Wall -ffast-math -fsigned-char -g -O2 -MT framing.lo -MD -MP -MF .deps/framing.Tpo -c -o framing.lo framing.c
../libtool: 1636: preserve_args+= --tag CC: not found
eval: 1: base_compile+= gcc: not found
eval: 1: base_compile+= -DHAVE_CONFIG_H: not found
eval: 1: base_compile+= -I.: not found
eval: 1: base_compile+= -I..: not found
eval: 1: base_compile+= -I../include: not found
eval: 1: base_compile+= -I../include: not found
eval: 1: base_compile+= -O20: not found
eval: 1: base_compile+= -Wall: not found
eval: 1: base_compile+= -ffast-math: not found
eval: 1: base_compile+= -fsigned-char: not found
eval: 1: base_compile+= -g: not found
eval: 1: base_compile+= -O2: not found
eval: 1: base_compile+= -MT: not found
eval: 1: base_compile+= framing.lo: not found
eval: 1: base_compile+= -MD: not found
eval: 1: base_compile+= -MP: not found
eval: 1: base_compile+= -MF: not found
eval: 1: base_compile+= .deps/framing.Tpo: not found
eval: 1: base_compile+= -c: not found
libtool: compile: you must specify a compilation command
libtool: compile: Try `libtool --help --mode=compile' for more information.
make[1]: *** [framing.lo] Error 1
It screws up because of the /bin/sh invocation in front of the line calling the script: += is a bash extension, not POSIX sh, so when a stricter /bin/sh like dash runs libtool, it treats base_compile+= as a command name and reports "not found". WTF?