Many years ago, I got interested in VAX computers,
which at that time had a rough Linux port running.
However, GCC continued to be a problem: Every now and
then, something broke the VAX backend, so I started to
do regular GCC builds to catch breakages as early as
possible.
Over time, this testing effort grew. A lot.
The setup now uses the
Laminar CI
system to run all these different build jobs.
These days, I build
Binutils/GAS/GDB
for quite a lot of targets, as well as
GCC. This GCC is
then used to build as many
Linux defconfigs as
possible. In addition to my initial VAX/Linux
interests, I started to do
NetBSD cross
builds from a Linux host, as well as building NetBSD
from within a NetBSD amd64 VM. I also integrated
glibc's
build-many-glibcs.py script to build full cross
toolchains including glibc.
To match my initial VAX interests, all upstreamed
Open SIMH
simulators are also built. The microvax3900 simulator is then
used to boot the latest VAX NetBSD install ISO and install it,
fully automated (using
expect).
These NetBSD VAX instances are then used
to build a few pkgsrc
packages.
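For illustration, the SIMH side of such a setup can look
roughly like this; a minimal sketch, with made-up filenames
(the actual CI configuration differs):

    # Fresh RA92 disk image plus the install ISO on a CD-ROM unit.
    cat > vax-install.ini <<'EOF'
    set cpu 64m
    set rq0 ra92
    attach rq0 netbsd-vax.dsk
    set rq1 cdrom
    attach rq1 netbsd-vax-install.iso
    boot cpu
    EOF
    vax vax-install.ini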
Along with these large projects, a few smaller code bases
are built:
elfutils
is a competing alternative to parts of Binutils. And the
Open SIMH project is accompanied by also building
its emerging CMake integration, as well as the SIMH tools
containing various filesystem conversion utilities.
Also, the
IANA timezone code
is built, as it once broke a NetBSD build after being
imported into NetBSD. Finally, all the
sample configurations documented for
crosstool-NG
and buildroot are built as well.
The Telegram group
"Toolchain Build Results"
gets a message whenever a job switches between
success and failure (usually very
low traffic).
TODO
Find sponsors for new(er) build hardware to be able
to do more builds, faster, and extend the setup to use
multiple build hosts. (The current host, thanks
to my employer
Getslash
GmbH: HP ProLiant DL360e Gen8, 2x Xeon
E5-2450L, 64 GB RAM, 3 TB HDD.)
Set up available test hardware (Alpha, VAX, PA-RISC,
UltraSPARC, MIPS) and automate NetBSD and Linux
kernel installations on these.
As jobs: auto-install NetBSD for as many targets
as possible (should work easily for amd64,
alpha, vax; maybe others as well) in available
simulators (Qemu, SIMH, GXemul).
News
While I'm still playing with NetBSD VAX
and an automated setup of
pbulk builders, I also added a
CI job
for building retro-fuse
(https://github.com/jaylogue/retro-fuse).
It's a neat project that adapts
original (!) operating system sources
to a FUSE adapter, carefully updating
the old sources. So this should be
compatible with old systems, even
bug-compatible. As a bonus, access to
old filesystems through the retro-fuse
programs is even verified using the
original old operating system on a SIMH-simulated
PDP-11. (The SIMH PDP-11
binary actually also originates from
our CI builds.)
Round #4 is done. It took a bit longer,
as I did some more stuff in between.
Building Ada as a cross-compiler has its own issues:
It actually requires an up-to-date Ada compiler
on the host system. With the new round started,
a locally built compiler will generally be used
instead of one from the gcc-snapshot
package. That should give us a compiler that's
suitable to build Ada. Let's see if that works
for all targets; so far it was only tested for an
aarch64-linux build.
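In configure terms, the idea is roughly this (paths are
illustrative, not the actual job scripts):

    # Put the locally built host GCC (with a matching gnat) first
    # in PATH, then configure the cross compiler with Ada enabled.
    export PATH=/opt/host-gcc/bin:$PATH
    gnat --version    # should roughly match the GCC version being built
    ../gcc/configure --target=aarch64-linux-gnu --enable-languages=c,c++,ada
    make -j"$(nproc)" && make install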
Round #3 Results
are out! This is the last round to contain
Open SIMH builds with CMake, and the first
round to also include builds using
the buildcross script. That provides
some more coverage, but is also expected to
produce a good number of failed targets, as
it builds ancient GCC configurations
that will only build using older GCC versions.
Finished a new central script to start
emulators (Qemu, GXemul and Open SIMH) in a common way to
either start the NetBSD
installer on a fresh disk;
boot an installed disk; or
start instances of an installed
disk (using a writeable
overlay, keeping the disk
itself untouched). This is
great for running a number of
clean build VMs.
That should make it quite easy to start different
NetBSD install ISOs on different simulated
hardware, and start pkgsrc builders afterwards.
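With Qemu, for example, the overlay trick boils down to
something like this (image names are made up; the real
script surely differs):

    # Throwaway overlay on top of the installed base image; all
    # writes go to the overlay, the base disk stays untouched.
    qemu-img create -f qcow2 -b netbsd-amd64-base.qcow2 -F qcow2 builder1.qcow2
    qemu-system-x86_64 -m 2G -drive file=builder1.qcow2,format=qcow2 -nographic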
Installation is driven by an expect
script, which should be quite usable
for installing NetBSD on real
hardware as well. Maybe something like
RaSCSI/PiSCSI could help here.
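The core of such an expect script looks roughly like this; a
sketch only, with the sysinst prompts abbreviated and assumed
rather than copied from the real script:

    expect <<'EOF'
    spawn vax vax-install.ini
    set timeout 7200
    expect "Install NetBSD to hard disk"
    send "a\r"
    expect "Shall we continue"
    send "b\r"
    # ... many more prompt/answer pairs elided ...
    expect "Installation is complete"
    EOF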
As all the stuff seems to build correctly, I
started to look into an issue with the
GCC testsuite. As the results showed, the tests
weren't really attempted at all. Turns
out, the local user ID needs to resolve
to a name. Too bad that doesn't work
when you run with a numeric uid/gid
supplied in a Docker container (a
reproduction is sketched below).
Username lookup now works and the
testsuite attempts to run all the tests
locally. Unsuccessfully, of course. Next
step: figure out how to properly
configure DejaGnu for all the
simulators. Fortunately, there is a
How
to test GCC on a simulator page
around. Thanks a lot for it!
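The sketched reproduction (image and ids are arbitrary
examples): a numeric --user with no matching passwd entry
inside the container cannot be resolved to a name.

    docker run --rm --user 1000:1000 debian:stable id -un
    # id: cannot find name for user ID 1000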
With the first fully documented round of
builds, all Buildroot configurations
were attempted as well. With 268
samples, Buildroot took about 1/5 of
the whole compile time. Of those, 254
(= 95%) were successful, while 14
(= 5%) failed. Appropriate tickets
were opened.
Instead of triggering certain jobs, I started
to build a full round of all
jobs. Thus there's now a
Round #0 Results
overview page.
When cross-building NetBSD-amd64 from a
Linux-amd64 setup, we'd observe an
interesting behavior: During the build
process, nbmandoc is built,
which is used to convert man pages to
HTML. It's invoked multiple times
during the build, until ...
zlib gets built. Suddenly,
nbmandoc fails to start with a
dynamic linker problem: it cannot load
libc.so.12. The GNU/Linux
system uses libc.so.6, but the
requested library, libc.so.12,
is the NetBSD target libc!
What caused this? For all the CI builds, I
have a little shell fragment which
allows me to choose between different
compilers. It sets $CC,
$CXX and
$LD_LIBRARY_PATH to use system
GCC, some different clang versions or
the GCC from Debian's
gcc-snapshot package. Usually,
the gcc-snapshot configuration
is used.
While setting $CC and $CXX
is as trivial as expected, setting up
$LD_LIBRARY_PATH turned out to
be more interesting than expected. The
initial approach was

    export LD_LIBRARY_PATH="/usr/lib/gcc-snapshot/lib:${LD_LIBRARY_PATH}"

which was meant to set the required
path and preserve whatever was in
$LD_LIBRARY_PATH before. With that
variable initially empty, this results
in an empty path component, which I
expected the dynamic linker to ignore.
However, this was not the case! As I
learned, an empty path component in
$PATH (man bash) as
well as in $LD_LIBRARY_PATH
(man ld.so) evaluates to the
current directory. As the NetBSD
libz was also built for amd64,
it was pulled in by ld.so from
$PWD, but of course it
couldn't find NetBSD's
libc.so.12 as the
destdir was in no search path.
To correctly extend a variable like
$PATH or
$LD_LIBRARY_PATH, use this:

    export LD_LIBRARY_PATH="/usr/lib/gcc-snapshot/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
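A quick way to see the difference, with $LD_LIBRARY_PATH
initially unset:

    unset LD_LIBRARY_PATH
    export LD_LIBRARY_PATH="/usr/lib/gcc-snapshot/lib:${LD_LIBRARY_PATH}"
    echo "$LD_LIBRARY_PATH"   # /usr/lib/gcc-snapshot/lib:  <- empty component = $PWD

    unset LD_LIBRARY_PATH
    export LD_LIBRARY_PATH="/usr/lib/gcc-snapshot/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
    echo "$LD_LIBRARY_PATH"   # /usr/lib/gcc-snapshot/lib   <- nothing extra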
Initial support for buildroot was added.
That's 268 new Laminar jobs, and as
these aren't exactly quick to build (at
least on my available hardware),
they'll probably be queued at irregular
intervals. Rough estimate: this will
add four days of build time.
With the new crosstool-NG and buildroot jobs,
Laminar now manages about 2300 FOSS
configurations!
All 115 crosstool-NG configurations were
scheduled. Of these initial builds, 111
were successful and 4 failed:

Installing ISL for host → Building ISL: /tmp/x86_64-multilib-linux-uclibc/bin/../lib/gcc/x86_64-multilib-linux-uclibc/12.2.0/../../../../x86_64-multilib-linux-uclibc/bin/ld.bfd: failed to set dynamic section sizes: bad value

During Modula-2 components: Assembler messages: Error: bad value (sr71k) for default CPU Internal error in mips_after_parse_args at config/tc-mips.c:15291

error: unknown conversion type character 'v' in format [-Werror=format=] and error: format '%ld' expects argument of type 'long int', but argument 9 has type 'unsigned int' [-Werror=format=]

/tmp/crosstoolng/.build/sparc-leon-linux-uclibc/src/gcc/gcc/graphite-isl-ast-to-gimple.c:349:3: error: 'isl_val_free' was not declared in this scope; did you mean 'isl_vec_free'?

Not as reproducible as thought: The source
filenames may get into binaries
(__FILE__), which went
undetected as the test script
(https://salsa.debian.org/qa/jenkins.debian.net/-/blob/master/bin/reproducible_netbsd.sh)
performs all the builds it compares
in the very same directory. Also,
with -g in CFLAGS, it
seems the full $PWD ends up in
the binaries as well. All this came up
while working on reproducible VAX
builds. Some more work is underway to
actually find these differences in
reproducible builds.
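Not from the original analysis, but worth noting: GCC has a
dedicated option for exactly these two leaks, e.g.:

    # -ffile-prefix-map covers both __FILE__ expansion and the
    # paths recorded by -g, rewriting the build directory to ".".
    CFLAGS="-g -ffile-prefix-map=$(pwd)=."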
With --full-gcc, all builds break
(probably because --disable-gcov
is not passed), e.g. with:

    glibcbot-alpha-linux-gnu/21/src/gcc/libgcc/libgcov.h:49:10: fatal error: sys/mman.h: No such file or directory