With the build issues on modern Linux systems now presumably all solved, I ran a new round of reproducibility tests for NetBSD, resulting in NetBSD Reproducibility Report #5. Indeed, those issues are all fixed; just one new problem showed up (cross-building from Linux) after some MOP rework for the VAX port.
On Linux, 82 of the 94 tested port/arch combinations built successfully, with 78 of them building reproducibly over two consecutive builds; the exceptions are the alpha/alpha, mac68k/m68k, macppc/powerpc and zaurus/earm ports. Building on NetBSD-current, 83 (of 94) combinations built successfully, of which 68 were reproducible. Issues were seen with algor/mipsel, dreamcast/sh3el, evbarm/earmv4, evbarm/earmv4eb, evbarm/earmv6eb, evbmips/mips64eb, evbmips/mips64el, evbmips/mipseb, evbmips/mipsel, evbppc/powerpc, evbsh3/sh3el, iyonix/earm, mac68k/m68k, macppc/powerpc, riscv/riscv32 and zaurus/earm. 44 port/arch combinations are totally reproducible, creating bit-identical output on NetBSD and Linux.

All known NetBSD build issues on up-to-date Linux systems seem to be fixed now. Some of my patches are already applied; two are still waiting to go upstream. With that state reached, I've started a new NetBSD-only build round to get fresh reproducibility data.
I don't yet have proper hardware, but I managed to get my hands on a few laptops, so I'm extending my current setup to be able to utilize multiple build hosts.
A new build round finished: Round #13, along with the next NetBSD reproducibility report: NetBSD Reproducibility Report #4.
A new build round finished: Round #12, along with the next NetBSD reproducibility report: NetBSD Reproducibility Report #3.
The NetBSD build issues on very new Linux distros seem to be fixed by now, but as the fixes arrived late in the round, I opted to build only amd64 and vax. The next build round (starting today) will test all cpu/port variants again.
A new build round finished: Round #11. Unfortunately, it is marred by some GCC issues: a good number of previously successful GNU libc builds failed, as did several Linux kernel configurations.
On the NetBSD side, things don't look much brighter: the recent update to GCC 14 in the Docker container building NetBSD led to errors about undeclared functions, breaking the NetBSD tools build (and thus the whole build). On top of that, the native NetBSD Qemu machines also vanished (as the amd64 ISO image wasn't built). This results in an all-red NetBSD Reproducibility Report #2. Discussion about this issue is happening on the tech-toolchain mailing list.
While I didn't post updates to this short blog, build tests did of course happen: Round #8, Round #9, and Round #10.
Most actual work happened on NetBSD: I submitted a good number of patches and reproducibility is now on a level where it starts to make sense to actually test it over all ports and CPU architectures: NetBSD Reproducibility Report #1. For most ports, two consecutive (cross-)builds done on Linux are reproducible, as well as two consecutive native NetBSD (amd64-based) builds. Right now, I'm chasing down the remaining spots. It seems these are, most of the time, caused by an unstable qsort() output (where the sort key for multiple items is equal.) These are, however, quite tedious to work out...
As many buildroot builds are failing (libstdc++ linking issues while building cmake), I just killed these builds.
Also, the regenerated Macroassembler repo has moved away from GitHub to my own hosting. It can be found at https://lug-owl.de/~jbglaw/git/macroassembler.git.
There's Alfred Arnold's Macroassembler AS, which I wanted to include in the regular CI builds. As Alfred only publishes tarballs, I created a Git repo from the available historic tarballs. (Please note that this repo will be moved to lug-owl.de later on, as I don't want to be forced to "participate" in GitHub's 2FA.)
Building the Macroassembler AS was a great experience: even with -Wall -Wextra -Werror -pedantic, it simply built without any issues. Well done!
That's nice: build artifacts are now stored away centrally, which instantly allows me to keep results for several builds of the same commit hash. (Previously, the Git commit hash was the main filename difference when storing build results.) By simply adding the build number (now easy, as a single function handles this across all builds), I can store multiple build results per commit, which in turn can be used to compare two consecutive builds of the same sources for reproducibility.
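A minimal sketch of such a storage layout (the helper name and the artifacts path are hypothetical, not the actual CI script): keying artifacts on both the commit hash and a build number lets two builds of the same commit coexist side by side.

```shell
# Hypothetical helper: file artifacts under artifacts/<commit>/<build-number>.
# Keeping both builds of one commit around is what enables a later
# bit-for-bit comparison of their outputs.
store_artifacts() {
    local commit="$1" build="$2" srcdir="$3"
    local dest="artifacts/$commit/$build"
    mkdir -p "$dest"
    cp -R "$srcdir"/. "$dest"/
    echo "$dest"
}
```

Comparing two consecutive builds then boils down to something like `diff -r artifacts/$commit/1 artifacts/$commit/2`.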
Round #7 finished building. This includes some Binutils bisecting, as a small upstream patch to ld broke command-line handling, which led to broken Linux kernel builds for the sh and hppa targets.
A small patch went into NetBSD to fix date strings in the INSTALL documents. These showed differences between NetBSD- and Linux-based builds due to differences in date's command-line options.
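To illustrate the kind of incompatibility involved (this is just a sketch, not the actual NetBSD patch): GNU date takes a timestamp via -d @EPOCH, while NetBSD's date uses -r EPOCH, so a build script that wants to format a fixed date has to probe which flavor it is running on.

```shell
# fmt_date: format an epoch timestamp identically under GNU and BSD userland.
fmt_date() {
    local epoch="$1"
    if date --version >/dev/null 2>&1; then
        date -u -d "@$epoch" '+%B %e, %Y'   # GNU coreutils (Linux)
    else
        date -u -r "$epoch" '+%B %e, %Y'    # NetBSD / BSD date
    fi
}

fmt_date 1700000000   # November 14, 2023 on either system
```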
That fixes almost all INSTALL differences, except for the paper size. While it seems that the PostScript version should be in US Letter format, a freshly installed NetBSD-current produces A4 output, while a Linux cross-build actually generates US Letter... This isn't fully understood yet; I need to investigate further.
After using a Qemu-based NetBSD 9 VM for the longest time to do native NetBSD builds from within a NetBSD, I've now reworked my setup to use the latest self-built NetBSD amd64 Install ISO to auto-install a -current Qemu VM (which, with throwaway overlays, can be spawned several times to do a number of builds in parallel.)
The next full Round #6 report is available.
The next full Round #5 report is available.
SIMH is a well-known machine simulator, which I use a lot for simulating VAX hardware. However, there is another such simulator for a KA630 around, by Mouse, at git://git.rodents-montreal.org/Mouse/emul/vax/full. That started as a project running on just one machine, but Mouse is actively adding support for current stock NetBSD as well as Linux.
This simulator still has some glitches that prevent current NetBSD from booting, but I guess we can get those sorted out. Once done, it should be able to run a regular NetBSD install ISO, with network support (most notably PCAP and TAP support, but also BPF and a homegrown TCP-encapsulation format) as well as disk support (including writeable overlays, somewhat similar to SIMH's or Qemu's VHD overlay support).
While I'm still playing with NetBSD VAX and an automated setup of pbulk builders, I also added a CI job for building retro-fuse (https://github.com/jaylogue/retro-fuse). It's a neat project that adapts original (!) operating system sources to a FUSE adapter, carefully updating the old sources. So this should be compatible with old systems, even bug-compatible. As a bonus, access to old filesystems through the retro-fuse programs is even verified using the original old operating system on a SIMH-simulated PDP-11. (The SIMH PDP-11 binary actually also originates from our CI builds.)
As compile-testing works without any major issues, my current plan is to put together a VAX VM (OpenSIMH) and prepare it as a pbulk builder using an amd64-based distcc host. Something like that has been done by others before (e.g. see https://hackaday.io/project/218-speed-up-pkgsrc-on-retrocomputers/details in combination with the general pbulk (http://wiki.netbsd.org/tutorials/pkgsrc/pbulk/) and distcc (https://wiki.netbsd.org/tutorials/pkgsrc/cross_compile_distcc/) docs), but my approach is to script it to a point where I can easily reproduce such a setup for further targets.
Round #4 is done. Took a bit longer, as I did some more stuff in between.
Building Ada as a cross-compiler has its own issues: it actually requires an up-to-date Ada compiler on the host system. With the new round started, a locally built compiler will generally be used instead of one from the gcc-snapshot package. That should give us a compiler that's suitable to build Ada. Let's see if that works for all targets; so far it was only tested for an aarch64-linux build.
Round #3 Results are out! This is the last round to contain OpenSIMH builds with cmake, and the first round to also include builds using the buildcross script. It provides some more coverage, but is expected to also produce a good number of failed targets as it also builds ancient GCC configurations that will only build using older GCC versions.
Round #2 Results are out!
Round #1 Results are out!
Finished a new central script to start emulators (Qemu, GXemul and Open SIMH) in a common way to
- either start the NetBSD installer on a fresh disk;
- boot an installed disk; or
- start instances of an installed disk (using a writeable overlay, keeping the disk itself untouched). This is great for running a number of clean build VMs.
That should make it quite easy to start different NetBSD install ISOs on different simulated hardware, and start pkgsrc builders afterwards.
Installation is driven by an expect script, which should be quite usable for installing NetBSD on real hardware, too. Maybe something like RaSCSI/PiSCSI could help here.
As all the stuff seems to build correctly, I started to look into an issue with the GCC testsuite. As it turned out, the tests weren't really attempted at all: the local user ID needs to resolve to a name, and that doesn't work when you run with a numeric uid/gid supplied to a Docker container. Username lookup now works and the testsuite attempts to run all the tests locally. Unsuccessfully, of course. Next step: figure out how to properly configure DejaGnu for all the simulators. Fortunately, there is a How to test GCC on a simulator page around. Thanks a lot for it!
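The failure mode is easy to check for. A small sketch (the helper name is mine, not from the CI scripts): resolve the current uid to a name the way tools like whoami effectively do; inside a container started with a bare numeric uid/gid that has no /etc/passwd entry, the lookup comes back empty.

```shell
# uid_to_name: map a numeric uid to a user name via the passwd database.
# Fails (non-zero exit, no output) when the uid has no passwd entry,
# which is exactly the situation in a container run with an unmapped
# numeric uid/gid.
uid_to_name() {
    getent passwd "$1" | cut -d: -f1 | grep .
}

uid_to_name "$(id -u)" || echo "current uid has no passwd entry; testsuite user lookups will fail"
```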
With the first fully documented round of builds, all Buildroot configurations were attempted as well. With 268 samples, Buildroot took about 1/5 of the whole compile time. Of those, 254 (= 95%) were successful, while 14 (= 5%) failed. Appropriate tickets were opened.
Instead of triggering certain jobs, I started to build a full round of all jobs. Thus there's now a Round #0 Results overview page.
When cross-building NetBSD-amd64 from a Linux-amd64 setup, I observed an interesting behavior: during the build process, nbmandoc is built and then used to convert man pages to HTML. It is invoked many times during the build, until ... zlib gets built. Suddenly, nbmandoc fails to start with a dynamic linker problem: it cannot load libc.so.12. The GNU/Linux system uses libc.so.6, but the requested library, libc.so.12, is the NetBSD target libc!
What caused this? For all the CI builds, I have a little shell fragment that allows me to choose between different compilers. It sets $CC, $CXX and $LD_LIBRARY_PATH to use the system GCC, one of several Clang versions, or the GCC from Debian's gcc-snapshot package. Usually, the gcc-snapshot configuration is used.
While setting $CC and $CXX is as trivial as expected, setting up $LD_LIBRARY_PATH turned out to be more interesting. The initial approach was

export LD_LIBRARY_PATH="/usr/lib/gcc-snapshot/lib:${LD_LIBRARY_PATH}"

which was meant to set the required path while preserving whatever was in $LD_LIBRARY_PATH before. With the variable initially unset, this results in an empty path component, which I expected the dynamic linker to ignore. However, this was not the case! As I learned, an empty path component in $PATH (man bash) as well as in $LD_LIBRARY_PATH (man ld.so) evaluates to the current directory. As the NetBSD libz was also built for amd64, it was pulled in by ld.so from $PWD, but of course it couldn't find NetBSD's libc.so.12, as the destdir was in no search path.

To correctly extend a variable like $PATH or $LD_LIBRARY_PATH, use this:

export LD_LIBRARY_PATH="/usr/lib/gcc-snapshot/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
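The difference is easy to demonstrate in a shell; this is plain POSIX parameter expansion, nothing system-specific:

```shell
unset LD_LIBRARY_PATH   # simulate the variable being initially unset

# Naive form: leaves a trailing ':', i.e. an empty component, i.e. $PWD!
naive="/usr/lib/gcc-snapshot/lib:${LD_LIBRARY_PATH}"
echo "$naive"    # /usr/lib/gcc-snapshot/lib:

# Safe form: the ':' is only inserted when the variable is set and non-empty.
safe="/usr/lib/gcc-snapshot/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
echo "$safe"     # /usr/lib/gcc-snapshot/lib

LD_LIBRARY_PATH=/opt/lib
safe="/usr/lib/gcc-snapshot/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
echo "$safe"     # /usr/lib/gcc-snapshot/lib:/opt/lib
```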
Initial support for buildroot was added. That's 268 new Laminar jobs and as these aren't exactly quick to build (at least on my available hardware), they'll probably be queued at irregular intervals. Rough estimation: This will add four days of build time.
With the new crosstool-NG and buildroot jobs, Laminar now manages about 2300 FOSS configurations!
All 115 crosstool-NG configurations were scheduled. Of these initial builds, 111 were successful and 4 failed:
| Configuration | Issue |
|---|---|
| crosstoolng-loongarch64-unknown-linux-gnu | Invalid configuration. Run 'ct-ng menuconfig' and check which options select INVALID_CONFIGURATION. |
| crosstoolng-x86_64-multilib-linux-uclibc,moxie-unknown-moxiebox | Installing ISL for host → Building ISL: /tmp/x86_64-multilib-linux-uclibc/bin/../lib/gcc/x86_64-multilib-linux-uclibc/12.2.0/../../../../x86_64-multilib-linux-uclibc/bin/ld.bfd: failed to set dynamic section sizes: bad value |
| crosstoolng-x86_64-multilib-linux-uclibc,powerpc-unknown-elf | Installing ISL for host → Building ISL: /tmp/x86_64-multilib-linux-uclibc/bin/../lib/gcc/x86_64-multilib-linux-uclibc/12.2.0/../../../../x86_64-multilib-linux-uclibc/bin/ld.bfd: failed to set dynamic section sizes: bad value |
| crosstoolng-x86_64-pc-linux-gnu,arm-picolibc-eabi | Required toolchain x86_64-pc-linux-gnu does not exist, though x86_64-unknown-linux-gnu exists. So it's possibly enough to rename this configuration. |
New generator job for building all of crosstool-NG's sample configurations. First build results should show up during the next days.