That's nice: centralized storing of build artifacts, which immediately allows me to keep build results for several builds of the same commit hash. (The Git commit hash used to be the main filename difference when storing build results.) By simply adding the build number (now easy, as a single function handles this across all builds) I can store more build results, which in turn can be used to compare two consecutive builds of the same sources for reproducibility.
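A minimal sketch of how such an artifact path could be composed; the function name and the $RESULTS_BASE location are made up for illustration and are not the actual script:

```sh
#!/bin/sh
# Hypothetical helper: store an artifact under <base>/<commit>/<buildnum>/,
# so several builds of the same commit can coexist and be diffed later.
RESULTS_BASE="${RESULTS_BASE:-/srv/ci/results}"   # assumed location

store_artifact() {
    commit=$1      # git commit hash of the sources that were built
    buildnum=$2    # e.g. the Laminar run number
    artifact=$3    # file to archive

    dest="$RESULTS_BASE/$commit/$buildnum"
    mkdir -p "$dest"
    cp "$artifact" "$dest/"
}

# Usage idea: store a result, later diff two consecutive builds of one commit.
# store_artifact "$(git rev-parse HEAD)" "$RUN" build/netbsd-GENERIC.gz
# diff -r "$RESULTS_BASE/<commit>/41" "$RESULTS_BASE/<commit>/42"
```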
Round #7 has finished building. This included some Binutils bisecting, as a small upstream patch to ld broke command line handling, which led to broken Linux kernel builds for the sh and hppa targets.
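Bisecting an upstream regression like this follows the usual git bisect pattern; a rough sketch only, with a hypothetical test script standing in for whatever the CI job actually rebuilds:

```sh
# Rough sketch of bisecting a binutils regression with git bisect.
cd binutils-gdb
git bisect start
git bisect bad master              # tip is known to produce a broken ld
git bisect good binutils-2_40      # assumed known-good release tag

# test-ld-regression.sh is a hypothetical helper: it rebuilds ld at the
# checked-out revision, rebuilds the affected sh/hppa kernel with it, and
# exits non-zero if that build breaks.
git bisect run ./test-ld-regression.sh
```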
A small patch went into NetBSD to fix date strings in the INSTALL documents. These showed differences between NetBSD- and Linux-based builds due to differences in date's command line options.
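The underlying incompatibility is the usual BSD vs. GNU date split; roughly like this (the timestamp and format string are placeholders, not what the build actually uses):

```sh
# Formatting a fixed timestamp, e.g. for reproducible document dates:
date -u -r 1700000000 '+%B %e, %Y'     # BSD/NetBSD date: -r takes seconds since the epoch
date -u -d @1700000000 '+%B %e, %Y'    # GNU (Linux) date: -d @<seconds> is the equivalent
```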
The date fix resolves almost all INSTALL differences, except for paper size. While it seems that the PostScript version should be in US Letter format, a freshly installed NetBSD-current produces A4 output, while a Linux cross-build actually generates US Letter format... This isn't fully understood yet and needs further investigation.
After using a Qemu-based NetBSD 9 VM for the longest time to do native NetBSD builds from within NetBSD, I've now reworked my setup to use the latest self-built NetBSD amd64 install ISO to auto-install a -current Qemu VM (which, with throwaway overlays, can be spawned several times to run a number of builds in parallel).
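The throwaway-overlay idea, sketched with placeholder image names and sizes rather than the actual setup:

```sh
# Create a copy-on-write overlay on top of the installed -current base image;
# the base image itself stays untouched.
qemu-img create -f qcow2 -b netbsd-current-base.qcow2 -F qcow2 builder-1.qcow2

# Boot the overlay; several such instances can run in parallel, each writing
# only to its own overlay, which is simply deleted after the build.
qemu-system-x86_64 -m 4G -smp 4 \
    -drive file=builder-1.qcow2,if=virtio \
    -nographic
```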
The next full Round #6 report is available.
The next full Round #5 report is available.
SIMH is a well-known machine simulator, which I use a lot for simulating VAX hardware. However, there is another such simulator for a KA630 around, by Mouse, at git://git.rodents-montreal.org/Mouse/emul/vax/full. That started as a project running on just one machine, but Mouse is actively adding support for current stock NetBSD as well as Linux.
This simulator still has some glitches which prevent current NetBSD from booting, but I guess we can get that sorted out. Once done, it should be able to run a regular NetBSD install ISO, with network support (most notably PCAP and TAP support, but also BPF and a homegrown TCP-encapsulation format) as well as disk support (including writeable overlays, somewhat similar to SIMH's or Qemu's VHD overlay support).
While I'm still playing with NetBSD VAX and an automated setup of pbulk builders, I also added a CI job for building retro-fuse (https://github.com/jaylogue/retro-fuse). It's a neat project that adapts original (!) operating system sources to a FUSE adapter, carefully updating the old sources. So this should be compatible with old systems, even bug-compatible. As a bonus, access to old filesystems through the retro-fuse programs is even verified using the original old operating system on a SIMH-simulated PDP-11. (The SIMH PDP-11 binary actually also originates from our CI builds.)
As compile-testing works without any major issues, my current plan is to put together a VAX VM (OpenSIMH) and prepare it as a pbulk builder using an amd64-based distcc host. Something like that was already done by other people (e.g. see https://hackaday.io/project/218-speed-up-pkgsrc-on-retrocomputers/details in combination with the general pbulk (http://wiki.netbsd.org/tutorials/pkgsrc/pbulk/) and distcc (https://wiki.netbsd.org/tutorials/pkgsrc/cross_compile_distcc/) docs), but my approach is to script it to a point where I can easily reproduce such a setup for further targets.
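A very rough sketch of the distcc part, assuming an amd64 helper runs distccd with a matching VAX cross toolchain (host names, networks and log paths are placeholders; the real setup follows the linked docs):

```sh
# On the fast amd64 helper: serve compile jobs to the local network.
distccd --daemon --allow 192.168.0.0/24 --log-file=/var/log/distccd.log

# On the slow VAX pbulk builder: send compile jobs to the helper.
export DISTCC_HOSTS="amd64-helper.local"
export CC="distcc gcc"
export CXX="distcc g++"
```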
Round #4 is done. Took a bit longer, as I did some more stuff in between.
Building Ada as a cross-compiler has its own issues: it actually requires an up-to-date Ada compiler on the host system. With the new round started, a locally built compiler will generally be used instead of one from the gcc-snapshot package. That should give us a compiler that's suitable to build Ada. Let's see if that works for all targets; so far it has only been tested for an aarch64-linux build.
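The general pattern for building GNAT as a cross compiler, assuming a sufficiently recent host GNAT is on $PATH; paths and the target triplet are placeholders, and binutils/sysroot setup is omitted:

```sh
# GCC's Ada front end is itself written in Ada, so working host Ada tools
# must be found before configuring; otherwise Ada cannot be bootstrapped.
export PATH=/opt/host-gcc/bin:$PATH    # assumed location of the locally built host compiler
type gnatmake                          # sanity check: host Ada tools are available

../gcc/configure --target=aarch64-linux-gnu \
    --prefix=/opt/cross-aarch64 \
    --enable-languages=c,c++,ada
make -j"$(nproc)" && make install
```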
Round #3 Results are out! This is the last round to contain OpenSIMH builds with cmake, and the first round to also include builds using the buildcross script. It provides some more coverage, but is expected to also produce a good number of failed targets as it also builds ancient GCC configurations that will only build using older GCC versions.
Round #2 Results are out!
Round #1 Results are out!
Finished a new central script to start emulators (Qemu, GXemul and Open SIMH) in a common way to
- either start the NetBSD installer on a fresh disk;
- boot an installed disk; or
- start instances of an installed disk (using a writeable overlay, keeping the disk itself untouched). This is great for running a number of clean build VMs.
That should make it quite easy to start different NetBSD install ISOs on different simulated hardware, and start pkgsrc builders afterwards.
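The launcher logic boils down to a small mode dispatch; a simplified sketch of the idea, where "run-emulator" is a hypothetical stand-in for the actual Qemu/GXemul/OpenSIMH invocation and all disk/ISO paths are placeholders:

```sh
# Simplified mode dispatch of a hypothetical launcher script.
mode=$1   # install | boot | instance

case $mode in
    install)   # boot the installer ISO against a fresh, empty disk image
        qemu-img create -f qcow2 disk.qcow2 8G
        run-emulator --cdrom install.iso --disk disk.qcow2
        ;;
    boot)      # boot the installed disk directly (changes are persistent)
        run-emulator --disk disk.qcow2
        ;;
    instance)  # boot a throwaway overlay; the installed disk stays untouched
        qemu-img create -f qcow2 -b disk.qcow2 -F qcow2 "instance-$$.qcow2"
        run-emulator --disk "instance-$$.qcow2"
        ;;
esac
```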
Installation is driven by an expect script, which should also be quite usable for installing NetBSD on real hardware. Maybe something like RaSCSI/PiSCSI could help here.
As all the stuff seems to build correctly, I started to look into an issue with the GCC testsuite: the tests weren't really being attempted at all. It turns out the local user ID needs to resolve to a name. Too bad that doesn't work when you run with a numeric uid/gid supplied to a Docker container. Username lookup now works and the testsuite attempts to run all the tests locally. Unsuccessfully, of course. Next step: figure out how to properly configure DejaGnu for all the simulators. Fortunately, there is a How to test GCC on a simulator page around. Thanks a lot for it!
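The uid-to-name problem appears whenever a container runs with --user and a uid that has no passwd entry inside the image. One common workaround, sketched here with a hypothetical image name (the actual fix in the CI setup may differ), is to make the host's user database visible in the container:

```sh
# Make the numeric uid resolvable inside the container by mounting the
# host's passwd/group files read-only.
docker run --rm \
    --user "$(id -u):$(id -g)" \
    -v /etc/passwd:/etc/passwd:ro \
    -v /etc/group:/etc/group:ro \
    my-build-image id -un     # should now print the host user's name
```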
With the first fully documented round of builds, all Buildroot configurations were attempted as well. With 268 samples, Buildroot took about 1/5 of the whole compile time. Of those, 254 (= 95%) were successful, while 14 (= 5%) failed. Appropriate tickets were opened.
Instead of triggering only selected jobs, I started building a full round of all jobs. Thus there's now a Round #0 Results overview page.
When cross-building NetBSD-amd64 from a Linux-amd64 setup, we'd observe an interesting behavior: during the build process, nbmandoc is built, which is used to convert man pages to HTML. It is invoked multiple times during the build, until ... zlib gets built. Suddenly, nbmandoc fails to start with a dynamic linker problem: it cannot load libc.so.12. The GNU/Linux system uses libc.so.6, but the requested library, libc.so.12, is the NetBSD target libc!
What caused this? For all the CI builds, I have a little shell fragment which allows me to choose between different compilers. It sets $CC, $CXX and $LD_LIBRARY_PATH to use system GCC, some different clang versions or the GCC from Debian's gcc-snapshot package. Usually, the gcc-snapshot configuration is used.
While setting $CC and $CXX is as trivial as expected, setting up $LD_LIBRARY_PATH turned out to be more interesting. The initial approach was

export LD_LIBRARY_PATH="/usr/lib/gcc-snapshot/lib:${LD_LIBRARY_PATH}"

which was meant to set the required path while preserving whatever was already in $LD_LIBRARY_PATH. With $LD_LIBRARY_PATH initially unset, this leaves a trailing empty path component, which I expected the dynamic linker to ignore. However, this is not the case! As I learned, an empty path component in $PATH (man bash) as well as in $LD_LIBRARY_PATH (man ld.so) evaluates to the current directory. As the NetBSD libz was also built for amd64, it was pulled in by ld.so from $PWD, but of course it couldn't find NetBSD's libc.so.12, as the destdir was in no search path.

To correctly extend a variable like $PATH or $LD_LIBRARY_PATH, use this:
export LD_LIBRARY_PATH="/usr/lib/gcc-snapshot/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
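The same pitfall is easy to reproduce with $PATH in a throwaway directory; a purely illustrative experiment, assuming /tmp allows executing scripts:

```sh
# Demonstration: a trailing colon adds an empty $PATH component,
# which the shell treats as the current directory.
mkdir -p /tmp/empty-path-demo && cd /tmp/empty-path-demo
printf '#!/bin/sh\necho "resolved from $PWD"\n' > hello-from-cwd
chmod +x hello-from-cwd

( export PATH="/usr/bin:"; hello-from-cwd )   # found via the empty component, i.e. "."
( export PATH="/usr/bin";  hello-from-cwd )   # command not found
```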
Initial support for Buildroot was added. That's 268 new Laminar jobs, and as these aren't exactly quick to build (at least on my available hardware), they'll probably be queued at irregular intervals. Rough estimate: this will add four days of build time.
With the new crosstool-NG and Buildroot jobs, Laminar now manages about 2300 FOSS configurations!
All 115 crosstool-NG configurations were scheduled. Of these initial builds, 111 were successful and 4 failed:
| Configuration | Issue |
|---|---|
| crosstoolng-loongarch64-unknown-linux-gnu | Invalid configuration. Run 'ct-ng menuconfig' and check which options select INVALID_CONFIGURATION. |
| crosstoolng-x86_64-multilib-linux-uclibc,moxie-unknown-moxiebox | Installing ISL for host → Building ISL: /tmp/x86_64-multilib-linux-uclibc/bin/../lib/gcc/x86_64-multilib-linux-uclibc/12.2.0/../../../../x86_64-multilib-linux-uclibc/bin/ld.bfd: failed to set dynamic section sizes: bad value |
| crosstoolng-x86_64-multilib-linux-uclibc,powerpc-unknown-elf | Installing ISL for host → Building ISL: /tmp/x86_64-multilib-linux-uclibc/bin/../lib/gcc/x86_64-multilib-linux-uclibc/12.2.0/../../../../x86_64-multilib-linux-uclibc/bin/ld.bfd: failed to set dynamic section sizes: bad value |
| crosstoolng-x86_64-pc-linux-gnu,arm-picolibc-eabi | Required toolchain x86_64-pc-linux-gnu does not exist, though x86_64-unknown-linux-gnu exists. So it's possibly enough to rename this configuration. |
New generator job for building all of crosstool-NG's sample configurations. The first builds should show up over the next few days.