The milestones outlined in the previous mail are not exactly complete,
but as I've been off on my own for a little while it's important for me
to share my status somewhere, so here goes.
Aboriginal Build Sandbox
~~~~~~~~~~~~~~~~~~~~~~~~
This was my first milestone, and basically consists of setting up a
chroot environment inside the aboriginal/qemu runtime where:
o Binaries required for running scripts are all statically linked
(this avoids the trouble of setting up any runtime libs in the early
build stages)
o The distcc compiler/toolchain is bind mounted into the
chroot so that we can compile.
This was fairly simple to accomplish and is done; I have a fancy little
sandboxing script which assumes that some artifacts may have been
staged in a given directory, sets up any missing pieces and then runs a
chroot on it.
The script in question can be found here:
https://github.com/gtristan/aboriginal-controller/blob/master/control/bin/sandbox
It works well when run in my aboriginal fork's 'extra-changes' branch
here:
https://github.com/gtristan/aboriginal/tree/extra-changes
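Conceptually, the script boils down to something like the following
(paths here are made up for illustration; the real script linked above
handles more of the staging details):

    #!/bin/sh
    # Conceptual sketch only; paths are illustrative, not the
    # ones the real script uses.
    STAGING=/home/build/staging    # where artifacts get staged
    TOOLS=/usr/distcc-toolchain    # the distcc compiler/toolchain

    # Make sure the toolchain mount point exists in the chroot
    mkdir -p "$STAGING/tools"

    # Bind mount the toolchain so we can compile from inside
    mount -o bind "$TOOLS" "$STAGING/tools"

    # Enter the chroot and run whatever was asked of us
    chroot "$STAGING" /bin/sh -c "$*"
    ret=$?

    # Tear down the bind mount and propagate the exit status
    umount "$STAGING/tools"
    exit $ret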
Automating the aboriginal sandbox
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This is currently very ugly but works.
Basically, it assumes that there is an image with a directory
containing some staged artifacts and something to build. It just wraps
the aboriginal emulator startup scripts, runs a specified command
(which can be the invocation of a build script) inside the emulator
environment and propagates back the return value of the command run in
the sandbox.
I have a starting point for this in the same git repo above here:
https://github.com/gtristan/aboriginal-controller/blob/master/aboriginal-sandbox
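Since qemu itself won't hand back the exit status of a command run
inside the guest, the return value propagation amounts to a trick
along these lines (a rough sketch with made-up paths, not the exact
code from the script):

    # Inside the emulator: run the requested command and record
    # its exit status on the staging image
    /mnt/staging/build.sh
    echo $? > /mnt/staging/exit_status

    # On the host, once the emulator has shut down: loop mount
    # the image and recover the status so the wrapper can exit
    # with the same code
    mount -o loop staging.img /tmp/staging
    ret=$(cat /tmp/staging/exit_status)
    umount /tmp/staging
    exit $ret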
I've come to realize this approach really has to change sooner rather
than later; more on that later in this email.
NOTE: Eventually this 'controller' repository should assume that the
image is always running and be able to accept commands over some simple
IPC, run those in a sandbox and report the result. This approach will
also assume there is some file system sharing going on.
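One way that could work, assuming we stick with qemu, is to expose a
second serial port as a unix socket on the host and run a trivial
command loop on it in the guest; this is only a sketch of the idea,
nothing is implemented yet:

    # Host side: give the guest a control channel
    qemu-system-i686 ... \
      -serial stdio \
      -serial unix:/tmp/sandbox-ctl,server,nowait

    # Guest side: read commands off the second serial port, run
    # them in the sandbox, and report the exit status back
    while read -r cmd < /dev/ttyS1; do
      sh -c "$cmd"
      echo "RESULT $?" > /dev/ttyS1
    done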
Building the stack
~~~~~~~~~~~~~~~~~~
In my previous mail I put this milestone later, but as I was putting
together these sandboxing scripts I got ahead of myself because I
wanted to test that this was really going to work.
This is where I made the most progress, and the outlook is good: I was
able to build my own interpretation of build-essential in aboriginal
and have some build instructions saved which perform the following
steps:
o Build static gawk against musl libc
o Build static sed against musl libc
o Install linux kernel headers
o Build glibc (requires some patches)
o Build bash against new glibc
o Build GCC runtime libraries including libstdc++
against the new glibc
After these steps are complete, I have tarballs (artifacts) of each
component... When they are staged and the sandbox script is run with
the new /lib/ld.so runtime linker specified, we can compile C and C++
programs against the fresh new runtime using the musl-based compiler
over distcc.
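As a smoke test of the above, something like this works from inside
the sandbox once the artifacts are staged (the exact gcc flags may
differ, but this is the gist of pointing the link at the new runtime
linker):

    # A trivial program to compile over distcc
    cat > hello.c <<EOF
    #include <stdio.h>
    int main(void) { printf("hello glibc\n"); return 0; }
    EOF

    # Link against the freshly staged runtime linker
    gcc hello.c -o hello -Wl,--dynamic-linker=/lib/ld.so

    # Runs against the freshly built glibc
    ./hello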
Hacking YBD to use the aboriginal sandbox
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Now this is where I am having trouble.
My original plan as a first iteration was to 'simply':
o Stage the artifacts into a mounted disk image
o Serialize the build commands into a script on that image
o Run the aboriginal wrapper script asking it to execute
  that script in the staging area
o Collect the output and create an artifact
It turns out that ybd requires more changes than I expected to make
this first iteration work.
So, since the eventual and saner approach is to use some filesystem
sharing (either nfs or ideally the virtfs qemu option), and since
having a shared directory would dramatically reduce the number of
changes required to ybd, I started trying to get that to work.
But this is where I fall flat on my face. First, even though the
aboriginal kernel is compiled with the relevant virtio and 9p options
enabled, I have so far been unable to mount the virtfs share from
inside the aboriginal image.
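For reference, the combination that is supposed to work, as far as I
can tell (assuming the 9p and virtio bits really are all enabled in
the kernel; the path and mount tag here are illustrative), is roughly:

    # Host side, when launching qemu:
    qemu-system-i686 ... \
      -virtfs local,path=/path/to/share,mount_tag=host0,security_model=none

    # Guest side, inside the aboriginal image:
    mount -t 9p -o trans=virtio,version=9p2000.L host0 /mnt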
I also tried to compile the aboriginal kernel with nfs support and
export a directory using nfs-kernel-server on my host, but was unable
to find the magic incantations to nfs mount a share from the image
either.
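In case it helps anyone reproduce this, what I understand the NFS
setup should look like is roughly the following; with qemu's default
user mode networking the host should be reachable from the guest as
10.0.2.2, and the 'insecure' export option is needed because qemu's
connections come from unprivileged source ports:

    # Host side, /etc/exports (followed by 'exportfs -ra'):
    /path/to/share 10.0.2.0/24(rw,no_root_squash,insecure)

    # Guest side, inside the aboriginal image:
    mount -t nfs -o nolock,vers=3 10.0.2.2:/path/to/share /mnt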
Networking is really my weakest point; when someone says the words
'routing table' or 'gateway', my brain just starts to fry and I am
unable to retain any useful information. It's like an allergy.
So in summary
~~~~~~~~~~~~~
o The concept works well, we can build packages against the libc of
our choosing using the cross compiler and aboriginal bootstrapping
process.
o Plugging this into YBD is tricky, especially without any kind of
  nfs.
o I'm trying to get the fs sharing working (nfs/virtfs or otherwise)
  but I could use a hand; I have a feeling that this could be much
  easier for someone who knows networking better than I do.
Cheers,
-Tristan