YBD + Aboriginal status update and HOWTO

Tristan Van Berkom tristan.vanberkom at codethink.co.uk
Fri Mar 4 10:30:31 GMT 2016


Hi all,

Warning, this email will be a bit long...

I've included a new and revised list of milestones towards getting the
ybd-aboriginal-baserock thing into a mergeable state.

Also, I have a prototype so I've included instructions at the end of
this email for anyone who might be interested in trying to reproduce
this setup.

As an additional note which doesn't fit anywhere else, just yesterday
I discovered another, surmountable obstacle, concerning building
baserock with aboriginal as a regular user and dropping the
requirement to run the build as root.

What I have found is that some of our build instructions try to install
things explicitly as root, instructions such as:

  install -m644 -o root -g root ...

These instructions work when run in an emulator where the emulator has
a filesystem image mounted and is installing in a chroot on that
filesystem.

These instructions do not work, however, when running in the emulator
and installing onto a 9p virtfs mount (a mount of a shared host
directory); in the virtfs share scenario, the emulator can only
really write to the target filesystem using the privileges it has.

I think that essentially this can be worked around by:

  o Disallowing any install-commands which try to explicitly install
    things as the root user/group.

  o Performing some additional chown commands in the system-integration
    commands.

This would be a practical approach, because building on a virtfs
mount is very convenient: one need not create fixed size filesystem
images to slave to each qemu worker, and then extract the content of
those fs images in between builds. However, once all of the artifacts
are prepared for a given system image, it is trivial enough to create
a fixed size, large filesystem image for the emulator to perform the
system integration commands on.
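
To illustrate (the file names here are hypothetical), an
install-command such as:

  install -m644 -o root -g root doc/foo.conf "$DESTDIR"/etc/foo.conf

would become a plain install in the chunk:

  install -m644 doc/foo.conf "$DESTDIR"/etc/foo.conf

with the ownership fixed up in that chunk's system-integration
commands, which run on the final system image where root ownership
can actually be applied:

  chown root:root /etc/foo.conf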


            Revised status and milestones
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~


Integration with YBD
~~~~~~~~~~~~~~~~~~~~
Some progress here already; still some things to do:

  o Currently one must specify the emulator & controller directories in
    the ybd.conf file; this needs to be smarter, so that depending on
    the target arch ybd uses the correct emulator for that arch...
    somehow (a rough sketch of one possibility follows this list).

  o YBD does not yet shut down the emulators when exiting.

  o Eventually we probably want to absorb parts of the
    aboriginal-controller scripts into YBD and upstream some of this to
    aboriginal startup scripts proper, but it's not a hard blocker.
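
One possible shape for the per-arch emulator selection, purely as a
sketch (no such keys exist in ybd.conf today; the names are
hypothetical):

  aboriginal-systems:
    armv5l: '/path/to/system-image-armv5l'
    x86_64: '/path/to/system-image-x86_64'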


Build base system
~~~~~~~~~~~~~~~~~
Currently we have most of the definitions for build-essential ported;
at this moment build-essential builds glibc. I would have ported my
gcc runtime library builds by now (they have been tested and work),
except that it takes a very long time to build and validate them.

Once this is done, we need to get through a build of a complete base
system.


Run Deployment in the emulator
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This item will essentially give us a more robust deployment
technique, one which does not depend on host tooling in any way.

The basic idea at this stage is to create a special chroot environment
inside the emulator where we stage all the artifacts necessary to run
the build extensions in a given deployment, and run the deployment in
that chroot.

  o need to figure out how to identify which chunks are required to
    be staged; this may be hard coded in a first pass before becoming
    a part of the specification.

  o need to figure out how to run the extensions inside a staged build
    area in the emulator in order to deploy the image; probably this
    means staging the build extension scripts into the same chroot
    area (a rough sketch of both points follows this list).
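
A very rough sketch of what both points might amount to inside the
emulator (rawdisk.write is an existing extension name; everything
else here is hypothetical):

  # stage the required chunk artifacts into a deployment chroot
  mkdir -p /deploy
  for chunk in $REQUIRED_CHUNKS; do
    tar -xf "$chunk" -C /deploy
  done

  # stage the extension script itself, then run it in the chroot
  cp rawdisk.write /deploy/
  chroot /deploy /rawdisk.write ...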


Avoiding the emulator for native builds
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
We need to sort out how we can optionally use a simple chroot and
avoid the emulator workers.

Here we'll want to use the same aboriginal compiler and the same
build-essential, of course. Aboriginal already has a
'chroot-splice.sh' script which sets up a directory with its compiler
tooling, suitable for chrooting into, which we should be able to use
almost as-is.

This should be fairly simple. Possibly we will want the option of
keeping the emulator, even for an otherwise native build, for the
final system integration and deployment stages only. Doing this, in
combination with fixing any issues we have with linux-user-chroot,
would most probably allow us to shed the root requirement even for
native builds.
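
A sketch of the native path (the options shown are real
linux-user-chroot options, but the spliced directory layout and the
BUILD_COMMANDS variable are hypothetical):

  # enter a directory prepared by aboriginal's chroot-splice.sh and
  # run the build commands there without requiring root
  linux-user-chroot --unshare-pid --unshare-net \
    /path/to/spliced-chroot /bin/sh -c "$BUILD_COMMANDS"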


Need to assert the version of the aboriginal image from the definitions
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In the interest of perfect reproducibility, we don't want arbitrary
versions of the compiler used in conjunction with definitions and ybd.

So, we will need a way to have the definitions themselves assert the
exact version of sources used in the aboriginal kit and use a very
specific version.

There are a few ways to skin this cat, from creating a versioning
scheme for the aboriginal compiler builds to just committing compiled
copies of the aboriginal kit (they are relocatable) and specifying a
sha1sum of the kit we want to use.
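
To illustrate the latter (purely hypothetical syntax; nothing like
this exists in definitions today):

  aboriginal-kit:
    location: prebuilt/aboriginal-kit-armv5l.tar.gz
    sha1sum: <sha1sum of the committed, relocatable kit>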

NOTE: A more complex approach would be to internalize the aboriginal
      build completely and integrate it as a part of build essential.

      Personally I would rather find a way to cooperate with upstream
      Aboriginal as much as possible and avoid duplicating it inside
      of Baserock.


Implement important arches in upstream aboriginal
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This is not strictly a part of the exercise, but it should be noted
that aboriginal has support for the following arches:

  armv4l, armv4tl, armv5l, armv6l, i486, i586, i686, x86_64, m68k,
  mips, mips64, mipsel, powerpc, powerpc-440fp, sh2eb, sh2elf, sh4,
  sparc  

A handful of these have not been ported to build with musl libc in
aboriginal, and as such will not build with the new GCC 5.3
toolchain. Some of them do build against musl but still need some
minor tweaks to the aboriginal sources.

I have only tested arm and mips target arches.

Most importantly, we're lacking armv7, armv7lhf and aarch64.


                    HOWTO Build (for now)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
There are now a few moving parts; in order to try this out you are
going to need the following components:


Step 1, Getting the source
~~~~~~~~~~~~~~~~~~~~~~~~~~

  Aboriginal fork
  ~~~~~~~~~~~~~~~
  URL:    https://github.com/gtristan/aboriginal
  Branch: extra-changes

  This fork contains all the parts of the work intended for upstreaming
  to aboriginal proper (port to gcc 5.3, some patches to the kernel
  config), some things from the controller (below) may be added to this
  downstream over time as well.


  Aboriginal Controller
  ~~~~~~~~~~~~~~~~~~~~~
  URL:    https://github.com/gtristan/aboriginal-controller
  Branch: master

  This is a wrapper for the aboriginal launch scripts; it allows
  running the qemu environment in the background as a build slave and
  sending some commands to it over an IPC channel.

  YBD Fork
  ~~~~~~~~
  URL:    https://github.com/gtristan/ybd/
  Branch: aboriginal

  My downstream ybd work branch, which contains the experimental
  support for now.


  Definitions wip branch
  ~~~~~~~~~~~~~~~~~~~~~~
  URL:   https://git.baserock.org/git/baserock/baserock/definitions.git
  Branch: baserock/tristan/wip/aboriginal
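
  Checking all of these out might look like this (the destination
  paths are up to you):

    git clone -b extra-changes https://github.com/gtristan/aboriginal
    git clone https://github.com/gtristan/aboriginal-controller
    git clone -b aboriginal https://github.com/gtristan/ybd
    git clone -b baserock/tristan/wip/aboriginal \
        https://git.baserock.org/git/baserock/baserock/definitions.git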



Step 2, Building stuff
~~~~~~~~~~~~~~~~~~~~~~
First, remember to take note of the branches I pointed out in the
previous section.


  First build aboriginal
  ~~~~~~~~~~~~~~~~~~~~~~
  You will have to choose 2 things at this point:

    - HOST:   The arch you are building on 
    - TARGET: The arch you are building for

  These are file names in ${ABORIGINAL}/sources/targets, but note that
  not all targets are supported with the newer gcc 5.3 toolchain we
  require.

  You can safely use intel arches, armv5l (and probably all the
  supported arm variants) and mips.

  To build, choose your host and target and run:

    cd /path/to/aboriginal
    ENABLE_GPLV3=1 \
    CROSS_COMPILER_HOST=${HOST} \
    SYSIMAGE_TYPE=ext2 \
      ./build.sh ${TARGET}

  A reasonable invocation is:

    ENABLE_GPLV3=1 \
    CROSS_COMPILER_HOST=x86_64 \
    SYSIMAGE_TYPE=ext2 \
    ./build.sh armv5l

  This will take anywhere between 30 minutes (modern x86_64 quad core
  CPU with a fast SSD) and 4 hours (reported for an older dual-core
  x86 laptop with a regular HDD) to complete.
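
  Assuming the standard aboriginal output naming, the interesting
  results of the invocation above land under build/:

    /path/to/aboriginal/build/cross-compiler-armv5l
    /path/to/aboriginal/build/system-image-armv5l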

  Now build the controller
  ~~~~~~~~~~~~~~~~~~~~~~~~
  To build the controller, you need to have mksquashfs installed. To
  run the controller, you will need socat installed.

  To build, just:

    cd /path/to/aboriginal-controller
    make

  And you're done.


  Configure YBD
  ~~~~~~~~~~~~~
  For now ybd is hooked up to the emulator in a very nasty way: it
  needs to know both the path to the emulator (the aboriginal system
  image) and the path to the controller.

  Edit the file /path/to/ybd/ybd/config/ybd.conf and add the following:

    aboriginal-controller: '/path/to/aboriginal-controller'
    aboriginal-system: '/path/to/aboriginal/build/system-image-armv5l'

  To run multiple emulators in parallel, you should remember to also
  add:

    instances: 2

  NOTE: The aboriginal system image and cross compilers don't really
        have to stay in the aboriginal build directory, but the cross
        compiler must be *beside* the system image wherever it is
        located.
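
  For example, if you relocate them, a layout like this works (the
  names follow the standard aboriginal output naming):

    /path/to/kit/cross-compiler-armv5l
    /path/to/kit/system-image-armv5l

  with 'aboriginal-system' in ybd.conf pointing at
  '/path/to/kit/system-image-armv5l'.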


  Build build-essential using YBD / Aboriginal
  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  Now you can run YBD in the regular way, assuming you have the correct
  definitions branch checked out:

    cd /path/to/definitions
    /path/to/ybd/ybd.py strata/build-essential.morph armv5l

  The first time you run this, startup will take at least a minute;
  this is because the first time an emulator starts up it creates a
  1GB swap file.

  Once the swap file has been created, subsequent YBD invocations will
  just pause for the time it takes to boot up the emulators.

  Logging is copied to the regular place after each build command is
  run. However, if you are running a long lived build command and want
  to observe the output as it happens, you can look in the staging
  directory, which should be somewhere like:

    ~/ybd/tmp/<tmpdir>/build.log

  I did have a 'tail -f build.log > realoutput.log' running on the
  host system at one point, but it was needlessly expensive so I
  removed it.






