New year's resolution: improve desktop Linux distributions
by Sam Thursfield
Hi all
If I understand correctly, part of the reason for starting work on
Baserock was to research fixing various problems in existing desktop
Linux distributions.
The Baserock project is 4 years old now, and I thought maybe it's time
to look at that aim again.
I had a go at listing everything that is wrong with the current set of
free software operating systems that I could theoretically run as a
desktop OS. Please let me know if you disagree with any of it or if you
can think of other things to add.
Here's the list.
Arch Linux:
- infinite number of possible variants that cannot all be tested
- upgrades are not atomic
- cannot rollback after upgrade
Baserock reference systems:
- lack of packages
- lack of easy runtime configurability
- lack of momentum
- lack of good documentation
- no serious handling of security updates
- security fix in a single component may require rebuilding 1000s of
other components
Debian/Ubuntu:
- very complex, lots of moving parts, many of which overlap
- sources spread across many repos of different types
-> dgit mitigates this problem: it makes all Debian source packages
available through a consistent Git interface
- infinite number of possible variants that cannot all be tested
- upgrades are not atomic
- cannot rollback after upgrade
- packages lag behind upstream versions at times
- packages sometimes have a big delta compared to upstream
Fedora:
- very complex, lots of moving parts, many of which overlap
- infinite number of possible variants that cannot all be tested
-> rpm-ostree project will help with this: you can create a manifest
that is a specific set of tested packages, and provide the
resulting rootfs as a single OSTree commit. Users can then layer
changes on top of the well-tested base image, and those changes
could be reapplied on each upgrade of the base OS.
- upgrades are not atomic
-> rpm-ostree project will help with this: a rootfs managed with
`ostree` can be upgraded atomically
- cannot rollback after upgrade
-> rpm-ostree project will help with this: a rootfs managed with
`ostree` can be rolled back to an old version
Gentoo:
- no way to share prebuilt artifacts
- infinite number of possible configurations that cannot all be tested
- upgrades are not atomic
- cannot rollback after upgrade
GNU Guix Software Distribution:
- lack of packages
- nonstandard filesystem layout makes packaging some software very
difficult
- security fix in a single component may require rebuilding 1000s of
other components
-> experimental 'grafts' feature solves this:
https://www.gnu.org/software/guix/manual/html_node/Security-Updates.html
- no serious handling of security updates
NixOS:
- lack of packages
- nonstandard filesystem layout makes packaging some software very
difficult
- security fix in a single component may require rebuilding 1000s of
other components
- I'm not sure how quickly NixOS handles security updates currently
I'm doing a talk in the distros devroom at FOSDEM 2016 which will relate
to all this a bit...
Happy Christmas & New Year
Sam
--
Sam Thursfield, Codethink Ltd.
Office telephone: +44 161 236 5575
Cross compiling Baserock - a rough analysis
by Tristan Van Berkom
Hi all,
Earlier this week I started looking into the possibility of cross
compiling systems with Baserock. Just to get an idea of what kind of
workload it would represent to get it up and running and to maintain
such a beast.
To this end, I took some time to refresh my memory with a buildroot
build and, as I was curious to compare, I also installed a Raspbian
system in QEMU to see whether I could build the GNOME system easily
enough using virtualization.
What I have written below is a rough assessment of the work which needs
to be done to cross compile a system in general, and a rough draft of
the kinds of changes which would be appropriate for us to do it well in
Baserock.
This is still only a rough analysis, but I wanted to share it on the
list so others could interject. If there are other approaches to this
which make more sense, or if things are in fact simpler (or more
difficult) than I suspect, it would be great to hear your feedback, in
the interest of accurately assessing what it would take to evolve
baserock into a cross-build system.
Take your time, enjoy the holiday season :)
Cheers,
-Tristan
What has to be done to cross compile a system
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
o Build/Have a cross compiler.
This is pretty much solved with baserock; we can easily build a
cross compiler in the early build phase.
o Build all of the required host tooling
We also need every tool which can be used by a makefile, which
starts with your regular binutils and fileutils packages, and then
the interpreters for the various scripting languages you may need to
run during the course of your compilation.
This host vs. target game unfortunately occurs all the way up the
stack. Whenever a module high up in the stack provides some
compiled program to help compile itself, or to assist in compiling
modules which depend on it, that module and its dependencies must
also be built for the host arch and staged for further cross
compilation.
Examples of programs which are required during the compilation
phase:
o pkg-config
o pkg-config is a later incarnation of an earlier construct,
where a dependency will install a binary for discovering itself
and reporting how one should compile against that specific
installation of itself, sometimes coupled with a convenience m4
macro to be used by dependent package makefiles.
While many of the previous incarnations of this have been
phased out in favor of pkg-config, some remain and need to be
compiled for the host (icu-config for instance is still in
use).
o various tools from glib need to be compiled for the host, such
as glib-mkenums, glib-genmarshal, glib-compile-resources and
glib-compile-schemas
o Tools for manipulating translations, like msgfmt and gettext
o Once all of the host tooling and its dependencies are built, we
can start to consider cross compiling the set of modules we want.
At this point the question of how to approach staging and paths and
setting up the build environment arises, to which there are various
possible approaches.
Taking buildroot as an example, they do not use any chroot and
stage all of the host builds in one location before building
everything into the target location.
Cross compiling any given module is a delicate dance of setting up
the environment correctly, so that:
o Host tooling is prioritized in $PATH so that build scripts in
the target correctly use the tools from the host tools staging
location
o Host tools link to host libraries in the host staging location
when they are used, this is perhaps done by setting
$LD_LIBRARY_PATH
o When compiling anything, the target assets are found to
assemble any product. That is to say that even though we link
our host build tools to host libraries while they run, the
built assets are assembled and linked against target libraries.
o Contending with module-local build tooling
Some modules compile programs which are code generators required
during the course of their regular compilation. If these are also
installed binaries, it is good practice to avoid using the
system-installed generator, as it is most probably out of date and
lacks a feature which is only implemented in the checked-out source
tree.
Programs like glib-genmarshal and glib-mkenums are examples of such
tooling which are built in-tree and then used later on to compile
gio from the same source tree.
This correct practice of preferring the in-tree binary is however
incorrect if we explicitly built one which can run on the host, in
the hope that glib would use that one for its own code generation.
Cases such as this can require custom hacks, and sometimes patches
against the upstream module, to force it to select the already
compiled host-compatible tool.
Buildroot works around this particular case for glib 2.46 by
setting the following in the environment:
ac_cv_path_GLIB_GENMARSHAL=$(HOST_DIR)/usr/bin/glib-genmarshal
o Correcting and stripping rpaths
This may or may not be an issue (it is an edge case at best): with
current Baserock we build in a chroot and link against libraries
found in a path which will also be the correct path on the final
target. So in the case where a build script forces an encoded
-rpath to link somewhere under /usr/lib, /usr/lib will indeed be
found on the resulting target.
When cross-compiling however, we necessarily link against libraries
found in /opt/target/usr/lib, so there may be some additional
legwork in stripping binaries of their hard coded link paths and
setting up the environment in the final target so that binaries
still find their expected libraries.
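As a rough illustration of the kind of fix-up that would be needed (the
sysroot path /opt/target follows the staging layout sketched later in
this mail, and applying the result with a tool like patchelf is an
assumption, not current Baserock behaviour), the rewrite applied to
each binary's rpath might look like:

```python
def relocate_rpath(rpath, sysroot='/opt/target'):
    # Strip the cross-build sysroot prefix from every rpath entry,
    # so '/opt/target/usr/lib' becomes '/usr/lib' on the final target.
    # One would then write the result back with e.g. patchelf --set-rpath.
    fixed = []
    for entry in rpath.split(':'):
        if entry.startswith(sysroot):
            entry = entry[len(sysroot):] or '/'
        fixed.append(entry)
    return ':'.join(fixed)

print(relocate_rpath('/opt/target/usr/lib:/opt/target/lib'))
# /usr/lib:/lib
```

Entries which never pointed into the sysroot pass through untouched, so
this could run unconditionally over every cross-built binary.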
Plausible approach to cross compiling in Baserock
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Given the above preliminary assessment, here follows a rough outline of
how we could potentially approach this in the baserock build system.
Part 1 - The cross compiler
~~~~~~~~~~~~~~~~~~~~~~~~~~~
This is essentially covered but could be improved: what we currently
have is a way to build a host compiler from scratch and then use that
host compiler to build a cross compiler for armv7lhf.
What could be improved in this picture is that it would be nice if
we could build whichever cross compiler we want, for whichever arch
we intend to build for.
Then we could have a single cross-build-essential with the host
compiler and the cross compiler required for the $TARGET we are
currently building for.
Part 2 - Host and Target artifacts and staging
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Baserock would need to be set up in such a way as to understand that
there are 2 sets of artifacts: one for the host tooling and one for
target assets.
Of course, for a given system, we need not build the host version of
the great majority of the chunks required in the target, and even then
we can usually get away with a minimalist variant of any given
requirement (you don't need a glib with pcre just to run glib-mkenums).
When building a host artifact, the build process implemented by
Baserock and YBD would remain unchanged; we only need to stage the
artifacts in '/' and then do the regular thing.
When building a target artifact, we need to stage both the host
artifacts and the target artifacts; the staging for a target build
would look something like this:
/ - Host artifacts staged in '/' as usual
/opt/target - Target artifacts we depend on staged here
/package.build - The checkout of the chunk we intend to build
/package.inst - The DESTDIR where we will pickup the result
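As a sketch (the directory names are taken from the layout above; the
function itself and its place in YBD are hypothetical), preparing such
a staging area could look like:

```python
import os

def prepare_target_staging(root):
    # Create the staging layout for a cross (target) chunk build:
    # host artifacts unpack into root itself, target dependencies
    # into root/opt/target, and the chunk's source checkout and
    # DESTDIR get their own directories alongside.
    for sub in ('opt/target', 'package.build', 'package.inst'):
        os.makedirs(os.path.join(root, sub), exist_ok=True)
    return {'host': root,
            'target': os.path.join(root, 'opt/target'),
            'build': os.path.join(root, 'package.build'),
            'destdir': os.path.join(root, 'package.inst')}
```

The returned paths would then feed the environment set up for the
actual build commands.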
Part 3 - Actually building the target artifacts
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Here we have a slight advantage over buildroot, which is that we've
staged our host tooling into the '/' of a chroot; this should slightly
reduce the legwork involved in setting up the paths we need to use,
maybe.
Here we will want to set up an environment that uses the cross
compilers and appropriate paths by default, reducing the verbosity of
chunk definitions which need to be cross compiled.
We will want something roughly like:
# Tools...
CC=/usr/bin/${arch}-gcc
CXX=/usr/bin/${arch}-g++
... all cross compilers ...
# Paths...
PKG_CONFIG_PATH=/opt/target/lib/pkgconfig
CFLAGS="-I /opt/target/usr/include"
LDFLAGS="-L /opt/target/lib -L /opt/target/usr/lib"
ACLOCAL_FLAGS="-I /opt/target/share/aclocal"
Then, chunk by chunk we will discover problems which can ideally be
solved by tweaking the default cross environment appropriately, and
sometimes by encoding some brute force into the chunk recipe itself,
ensuring the right paths are used.
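Since YBD is written in Python, the default cross environment above
could be assembled as a plain dict before being handed to a chunk's
build commands. This is only a sketch: the exact variable names, the
/opt/target sysroot and the pkgconfig/aclocal subpaths are assumptions
based on the outline above, not existing YBD code:

```python
def cross_env(arch, sysroot='/opt/target'):
    # Sketch: default environment for cross-building a chunk.
    # Per-chunk tweaks would be merged on top of this dict.
    return {
        # Tools: the cross toolchain staged in the host '/'
        'CC': '/usr/bin/%s-gcc' % arch,
        'CXX': '/usr/bin/%s-g++' % arch,
        # Paths: look for target assets under the staged sysroot
        'PKG_CONFIG_PATH': '%s/usr/lib/pkgconfig' % sysroot,
        'CFLAGS': '-I %s/usr/include' % sysroot,
        'LDFLAGS': '-L %s/lib -L %s/usr/lib' % (sysroot, sysroot),
        'ACLOCAL_FLAGS': '-I %s/usr/share/aclocal' % sysroot,
    }

env = cross_env('armv7lhf-baserock-linux-gnueabi')
print(env['CC'])
# /usr/bin/armv7lhf-baserock-linux-gnueabi-gcc
```

A chunk definition would then only need to override the entries that
its build system gets wrong.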
Part 4 - Assembling and deploying the system
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Here is an interesting problem.
Of course, to assemble the final target image, we would simply need to
stage all of the cross-built artifacts in '/' instead of in
'/opt/target'. But at this stage, what do we do for the system
integration commands ?
Perhaps we have a very special setup for this which stages the whole
thing backwards so that:
/ - Contains the assembled cross-built target
/opt/host - Contains the staged host tooling
And we have a sort of floating chroot which only runs tooling from
/opt/host, with a working shell in which system integration commands
could be run from the host tooling and affect the root target ?
Or, we stage it in a regular way with host tools in '/' and the target
in /opt/target, but we ensure that all tools we use for system
integration can handle the relocation, possibly by patching the
upstreams providing those tools.
And things are probably more complicated than just this when it comes
to tooling which generates some binary format: it's unclear to me
whether the host-runnable gtk-update-icon-cache will generate an icon
cache which is readable on the target system, for instance.
Summary
~~~~~~~
In the above picture I have skimmed over some details, such as build
dependency enhancements: when cross compiling, it would become
important to specify which "host" chunks a given target chunk depends
on.
There is also a running theory that one could simply build depend on
the cross compiler and use that to cross-compile chunks without
significantly modifying baserock functionality and definitions format.
It's a possibility, but we would still have to ensure that we have our
host assets in '/' and our cross-built assets in a separate location
for each and every chunk build (and we would still have the same
system integration issues).
Cross compiling is a continuous uphill battle primarily because it is
difficult to justify the importance of supporting cross-builds to
upstreams when a self-hosting compilation works.
The current release of buildroot contains a total of 1469 downstream
patches in its package directories and is still building GTK+ 3.14.x.
FOSDEM 2016
by Alexandre Detiste
Hi,
It's nice to see two tools sharing some of the same ideas (YAML
definitions, cross-distro) while having completely different goals at
FOSDEM.
Here's the summary of my talk:
(I can't say it's "my" tool; it's shared work and I didn't do the most difficult parts)
https://fosdem.org/2016/schedule/event/introducing_game_data_packager/
game-data-packager is a tool that automates the creation of .deb or .rpm packages for local consumption from commercial game assets.
Non-free, non-redistributable data such as shareware game assets is commonly handled by fragile shell scripts involving the wget, unzip and md5sum tools provided by the different distributions.
game-data-packager provides a way to share these unofficial packaging efforts in a central location; only a tiny part of the tool is distro-specific, and that part is handled in a nice OO model.
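To make the contrast concrete, the essential check those fragile
scripts perform is just integrity verification of a downloaded asset
against a known checksum. A sketch in Python (the function name is
illustrative, not GDP's actual code; md5 matches the md5sum the
scripts typically use):

```python
import hashlib

def asset_matches(path, expected_md5):
    # Hash the downloaded game asset in chunks and compare against
    # the checksum recorded for it, before packaging it up.
    h = hashlib.md5()
    with open(path, 'rb') as f:
        for block in iter(lambda: f.read(65536), b''):
            h.update(block)
    return h.hexdigest() == expected_md5
```

Sharing the checksum lists centrally, rather than re-deriving them in
per-distro scripts, is the part GDP factors out.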
GDP has builtin support for the Steam & GOG.com online stores and will also apply the patches needed by your games.
GDP would also fill a common need of upstream game engine providers, which either have to document all the needed files: http://forum.scummvm.org/viewtopic.php?t=11754 http://wiki.scummvm.org/index.php/Datafiles
... or have to provide their own installer.
http://pkg-games.alioth.debian.org/game-data/
I'll review your tool, but I'm not subscribed to this ML.
Greets,
Alexandre Detiste
GNOME on Baserock - Milestone 4 (early christmas)
by Tristan Van Berkom
So it's a bit early for another milestone email; this is because I will
be changing my focus for a while, and before doing so I wanted to
quickly document the current status of GNOME on Baserock.
So, as of Milestone 3, the platform is already quite stable and not
many changes were needed below the GNOME stack, with the exception of a
required update to the network-security stratum which provides
Mozilla's nspr & nss libraries (this was overdue for an update anyway,
and was required to build Evolution).
Since Milestone 3, we have integrated the following GNOME apps:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
o gedit (iirc, Paolo prefers the lower case 'g')
o Glade
o GNOME Calendar
o GNOME Todo
o Empathy (with a lot of dependencies and all the fancy
telepathy backends: gabble, salut, idle and haze)
o GNOME Contacts
o GNOME Maps
o GNOME Dictionary
o eog (Eye of GNOME image viewer)
o baobab (Cool disk usage analyzer)
o gnome-font-viewer
o gnome-screenshot
o Evolution Mail (fun screenshot: http://imgur.com/aDuTbRV)
So in summary, we now have a pretty complete desktop experience. Even
though we are still missing a lot of core GNOME modules, a GNOME on
Baserock system can do the essentials such as:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
o Use GNOME Initial Setup to create and set up the first user account
o Browse GNOME user documentation with Yelp to get you started
o Enjoy GNOME in any language and input method of your choosing
o Use a terminal
o Edit source code with gedit
o Browse the web, watch videos on the web (except youtube)
o Watch videos in Totem
o Take screenshots
o View those screenshots in eog
o Chat and videoconferencing in Empathy, on google chat/XMPP, IRC,
even on MSN and ICQ and many many more
o Manage your contacts with GNOME Contacts, including your contacts
from your online accounts
o View and edit your calendar with GNOME Calendar, including your
online accounts calendar, such as google calendar
o View and manage tasks with GNOME Todo
o Use GNOME Maps, finding your location with your GPS if you've
allowed that
o Read and compose email with Evolution mail (and manage contacts
and calendar and tasks in Evolution mail as well, of course)
o And probably a lot more stuff...
So, happy holidays and merry Christmas!
Cheers,
-Tristan
GNOME on Baserock - Milestone 3
by Tristan Van Berkom
This is another status report on the GNOME OS on Baserock.
As of today I would say the platform is quite stable. The user
experience now features a working initial setup: you can enter some
online accounts during the setup process and they will magically stay
active after the setup is complete and the new user session commences -
at which point the "Getting started" documentation splashes up
automatically in yelp, providing some helpful videos which explain to
the new user how to use the desktop. And the videos have sound :)
I have not been extremely organized this month so I may be missing some
things, but here is a tentative list of issues which have been
addressed since the last milestone.
o Many core platform modules and external dependencies added
o Completely functional online accounts
o All (available) translations and locales installed and functional
o Input methods work out of the box for most, if not all, languages,
including Korean, Chinese and Japanese input methods
o Automatically guess your timezone in gnome-initial-setup[0]
o Various system integration hooks added, such as:
o update GTK+ 2/3 immodules caches
o update icon theme caches
o desktop-file-utils cache update
o update mime database
o Overhaul of PAM configuration, properly update the gnome keyring
when password changes, properly unlock default keyring when
logging into GNOME session
o Credentials handoff works in gnome-initial-setup now[1] (after fun
debugging session with Rob Taylor in Manchester)
o Audio now works well; pulseaudio is well integrated as of today.
WARNING: the kernel only supports the Intel driver CONFIG_SND_INTEL8X0,
which should work fine at least in VirtualBox.
o Most video formats now supported with gstreamer after upgrading
all gstreamer packages to 1.6, adding libmpeg2, libvpx and
gst-libav (which compiles the ffmpeg codecs as a submodule).
3D videos notably still do not display correctly.
o Epiphany browser (WebKitGtk) is installed and plays videos in the
browser; however, it does NOT support youtube properly. See[2]
these[3] links[4] to track what's going on here. I also tried running
Epiphany on Debian testing and got the same result: youtube just
does not currently work with WebKitGtk.
o Yelp now installed with user documentation
o Totem now installed and works well with all gstreamer supported
codecs, well integrated into GNOME and automatically launches
from clicking on files in nautilus, which have nicely rendered
thumbnails (you have to download some video to see this of course).
o Bluetooth integration also fixed as of today (was not launching
properly), however this remains untested as I was unable to feed a
bluetooth device to VirtualBox.
At this stage, I am moving on to integrate the remaining core GNOME
applications into the GNOME stratum, I think most of the critical
integration issues are fixed and most everything will "just work" out
of the box.
There is always, of course, room for better integration, for instance
it is a known issue that fingerprint-based login and smartcard login
just won't work properly. This is only a choice of where to spend my
time; at some point you just have to draw a line in the sand.
So, please rebuild your local GNOME images, reboot and enjoy !
Cheers,
-Tristan
[0]:https://bugzilla.gnome.org/show_bug.cgi?id=758214
[1]:https://bugzilla.gnome.org/show_bug.cgi?id=758592
[2]:https://lists.webkit.org/pipermail/webkit-gtk/2015-January/002227.html
[3]:https://trac.webkit.org/wiki/WebKitGTK/Roadmap
[4]:https://bugs.webkit.org/show_bug.cgi?id=140078
[RFC] defs version 8: define repo aliases in definitions
by Richard Ipsum
Hello friends,
I got a little time to experiment with moving repo aliases into definitions,
motivated by the apparent confusion they cause.
The idea is to define them in the DEFAULTS file instead of having them
baked into morph/ybd.
They would look something like,
# Repo aliases
repo-aliases:
upstream:
pull: git://%(trovehost)s/delta/%(repo)s
push: git://%(trovehost)s/delta/%(repo)s
baserock:
pull: git://%(trovehost)s/baserock/baserock/%(repo)s
push: ssh://%(trovehost)s/baserock/baserock/%(repo)s
freedesktop:
pull: git://anongit.freedesktop.org/%(repo)s
push: ssh://git.freedesktop.org/%(repo)s
gnome:
pull: git://git.gnome.org/%(repo)s
push: ssh://git.gnome.org/git/%(repo)s
github:
pull: git://github.com/%(repo)s
push: ssh://git@github.com/%(repo)s
bitbucket:
pull: https://bitbucket.org/%(repo)s
push: ssh://git@bitbucket.org/%(repo)s
As well as making these aliases more accessible, defining them this way
also simplifies the RepoAliasResolver; it might even be as simple as
the class pasted below, which is quite a bit simpler than what we have
currently. I haven't executed this at all, so it's just an idea at this
point and may well be wrong.
import re

class RepoAliasResolver(object):

    def __init__(self, trovehost, aliases):
        self.trovehost = trovehost
        self.aliases = aliases

    def expand_url(self, url, urltype):
        alias, rest = self._split_url(url)
        if alias is None:
            return url
        return self.aliases[alias][urltype] % {'trovehost': self.trovehost,
                                               'repo': rest}

    def push_url(self, url):
        return self.expand_url(url, 'push')

    def pull_url(self, url):
        return self.expand_url(url, 'pull')

    def _split_url(self, url):
        '''Split url into prefix and suffix at :.

        The prefix is returned as None if there was no prefix.
        '''
        pat = r'^(?P<prefix>[a-z][a-z0-9-]+):(?P<rest>.*)$'
        m = re.match(pat, url)
        if m:
            return m.group('prefix'), m.group('rest')
        else:
            return None, url
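One edge case that expanding a couple of URLs by hand suggests: a plain
URL such as 'https://example.com/x.git' also matches the prefix
pattern, so the lookup should fall back to returning the URL untouched
when the prefix is not a known alias, rather than raising a KeyError.
A standalone sketch of the lookup with that guard added (the alias
table here is a trimmed example, not the full DEFAULTS proposal):

```python
import re

_PAT = re.compile(r'^(?P<prefix>[a-z][a-z0-9-]+):(?P<rest>.*)$')

def expand_url(url, urltype, aliases, trovehost):
    # Split 'github:foo/bar' into ('github', 'foo/bar'); URLs with no
    # alias prefix, or with an unknown one (e.g. 'https'), pass through.
    m = _PAT.match(url)
    if m is None or m.group('prefix') not in aliases:
        return url
    return aliases[m.group('prefix')][urltype] % {
        'trovehost': trovehost, 'repo': m.group('rest')}

aliases = {'github': {'pull': 'git://github.com/%(repo)s',
                      'push': 'ssh://git@github.com/%(repo)s'}}
print(expand_url('github:baserock/definitions', 'pull', aliases, None))
# git://github.com/baserock/definitions
print(expand_url('https://example.com/x.git', 'pull', aliases, None))
# https://example.com/x.git
```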
The slightly annoying part here is having to maintain backwards
compatibility with versions 6 and 7, so I want to ask two questions.
Firstly, I want to know whether people think this is even worth doing;
if people don't think it's worthwhile then I can possibly save myself
a lot of time. :)
Secondly, I want to know whether people think it would be acceptable to
drop support for versions 6 and 7 and ask folks to migrate to 8.
I think it *might* be okay to do this (because we don't have a lot of
users right now), but I am not certain. Dropping the backwards
compatibility here allows me to remove code instead of adding it, but
as we know, LOC isn't everything.
If there's no response to this RFC then I'll assume people think this is
a bad idea and drop it!
Thanks again,
Richard Ipsum
Host independent deployments
by Tristan Van Berkom
Hi,
It has come to my attention this week that we have some issues creating
bootable images: since our deployment scripts depend heavily on the
host tooling, the results of a deployment can be unpredictable.
The specific issues I ran into concern the btrfs tools and syslinux:
in order to deploy an image we currently rely on having btrfs tools
v4.0 or earlier, and syslinux 4.0.6.
Btrfs Tools
~~~~~~~~~~~
Using a too-new btrfs tools package on the build host will create a
file system which is not bootable by, at least, our current version of
syslinux, 4.0.6.
Syslinux
~~~~~~~~
After discussion with syslinux developers on IRC, it has become clear
to me that the developers do not expect mixed syslinux versions, or
even mixed *builds*, to boot properly, although they admit that
differing versions of the older syslinux 4.x releases may have worked.
That is to say, when invoking 'extlinux --install' to install a
bootloader, one should use the extlinux binary from the same build as
the syslinux modules installed on the target OS.
I have not found relevant syslinux documentation claiming this
specifically.
In my specific case, building natively from my actual host using YBD,
I ran into a mix of these problems, which were not an issue when using
the older Ubuntu 14.04 LTS, which just happened to ship older versions
of these packages (versions that happened to match up well enough to
boot the system).
There are various ways to work around this; my feeling right now is
that the cleanest is to use the tools which we build ourselves for our
deployment scripts. This may mean that we need to stage the stage1 or
stage2 build environment (with host-runnable binaries), run the
deployment scripts from *that* environment, and build all the tooling
required to perform a deployment in that build stage. However, I have
not given this loads of thought; perhaps there are cleaner ways, or
techniques which require less refactoring.
Now, at this point you may be wondering: so this is only a problem
for YBD, right ? Why would one EVER want to deploy a baserock system
from a non-baserock system ?
However tempting it might be to write off the problem as such, it's not
possible, especially considering that no two baserock-generated systems
need to be similar in any way. For example, I suspect that this
shortcoming is the reason why we are stuck with syslinux 4.0.6 while my
new Debian system is already installing 6.0.3: it is currently not
possible, afaics, to safely upgrade syslinux on a baserock system using
Morph, because the deployment scripts will invoke the baserock system's
extlinux (4.0.3) to install a bootloader on the new deployment, which
may have a newer syslinux with mismatching modules installed.
Does anyone have some other bright ideas on how we can solve this ?
Cheers,
-Tristan
PS: I have written this all up here on the mailing list because
storyboard.baserock.org currently seems to be down; I will move this to
storyboard once it comes back online.
PPS: I already sent this email from the wrong address and it was held
for moderation; if the other post gets through, please disregard the
double posting ;-)