Request for ciat@baserock.org
by Will Holland
Hello,
I would like to request an email address, ciat(a)baserock.org, to be used by
CIAT for submitting integrations to Gerrit. Emails sent to this address
should be forwarded to me (william.holland(a)codethink.co.uk) and to the
Baserock ops team.
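For clarity, on a classic aliases-style mail setup the forward would look
roughly like this (purely a sketch: the ops team address is a placeholder
and the actual mail setup on baserock.org may be entirely different):
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# /etc/aliases entry (illustrative; <ops-team-address> is a placeholder)
#   ciat: william.holland@codethink.co.uk, <ops-team-address>
# rebuild the alias database afterwards:
newaliases
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~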
Thanks
Will
Help wanted with WebKitGtk lorry
by Tristan Van Berkom
Guys,
I do not have access/rights on g.b.o or other online baserock
infrastructure.
Can someone who does have access please run this lorry:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
{
"WebKitGtk": {
"type": "svn",
"url": "https://svn.webkit.org/repository/webkit/",
"layout": {
"trunk": "trunk",
"branches": "releases/WebKitGTK/*"
}
}
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
I have been trying to run it on my laptop, but that situation is fragile:
my laptop is not a dedicated machine, and I sometimes shut it down to
move from one place to another.
I expect the initial import to take at least 48 hours without
interruption, and perhaps up to a week (after 12 hours of running
straight I still only have a git repository of 670MB, while I know the
full checkout is over 5GB in size; if that is any metric, we can expect
the task to take around a week).
Can someone please try this task on a server where it can run
uninterrupted?
Preferably, we should run it in (or close to) g.b.o so that if and when
it ever completes, we can easily use the result directly as the initial
import.
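In case it helps whoever picks this up, the invocation I have in mind is
roughly the following; the file name is arbitrary, and the exact lorry
options (working area, push URL) depend on the local setup, so please
check `lorry --help` on the target machine:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# save the spec above as webkitgtk.lorry, then run lorry detached so the
# import survives ssh disconnects; it may need to run for days.
nohup lorry webkitgtk.lorry > webkitgtk-lorry.log 2>&1 &

# watch progress from time to time
tail -f webkitgtk-lorry.log
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~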
Best,
-Tristan
System Configuration Strategies
by Tristan Van Berkom
Hello again,
In the journey towards the integration of components in the
gnome-system-x86_64 system, one thing which is not obvious to me is what
the policy is (or should be in the long term) in the baserock
definitions repository for system configuration data.
So, I'd like to start some discussion here to see what people think
about this and where we should ideally be headed. I'll start by putting
forth some of my current thoughts on the subject.
Avoiding overlap and conflicting configurations
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Since the definitions repository currently describes every build known
to Baserock in a single repository, one has to be careful that a
configuration targeted at one system does not conflict with another
system: one should be able to work on a specific build and patch the
definitions repository with full confidence that "my configuration
for my build does not break any other build".
For now, I think it's pretty widely accepted that a given system target
provides its own individual 'install-files' subdirectory and manifest
(at least this is how I understand it so far); while this can eventually
lead to a lot of duplication, it at least prioritizes the stability of
the resulting build over the elegance with which we achieve that result.
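As a purely illustrative sketch (the directory and file names here are
invented), the per-system arrangement I mean looks something like this,
with each system owning its own manifest and payload so that editing one
cannot affect another:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
install-files/
  gnome-system/
    manifest              # manifest listing the files below
    etc/gdm/custom.conf
  minimal-system/
    manifest
    etc/hostname
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~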
Individual strata providing their own configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Some of the individual morph files provide some system configuration
either in the form of custom install directives or system integration
'hooks' which run at the end of system deployment, and hopefully before
the install-files extension runs.
I think we need to identify cases where this makes sense and root out
cases where strata-level configuration data makes assumptions about the
target system to which it will be deployed.
An example where it makes sense for a stratum to take responsibility for
its own config would be the xorg input drivers, such as evdev, which
installs files in the ${prefix}/share/X11/xorg.conf.d directory; in this
case we know the driver is being installed and we have a default
configuration for it - we are configuring only data that will be used by
the module itself, and we make no assumptions about third parties.
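To make that concrete, the kind of default I mean looks roughly like the
snippet below (file name and contents are examples, not copied from the
real morph); it only configures the driver that the chunk itself installs:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# illustrative: ship a default evdev catchall alongside the driver itself
cat > "${prefix}/share/X11/xorg.conf.d/10-evdev.conf" <<'EOF'
Section "InputClass"
        Identifier "evdev pointer catchall"
        MatchIsPointer "on"
        Driver "evdev"
EndSection
EOF
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~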
Another example of good stratum-level config is how gdk-pixbuf runs its
own gdk-pixbuf-query-loaders in its system-integration hook; it makes
no assumption about third parties but conveniently configures itself at
the end of the build, after third parties may have installed additional
loaders (e.g. librsvg, which might be present).
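For illustration, that hook boils down to something along these lines
(the cache path varies with the gdk-pixbuf version, so treat it as an
example rather than the exact command in the morph):
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# regenerate the loader cache at system-assembly time, so that loaders
# installed by other chunks (such as librsvg) are also registered
gdk-pixbuf-query-loaders > /usr/lib/gdk-pixbuf-2.0/2.10.0/loaders.cache
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~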
An example where it does NOT make sense (and I am responsible for a few
of these problematic morphs) is some of the services installed from
the strata/gnome.morph collection. For instance, accountsservice.morph
now performs 'systemctl enable accounts-daemon' in its own
system-integration hook. The negative side effect of this, of course, is
that the GNOME stratum is now making a sweeping assumption that it will
be installed on a target which has systemd, and so the build will fail
miserably as soon as we include the GNOME stratum in a system without
systemd.
Moving forward
~~~~~~~~~~~~~~
Given what we have in place, my personal inclination would be to prefer
immense duplication of config, either statically in install-files/
directories or using some other extensions for programmatically setting
up config data, but strictly duplicating these at the 'system' level.
The benefit of this is that even if we have a large amount of redundancy,
we avoid all risk of system instability stemming from assumptions made at
the stratum level, at least as a first step.
To improve things from here, there are a few different directions one
could take:
A.) Segment the 'definitions' repository into separate branches - avoid
the problem completely by only ever specifying a single system
target in a given branch.
This would imply much more redundancy, but at least each branch
could have its own maintainer, who would have more intimate
knowledge of how 'their system' is intended to build. Nobody could
ever push a patch to the definitions repository with the potential of
unintentionally breaking another system.
B.) Break down system configurations into components, possibly grouping
configurations into the same groups we find in strata/, such as
'foundation' or 'gtk-deps'. A configuration such as /etc/pam.d/ with
systemd modules but without SELinux could be segmented into its own
module, and multiple systems which do use systemd could share that
configuration.
The install-files extension already supports the inclusion of
multiple manifests in a given system; while most of the cluster
definitions only use a single manifest, it would seem that
install-files was designed to work this way (see the sketch after
this list).
C.) Conditional system-specific statements inside morph files. This is
one way to have each individual package configure itself
differently (or configure its dependencies differently) based on
the system it is being built for.
I have to admit I don't like this approach at all, as I don't
believe it is wise to impose knowledge about the target system
on individual strata definitions. The result would be a large
overhead in configuring individual morphs whenever a new system
definition is added, and another overhead on any contributor
whenever modifying a single stratum's morph.
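To illustrate the option B idea: if I remember the install-files
extension correctly, a deployment could list several shared manifests
rather than one big per-system manifest. The paths below are invented and
the exact syntax should be checked against the extension, but the shape
would be something like:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# at deploy time the install-files extension reads INSTALL_FILES, which
# can name several manifests separated by whitespace (paths invented):
INSTALL_FILES="install-files/systemd-pam/manifest install-files/gnome/manifest"
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~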
A worthwhile experiment
~~~~~~~~~~~~~~~~~~~~~~~
Before even trying to come up with an ideal solution, it would be an
enlightening experiment to build an existing high-level stratum on top of
drastically different middleware; perhaps a GNOME system running on a
FreeBSD kernel with a more traditional init plus OpenRC. Without this
sort of experimentation it may be difficult to assess what the ideal
solution should be.
Cheers,
-Tristan
Welcome to Baserock GNOME
by Tristan Van Berkom
Hi all,
This is just a basic progress report on the status of the GNOME Desktop
build on Baserock (gnome-system-x86_64). I will send these out
periodically as we reach significant milestones in this project; this is
the first such milestone.
As of yesterday, we've landed some patches which make gnome-shell run in
the gnome-system-x86_64 system.
With the current state of affairs, GDM is not configured and we still
boot up into a shell. If you would like to try it, for now you will have
to follow these simple steps:
1.) Log in to your new system and set a root password with `passwd'
2.) ssh into the new system (which is probably a VM) as root
3.) echo "exec gnome-session" > .xinitrc
4.) startx
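Or, as a single transcript (the address is only a placeholder for
wherever your VM is reachable):
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# on the VM console: set a root password so you can log in over ssh
passwd

# from another machine: log in to the new system (substitute its address)
ssh root@<address-of-your-vm>

# tell startx to launch a GNOME session, then start X
echo "exec gnome-session" > .xinitrc
startx
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~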
It will then pause while the gnome-keyring daemon fails in the
background; after some time, gnome-shell will appear.
You may be wondering: why ssh? Well, this is just a workaround to get
pointer input working. When the X server starts up from an ssh login, it
is not allowed to take control of the session created by logind; when
starting from a regular tty, the X server successfully takes control of
the session, and then systemd-logind "pauses" all the input devices. I
am investigating this presently and suspect that adding a display
manager will fix the issue.
No useful applications are installed at this time, but you can access
the calendar and use the built-in search feature to look around for the
applications you don't have.
So, this is obviously a work in progress; stay tuned for the next
milestone (automatically booting into GDM), which will hopefully come
soon.
Enjoy,
-Tristan
CIAT wiki page: 1.- Introduction. Proposal
by Agustin Benito Bethencourt
Hi,
Following the previous mail, here is a first proposal for the
Introduction, the first point.
------
1.- Introduction
Continuous delivery is a very popular concept nowadays, especially in
the application and web services space, and it is little by little being
adopted by other segments as well.
Continuous Integration and Automated Testing (CIAT) is a Baserock
initiative that creates a Linux-based operating system using existing
Open Source technologies and tools, through an automated heartbeat
pipeline with six main stages:
1.- Detection of a new version of a component included in our customized OS.
2.- Integration of the update into the OS definition.
3.- Creation of a new build based on the updated definition.
4.- Preparation of the builds and artifacts, and provisioning of a
bare-metal or VM farm for (automated) testing.
5.- Testing of the build.
6.- Publication of the candidate and the associated artifacts, making
them ready for verification and validation.
But CIAT does not stop there...
Now that many embedded projects and organizations are widely using
GNU/Linux based systems, there is an opportunity to take full advantage
of upstream development processes, consuming newer software on a regular
basis. The idea behind Baserock, implemented in CIAT, is to integrate
the latest software from the most relevant Open Source community
projects (upstream) as soon as they release it. Staying as close to
upstream as possible simplifies integration and maintenance at scale.
Hence, this combination of existing Open Source technologies, applied to
modern automated delivery processes to build a customized GNU/Linux-based
OS using the latest released software from upstream, provides any
organization with an ideal foundation for delivering complex solutions
significantly faster and cheaper.
Like any reference implementation, CIAT was originally designed with a
specific context, goal and target in mind:
* Context: CIAT was initially developed for the embedded industry,
although it is being extended to cover use cases from other segments.
* Goal: by design, CIAT intends to reduce the time window that goes from
detecting a new version of a relevant component of a customized OS to
publishing a candidate ready for verification and validation (delivery)
to just a few hours, allowing any corporation to create a 24-hour
continuous delivery cycle for very complex and mission-critical solutions.
* Target: The initial target is the car industry. This is a consequence
of the long-term relationship between Baserock and GENIVI/AGL, where some
of the ideas and technologies that are part of CIAT have proven to be
production-ready for over 2 years.
A first version of CIAT was shown at ELCE. Further improvements are
being developed and implemented on ciat.baserock.org (link). Additional
use cases are being considered. Like any Baserock initiative, CIAT is an
Open and Free Software project.
-----
Best Regards
--
Agustin Benito Bethencourt
agustin.benito(a)codethink.co.uk
CIAT wiki page: proposed structure and intro
by Agustin Benito Bethencourt
Hi all,
this is my first mail to this ML, I believe, so the polite thing to do
is introduce myself.
I am Agustin Benito Bethencourt (toscalix)[1]. I recently joined
Codethink as a consultant. I am working on CIAT and Baserock.
As some of you know, we are preparing a demo for ELCE called CIAT.
Together with some of my colleagues, I will be updating the CIAT wiki
page[2] on the Baserock wiki to better reflect what the initiative is
about and what we are implementing.
Right after this mail I will send the actual content of a couple of the
proposed sections. I am not a native English speaker, so I will
appreciate any corrections (typos, expressions, etc.).
This is the proposed structure of the CIAT wiki page content.
-----
Structure
1.- Introduction: high level definition of CIAT. Why CIAT. Target. Benefits.
2.- CIAT description:
2.1.- CIAT diagram. More detailed description of CIAT, including links to
other wiki pages that describe the Baserock technologies associated with CIAT.
2.2.- Demo description. Explanation of what you can see at
http://ciat.baserock.org (the demo itself)
3.- Next steps, links and contact.
------
The ELCE demo is next Tuesday, so the goal is to have the wiki page ready
by Monday at approximately 12:00 UTC.
[1] http://toscalix.com
[2] http://wiki.baserock.org/ciat/
Best Regards
--
Agustin Benito Bethencourt
agustin.benito(a)codethink.co.uk