[RFC] Multiple source repositories per-chunk and git submodule handling
by Richard Maw
We currently handle definitions that need to pull multiple source
repositories together in an ad-hoc way.
For gcc we import the gmp, mpc and mpfr source trees by checking them
into our delta branch directly.
Some upstreams handle pulling in the sources by adding submodules.
However, we have to make them fetch sources from our Trove for
reproducibility and latency reasons, so we end up having to add a delta
on top to change where the submodules fetch their sources from.
This works against our goal of minimising the delta, so we
need a way to encode in our builds how to deal with components that need
sources from multiple repositories to function.
Our current approach to submodules introduces a _lot_ of extra work
when we need to update multiple submodules recursively, so we need a
better solution.
Proposal
========
To solve this, I propose we extend the source repository information from
just (repo, ref) to a list [(repo, ref, commit, path?, submodule-commit?)].
So rather than:
name: base
⋮
chunks:
- name: foo
  morph: strata/base/foo.morph
  repo: upstream:foo
  ref: cafef00d…
  unpetrify-ref: baserock/morph
  ⋮
  build-depends: []
We extend it to be able to take a "sources" field:
⋮
chunks:
- name: foo
  morph: strata/base/foo.morph
  sources:
  - repo: upstream:foo
    ref: baserock/morph # used to validate the commit is anchored
                        # properly and as a hint for where you
                        # ought to merge changes to
    commit: cafef00d…
  - repo: delta:gnulib
    ref: baserock/GNULIB_0_1_2
    commit: deadbeef…
    path: .gnulib # where to put the cloned source
    submodule-commit: feedbeef… # check that this matches, so you
                                # can be told if you need to update
                                # your delta
⋮
The `path` field specifies where in the parent tree this source's tree
should be placed.
It is optional, defaulting to `.`.
If multiple paths clash after normalisation, it results in a build
failure at source creation time.
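Something like this rough Python sketch could implement that check (the
BuildFailure name and the shape of the source dicts are illustrative,
not actual morph code):

    import os

    class BuildFailure(Exception):
        pass

    def check_path_clashes(sources):
        # 'sources' is the parsed list from the chunk's sources
        # field; a missing path defaults to '.' as described above.
        seen = set()
        for source in sources:
            path = os.path.normpath(source.get('path', '.'))
            if path in seen:
                raise BuildFailure('duplicate source path: %s' % path)
            seen.add(path)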
A sub-source can be placed at a path when there is no existing entry
there, when there is an empty directory there, or when there is a git
submodule there whose commit matches the `submodule-commit` field.
If there is a file, a symlink, a non-empty directory, or a submodule
whose commit doesn't match `submodule-commit` at that path, then it
results in a build failure at build time.
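Here is a sketch of how those placement rules could be checked against
a checked-out parent tree; it shells out to `git ls-tree` to find
gitlink entries, and the function name is made up for illustration:

    import os
    import subprocess

    def may_place_subsource(parent_checkout, path, submodule_commit):
        target = os.path.join(parent_checkout, path)
        if not os.path.lexists(target):
            return True  # no existing entry at that path
        if os.path.isdir(target) and not os.listdir(target):
            # Empty directory: either plain, or a submodule git left
            # unpopulated. Look for a gitlink entry in the tree.
            out = subprocess.check_output(
                ['git', 'ls-tree', 'HEAD', '--', path],
                cwd=parent_checkout).decode()
            if out.startswith('160000'):  # gitlink (submodule) entry
                return out.split()[2] == submodule_commit
            return True  # plain empty directory
        return False  # file, symlink or non-empty directory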
The `submodule-commit` check exists as a safeguard against the parent
repository being updated to require a new submodule version, for which
your specified commit is no longer appropriate.
If you get a build failure because the submodule commit no longer
matches, you have to check whether the version you specified still
works, then update the `submodule-commit` field.
Cache key changes
=================
This shouldn't require any cache key changes to existing definitions,
but builds that make use of multiple source repositories will also hash
each source's commit tree and path.
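As a rough sketch (not morph's actual cache key code; the resolve_tree
helper is hypothetical), the per-source inputs might be folded in like
this:

    import hashlib
    import json

    def sources_cache_key(sources, resolve_tree):
        # 'resolve_tree' is a hypothetical callable mapping a commit
        # SHA1 to its tree SHA1; hashing the tree rather than the
        # commit means history rewrites with identical content don't
        # invalidate the cache.
        key_input = [
            {'repo': s['repo'],
             'tree': resolve_tree(s['commit']),
             'path': s.get('path', '.')}
            for s in sources
        ]
        blob = json.dumps(key_input, sort_keys=True).encode('utf-8')
        return hashlib.sha256(blob).hexdigest()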
Alternative solutions for submodules
====================================
We could continue to use the current model, and deal with the pain of
having to make multiple branches in multiple repositories to satisfy
the change to the repository paths.
We could have a magic look-up table to replace repository URLs when we
parse the submodules.
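Such a table might be no more than a dict consulted while parsing
.gitmodules; the example entry below is illustrative, not a real
mapping:

    # Hypothetical rewrite table from upstream submodule URLs to our
    # Trove aliases.
    SUBMODULE_URL_MAP = {
        'git://git.sv.gnu.org/gnulib.git': 'delta:gnulib',
    }

    def rewrite_submodule_url(url):
        # Leave URLs we don't know about untouched.
        return SUBMODULE_URL_MAP.get(url, url)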
We could use git-replace to magically make git do the URL correction, but
then we'd have to handle replacements properly, rather than dropping them.
[RFC] Fix sloccount on g.b.o
by Paul Sherwood
Hi all
I've noticed that sloccount is now officially available as git from
git://git.code.sf.net/p/sloccount/code
This history clashes with the single commit that g.b.o has been
importing from SourceForge svn. After discussion on IRC, I propose that
- we park the current branch in case any downstream users are using
that commit in built systems, and
- from now on we could fix g.b.o to mirror master from the official git.
Is this acceptable?
br
Paul
[RFC] Reorganise definitions....
by Paul Sherwood
Hi all,
I'd like to propose a further tidyup of definitions. Recently we moved
most things out of the top-level directory, to get to
d--------- clusters
d--------- extensions
d--------- install-files
d--------- migrations
d--------- schemas
d--------- scripts
d--------- strata
d--------- systems
I suggest we go further, and separate out example integration sets into
their own directories. So for example:
d--------- agl
d--------- baserock
d--------- cip
d--------- clusters
d--------- extensions
d--------- genivi
d--------- gnome
d--------- install-files
d--------- migrations
d--------- openstack
d--------- openwrt
d--------- schemas
d--------- scripts
The idea would be that the baserock dir (for example) would contain
d--------- strata
d--------- systems
and this would include the definitions of our core integration set
(build-system*, cross-bootstrap*, devel*, minimal*). Then we could have
new directories for openstack, genivi and any other sets we decide to
publish over time (e.g. Automotive Grade Linux, Civil Infrastructure
Platform).
The openstack dir (for example) would contain systems and strata
specific to openstack. Those systems and strata would refer to strata
and components from the baserock set (or override them with their own
things) as needed.
This would mean that we can treat different subdirs in specific ways
depending on their requirements, e.g.
- baserock gets integrated constantly against mainline kernel, systemd
and others
- genivi gets a six-weekly refresh
- openstack maybe gets a six-monthly release to coincide with mainline
etc
This approach would also allow us to recommend that downstream users
create their own subdirectory or subdirectories, to contain their own
custom definitions. If they keep their deltas separate I think this
would reduce the incidence of merge conflicts for them.
Any thoughts on this?
br
Paul
PS I took a shot at a patch...
http://git.baserock.org/cgi-bin/cgit.cgi/baserock/baserock/definitions.gi...
Orchestration paragraph
by Will Holland
Hi Agustin,
Here is a first draft of the paragraph describing CIAT Orchestration for
w.b.o.:
Orchestration is the brain and nervous system of CIAT. When Baserock's
definitions change in a way that CIAT thinks is interesting,
orchestration kicks ybd into building and deploying the system, and
then kicks ciat-test into testing it.
CIAT configuration
by Will Holland
Hello everyone,
I am currently thinking about how CIAT's build/deploy behaviour should
be configured; these thoughts are fairly woolly but I would like to get
more opinions before I continue.
CIAT needs to know what to build and deploy. One way I have thought to
do this would be to have a list of the clusters one wants to monitor.
This list could then be parsed to determine which systems should be
built and which strata should be integrated with Firehose. Perhaps
Firehose configurations should be predefined rather than generated. This
assumes that clusters are what users ultimately care about; I would
like to know if this is the case. Building a system that is not wanted
can be very expensive in terms of time and resources, so this needs to
be thought about.
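To make this concrete, something like the following sketch could derive
the systems to build from a monitored cluster list (assuming cluster
morphs keep their current shape, with a 'systems' list whose entries
carry a 'morph' path):

    import yaml

    def systems_to_build(monitored_clusters):
        # 'monitored_clusters' is the configured list of cluster
        # morph file paths that CIAT should watch.
        systems = set()
        for cluster_path in monitored_clusters:
            with open(cluster_path) as f:
                cluster = yaml.safe_load(f)
            for system in cluster.get('systems', []):
                systems.add(system['morph'])
        return sorted(systems)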
There is also a question of where this configuration should live. It
seems to me like it should be in Definitions because it defines the
behaviour of a tool relating to the Baserock project. Another option
would be that it has its own repo.
People's thoughts on this would be much appreciated.
Thanks
Will
Continuous integration and testing
by Daniel Silverstone
Greetings Baserockers,
For too long we have toiled in the great underbelly of the F/LOSS world
shovelling changes into the great furnaces of build machinery like Morlocks
serving the Eloi of the shiny shiny world above.
Now let us step into the light of the dawning sun and rejoice, for
http://ciat.baserock.org:8010/waterfall exists and we shall revel in its light.
Over the past few weeks, a few strong men and true have been toiling to achieve
one of the very earliest goals of the Baserock project -- to automatically
integrate upstream changes and push them through test machinery. We had a very
brief foray into this with a previous Mason experiment, but it wasn't terribly
flexible. Our new approach, we hope, is.
The current CIAT pipeline integrates a small number of components regularly and
tests them on x86, producing a GENIVI system which is then stood up in another
cloud and validated. Currently the tests are very rudimentary but they are a
start.
Over the coming week we intend to try to put some lipstick on the pig and
produce an easier-to-consume UI than the Buildbot waterfall, and we will also
look to close the loop and file candidate merge requests for changes which pass
through to the end of the pipeline, for a selected subset of Baserock
components.
The other members of the CIAT team will likely be posting to the list or adding
to the wiki in various places, this week and next, ensuring that details,
documentation, ideas, etc are all disseminated for discussion.
I look forward to walking with you all into the light of automated integration,
build, deployment, testing, and candidate submission; with great anticipation.
D.
--
Daniel Silverstone http://www.codethink.co.uk/
Software Engineer GPG 4096/R Key Id: 3CCE BABE 206C 3B69
Current developments of Firehose
by Richard Maw
Hi all.
You may remember that we've previously developed a tool called Firehose to
generate candidate changes to Baserock's definitions, based on upstream
changes.
We previously developed it to push changes to gerrit, with the idea that we
would be using some of OpenStack's tooling to perform tests on these candidate
branches.
Current thinking is that now it should be fed through a CI pipeline before
candidates are posted, to reduce the amount of noise from candidates proposed
by automation, which are subsequently rejected by automation.
So currently we're working with an older version, rolled back to before
Firehose was made to work only with gerrit, with the intention of
resurrecting the useful code at a later stage in the pipeline, once we have
some confidence in being able to make use of the changes.
This would have the benefit of also being useful for projects not using Gerrit
for code review, as a human could watch the pipeline and merge the candidate
changes manually.
This is the first element in the pipeline shown in http://52.19.1.31:8010/waterfall
We're not yet where we need to be with this, as no iteration of Firehose allows
it to integrate multiple candidate branches simultaneously.
The original design specified that Firehose should allow this, but given that
it may need a redesign to avoid depending heavily on morphlib and morph's
internal behaviours, there's an argument for working around it rather than
developing it further, and imposing an external mechanism to meet our needs.
The simplest version proposed is to require integrators to manually group sets
of integrations, probably by directory, such that everything in a directory is
configured to use the same landing ref.
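A minimal sketch of that grouping check follows; the 'landing-ref' field
name is an assumption for illustration, not Firehose's actual
configuration key:

    import os
    from collections import defaultdict

    def landing_ref_per_directory(configs):
        # 'configs' maps a Firehose config file path to its parsed
        # contents. Everything in one directory must agree on the
        # ref that candidate changes land on.
        refs_by_dir = defaultdict(set)
        for path, config in configs.items():
            refs_by_dir[os.path.dirname(path)].add(config['landing-ref'])
        for directory, refs in refs_by_dir.items():
            if len(refs) > 1:
                raise ValueError('%s mixes landing refs: %s'
                                 % (directory, ', '.join(sorted(refs))))
        return {d: refs.pop() for d, refs in refs_by_dir.items()}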
Fwd: diagram
by Will Holland
The attached diagram describes CIAT orchestration (in yellow). It has
been shown in #baserock a few times, but has been revised many times since.
regards,
Will
-------- Forwarded Message --------
Subject: diagram
Date: Thu, 17 Sep 2015 15:21:27 +0100
From: Will Holland <william.holland@codethink.co.uk>
To: Agustin Benito Bethencourt <agustin.benito@codethink.co.uk>
I've attached the diagram
Which system-version-manager to use for "--json" arg?
by James Thomas
Hi,
I was trying to upgrade my Jetson using master of definitions; however, the
deployment was failing on the label check:
system-version-manager list --json
I tried using the system-version-manager from tbdiff master; however, I still get:
system-version-manager: error: unrecognized arguments: --json
What's the correct version of system-version-manager to use here?
I worked around it for now by removing the label check in ssh-rsync.write
--
Thanks
James
Research on exporting build definitions from other build tools
by Velikov, Emil (E.)
Hi Sam,
> It's unlikely that generating .morph files from other build systems will
> generate 'maintainable' results
I would like to ask if you think it would be possible to do it the other way round, e.g. to generate BitBake recipes from .morph files.