[RFC] Multiple source repositories per-chunk and git submodule handling
by Richard Maw
We currently handle definitions that need to pull multiple source
repositories together in an ad-hoc way.
For gcc we import the gmp, mpc and mpfr source trees by checking them
into our delta branch directly.
Some upstreams handle importing the sources by adding submodules.
However, we have to make them fetch sources from our Trove because of
reproducibility and latency concerns, so we end up having to add a delta
on top to change where the submodules fetch their sources from.
This works against our goal of minimising the delta, so we need a way
to encode in our builds how to handle components that need sources from
multiple repositories to function.
Our current approach to submodules introduces a _lot_ of extra work
when we need to update multiple submodules recursively, so we need a
better solution.
Proposal
========
To solve this, I propose we extend the source repository information from
just (repo, ref) to be a list [(repo, ref, commit, path?, submodule-commit?)].
So rather than:
name: base
⋮
chunks:
- name: foo
morph: strata/base/foo.morph
repo: upstream:foo
ref: cafef00d…
unpetrify-ref: baserock/morph
⋮
build-depends: []
We extend it to accept a "sources" field:
⋮
chunks:
- name: foo
morph: strata/base/foo.morph
sources:
- repo: upstream:foo
ref: baserock/morph # used to validate commit is anchored
# properly and as a hint for where
# you ought to merge changes to
commit: cafef00d…
- repo: delta:gnulib
commit: deadbeef…
ref: baserock/GNULIB_0_1_2
path: .gnulib # where to put the cloned source
submodule-commit: feedbeef… # check that this matches, so you
# can be told if you need to update
# your delta
⋮
The `path` field specifies where this tree is placed in the parent tree.
It is optional, defaulting to `.`.
If multiple paths clash after normalising, the build fails at source
creation time.
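As a sketch of that check in Python (the function and error names here
are invented for illustration, not morph's actual code; only exact
duplicates after normalisation are treated as clashes):

import os

class SourceResolverError(Exception):
    pass

def check_source_paths(sources):
    # Map each normalised path to the source that claimed it first.
    seen = {}
    for source in sources:
        path = os.path.normpath(source.get('path', '.'))
        if path in seen:
            raise SourceResolverError(
                '%s and %s both unpack to %r' %
                (seen[path]['repo'], source['repo'], path))
        seen[path] = source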
A sub-source can be placed at a path where there's no existing entry,
over an empty directory, or over a git submodule whose commit matches
the `submodule-commit` field.
If there's a file, symlink, non-empty directory, or a submodule that
doesn't match `submodule-commit` at that path, the build fails at
build time.
The `submodule-commit` check exists as a safeguard against the parent
repository being updated to require a new submodule version that your
specified commit is no longer appropriate for.
If the build fails because the submodule no longer matches, check
whether the version you specified still works, then update the commit.
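For illustration, the build-time placement check could look roughly like
this (a sketch only: `git ls-tree` reports mode 160000 for a gitlink,
i.e. a submodule entry, and empty directories don't appear in git trees
at all, so a missing entry covers that case too):

import subprocess

class BuildError(Exception):
    pass

def tree_entry(checkout_dir, commit, path):
    # Returns (mode, objtype, sha1) for path in commit's tree, or None.
    out = subprocess.check_output(
        ['git', 'ls-tree', commit, '--', path],
        cwd=checkout_dir).decode()
    if not out:
        return None
    mode, objtype, rest = out.split(None, 2)
    return mode, objtype, rest.split('\t')[0]

def check_placement(checkout_dir, commit, source):
    entry = tree_entry(checkout_dir, commit, source['path'])
    if entry is None:
        return  # nothing at that path: safe to place the sub-source
    mode, objtype, sha1 = entry
    if mode == '160000' and sha1 == source['submodule-commit']:
        return  # submodule matches the pinned commit: safe to replace
    raise BuildError('%s cannot be placed at %r: path is occupied' %
                     (source['repo'], source['path']))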
Cache key changes
=================
This shouldn't require any cache key changes to existing definitions,
but builds that make use of multiple source repositories will also hash
the commit tree and path.
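To sketch what I mean (hashlib and the exact fields are illustrative,
not morph's real cache key machinery), each extra source would
contribute something like:

import hashlib

def source_key_fragment(source):
    # Only what affects the unpacked tree goes into the key:
    # the exact commit and where it lands in the parent tree.
    h = hashlib.sha256()
    h.update(source['commit'].encode('utf-8'))
    h.update(source.get('path', '.').encode('utf-8'))
    return h.hexdigest()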
Alternative solutions for submodules
====================================
We could continue to use the current model, and deal with the pain of
having to make multiple branches in multiple repositories to satisfy
the change to the repository paths.
We could have a magic look-up table to replace repository URLs when we
parse the submodules, as sketched below.
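For example (a sketch only; the mapping below is invented), the table
could be applied while listing a checkout's submodules with
`git config -f .gitmodules --list`:

import subprocess

# Invented example mapping from upstream URL prefixes to Trove equivalents.
URL_MAP = {
    'git://github.com/': 'git://git.baserock.org/delta/',
}

def rewrite_url(url):
    for prefix, replacement in URL_MAP.items():
        if url.startswith(prefix):
            return replacement + url[len(prefix):]
    return url

def submodule_urls(checkout_dir):
    # Yields (name, rewritten URL) for each submodule in .gitmodules.
    out = subprocess.check_output(
        ['git', 'config', '-f', '.gitmodules', '--list'],
        cwd=checkout_dir).decode()
    for line in out.splitlines():
        key, _, value = line.partition('=')
        if key.startswith('submodule.') and key.endswith('.url'):
            yield key[len('submodule.'):-len('.url')], rewrite_url(value)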
We could use git-replace to magically make git do the URL correction, but
then we'd have to handle replacements properly, rather than dropping them.
[PATCH 0/6] Firehose Implementation
by Lauren Perry
Hi all,
Back in November I sent an RFC for Firehose to the list. This preliminary patch series demonstrates the work I have done so far, based on the pre-existing Firehose prototype, and the steps taken towards automating the process.
Currently, Firehose is not fully automated; a number of steps still need to be performed manually, such as setting up a Firehose user, enabling the firehose.service and firehose.timer files, and installing the firehose shell for general use. These steps are demonstrated in firehose.configure (prelim), and I will post a follow-up patch with a README documenting the steps that currently need to be taken.
Firehose will also need a Gerrit user configured with an appropriate set of SSH keys to access Gerrit.
A Firehose machine has been demonstrated to work properly on OpenStack, though it requires the firehose command to be run manually.
Regards,
Lauren
Lauren Perry (6):
Add tools for installing Firehose and its dependencies
Add systemd styled timers to automate Firehose checking process
Removed morph after placing its functionality in firehose.sh
Modify firehose email configuration and push location to ensure
commits are pushed to the testgerrit server with an appropriate
commit message and branch name
Install githook to add Change-Id value to commit message
Add firehose.configure, containing the necessary instructions to run
firehose
firehose.configure | 39 ++++++++++++++++++++++++++
firehose.service | 12 ++++++++
firehose.sh | 21 ++++++++++++++
firehose.timer | 11 ++++++++
{plugin => firehose/plugin}/firehose_plugin.py | 20 +++++++++----
morph | 17 -----------
setup.py | 16 +++++++++++
7 files changed, 113 insertions(+), 23 deletions(-)
create mode 100644 firehose.configure
create mode 100644 firehose.service
create mode 100755 firehose.sh
create mode 100644 firehose.timer
rename {plugin => firehose/plugin}/firehose_plugin.py (92%)
delete mode 100755 morph
create mode 100644 setup.py
--
1.8.4
[PATCH 0/9 v3] Implement Mason with Zuul and turbo-hipster
by Michael Drake
sha1: b7c98ed9bf83c703846965144156a91fa2ebd0b3
repo: ssh://git@git.baserock.org/baserock/baserock/definitions.git
branch: baserock/tlsa/mason2
This patch series adds Zuul, turbo-hipster and their dependencies
to definitions and creates a Mason system containing them. It uses
mason.configure to insert some configuration files and set up some
SSH keys and the like. An Ansible playbook that runs on boot fills
in the template configuration files and starts the Zuul and
turbo-hipster services that are required.
It is important to note that there is some configuration that needs
to be done to the Gerrit instance you wish to deploy this against.
Namely, you will need to create a Mason user, add a Verified label
and allow the Mason user to vote on that label, and add the required
SSH key to the Mason user.
It has only been tested using OpenStack, as it is understood that this
is how the Baserock project intends to use it.
Most of this work was originally done by Adam Coldrick, and is based
on his branch at baserock/adamcoldrick/zuul-mason-v2. I have done
a few fixes, rebased it on a more recent master of definitions and
attended to previous review comments.
Adam Coldrick (5):
Move Paramiko into its own stratum from Ansible
Add a system for Mason
Make mason.configure install the new Mason config
Add an example cluster morphology for mason
Add a README for mason
Michael Drake (4):
Fix tar to build with acl in foundation stratum.
Add a stratum for Zuul, turbo-hipster and their dependencies
Add a stratum for the tests used by Mason
Update python-prettytable.
.../mason-system-x86_64-openstack-deploy.morph | 55 +++++
mason.configure | 140 +++++++-----
mason.configure.help | 127 +++++++++++
mason/README | 120 ++++++++++
mason/ansible/mason-setup.yml | 129 +++++++----
mason/{httpd.service => lighttpd.service} | 2 +-
mason/mason-generator.sh | 101 ---------
mason/mason-report.sh | 252 ---------------------
mason/mason.service | 10 -
mason/mason.sh | 93 --------
mason/mason.timer | 10 -
mason/share/lighttpd.conf | 21 ++
mason/share/mason.conf | 14 --
mason/share/os.conf | 1 -
mason/share/turbo-hipster-config.yaml | 47 ++++
mason/share/zuul-layout.yaml | 22 ++
mason/share/zuul-logging.conf | 44 ++++
mason/share/zuul.conf | 26 +++
mason/ssh-config | 2 +
mason/turbo-hipster.service | 10 +
mason/zuul-merger.service | 10 +
mason/zuul-server.service | 10 +
strata/baserock-ci-tests.morph | 14 ++
strata/baserock-ci-tests/system-tests.morph | 5 +
strata/python-common.morph | 4 +-
strata/python-paramiko.morph | 24 ++
strata/python-paramiko/pycrypto.morph | 3 +
strata/webtools/tar.morph | 11 +-
strata/zuul-ci.morph | 137 +++++++++++
systems/mason-system-x86_64-generic.morph | 58 +++++
30 files changed, 901 insertions(+), 601 deletions(-)
create mode 100644 clusters/mason-system-x86_64-openstack-deploy.morph
create mode 100644 mason.configure.help
create mode 100644 mason/README
rename mason/{httpd.service => lighttpd.service} (69%)
delete mode 100755 mason/mason-generator.sh
delete mode 100755 mason/mason-report.sh
delete mode 100644 mason/mason.service
delete mode 100755 mason/mason.sh
delete mode 100644 mason/mason.timer
create mode 100644 mason/share/lighttpd.conf
delete mode 100644 mason/share/mason.conf
create mode 100644 mason/share/turbo-hipster-config.yaml
create mode 100644 mason/share/zuul-layout.yaml
create mode 100644 mason/share/zuul-logging.conf
create mode 100644 mason/share/zuul.conf
create mode 100644 mason/ssh-config
create mode 100644 mason/turbo-hipster.service
create mode 100644 mason/zuul-merger.service
create mode 100644 mason/zuul-server.service
create mode 100644 strata/baserock-ci-tests.morph
create mode 100644 strata/baserock-ci-tests/system-tests.morph
create mode 100644 strata/python-paramiko.morph
create mode 100644 strata/python-paramiko/pycrypto.morph
create mode 100644 strata/zuul-ci.morph
create mode 100644 systems/mason-system-x86_64-generic.morph
--
2.1.4
[PATCH 0/4 v2] Put Mason's test code in the system-tests repo
by Michael Drake
sha1: 7fb68b7f377583dac40634338870583baaa2fe65
repo: ssh://git@git.baserock.org/baserock/baserock/system-tests.git
branch: baserock/mason-v2
This copies the scripts/release-* scripts from definitions into system-tests
and reworks them to run as turbo-hipster jobs.
These turbo-hipster jobs are example jobs which may be used by version 2
of Mason, which is a Gerrit-driven Zuul/turbo-hipster based automated
testing system that tests candidate branches before merging them to master.
While the existing tests from definitions are reused here, we may choose
to replace them or augment them with other different tests.
Most of this work was originally done by Adam Coldrick, and there were
some fixes from Sam Thursfield. I have taken the work from where Adam
left off (branch: baserock/adamcoldrick/mason-tests-v2) and done some
fixes, and improvements based on testing and previous review comments.
Adam Coldrick (2):
Put the existing code into trove-upgrades
Add a Zuul job runner.
Michael Drake (2):
Import code from the baserock definitions release-* scripts.
Add mason 'build', 'test' and 'upload' as Zuul plugins
mason/__init__.py | 5 +
mason/deployment.py | 272 +++++++++++++++++++++
mason/publishers.py | 256 +++++++++++++++++++
mason/runners.py | 161 ++++++++++++
mason/tests/__init__.py | 3 +
mason/tests/artifact_upload.py | 112 +++++++++
mason/tests/build.py | 128 ++++++++++
mason/tests/build_test.py | 188 ++++++++++++++
mason/util.py | 145 +++++++++++
config.py => trove-upgrades/config.py | 0
.../test_trove_upgrades.py | 0
util.py => trove-upgrades/util.py | 0
12 files changed, 1270 insertions(+)
create mode 100644 mason/__init__.py
create mode 100644 mason/deployment.py
create mode 100644 mason/publishers.py
create mode 100644 mason/runners.py
create mode 100644 mason/tests/__init__.py
create mode 100644 mason/tests/artifact_upload.py
create mode 100644 mason/tests/build.py
create mode 100644 mason/tests/build_test.py
create mode 100644 mason/util.py
rename config.py => trove-upgrades/config.py (100%)
rename test_trove_upgrades.py => trove-upgrades/test_trove_upgrades.py (100%)
rename util.py => trove-upgrades/util.py (100%)
--
2.1.4
Discussion about deploying web/database apps on Baserock
by Richard Dale
There was an interesting discussion yesterday on the #baserock IRC
channel about deploying production web/database apps on Baserock. I
thought I would post it here so that what was said doesn't get lost.
[16:13:45] <DavePage_> This is possibly a philosophical question, but if
you're deploying a web app server image with Baserock there are
basically three components - (a) the underlying OS, web server and
interpreter; (b) the web application code; and (c) the data the web app
operates on. Do people think it makes most sense to use the Baserock
tooling to create reproducible builds of a, a&b, or a&b&c?
[16:17:50] <paulsherwood> DavePage_: depends on the app... what kind of
'data' for example?
[16:18:34] <DavePage_> paulsherwood: As an example, a wiki instance.
[16:19:26] <persia> DavePage_: I'd do a&b&c, and use different sets of
strata for each. This would give me confidence that I could adjust the
strata for c, without changing a&b, or the strata for a, without
changing b&c, safely. If a change in a is low enough to require a
rebuild of b or c, morph will manage this for you, but your artifacts
will differ, so you can know (but never have to ship an A that provides
a different ABI than your B expects, etc.)
[16:20:05] -*- persia adds to the chorus for having a baserock-changes@
or similar list, rather than causing non-patch discussions to be lost in
baserock-dev again.
[16:20:31] <jmacs> a&b definitely. C gets interesting - the data would
be coming from some existing external source, not git, wouldn't it?
[16:20:53] <DavePage_> Unless you start keeping verbose MySQL dumps in
git, I guess
[16:21:00] <DavePage_> Which sounds like it'd get rapidly painful.
[16:21:12] <rdale> i could see how you could define a specific database
schema for 'c', and configuration, but not the data itself
[16:21:14] <paulsherwood> DavePage_: if the wiki is git-backed, a+b+c imo
[16:21:47] <DavePage_> paulsherwood: And if it isn't?
[16:22:02] <jmacs> Yes, I suppose you could just treat the wiki's
repository as a component and grab the latest version of it
[16:23:11] <jmacs> Dump existing MySQL database, then import it as
tarball, rather than a repository
[16:23:36] <DavePage_> And then rebuild the whole image when the web app
code (b) changes?
[16:25:10] <SotK> DavePage_: yes, but assuming you don't lose the cached
artifacts only the web app code and anything which depends on it will
actually be rebuilt
[16:26:06] <DavePage_> *nods*
[16:26:20] <DavePage_> I'm just trying to groupthink what "best
practice" is here
[16:26:24] <gary_perkins> In a recent project, the web-app code was just
ruby, so nothing to build as such. It might as well be just HTML.
[16:26:54] <DavePage_> Well, building can just be copying files into the
right place :)
[16:27:10] <DavePage_> See `man install` for details :)
[16:27:25] <gary_perkins> Seems a lot easier to maintain the web-app
code separately in a different repo and have the web-system pull from it
[16:27:41] <DavePage_> Or just lorry it from that different repo?
[16:29:49] <ssam2> DavePage_: in practice, I've found that it's much
quicker to develop the web application from a git checkout on the target
[16:30:29] <gary_perkins> DavePage_: That sounds like it could work, but
I'm open to suggestions. I've never actually set up lorrying myself.
[16:30:38] <DavePage_> ssam2: Yeah, I'm sure it's quicker, but that
doesn't mean it's necessarily right :) Particularly if you're not
actively developing, but providing Baserock systems using an upstream.
[16:30:40] <ssam2> DavePage: and so if the web application is reasonably
self-contained, I'd recommend deploying the base OS with Baserock, then
deploying the web-app after the fact with an Ansible script that you run
after deployment
[16:31:02] <ssam2> if the Ansible script is committed to Git, then the
process is just as reproducible as if you used Morph, but it makes it
much easier to hack on the application if you need to
[16:31:32] <ssam2> when you're done hacking, update the ansible script
and rerun it, and your running system is in the same state as if you had
redeployed it
[16:31:36] <ssam2> (modulo any bugs in Ansible)
[16:32:13] <ssam2> if you're not actively developing then there probably
isn't any advantage to that approach
[16:34:06] <DavePage_> ssam2: And your way does assume the upstream
doesn't disappear...
[16:34:20] -*- radiofree wonders what mason is up to these days
[16:35:00] <ssam2> DavePage_: in my case, I was the upstream, so was
pretty confident of that :)
[16:35:47] <DavePage_> ssam2: Yes, that won't always be the case though :)
[17:00:38] <persia> I missed the good bit, but I'm actively against
using a local checkout of code on production systems.
[17:01:06] <persia> One should develop it on a development system:
running `morph update` to update a target with the new changes in the
updated git repo is relatively easy and smooth.
[17:01:38] -*- persia has seen too many cases where the production
checkout was dirty, causing all sorts of headaches when troubleshooting
production issues
Lorrying Tizen-IVI
by Richard Dale
I've been having some discussions about lorrying the Tizen-IVI repos,
and we wondered if it would make sense to lorry them in the Baserock.org
trove. See the attached 'tizen-ivi.lorry' file for what it would look like.
The Tizen-IVI project complements the Genivi work, and contains some or
all of their repos. I haven't removed Tizen repos that are duplicated
from Genivi in the attached lorry.
Would there be any interest in lorrying the Tizen repos, and possibly
building a Baserock flavour of Tizen-IVI?
[PATCH 0/2] Baserock on google compute engine.
by mark.doffman@codethink.co.uk
From: Mark Doffman <mark.doffman@codethink.co.uk>
repo: git://git.baserock.org/baserock/baserock/definitions
branch: baserock/markdoffman/google-compute-engine
commit: ce7ea2923f1dc8c8fbd5600f14e356ffed99072b
target: master
This is a set of changes that creates Baserock images that can be run
on Google Compute Engine.
The first patch adds the necessary kernel config. The second provides a
stratum for Google's instance configuration software and some example
system and cluster files.
Why would anyone want this?
---------------------------
The cloud is super cheap. It's actually a pretty nice way to run a Baserock
devel instance, much easier than setting up KVM. Possibly easier than VirtualBox.
How do I try it out?
--------------------
I'm glad you asked. In Blue Peter style, here is one I made earlier:
http://storage.googleapis.com/baserock_images/devel-system-x86_64-gce.tar.gz
Here is the 10 minute quickstart for google compute:
https://cloud.google.com/compute/docs/quickstart
Once you have the tools installed and your account set up and authorized you can do this:
gcloud compute images create baserock-devel-system \
--source-uri=http://storage.googleapis.com/baserock_images/devel-system-x86_64-gce.tar.gz
gcloud compute instances create baserock-devel --image baserock-devel-system
gcloud compute ssh baserock-devel
You are now in a Baserock system sitting in Google's cloud. Yay!
How do I BUILD a gce baserock image?
-----------------------------------
morph deploy clusters/google-compute-engine.morph
This write extension doesn't upload it to Google Storage, nor does it
create any instances right now.
To upload it to Google Storage, which is needed to create an image, run:
gsutil mb gs://baserock_image_bucket
gsutil cp devel-system-x86_64-gce.tar.gz gs://baserock_image_bucket
What works, what doesn't?
-------------------------
Images run on Google Compute Engine. Some of Google's tools still don't work. You can't
create images from currently running instances. You can't use Google's 'safe-mount'
scripts, and probably some others.
Google's udev rules for attaching new disks work fine. (That's pretty important.)
The write extension doesn't currently do much. It just compresses the raw disks in a way
that is acceptable to Google's software for image creation. It doesn't do any of the
work of creating images or instances.
There are probably a few quirks. One you will notice is that
'gcloud compute ssh instance-name' always exits with an error (ssh 255).
Google's account management scripts assume a bash shell; rather than modifying this
or going crazy, I added a bash skel from Ubuntu.
Mark Doffman (2):
Add necessary kernel config for google compute engine.
Add strata, systems and cluster morphologies for google compute engine.
clusters/google-compute-engine.morph | 14 +++
google-compute-engine.configure | 29 +++++
google-compute-engine.write | 64 +++++++++++
google-compute-engine/etc/skel/.bash_logout | 7 ++
google-compute-engine/etc/skel/.bashrc | 117 +++++++++++++++++++++
google-compute-engine/etc/skel/.profile | 22 ++++
google-compute-engine/manifest | 5 +
.../usr/lib/systemd/system/setmtu@.service | 10 ++
.../bsp-x86_64-generic/linux-x86-64-generic.morph | 3 +
strata/google-compute-engine.morph | 16 +++
strata/google-compute-engine/google-daemon.morph | 5 +
.../google-startup-scripts.morph | 5 +
systems/devel-system-x86_64-gce.morph | 62 +++++++++++
13 files changed, 359 insertions(+)
create mode 100644 clusters/google-compute-engine.morph
create mode 100644 google-compute-engine.configure
create mode 100755 google-compute-engine.write
create mode 100644 google-compute-engine/etc/skel/.bash_logout
create mode 100644 google-compute-engine/etc/skel/.bashrc
create mode 100644 google-compute-engine/etc/skel/.profile
create mode 100644 google-compute-engine/manifest
create mode 100644 google-compute-engine/usr/lib/systemd/system/setmtu@.service
create mode 100644 strata/google-compute-engine.morph
create mode 100644 strata/google-compute-engine/google-daemon.morph
create mode 100644 strata/google-compute-engine/google-startup-scripts.morph
create mode 100644 systems/devel-system-x86_64-gce.morph
--
2.3.0
Issue with 'master' of definitions.git mounting external disks
by Sam Thursfield
Hi
I've noticed a possible issue with systems built from 'master' of
definitions. I found that on my Jetson with an external SATA drive, an
older system version works fine, but 'master' fails to mount the
external SATA drive at boot time. I lost quite a bit of time thinking
this was a hardware issue. Since it only occurs in one system version,
it must be a software issue.
I don't have time to dig out the actual SHA1s at the moment, or to
investigate further. I'm sending this just in case anyone else has
noticed new disk mounting failures after upgrading -- make sure you try
rolling back to the old system version before you start checking
hardware etc. I think it could be a bug in the version of systemd we are
using (we updated to 'master' recently) but I'm not sure about that.
Sam
--
Sam Thursfield, Codethink Ltd.
Office telephone: +44 161 236 5575
[RFC] Disambiguate definitions
by Paul Sherwood
Hi folks,
it appears that we have several examples of the same name being used to
denote both a stratum and a chunk inside it:
ruby
swift
lorry
ansible
I think that this is confusing, and therefore propose that we could
standardise as
ruby-common, ruby
swift-common, swift
lorry-common, lorry
ansible-common, ansible
or alternatively
ruby, ruby-chunk
swift, swift-chunk
lorry, lorry-chunk
ansible, ansible-chunk
I'd also like to suggest that we could aim for unique names in a given
definitions.git set, but that would require more work and probably be
more invasive.
br
Paul