This set of patches adds functionality to Lorry Controller to notice
jobs whose MINION (worker process running the job) has disappeared;
the patches introduce the term "ghost job" for this. This allows such
jobs to be cleaned up automatically, and stops them from preventing
new jobs from running once the table of jobs is filled with ghosts.
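The patches themselves define the exact semantics; as a rough sketch of the idea (the field and table names here are hypothetical, not the actual lorrycontroller schema), a ghost is a running job whose MINION has not reported progress within a timeout:

```python
import time

GHOST_TIMEOUT = 3600  # seconds; the series adds a --ghost-timeout option

def find_ghost_jobs(jobs, now=None):
    """Return running jobs whose MINION has gone silent.

    `jobs` is a list of dicts with hypothetical fields: each running
    job records when its MINION last reported progress.
    """
    now = now if now is not None else time.time()
    return [job for job in jobs
            if job['state'] == 'running'
            and now - job['last_updated'] > GHOST_TIMEOUT]

def remove_ghost_jobs(jobs, now=None):
    """Drop ghosts so they stop filling the job table."""
    ghosts = find_ghost_jobs(jobs, now)
    return [job for job in jobs if job not in ghosts]
```

With the systemd timer unit from the series triggering such a cleanup periodically, stale entries no longer starve new jobs.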
Lars Wirzenius (5):
Update ARCH about new API call (remove-ghost-jobs)
Add yarn tests for removing ghost jobs
Add --ghost-timeout option to WEBAPP
Add systemd units to remove ghost jobs automatically
ARCH | 5 ++
lorry-controller-webapp | 10 +++
lorrycontroller/__init__.py | 1 +
lorrycontroller/removeghostjobs.py | 65 +++++++++++++++++++
units/lorry-controller-remove-ghost-jobs.service | 9 +++
units/lorry-controller-remove-ghost-jobs.timer | 6 ++
yarns.webapp/040-running-jobs.yarn | 79 ++++++++++++++++++++++++
7 files changed, 175 insertions(+)
create mode 100644 lorrycontroller/removeghostjobs.py
create mode 100644 units/lorry-controller-remove-ghost-jobs.service
create mode 100644 units/lorry-controller-remove-ghost-jobs.timer
Anywhere you refer to a system or stratum morphology, you can specify
its file path with the "morph" field.
The path must be relative, rooted at the top level of the repository. It
has been suggested to me that we should allow morphologies to refer to
each other with relative paths, so you could keep all the chunk
morphologies for a stratum alongside it.
This raises a question about the current implementation: it would make
sense for morphology paths starting with / to be rooted at the top of
the repository, while ones without it would be relative, which would
mean an API change to get the right semantics for relative paths.
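To make the rooting rule concrete, here is a minimal sketch (my own, not code from the series) of the resolution being discussed: a leading / roots the path at the top of the repository, anything else is relative to the referring morphology's directory:

```python
import posixpath

def resolve_morph_path(referrer_dir, morph):
    """Resolve a "morph" field value to a repository-relative path.

    Hypothetical sketch of the proposed rule: values starting with '/'
    are rooted at the top of the repository; other values are relative
    to the directory of the morphology that refers to them.
    """
    if morph.startswith('/'):
        return morph.lstrip('/')
    return posixpath.normpath(posixpath.join(referrer_dir, morph))
```

For example, `resolve_morph_path('strata/core', 'chunks/foo.morph')` yields `strata/core/chunks/foo.morph`, while a `/systems/x.morph` reference resolves the same way from anywhere.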
You can also do this to refer to chunk morphologies in strata, but you
need both "name" and "morph": "name" is used by `morph edit` to work
out which chunk morphology you meant.
I'm not entirely convinced that allowing this is a good idea, since the
only moving of chunk morphologies we care about is the move into the
definitions repository, and checking whether a reference looks like a
path would be a useful way, during the transition, to decide whether to
look in the chunk repository or the definitions repository.
Similarly, a decision on whether we want to use `morph edit CHUNK_NAME`
or `morph edit CHUNK_PATH` will guide whether I should discard the
change that makes it look up chunks by name rather than by adjusted
path.
I updated the yarns to use morphologies from file paths; this changed a
lot, since the old assumption was deeply embedded. While I haven't
tested it, we should also be able to have morphologies that don't end
in `.morph`: they can have any suitable extension, or even none at all
if they are in a subdirectory.
The limitation that a morphology must be in a subdirectory or have an
extension is required to allow the transition away from just using the
"name".
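A minimal sketch of what "looks like a path" could mean during that transition (my guess at the heuristic, not code from the series): a reference counts as a path, and is looked up in the definitions repository, if it has an extension or lives in a subdirectory:

```python
def looks_like_path(ref):
    """Hypothetical transition heuristic: treat `ref` as a file path
    (look it up in the definitions repository) rather than a bare
    chunk name (look it up in the chunk repository)."""
    return '/' in ref or '.' in ref

# 'foo' is a bare name; 'foo.morph' and 'strata/foo' look like paths.
```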
Richard Maw (8):
Remove code made vestigial from the `morph edit` rewrite
Remove check on checkout/branch that there are systems
Exorcise some old and unused commands
Add class hierarchies for Morphology load errors
Don't assume morphologies end in .morph
Use exact filenames to refer to morphology files
Don't check if a file exists before trying to read it
yarns: Adapt to put morphologies in subdirs
morphlib/__init__.py | 1 -
morphlib/app.py | 28 +-
morphlib/artifactresolver.py | 6 +-
morphlib/buildcommand.py | 2 +-
morphlib/morphloader.py | 62 +-
morphlib/morphologyfactory.py | 32 +-
morphlib/morphologyfinder.py | 71 -
morphlib/morphologyfinder_tests.py | 111 -
morphlib/morphset.py | 49 +-
morphlib/morphset_tests.py | 29 +-
morphlib/plugins/branch_and_merge_new_plugin.py | 91 +-
morphlib/plugins/branch_and_merge_plugin.py | 1906 -----------
morphlib/plugins/build_plugin.py | 15 +-
morphlib/plugins/cross-bootstrap_plugin.py | 2 +-
morphlib/plugins/deploy_plugin.py | 13 +-
morphlib/plugins/list_artifacts_plugin.py | 2 +-
morphlib/source.py | 2 +-
morphlib/sysbranchdir.py | 18 +-
morphlib/util.py | 20 +-
morphlib/util_tests.py | 29 +-
scripts/edit-morph | 6 +-
tests.branching/tag-fails-if-tag-exists.exit | 1 -
tests.branching/tag-fails-if-tag-exists.script | 33 -
tests.branching/tag-fails-if-tag-exists.stderr | 1 -
.../workflow-separate-stratum-repos.script | 72 -
tests.branching/workflow.script | 38 -
tests.branching/workflow.stdout | 0
tests.merging/basic.script | 82 -
tests.merging/basic.stdout | 5 -
tests.merging/conflict-chunks.script | 77 -
tests.merging/conflict-chunks.stderr | 1 -
tests.merging/conflict-chunks.stdout | 4 -
tests.merging/conflict-morphology-kind.exit | 1 -
tests.merging/conflict-morphology-kind.script | 32 -
tests.merging/conflict-morphology-kind.stderr | 1 -
tests.merging/conflict-stratum-field-ordering.exit | 1 -
.../conflict-stratum-field-ordering.script | 98 -
.../conflict-stratum-field-ordering.stderr | 1 -
.../conflict-stratum-field-ordering.stdout | 2 -
tests.merging/from-branch-not-checked-out.script | 32 -
tests.merging/from-branch-not-checked-out.stderr | 1 -
tests.merging/move-chunk-repo.exit | 1 -
tests.merging/move-chunk-repo.script | 55 -
tests.merging/move-chunk-repo.stderr | 1 -
tests.merging/rename-chunk.script | 58 -
tests.merging/rename-stratum.exit | 1 -
tests.merging/rename-stratum.script | 44 -
tests.merging/rename-stratum.stderr | 1 -
tests.merging/setup | 112 -
tests.merging/teardown | 22 -
.../warn-if-merging-petrified-morphologies.script | 34 -
.../warn-if-merging-petrified-morphologies.stdout | 1 -
tests/show-dependencies.stdout | 3358 ++++++++++----------
yarns/architecture.yarn | 8 +-
yarns/branches-workspaces.yarn | 52 +-
yarns/building.yarn | 4 +-
yarns/deployment.yarn | 130 +-
yarns/implementations.yarn | 126 +-
yarns/regression.yarn | 37 +-
yarns/splitting.yarn | 99 +-
60 files changed, 2016 insertions(+), 5106 deletions(-)
delete mode 100644 morphlib/morphologyfinder.py
delete mode 100644 morphlib/morphologyfinder_tests.py
delete mode 100644 morphlib/plugins/branch_and_merge_plugin.py
delete mode 100644 tests.branching/tag-fails-if-tag-exists.exit
delete mode 100755 tests.branching/tag-fails-if-tag-exists.script
delete mode 100644 tests.branching/tag-fails-if-tag-exists.stderr
delete mode 100755 tests.branching/workflow-separate-stratum-repos.script
delete mode 100755 tests.branching/workflow.script
delete mode 100644 tests.branching/workflow.stdout
delete mode 100755 tests.merging/basic.script
delete mode 100644 tests.merging/basic.stdout
delete mode 100755 tests.merging/conflict-chunks.script
delete mode 100644 tests.merging/conflict-chunks.stderr
delete mode 100644 tests.merging/conflict-chunks.stdout
delete mode 100644 tests.merging/conflict-morphology-kind.exit
delete mode 100755 tests.merging/conflict-morphology-kind.script
delete mode 100644 tests.merging/conflict-morphology-kind.stderr
delete mode 100644 tests.merging/conflict-stratum-field-ordering.exit
delete mode 100755 tests.merging/conflict-stratum-field-ordering.script
delete mode 100644 tests.merging/conflict-stratum-field-ordering.stderr
delete mode 100644 tests.merging/conflict-stratum-field-ordering.stdout
delete mode 100755 tests.merging/from-branch-not-checked-out.script
delete mode 100644 tests.merging/from-branch-not-checked-out.stderr
delete mode 100644 tests.merging/move-chunk-repo.exit
delete mode 100755 tests.merging/move-chunk-repo.script
delete mode 100644 tests.merging/move-chunk-repo.stderr
delete mode 100755 tests.merging/rename-chunk.script
delete mode 100644 tests.merging/rename-stratum.exit
delete mode 100755 tests.merging/rename-stratum.script
delete mode 100644 tests.merging/rename-stratum.stderr
delete mode 100755 tests.merging/setup
delete mode 100755 tests.merging/teardown
delete mode 100755 tests.merging/warn-if-merging-petrified-morphologies.script
delete mode 100644 tests.merging/warn-if-merging-petrified-morphologies.stdout
After my first RFC email about Trove configuration, many problems and
limitations have come up in my development which have made me change my
model.
Following my previous model
In my last model I explained my desire to remove all the configuration
done in the trove.configure extension. That would be possible by
creating a new systemd unit to run before trove-early-setup, to
configure the system in the same way.
This new unit, which I called trove-early-configure, would read the
configuration it needs from a file (e.g. /baserock/trove/trove.conf)
and then configure the trove to leave everything ready for
trove-early-setup. Some of this configuration implied modifying the
trove-early-setup unit.
After this, I came to some conclusions:
1 - Creating a separate unit is undesirable: it is not clear what has
to be done in each unit.
2 - Performing substitutions on installed unit files in running systems
is not a good idea.
3 - Modifying files in /usr or in /baserock is not a good decision; I
should treat everything outside /etc and /var as read-only.
These problems seemed easy to fix, so I started working on how to
change the model to fit the conclusions above.
Le unit: One unit to configure Trove
After all the comments I received on the email, I decided that all the
configuration needed to set up a Trove should live together in one
script, executed from a single unit and run only once. Doing this fixes
the problem mentioned in conclusion No 1.
I also modified the trove-early-setup script to include all the
configuration needed by the Trove (that is, the configuration
previously done in trove.configure and trove-early-setup). The script
now reads its configuration from /etc/trove/trove.conf, which means it
no longer needs to perform substitutions in other units, avoiding the
problem explained in conclusion No 2.
And, to finish the implementation of this model and fix the problem
described in conclusion No 3, I moved the files I was modifying from
/usr to /etc.
- Some files came from gitano, and upgrading gitano to the latest
version allowed me to move its skeleton files to /etc.
- Other files are migration scripts located in /home/shares; it should
be easy to move them to /etc.
But WAIT, doesn't that mean we were already installing more than one
unit to configure a Trove? And will upgrades work with this model?
Migrations and "why my model is wrong?"
I have to admit I forgot to think about the migration scripts, and
about how my model would handle them. Once this occurred to me, I
decided to check what my model would do if I tried to upgrade an old
Trove.
If, for example, I try to upgrade a Trove with the old lorry
controller, the configuration needed for the new lorry controller,
which is currently generated in the trove.configure extension, would
(in my model) be generated in trove-early-setup at first boot; but that
script won't be called, because the Trove is only configured once, so I
would have to put this specific configuration in a migration script.
* But shouldn't I do the same (create a migration script and unit) for
all the changes we have made in trove.configure since it was created?
Yes I should.
* And doesn't that mean we will have to create a new systemd unit and a
new script for every further change to the trove configuration?
Yes it does.
* And is it possible that we will end up with migration scripts
depending on other migration scripts, making the creation of new
migration scripts complex?
Yes it is.
These conclusions made me realize that my new model makes things more
complex than before. So I decided to rethink it.
Rethinking the model
If we don't want to end up with a migration script for every change we
make to the configuration needed for Trove, we have some options:
1 - Following the model described in "Le unit: ...": if I move all the
configuration into the trove-early-setup script and also fold the
migration scripts into it, then trove-early-setup must run on every
boot and be clever enough to configure only the things that need
configuring.
2 - Divide the configuration into scripts, each performing one
indivisible part of it. A single systemd unit can call them; they have
to run in the right order, and without breaking things if they run more
than once.
3 - Implement a Baserock configuration manager or integrate an existing one.
In my ideal world of configuration, we should be able to:
- Specify all the configuration in one place.
- Add more configuration scripts whenever needed, confident that a new
script will run both after a fresh deployment and after an upgrade.
While I was in the "Rethinking the model" phase, somebody suggested
that I take a look at "declarative configuration", and I found it
pretty interesting. If we integrate a configuration manager into
Baserock, we can change the Trove configuration model to something
simpler.
After researching one concrete declarative configuration management
tool (Ansible), I can say that with it we can:
- create simple pieces of configuration (scripts) to configure the
systems;
- create dependencies between the pieces of configuration;
- use its facilities to create idempotent scripts.
This is really close to my "ideal world", if not exactly what I was
looking for from the beginning. I would like to hear opinions about
this, and see whether including Ansible as a configuration management
tool in some of our systems makes sense. Meanwhile I will create a
Trove with Ansible integrated and working as a proof of concept.
Fix problem with LC static status page being truncated when disk is full.
Lars Wirzenius (1):
Write static status HTML page via temporary file
lorrycontroller/status.py | 12 +++++++++++-
1 file changed, 11 insertions(+), 1 deletion(-)
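The standard pattern behind this kind of fix (a sketch assuming POSIX rename semantics; the real change lives in lorrycontroller/status.py and may differ in detail) is to write to a temporary file in the same directory and rename it into place, so readers never see a half-written page:

```python
import os
import tempfile

def write_atomically(path, content):
    """Write `content` to `path` via a temporary file plus rename.

    If the disk fills mid-write, the write or fsync fails and the old
    file is left intact instead of being truncated.
    """
    dirname = os.path.dirname(path) or '.'
    fd, tmp = tempfile.mkstemp(dir=dirname)
    try:
        with os.fdopen(fd, 'w') as f:
            f.write(content)
            f.flush()
            os.fsync(f.fileno())
        os.rename(tmp, path)  # atomic on POSIX within one filesystem
    except BaseException:
        os.remove(tmp)
        raise
```

The temporary file must be created in the same directory as the target, since rename() is only atomic within a single filesystem.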
From: Mark Doffman <mark.doffman(a)codethink.co.uk>
Add a script to create the lorry file for Openstack
services as well as the openstack.lorry file generated
by this script.
There are a lot of projects within Openstack, and they are being added
fairly frequently. It seems useful to have a script that can generate a
lorry for the current set of projects.
Mark Doffman (1):
Lorry Openstack projects.
open-source-lorries/openstack.lorry | 211 ++++++++++++++++++++++++++++++++++++
scripts/generate-openstack.lorry.py | 75 +++++++++++++
2 files changed, 286 insertions(+)
create mode 100644 open-source-lorries/openstack.lorry
create mode 100755 scripts/generate-openstack.lorry.py
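The general shape of such a generator (a hypothetical sketch, not the actual generate-openstack.lorry.py; real .lorry files in the lorries repository define the exact format) is to take a list of project names and emit one git mirror entry per project as JSON:

```python
import json

def make_lorry(projects, url_template):
    """Build a lorry dict with one git mirror entry per project name.

    `url_template` and the entry layout are assumptions for
    illustration, not necessarily what the script produces.
    """
    return {
        'openstack/%s' % name: {
            'type': 'git',
            'url': url_template % name,
        }
        for name in sorted(projects)
    }

lorry = make_lorry(['nova', 'neutron'],
                   'git://git.openstack.org/openstack/%s')
print(json.dumps(lorry, indent=4, sort_keys=True))
```

Regenerating the file as projects are added is then a matter of refreshing the input list and re-running the script.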
Fixes builds of gstreamer 0.10 with current versions of glib
The patch responsible for this is "Automatic update of common
submodule" by Tim-Philipp Müller, but I thought it best to merge all
the changes to 0.10, since we're not going to be getting many more of
them...
Kerrick Staley (1):
parse: make grammar.y work with Bison 3
Mark Nauwelaerts (1):
docs: no TOC design to dist
Sebastian Dröge (1):
basetransform: Fix handling of reverse caps negotiation if this
element alone is not enough to do the transform
Tim-Philipp Müller (4):
gst-uninstalled: gst-openmax -> gst-omx
configure: replace deprecated AM_CONFIG_HEADER with AC_CONFIG_HEADERS
Automatic update of common submodule
tests: remove silly test_fail_abstract_new check
common | 2 +-
configure.ac | 2 +-
docs/design/Makefile.am | 1 -
gst/parse/grammar.y | 2 +-
libs/gst/base/gstbasetransform.c | 3 ++-
scripts/gst-uninstalled | 2 +-
tests/check/gst/gstobject.c | 25 -------------------------
7 files changed, 6 insertions(+), 31 deletions(-)