hi i would like to at least try to provide insight
but not exactly insight, oh deer...
i use gitlab for reviewing code every day at my day job,
it is mostly good,
i am especially fond of the :tropical_fish: and :smile_cat: emoticons,
they make life better
and help me stay positive
the review model encourages the force push,
if you don't know what a force push looks like it looks like
https://i.imgur.com/XFQLB.jpg - hahahaha, but the commits from my v1
patchset are in that house! ;.;
so we don't do that on our project,
we manually submit a merge request for each revision of
what i'm suddenly going to call a candidate,
this means we can keep track of what happened,
gitlab doesn't provide a native way of showing diffs between
merge requests like gerrit does (if it does then i'd like to know!)
but that is actually okay because they are just refs so
you can git diff richardipsum/CHUNKYBACON_v1 richardipsum/CHUNKYBACON_v2
from your terminal, which may also be in colour.
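if you diff candidates often, that terminal command can be wrapped in a tiny helper; a sketch (the author/NAME_vN branch-naming convention is the one from the text above, adjust it to whatever your project actually uses):

```python
def candidate_diff_cmd(author, name, v1, v2):
    """Build the `git diff` argv comparing two versions of a review
    candidate, using the hypothetical author/NAME_vN branch naming
    convention described in the text."""
    return ["git", "diff", "--color",
            f"{author}/{name}_v{v1}",
            f"{author}/{name}_v{v2}"]

cmd = candidate_diff_cmd("richardipsum", "CHUNKYBACON", 1, 2)
print(" ".join(cmd))
# inside a clone that has fetched both branches you would then run:
# subprocess.run(cmd, check=True)
```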
some more notes,
gitlab pros (COMPARED TO GERRIT):
* it's written in ruby
* it's much prettier than gerrit
* it has :tropical_fish: emoticons
* it has simpler ci (actually i've never used this
i just read that it does)
* probably better support for submitting groups of patches
(though gerrit has pretty much sorted this out now)
gitlab cons (COMPARED TO GERRIT):
* it's written in ruby
* it has no native concept of a patchset (as gerrit does)
* having no native concept of a patchset,
it has no native way to diff between different versions of a patch
* CANNOT COMMENT ON NON-DIFF CONTENT (THIS IS SUPER ANNOYING
AND WE MUST BRIBE THEM TO FIX IT)
* No support for "BATCH" comments,
gerrit lets me save comments in a DRAFT and then post them
all when I'm sure they're sane, this is how review works,
(if gitlab has this then I am too dumb to find it)
by the end of a patch series I might realise my comment is wrong,
gerrit's model allows for this in a way that preserves my ego
which as you can see is important to me.
* i *think* it stores all the metadata in a database:
* if it does then that has replication/distribution IMPLICATIONS
which is why NOTEDB was made by the gerrit people at GOOGLE,
btw i even wrote a library for working with this,
USE AT YOUR OWN RISK,
I WONT BE HELD LIABLE FOR ANYTHING,
YOU CAN ALSO BUY ME A SANDWICH,
I LIKE PLOUGHMANS WITH LETTUCE, PLENTY OF PICKLE.
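since notedb keeps review metadata in ordinary git refs, you can fetch and inspect it with plain git; a small sketch of the ref layout (the change number here is just an arbitrary example):

```python
def notedb_meta_ref(change_number):
    """Sketch of NoteDB's ref layout: each change's review metadata is
    an ordinary git ref, sharded by the last two digits of the change
    number (refs/changes/<NN>/<change>/meta)."""
    return f"refs/changes/{change_number % 100:02d}/{change_number}/meta"

# e.g. change 1234 (arbitrary example number):
print(notedb_meta_ref(1234))  # → refs/changes/34/1234/meta
# you can then `git fetch origin <that ref>` and inspect its history
```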
okay so i hope that helped!
I've been looking into continuous integration for Baserock and noticed
there doesn't seem to be much w.r.t. Gerrit and CI implemented right
now. I'm proposing that we take another look at whether Gitlab would
provide a better solution to code review and continuous integration than
it has in the past, and whether it would be better to consolidate down
to one hub for everything Baserock-related.
To start off a discussion, I've listed a few pros and cons of each piece
of software; if there are any mistakes I'd appreciate corrections, as
well as additions to the pros and cons of each.
Gerrit pros:
* The work to implement it is already done
* People have experience using it
* Gerrit -> Trove mirroring is already implemented
Gerrit cons:
* No continuous integration implemented right now, and it would take
time to do so
* User interface is not intuitive for new users, and some current users
Gitlab pros:
* Relatively quick and easy to set up
* Continuous integration is implemented, and the built-in gitlab-ci.yml
file can be configured per patch to test building of a specific system,
cluster or stratum, with builds triggering when the patch is pushed
* User interface is easier for new users to get to grips with
* Contains wiki functionality and issue tracking, as well as
repositories, potentially creating a single base for all Baserock
information, issue tracking, code and code review, and CI
* Tracks patches
Gitlab cons:
* Gitlab -> Trove mirroring is not implemented right now, but patches to
do this are available on gerrit.baserock.org
* We already have Gerrit, Storyboard, wiki.baserock.org and Trove that
people are familiar with
* Everything is visible on web browser (changes to the built in wiki,
code and CI can still be pushed via the command line)
Let me know what your thoughts are.
The main improvements this time are
- cache-key calculation now makes better use of the .trees file (no
  need for check_trees)
- added a release-note config option, which creates a release-note with
  - all definitions changes since last release
  - all new commits (for all components) since last release
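The "all new commits since last release" part presumably boils down to a git log over the tag range; a minimal hypothetical sketch (not YBD's actual code, function name invented):

```python
def release_note_cmd(last_tag):
    """Hypothetical helper: build the git argv listing one-line
    summaries of every commit made since the last release tag."""
    return ["git", "log", "--oneline", f"{last_tag}..HEAD"]

# e.g., with an example tag name:
print(release_note_cmd("v1.0"))
# run via subprocess.run(...) inside each component's repo
```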
54ee540 Disable release-note by default
0a67719 Add release_note.py
4e7eaf8 Remove unused lines
02b79c4 Rename dn['build'] => dn['checkout']
7c4823f Rename definitions.py => morphs.py
cd2e3cd Separate out Morphs class and new Pots class
098063f No need for check_trees function, so remove it
c43828d Add 'get_last_tag' and 'run' functions
27d1879 Some renaming, lose a line
773a853 Revert concourse-specific internal data-model change
6e2cc03 Lose a line
c32a92f Swtich from [0, 1, 2] etc to range()
5cdc018 Rename component => dn in assembly.install_contents
2345535 Rename component => dn in assembly.install_dependencies
e0ffc6a Rename component => dn in assembly.compose
978ce1a Rename component => dn in assembly.build
c19764f Deal with not-yet declared foo mentioned as build-depend
8e354b1 Fix schema-validation (check or not) logic
cfa515e Write out internal yaml for no-build mode too
06ddf1b Various fleck8 fixes
f869d8c Fix missed import of app.exit
Quite a lot of cleanups, not much new functionality:
- settled on 'dn' as a generic variable name for a definition
- WIP parse-only mode now generates a Concourse pipeline which displays
(but doesn't run)
- log counts for the systems, strata and chunks involved in making the
target
3cd67d7 Always log counts
815ee7d Shorten some lines
3b24123 Revert "Shorten some lines"
ec035e2 Shorten some lines
27b15fe WIP Concourse pipeline for parse-only
3c2adc8 Show counts for chunks, strata, systems
fbceb4d Tidy up for legibility
b2c56f5 Stop false warnings on old artifact-versions
3b3eae1 Fix #225 too many pies
26f2dfa Simplify some definitions code
a883cd4 Remove all the 'defs' parameters
57bb1c9 Rename definition => dn
d69e06b Rename 'this' => dn
3b55265 Only upload chunks by default
f6bfb0e Artifact version 5 - standardising 'path' key
6c464db De-morph paths from artifact-version 5
bfb93d9 As suggested by rjek, f.write(response.content)
ab3909a Lock down artifact-version check in integer range
5a29b4d Re-enable default-splits logic
95b75bc Fix for #223 dd checking of each definition name vs path
1754181 Add 'cleanup' option so we can leave debris around if required
After spending some time getting to the bottom of YBD issue 224, I have
come to the conclusion that artifact splitting is broken in YBD, and I
wonder if it's broken at an even deeper level.
Pasting my conclusion from the following comment:
Artifact splitting needs to be implemented somewhere after system
integration commands are run, or alternatively, system integration
commands are not an option for split system artifacts.
Currently we do something like the following for each artifact we need:

    install_contents(component)  # stages the dependencies
    build(component)             # runs commands to build the artifact
Here we have a special case in install_contents() which says: if we are
going to build a 'system' and splitting rules are active, then only
stage some of the system. This is wrong, because we need the full
system, or an explicitly declared subset of the system which includes
everything we're going to need in order to run the system-integration
commands.

This might not be completely broken in other definitions build tools,
but the above also leads me to suspect it is entirely broken; artifact
splitting should be performed on an existing system artifact, possibly
as part of the deployment.
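To make that proposed ordering concrete, here is a toy sketch (all names and rules invented for illustration, this is not YBD's API): splitting operates on the file list of an already-built, already-integrated system artifact.

```python
def split_artifact(files, rules):
    """Toy illustration (not YBD code): partition the file list of a
    *finished* system artifact by split rules, one (suffix, predicate)
    pair per split; the full artifact itself is left untouched."""
    return {suffix: sorted(f for f in files if match(f))
            for suffix, match in rules.items()}

# hypothetical file list of a completed, integrated system artifact
system_files = ["/usr/bin/sh", "/usr/include/stdio.h",
                "/usr/share/doc/README"]
rules = {
    "-runtime": lambda f: f.startswith("/usr/bin"),
    "-dev": lambda f: f.startswith("/usr/include"),
    "-docs": lambda f: f.startswith("/usr/share/doc"),
}
# splitting happens only *after* the complete system (including
# system-integration commands) has been assembled
print(split_artifact(system_files, rules))
```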
One detail in particular which stands out to me, is the cache key for
a system artifact which was created with 'splits'
o Should we be caching artifacts of partial systems? Why?
o Is the cache key for a 'runtime' of a system properly created
  based on which artifacts are declaratively included via splitting
  options? Or do they differ arbitrarily because of some hash of
  the current ybd configuration?
I can't currently answer the above, but I think splitting must be
something which one would perform on an existing complete system
artifact, and the output of the split should either be a deployment
step, or additional artifacts which would be created *after*
completing the build of a system.
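One way to pin down the cache-key worry: a hypothetical scheme (not YBD's actual one) where a split artifact's key depends only on its parent system's key plus the declared splits, so it can never vary with local ybd configuration:

```python
import hashlib
import json

def split_cache_key(parent_system_key, declared_splits):
    """Hypothetical: derive a split artifact's cache key only from the
    parent system's key and the *declared* split contents, never from
    the local ybd configuration."""
    blob = json.dumps({"parent": parent_system_key,
                       "splits": sorted(declared_splits)},
                      sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

k1 = split_cache_key("abc123", ["runtime", "docs"])
k2 = split_cache_key("abc123", ["docs", "runtime"])
print(k1 == k2)  # declaration order must not change the key
```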
It may make more sense to not have any configuration in ybd.conf for
the default-splits, and instead have an option of whether to:
a.) Build split system artifacts
b.) Build split chunk artifacts
When building split system artifacts we would get the following
example artifacts in *addition* to the base-system-x86_64-generic,
which must be built anyway:
base-system-x86_64-generic-runtime - The runtime system artifact
base-system-x86_64-generic-docs - The docs artifact, always built
base-system-x86_64-generic-dev - The headers, pkg-config, etc,
However: base-system-x86_64-generic-minimal seems probably irrelevant,
unless I misunderstand it. By its name, it seems to imply that it
might make sense to allow split artifacts to overlap in content;
does it? Would that really make sense?
Not sure, just going from the name, but a "minimal" system sounds
like something which needs to include the runtime and some binaries
and a kernel, so "minimal" is probably a sort of alias which includes
the minimal split categories to get a system booting, not a split
category in itself.
Any thoughts ?
Sorry, I couldn't resist the title.
I'm very confused by our build instructions at

    && python setup.py build
    && python setup.py install --prefix "$PREFIX" --root "$DESTDIR"

    - python setup.py build
    - python setup.py install --prefix "$PREFIX" --root "$DESTDIR"
My problems/questions are
1) pies2override does not exist. the correct directory seems to be
2) my latest development version of ybd *now* fails on the cd to the
broken directory... but how did this ever work?
3) and how did it work in morph?
4) what are the && for? and why is my devel version suddenly
erroring on them now?