hi i would like to at least try to provide insight
but not exactly insight, oh deer...
i use gitlab for reviewing code every day at my day job,
it is mostly good,
i am especially fond of the :tropical_fish: and :smile_cat: emoticons,
they make life better
and help me stay positive
the review model encourages the force push,
if you don't know what a force push looks like, it looks like
https://i.imgur.com/XFQLB.jpg - hahahaha, but the commits from my v1
patchset are in that house! ;.;
so we don't do that on our project,
we manually submit a merge request for each revision of
what i'm suddenly going to call a candidate,
this means we can keep track of what happened,
gitlab doesn't provide a native way of showing diffs between
merge requests like gerrit does (if it does then i'd like to know!)
but that is actually okay because they are just refs so
you can git diff richardipsum/CHUNKYBACON_v1 richardipsum/CHUNKYBACON_v2
from your terminal, which may also be in colour.
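Since each candidate revision is just a ref, the diff can be scripted; here is a self-contained sketch that fakes two candidate branches in a throwaway repo (in real use you'd first fetch the CHUNKYBACON_v1/CHUNKYBACON_v2 refs from the GitLab remote rather than creating them locally):

```shell
# Sketch only: simulate two candidate revisions as local branches and
# diff them. The branch names mirror the example above; with a real
# GitLab remote you would `git fetch` these refs instead.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q .
git config user.email reviewer@example.com
git config user.name reviewer
echo "first version" > patch.txt
git add patch.txt
git commit -qm "candidate v1"
git branch CHUNKYBACON_v1         # ref for revision 1 of the candidate
echo "second version" > patch.txt
git commit -qam "candidate v2"
git branch CHUNKYBACON_v2         # ref for revision 2
git diff --stat CHUNKYBACON_v1 CHUNKYBACON_v2
```

Add `--color` (or `git diff` with no `--stat`) for the full coloured diff between the two revisions.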
some more notes,
gitlab pros (COMPARED TO GERRIT):
* it's written in ruby
* it's much prettier than gerrit
* it has :tropical_fish: emoticons
* it has simpler ci (actually i've never used this,
i just read that it's simpler)
* probably better support for submitting groups of patches
(though gerrit has pretty much sorted this out now)
gitlab cons (COMPARED TO GERRIT):
* it's written in ruby
* it has no native concept of a patchset (as gerrit does)
* having no native concept of a patchset,
it has no native way to diff between different versions of a patch
* CANNOT COMMENT ON NON-DIFF CONTENT (THIS IS SUPER ANNOYING
AND WE MUST BRIBE THEM TO FIX IT)
* No support for "BATCH" comments,
gerrit lets me save comments in a DRAFT and then post them
all when I'm sure they're sane, this is how review works,
(if gitlab has this then I am too dumb to find it)
by the end of a patch series I might realise my comment is wrong,
gerrit's model allows for this in a way that preserves my ego
which as you can see is important to me.
* i *think* it stores all the metadata in a database:
* if it does then that has replication/distribution IMPLICATIONS
which is why NOTEDB was made by the gerrit people at GOOGLE,
btw i even wrote a library for working with this,
USE AT YOUR OWN RISK,
I WONT BE HELD LIABLE FOR ANYTHING,
YOU CAN ALSO BUY ME A SANDWICH,
I LIKE PLOUGHMANS WITH LETTUCE, PLENTY OF PICKLE.
okay so i hope that helped!
When we use someone else's upstream code, in most cases for
well-behaved projects  we'd prefer to stick with mainline. When we
fix bugs or tweak functionality it's worth discussing with upstream. If
they take our suggestions or patches, we can continue to use mainline.
The more patches we have to carry, the more friction we have when we
want to upgrade to get latest mainline improvements.
For cases where we want to introduce significant additional/different
functionality, ideally upstream has already anticipated the situation
and has some kind of plugin architecture - we develop our add-on and can
maintain it without having to fork mainline.
Worst case, when some of the above doesn't work, we fork, and accept
that we have to maintain our own thing. We can still consider what the
original upstream is doing, but it gets progressively more difficult to
adopt their changes.
The default metadata use case is different from the situation above, in
that we are *expecting* that users will fork. That's the whole point of
reference/example systems - user takes an existing reference metadata
and creates something useful out of it.
However, users do still need/want some of the normal upstream benefits:
- if we make improvements and upgrades, can they easily consume them in
their downstream systems?
- can they easily submit patches/improvements to us, as upstream?
As far as I know we've never properly demonstrated and explained how
users are supposed to maintain forked definitions for production.
We've described how to get started and modify components and systems,
and how to add/remove components (although actually the documentation is
mostly out-of-date now). But for real projects with long-term usage,
we've left users to figure it out for themselves. Which means,
incidentally, that we've not been a well-behaved upstream by my own
definition.
I know that for the production projects I've been involved in, the
users had to go their own way pretty quickly. In at least one case they
settled on not even upgrading morph, let alone consuming updates to
definitions.
For YBD users, I've been recommending that all forking should occur in
a new subdirectory, alongside systems/ and strata/. So Foo corp might
have the following in definitions:
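A plausible sketch of such a layout, with foo/ standing in for the hypothetical new subdirectory (the exact names are my assumptions):

```
definitions/
    systems/    # upstream Baserock systems, unmodified
    strata/     # upstream Baserock strata, unmodified
    foo/        # everything Foo corp adds or changes
        systems/
        strata/
```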
This means they can safely checkin and maintain their custom systems,
strata, and components, while being completely confident they can
consume updates from Baserock without any possibility of merge
conflicts.
However, what happens if they want to use a different version of gcc,
or upgrade things in foundation etc?
Currently my strong recommendation is that they need to create their
own build-essential and foundation. But this can lead to confusion or
breakage if YBD chooses the 'wrong' versions. So I want them to use
different names (gcc-foo, build-essential-foo, foundation-foo etc), and
ideally i want to enforce uniqueness on names when parsing, so no-one
gets confused. But names in definitions have never been unique.
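The parse-time check could look something like this; a hypothetical sketch, not YBD's actual code (the dict shape and the names below are assumptions):

```python
# Hypothetical sketch of enforcing unique names while parsing
# definitions. 'definitions' is assumed to be a list of parsed morph
# dicts, each carrying a 'name' field.
def check_unique_names(definitions):
    seen = set()
    for d in definitions:
        name = d['name']
        if name in seen:
            # fail loudly at parse time, so no-one builds the wrong gcc
            raise SystemExit("Definition name is not unique: %s" % name)
        seen.add(name)
    return definitions

# with distinct downstream names (gcc-foo etc.) the check passes:
check_unique_names([{'name': 'gcc'}, {'name': 'gcc-foo'},
                    {'name': 'build-essential-foo'}])
```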
I think we should have a flag day, where uniqueness is enforced and
all names are tidied up. We could improve some other things also, e.g.
systems with variants instead of system-x86_64 + system-armv7-jetson.
Does anyone agree, disagree, or have alternative suggestions?
 In my view a well-behaved project is one where upstream
- is reasonable and responsive to suggestions and patches
- deals responsibly with backwards/forwards compatibility
- has as few flag days as possible, and provides a working migration
path for each
- doesn't re-write history
- has a workable approach for security updates in its infrastructure
and for downstream users
All of these changes are fixes and tidy ups - no new functionality.
b6f6a55 Exit if unpack fails - it's no good continuing
3c82bc2 Drop deprecated GitLab badge for now
35c3e23 Log arch mismatch
45ef539 Exit if attempt to deploy unbuilt system
f0a7992 Check fields in morph files, and exit if something odd is found
2f25e36 Compose subsystems before systems
03c09d0 Default to warn on missing install-commands
88e2d88 Merge pull request #229 from leeming/leeming/issue-218
d6cd8f0 Adding in exception handler to catch Unicode Encoding error and
handle unicode filepaths
8b6012d Dump contents of dn
I've been looking into continuous integration for Baserock and noticed
there doesn't seem to be much w.r.t. Gerrit and CI implemented right
now. I'm proposing that we take another look at whether Gitlab would
provide a better solution to code review and continuous integration than
it has in the past, and whether it would be better to consolidate down
to one hub for everything Baserock-related.
To start off a discussion, I've listed a few pros and cons of each piece
of software; if there are any mistakes I appreciate corrections, and
additions to pros and cons for each.
Gerrit pros:
* The work to implement it is already done
* People have experience using it
* Gerrit -> Trove mirroring is already implemented
Gerrit cons:
* No continuous integration implemented right now, and it would take
time to do so
* User interface is not intuitive for new users, or even for some
current users
Gitlab pros:
* Relatively quick and easy to set up
* Continuous integration is implemented, and the built-in gitlab-ci.yml
file can be configured per patch to test building of a specific system,
cluster or stratum, with builds triggering when the patch is pushed
* User interface is easier for new users to get to grips with
* Contains wiki functionality and issue tracking, as well as
repositories, potentially creating a single base for all Baserock
information, issue tracking, code and code review, and CI
* Tracks patches
Gitlab cons:
* Gitlab -> Trove mirroring is not implemented right now, but patches to
do this are available on gerrit.baserock.org
* We already have Gerrit, Storyboard, wiki.baserock.org and Trove that
people are familiar with
* Everything is visible in a web browser (changes to the built-in wiki,
code and CI can still be pushed via the command line)
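For illustration, a minimal sketch of the per-patch CI config mentioned above; the job name and the ybd invocation here are my assumptions, not a tested configuration:

```
# hypothetical .gitlab-ci.yml sketch (job name, script and target
# system are assumptions, not tested configuration)
build-system:
  script:
    - ./ybd.py systems/base-system-x86_64-generic.morph x86_64
```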
Let me know what your thoughts are.
Main changes are:
- various tidyups to the code
- fixing some corner-cases to deal with malformed definitions
- coping with nasty overlaps
- kbas now has download links
- artifact sizes now contain commas, so they're easier to read
29c8430 Tidyup some messages
0b19284 Try unlink, then remove
a7696d1 Revert back to tristan's fix
e560f0f Probable fix for #228
8db496d Revert "Allow + artifact names"
6c7b182 Allow + artifact names
17d1f4d Reuse _process_tree code for _process_list
7f7032c Fix link colour, and try relative links
0ee8a50 Merge pull request #227 from gtristan/staging-symlinks
39ecc9d Ensure that we only stage relative paths for symbolic links.
eb23f0d Provide download links for artifacts
cbaeb02 add debug
8299778 Trap if chunks or strata are not lists
c18c649 Skip 'tree' and 'cache' for release notes comparison
08264d9 Renames for readability, i => key
a609a3c Fix release-note logic
f9e7671 Release note only includes changes to target
c3e1c98 Only save entries in .trees where ref: is a SHA
b81c8b3 Log number of trees re-used from trees file
6e61371 Save definitions.yml again
cb744a6 Tidyup checks in Morphs, so check-definitions: flag is honoured
68bba6f Exit if a definition lookup fails
7bf0892 Move app.exit function in to app.log
2e076dd Add more logging for split processing
36e950c Add traceback for failed split install
e36813e Fix stratum['name'] + split
0aba49a Better log of installing splits
865c202 Use .trees file properly, in artifacts dir
ac41d64 Fix #224 properly
7a9cda5 Revert "Fix #226"
6faf067 Revert "Rename unpack => save, and don't unpack!"
4809a04 Fix #226
4856263 Rename unpack => save, and don't unpack!
11d6dba WIP fix for #224 system-level splitting with
405b3ee Separate thousands in artifact sizes
b16dd6f Rename component => dn
82697b7 Rename item => dn