hi i would like to at least try to provide insight
but not exactly insight, oh deer...
i use gitlab for reviewing code every day at my day job,
it is mostly good,
i am especially fond of the :tropical_fish: and :smile_cat: emoticons,
they make life better
and help me stay positive
the review model encourages the force push,
if you don't know what a force push looks like it looks like
https://i.imgur.com/XFQLB.jpg - hahahaha, but the commits from my v1
patchset are in that house! ;.;
so we don't do that on our project,
we manually submit a merge request for each revision of
what i'm suddenly going to call a candidate,
this means we can keep track of what happened,
gitlab doesn't provide a native way of showing diffs between
merge requests like gerrit does (if it does then i'd like to know!)
but that is actually okay because they are just refs so
you can git diff richardipsum/CHUNKYBACON_v1 richardipsum/CHUNKYBACON_v2
from your terminal, which may also be in colour.
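if you want to play with that locally, here's a throwaway sketch (the branch names follow the example above; the repo, file and commit messages are made up for the demo):

```shell
# throwaway local repo with two "candidate" branches, to show the
# diff-between-refs trick; nothing here touches a real GitLab instance
set -e
demo=$(mktemp -d)
git init -q "$demo/repo"
cd "$demo/repo"
git config user.email demo@example.com
git config user.name demo
echo "v1 of the patch" > chunky.txt
git add chunky.txt
git commit -qm "chunky bacon, candidate v1"
git branch richardipsum/CHUNKYBACON_v1
echo "v2 of the patch" > chunky.txt
git commit -qam "chunky bacon, candidate v2"
git branch richardipsum/CHUNKYBACON_v2
# the interesting part: the candidates are plain refs, so plain git diff works
diff_out=$(git diff richardipsum/CHUNKYBACON_v1 richardipsum/CHUNKYBACON_v2)
echo "$diff_out"
```

in real use you'd fetch the candidate branches from the GitLab remote first, then diff the remote-tracking refs the same way.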
some more notes,
gitlab pros (COMPARED TO GERRIT):
* it's written in ruby
* it's much prettier than gerrit
* it has :tropical_fish: emoticons
* it has simpler ci (actually i've never used this
i just read that it does)
* probably better support for submitting groups of patches
(though gerrit has pretty much sorted this out now)
gitlab cons (COMPARED TO GERRIT):
* it's written in ruby
* it has no native concept of a patchset (as gerrit does)
* having no native concept of a patchset,
it has no native way to diff between different versions of a patch
* CANNOT COMMENT ON NON-DIFF CONTENT (THIS IS SUPER ANNOYING
AND WE MUST BRIBE THEM TO FIX IT)
* No support for "BATCH" comments,
gerrit lets me save comments in a DRAFT and then post them
all when I'm sure they're sane, this is how review works,
(if gitlab has this then I am too dumb to find it)
by the end of a patch series I might realise my comment is wrong,
gerrit's model allows for this in a way that preserves my ego
which as you can see is important to me.
* i *think* it stores all the metadata in a database:
* if it does then that has replication/distribution IMPLICATIONS
which is why NOTEDB was made by the gerrit people at GOOGLE,
btw i even wrote a library for working with this,
USE AT YOUR OWN RISK,
I WONT BE HELD LIABLE FOR ANYTHING,
YOU CAN ALSO BUY ME A SANDWICH,
I LIKE PLOUGHMANS WITH LETTUCE, PLENTY OF PICKLE.
okay so i hope that helped!
various people here have been exploring and using GitLab for a range of
things both in public and on private projects. In general this
experience has been very positive, and as a result we've now established
some gitlab CI for building definitions using GitLab's public CI
service. It will clearly be worthwhile to have YBD itself validated
through CI, so I've agreed to migrate the canonical upstream away from
Github to GitLab as of now.
I'll be following up with patches to deprecate the github url, migrate
the current basic ci from travis into the gitlab environment, and
whatever other housekeeping is required to make the change work. It may
be that I do one further official release tag to github, just for
tidiness, so that anyone continuing to pull YBD from Github (eg as part
of a downstream CI pipeline) will get a known good state.
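For the move itself, a `--mirror` clone and push carries all branches and tags across in one go. Here's a sketch using throwaway local bare repos standing in for the Github and GitLab remotes (the repo names and the known-good tag are invented for the demo):

```shell
set -e
work=$(mktemp -d)
cd "$work"
# bare repos standing in for the old (github) and new (gitlab) remotes
git init -q --bare github.git
git init -q --bare gitlab.git
# seed the "github" remote with one commit and a release tag
git clone -q github.git seed
cd seed
git config user.email demo@example.com
git config user.name demo
echo ybd > README
git add README
git commit -qm "initial commit"
git tag known-good
git push -q origin HEAD
git push -q origin known-good
cd "$work"
# mirror everything (all branches and tags) across to the new remote
git clone -q --mirror github.git mirror.git
git --git-dir=mirror.git push -q --mirror gitlab.git
# the new remote now carries the release tag too
git --git-dir=gitlab.git tag
```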
Any questions or suggestions on this are welcome
so far KBAS has proved useful as a quick-and-dirty artifact cache
solution, but until now it hasn't had any serious attention on
security/encryption/authentication of artifacts for users.
I'm aware of two distinct use-cases:
A) publishing artifacts for public projects (eg baserock, ybd)
- artifacts are public
- need some reliable way for authorised/trusted tool instances to
upload new artifacts while users continue to access the service
B) publishing artifacts for commercial projects
- the artifacts are not public
- authorised/trusted tool-instances (eg CI runners, project-user runs
of ybd) can retrieve artifacts
- (a subset of, or possibly all) of the authorised/trusted
tool-instances can upload artifacts
The current implementation uses a plaintext password, which is clearly
by no means 'secure' but allows a basic implementation of A). For B), so
far the setups I'm aware of have put the whole service behind some other
privacy/protection scheme eg NGINX or VPN.
At this point I've started implementing basic SSL server support 
but need others' input to check if this is a sensible way to go. The
scheme I'm thinking of is
- KBAS can have its own cert+key, specified in config
- if cert+key are present, all services happen over HTTPS
- an optional password for viewing/retrieving artifacts
- a mandatory password for uploading artifacts
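For trying out the cert+key idea, a self-signed pair is enough. The file names and hostname below are invented, not actual KBAS config options, and a real deployment would want a CA-signed certificate:

```shell
# generate a throwaway self-signed key + cert for testing HTTPS support
set -e
certs=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -subj "/CN=kbas.example.com" \
    -keyout "$certs/kbas-key.pem" \
    -out "$certs/kbas-cert.pem" 2>/dev/null
# sanity check: the cert parses and carries the expected subject
openssl x509 -in "$certs/kbas-cert.pem" -noout -subject
```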
The key questions I have are:
- are there other use-cases worth thinking about too?
- are there any obvious problems in the above?
- would this general approach be good enough for an organisation hosting
a cloud-based KBAS, to serve proprietary artifacts to its users?
- are there better alternatives for this situation?
New in this release
- YBD can now generate a Concourse-parseable pipeline for visualisation
(but not build, yet). Set mode: parse-only to try it.
- fixed a bug where ybd would ignore errors in the middle of sets of
combined commands in any build-step
- fixed another example of mishandled unicode file paths
- artifact-version bumped to 6
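The combined-commands fix above boils down to running build steps under sh -e; a quick illustration of the difference (plain POSIX sh, nothing YBD-specific):

```shell
# without -e, a failure in the middle of a combined command is ignored
# and the overall exit status is 0
status_without_e=0
sh -c 'false; echo "carried on anyway"' || status_without_e=$?
# with -e, the shell stops at the first failure and exits non-zero
status_with_e=0
sh -e -c 'false; echo "carried on anyway"' || status_with_e=$?
echo "without -e: $status_without_e, with -e: $status_with_e"
```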
47424eb Update readme with basic kbas config info
7bcde1c Bump artifact-version for the sh -e bugfix
887d941 Fix local variable 'key' referenced before assignment
588e7c7 Fix for #233
76b6fc7 Force mimetype for artifacts
8bb6e76 Concourse visualisation working
50eb61f Don't lose stratum depends
625060b Fix write_chunk_metafile so it can handle unicode paths
72ac462 Force env['LANG'] => 'en_US.UTF-8'
bbb9a66 Revert "Force UTF-8 encoding in OSFS"
eebce03 Only report arch mismatch once
bfda426 Force UTF-8 encoding in OSFS
c8522b8 Lose stderr too
f636779 Lose stdout from wget on tarballs
a6c5c02 Rename basenames => filenames