On Thu, 2017-05-18 at 13:46 +0100, Sam Thursfield wrote:
> On 18/05/17 13:23, Tristan Van Berkom wrote:
> [...]
I've been pretty tied up with other things and forgot to get back to
this email...
> > Implementing a release schedule and stable/unstable branching
> > policy is basically an activity that we can do once. After this is
> > done, maintaining this is a matter of following policy.
> OK, how do we get this started?
> >
> > Also, I don't think we can expect to grow this community to extend
> > to serious contributors which actually collaborate in order to
> > release production systems, if we do not *first* create a safe
> > venue for that.
> I agree on that; but I think there are additional obstacles to having
> multiple groups collaborating on a single stable branch. Just look at
> the existing distros: one person's "stable" is another person's
> "hopelessly outdated", one person's "modern" is another
> person's "broken".
I'll answer the second part first: if I understand your concern
correctly, what you are describing is really only a matter of
perception and naming.

It's also a rather petty concern, IMO; the fact that the quality of
one stable branch will obviously be low until some critical mass is
reached is not an argument for not bothering with proper release
schedules at all.
That said, I find that naming "stable" releases with code names, as
Yocto (e.g. "jethro") or Debian (e.g. "jessie") do, instead of
outright calling them only "stable", is a decent answer to this
concern.
How would we get this started?

First we would agree on a release cadence. I personally feel that a
longer release cadence would be less costly than a short one (I would
prefer 1 year over something very short like 6 months), because once
a final release is reached, module major point versions should be set
in stone and only minor point version bumps can be acceptable (i.e.
the API never breaks, system wide, within a final release version).
With a short release cadence we would either:
A.) End up releasing randomly selected bleeding edge software,
without really knowing the impact of having "foo 1.2"
interfacing with "bar 3.4"
There will be a certain amount of this anyway, but if we
are low on resources, we won't have a good enough turnaround
with a short cadence.
Or:
B.) End up having some consecutive final release branches with
virtually no changes, because nobody had the time to integrate
the newest kernel + gcc + glibc, and so everything is largely
the same as the last one.
After deciding on a cadence, we need to implement a schedule, and the
schedule must be the law.
This means we have 2 or 3 levels:
* master branch is always bleeding edge
This is where the version junkies have their fun, upgrading all
kinds of modules and having the bleeding edge of everything, without
a care in the world as to whether the resulting systems actually work.
It builds: ship it!
* testing branch is where we are pulling in changes from master,
and trying to ensure that the package version vector makes sense
(eventually this will have some real world testing going on too,
I would hope)
This is where I think the real integration engineering happens:
maintainers of the "testing" branch start their release cycle
from the mess that is today's master (right after a "final" release).
At this point, we start seeing whether systems actually boot, and
find that "foo 1.2" actually does _not_ work properly with
"bar 3.4", in which case we upgrade to "foo 2.0" or go back down
to "bar 2.8" so that the software which was intended to work
together actually does, at least mostly.
Here in "testing" we ensure we have a sane PAM configuration
strategy and that keyrings work, that the display manager boots
into a real session and most things are generally working.
* final branches are release branches, and they are like a clock:
they take whatever is in "testing" and call it "final" (or a
differently code named branch for every cycle).
If "testing" is not ready, then too bad: we release anyway. The
clock is god, and if we fudged this release really badly, we'll
get the next one better, but no delays.
After a final branch is released, major/minor point software
versions must stay the same; we only accept micro point version
bumps, security patches, etc. (see the sketch just after this list).
Also the history of these "final" branches should serve as a good
basis for the next "testing" cycle; i.e. if we had to patch a bunch
of things in this "final" release, we should take a look at that
history and see if these things still apply to "master" when
constructing the next "testing".
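To make that "micro point bumps only" rule concrete, a rough sketch
of the kind of check a "final" branch maintainer could apply when
reviewing a module upgrade might look like this (Python, purely
illustrative; the function name and the idea of scripting the rule
at all are my own assumption, nothing like this exists today):

    # Illustrative only: a module upgrade is acceptable on a "final"
    # branch only if the major and minor components are unchanged,
    # i.e. the bump is a micro/patch update (2.52.1 -> 2.52.3 is
    # fine, 2.52.1 -> 2.54.0 is not).
    def acceptable_on_final(current, proposed):
        cur = [int(x) for x in current.split(".")]
        new = [int(x) for x in proposed.split(".")]
        if cur[:2] != new[:2]:
            # major or minor changed: not allowed on a final branch
            return False
        # the micro component may only move forward
        return new[2:] >= cur[2:]

    assert acceptable_on_final("2.52.1", "2.52.3")
    assert not acceptable_on_final("2.52.1", "2.54.0")

Whether we ever automate something like this or just follow it by
convention doesn't matter much; the point is that the rule is
mechanical and easy to review against.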
The important thing to note with testing and final branches is that
we have to be aware that they are the clock, and the clock is the
law. Unlike corporate driven software development, a stable release
is never postponed because management wants to have some fancy new
feature in "stable" which, at this phase, really belongs in
"master/unstable". As mentioned above, if we fudged this release,
we'll get it right in the next.
This attitude towards releases lets us define them as policy without
requiring that resources be allocated or insisting that time must
absolutely be spent; the release cycle is only a clock with some
rules and branches. The quality of the releases will vary, but at
the very least we will _have_ a policy defined to start with.
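Just to illustrate what I mean by the cycle being "only a clock",
something like the following (Python, purely a strawman; the start
date and milestone names are invented for the example, assuming the
1 year cadence I suggested above) would pin down every cycle in
advance, ready or not:

    # Strawman only: with a fixed cadence, every cycle's branch
    # points are known in advance and never slip, regardless of
    # readiness.
    from datetime import date, timedelta

    CYCLE_START = date(2018, 1, 1)  # invented date, not a proposal
    CADENCE = timedelta(days=365)   # the 1 year cadence from above

    def cycle(n):
        start = CYCLE_START + n * CADENCE
        return {
            "branch 'testing' from today's master": start,
            "tag 'final' (ready or not)": start + CADENCE,
        }

    for n in range(3):
        print(cycle(n))

The exact dates and milestones are for us to argue about; the only
point is that once they are written down, they do not move within a
cycle.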
In the above I've thrown out a lot of my own opinions on how it can
be managed; it's mostly an example of how we can get started. We
should first draft a release schedule, argue about those rules, and
define them a bit better.
Cheers,
-Tristan