On Fri, 2015-11-27 at 16:12 +0000, Richard Ipsum wrote:
> On Fri, Nov 27, 2015 at 03:00:18PM +0000, Tristan Van Berkom wrote:
> > The expected outcome of this would be that it takes a bit more effort
> > to get your patches upstreamed in definitions if they are lower in the
> > stack, as they affect more 'blessed'/'maintained' systems. However,
> > that effort is partly shared by those who maintain the target systems
> > we care about, as they should be able to volunteer the time to at
> > least give the patches a test run and review them once a week.
> >
> > While it takes a bit more time to push low level changes upstream, we
> > reduce the cost of bisecting definitions when a regression occurs,
> > which I am quite convinced is a higher cost than implementing the
> > stricter review process proposed here.
> >
> > Thoughts?
> I think we need pre-merge testing; we can do this with gerrit, and it
> was in fact the main reason for switching to gerrit.
>
> The regressions I face are not build failures. In one scenario so far
> it was a bricked system because gdm fails to start (apparently a
> fontconfig change caused the problem); today and yesterday I have been
> spending hours bisecting a case where gnome-shell crashes on an illegal
> instruction (SIGILL) and yields a 4 line stack trace without debugging
> symbols.
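[Editorial aside: the kind of bisect Richard describes can be partly
automated with `git bisect run`. A minimal sketch follows; the throwaway
repository, the `status` file, and the `grep` test command are invented
for illustration -- in a real definitions tree the test command would be
a build-and-boot script that exits non-zero when the regression is
present.]

```shell
#!/bin/sh
# Sketch of automating a regression hunt with `git bisect run`.
# The repo layout and test command here are hypothetical.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email editor@example.com
git config user.name editor

# Ten commits; from commit 7 onward the tree is "broken".
for i in $(seq 1 10); do
    echo "$i" > counter
    if [ "$i" -ge 7 ]; then echo bad > status; else echo good > status; fi
    git add counter status
    git commit -q -m "commit $i"
done

# bad = current tip, good = the first commit
git bisect start HEAD HEAD~9
# The test command must exit 0 on good revisions, non-zero on bad ones.
git bisect run sh -c 'grep -q good status' > bisect.log
grep "is the first bad commit" bisect.log
git bisect reset
```

`git bisect run` drives the checkout loop itself, so once the test
script exists, finding the first bad commit needs no manual stepping.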
First, remember I am only talking about low level changes which
potentially threaten the stability of systems we care about, so this is
hopefully not the majority of patches.

But second, I just don't see how any level of automation reachable in
the near future for baserock is going to catch and prevent regressions
automatically; instead we need responsible committers.