On Fri, Jul 31, 2015 at 04:46:38PM +0100, Sam Thursfield wrote:
Where can we go from here?
We need to do a release of the Baserock reference systems so that we can
use definitions format version 6 (and hopefully version 7, if accepted)
in our definitions.git repo, and drop support for older versions from
Morph and YBD.
The description of the definitions format at
needs bringing up to
date. Hopefully a lot of it can be removed and replaced with a link to
JSON-Schema and OWL schemas.
I agree. It would be easier to keep it in sync with what the tool
actually accepts.
Beyond that, I have no concrete plans. But here are some ideas...
Morph's validation code could be replaced with the Python 'jsonschema'
module, although so far I've not been overly happy with the error
messages that module gives.
I agree that the structural-level input validation should be replaced
with a schema, though there are other levels of validation required too.
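To illustrate what schema-based structural validation could look like, here is a
minimal sketch using the Python 'jsonschema' module. The schema fragment is
invented for illustration and is not the real definitions schema:

```python
import jsonschema

# A made-up fragment of a schema for a chunk definition; the real
# definitions schema would be much larger.
chunk_schema = {
    "type": "object",
    "required": ["name", "kind"],
    "properties": {
        "name": {"type": "string"},
        "kind": {"enum": ["chunk", "stratum", "system", "cluster"]},
        "build-system": {"type": "string"},
    },
}

definition = {"name": "busybox", "kind": "chunk", "build-system": "manual"}

try:
    jsonschema.validate(definition, chunk_schema)
    result = "valid"
except jsonschema.ValidationError as e:
    # e.message tends to be terse and lack context, which is the
    # complaint about the module's error messages above
    result = "invalid: %s" % e.message

print(result)
```

This handles only the structural level; semantic checks (dangling build-depends,
cycles, and so on) would still need dedicated code.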
I would like to see how simple a Baserock build tool can be. YBD is
interesting, but now that there is no need to do build-system
autodetection (as of definitions format version 6) I think you could
write a 'definitions repo -> massive shell script full of build
instructions' tool in a couple of hundred lines.
Yep, that would be nice for cross-bootstrapping, distbuild, and GPL
compliance.
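A 'definitions repo -> massive shell script' tool really could be tiny. A rough
sketch, where the field names ('configure-commands' etc.) follow the definitions
format but the loading and ordering of chunks is assumed to happen elsewhere:

```python
def to_shell(chunks):
    """Emit one big shell script running each chunk's commands in order.

    'chunks' is assumed to be a list of chunk definitions (as dicts),
    already sorted in build-dependency order.
    """
    lines = ["#!/bin/sh", "set -e"]
    for chunk in chunks:
        lines.append("# --- %s ---" % chunk["name"])
        for phase in ("configure-commands", "build-commands",
                      "install-commands"):
            lines.extend(chunk.get(phase, []))
    return "\n".join(lines) + "\n"

chunks = [
    {"name": "hello",
     "configure-commands": ["./configure --prefix=/usr"],
     "build-commands": ["make"],
     "install-commands": ["make DESTDIR=\"$DESTDIR\" install"]},
]
script = to_shell(chunks)
print(script)
```

Staging each chunk in its own chroot, as Morph does, is deliberately left out;
that is where most of the real complexity would live.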
It should also now be simple(ish) to convert Baserock definitions to
other languages such as Bitbake recipes, Makefiles, the Nix package
language, or anything else interesting.
Certainly easier than going from Bitbake to Baserock definitions.
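As one concrete example of such a conversion, chunk definitions map fairly
naturally onto Makefile rules, with 'build-depends' becoming prerequisites.
The stamp-file scheme here is invented for illustration:

```python
def to_makefile(chunks):
    """Emit a Makefile rule per chunk, using stamp files as targets."""
    rules = []
    for chunk in chunks:
        deps = " ".join("%s.stamp" % d
                        for d in chunk.get("build-depends", []))
        cmds = "\n".join("\t%s" % c
                         for c in chunk.get("build-commands", []))
        rules.append("%s.stamp: %s\n%s\n\ttouch $@\n"
                     % (chunk["name"], deps, cmds))
    return "\n".join(rules)

chunks = [
    {"name": "libfoo", "build-commands": ["make -C libfoo"]},
    {"name": "bar", "build-depends": ["libfoo"],
     "build-commands": ["make -C bar"]},
]
makefile = to_makefile(chunks)
print(makefile)
```

Going the other way is harder because Bitbake recipes can contain arbitrary
Python, which has no equivalent in declarative definitions.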
Define a better data model. I came up with an initial one in
`baserock.owl` as part of <https://gerrit.baserock.org/1022/>, but I
don't think the terms 'Chunk', 'Stratum', 'System' or 'Cluster' are
really much use there. Our tools deal with 'something that has build
instructions and/or contents', largely. We need to find a name for
that concept (or extend an existing concept such as 'Package' from
SPDX), and make a more simple and flexible data model. YBD's internal
data model has gone some way in this direction already.
Even though we don't strictly have packages, since they are not
independently installable, I could live with traditional package
terminology, and it would assist learning.
Chunk Definition → Source Package
Chunk Artifact → Package
Stratum Definition → Source Package (since bits of a chunk are defined in both levels)
Stratum Artifact → Meta-package? (not 100% sure of this, I'm not a Debian Developer)
System → Task (I think)
Cluster → Fleet seems to be what people are using for this, though it implies a larger
scale than we usually consider.
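The 'something that has build instructions and/or contents' idea could collapse
most of those levels into one type. A sketch, where the name 'Source' and its
fields are assumptions rather than anything agreed:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Source:
    """Anything with build instructions and/or contents.

    A chunk-like source has build commands and no contents; a
    stratum- or system-like source has contents and no commands;
    nothing stops one having both.
    """
    name: str
    build_commands: List[str] = field(default_factory=list)
    contents: List["Source"] = field(default_factory=list)

busybox = Source("busybox", build_commands=["make", "make install"])
core = Source("core", contents=[busybox])
print(core.contents[0].name)
```

One type with optional fields is roughly the direction YBD's internal data
model has already taken.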
Once we have a better data model we can update the YAML
format to match. We could try to change both things in parallel, but I
think that would be much more difficult.
I expect it will be iterative, since there may be bits we can only do to the
internal format if the serialisation format supports it.
Conditionals! I have some thoughts on how these could be done, which
won't fit in this email. The key idea is that conditionals should exist
at the level of the YAML representation format.
I fear that if we do the conditional evaluation at the YAML definition level,
then we won't be able to provide useful diagnostics when a condition results
in invalid definitions.
We'd need to annotate that this entry is in the definition data structure
because it was in a conditional in this block which did this comparison
on these variables which were set to this value here.
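To make that concrete, here is a sketch of evaluating a hypothetical 'if'
construct while recording provenance, so a later validation error could point
back at the condition that introduced an entry. The construct and field names
are invented for illustration, not a proposal for the actual syntax:

```python
def expand(entries, variables):
    """Evaluate conditional entries, keeping a note of why each survives."""
    expanded = []
    for entry in entries:
        cond = entry.get("if")
        if cond is None:
            # unconditional entry: no provenance needed
            expanded.append((entry, None))
        elif variables.get(cond["variable"]) == cond["equals"]:
            # record which comparison let this entry in, for diagnostics
            why = "if %s == %r (was %r)" % (
                cond["variable"], cond["equals"],
                variables.get(cond["variable"]))
            expanded.append((entry["then"], why))
    return expanded

entries = [
    {"name": "glibc"},
    {"if": {"variable": "libc", "equals": "musl"},
     "then": {"name": "musl"}},
]
result = expand(entries, {"libc": "musl"})
for entry, why in result:
    print(entry["name"], "<-", why)
```

If a surviving entry then fails validation, the recorded 'why' string is the
annotation that would let the error message point at the condition and the
variable assignment responsible.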
Once we start having variables in though, it will become harder to
translate our definitions into someone else's, unless we define the
conversion process as lossy.
In the abstract, all the possible variants of a system do actually
exist; we need to keep track of which ones are built and which ones are
tested.