[RFC] defs version 8: define repo aliases in definitions
by Richard Ipsum
Hello friends,
I got a little time to experiment with moving repo aliases into definitions,
motivated by the apparent confusion they cause.
The idea is to define them in the DEFAULTS file instead of having them
baked into morph/ybd.
They would look something like,
# Repo aliases
repo-aliases:
  upstream:
    pull: git://%(trovehost)s/delta/%(repo)s
    push: git://%(trovehost)s/delta/%(repo)s
  baserock:
    pull: git://%(trovehost)s/baserock/baserock/%(repo)s
    push: ssh://%(trovehost)s/baserock/baserock/%(repo)s
  freedesktop:
    pull: git://anongit.freedesktop.org/%(repo)s
    push: ssh://git.freedesktop.org/%(repo)s
  gnome:
    pull: git://git.gnome.org/%(repo)s
    push: ssh://git.gnome.org/git/%(repo)s
  github:
    pull: git://github.com/%(repo)s
    push: ssh://git@github.com/%(repo)s
  bitbucket:
    pull: https://bitbucket.org/%(repo)s
    push: ssh://git@bitbucket.org/%(repo)s
As well as making these aliases more accessible, defining them this way
also simplifies the RepoAliasResolver; it might even be as simple as
the class pasted below, which is quite a bit simpler than what we have
currently. I haven't executed this at all, so it's just an idea at this
point and may well be wrong.
import re


class RepoAliasResolver(object):

    def __init__(self, trovehost, aliases):
        self.trovehost = trovehost
        self.aliases = aliases

    def expand_url(self, url, urltype):
        alias, rest = self._split_url(url)
        # Pass the URL through untouched if it has no prefix, or if the
        # prefix isn't a known alias (e.g. a plain 'git:' or 'https:' URL).
        if alias is None or alias not in self.aliases:
            return url
        return self.aliases[alias][urltype] % {'trovehost': self.trovehost,
                                               'repo': rest}

    def push_url(self, url):
        return self.expand_url(url, 'push')

    def pull_url(self, url):
        return self.expand_url(url, 'pull')

    def _split_url(self, url):
        '''Split url into prefix and suffix at the first colon.

        The prefix is returned as None if there was no prefix.
        '''
        pat = r'^(?P<prefix>[a-z][a-z0-9-]+):(?P<rest>.*)$'
        m = re.match(pat, url)
        if m:
            return m.group('prefix'), m.group('rest')
        else:
            return None, url
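For illustration, here's how that class might be used. This is only a
sketch to go with the idea above: the trove host and the printed results
are my assumptions, and the alias data is a hand-written subset of the
DEFAULTS snippet.

aliases = {
    'upstream': {
        'pull': 'git://%(trovehost)s/delta/%(repo)s',
        'push': 'git://%(trovehost)s/delta/%(repo)s',
    },
    'github': {
        'pull': 'git://github.com/%(repo)s',
        'push': 'ssh://git@github.com/%(repo)s',
    },
}

resolver = RepoAliasResolver('git.baserock.org', aliases)
print(resolver.pull_url('upstream:linux'))
# git://git.baserock.org/delta/linux
print(resolver.push_url('github:baserock/morph'))
# ssh://git@github.com/baserock/morph
print(resolver.pull_url('git://example.com/foo'))
# unchanged: 'git' is a scheme, not a known alias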
The slightly annoying part here is having to maintain backwards compatibility
with versions 6 and 7, so I want to ask two questions.
Firstly, I want to know whether people think this is even worth doing;
if people don't think this is worthwhile then I can possibly save
myself a lot of time. :)
Secondly, I want to know whether people think it would be acceptable to
drop support for versions 6 and 7 and ask folks to migrate to 8.
I think it *might* be okay to do this (because we don't have a lot of users
right now) but I am not certain. Dropping the backwards compatibility here
allows me to remove code instead of adding it, but as we know, LOC isn't
everything.
If there's no response to this RFC then I'll assume people think this is
a bad idea and drop it!
Thanks again,
Richard Ipsum
7 years, 3 months
Proposing commit policy for low level stratum
by Tristan Van Berkom
Hi all,
Don't worry, this will be a shorter proposal than the last one, there is
not too much ground to cover and I hope it's simple enough that we can
have some consensus quickly.
Problem statement
~~~~~~~~~~~~~~~~~
Firstly, the problem I want to solve, a problem I face while trying to
build and integrate the GNOME system, is that I need to split my effort
and track regressions in definitions master at the same time as
improving the GNOME system.
Typically, I start out with fresh master definitions at the beginning
of the week and start building on that. It's too time-consuming to build
and test master every day, so I tend to stay with this same snapshot and
work in a branch until I have some patches ready to submit.
Most of the time I rebase my ready-to-commit branch against master and
spend 4 to 6 hours preparing one last build before pushing it upstream
via gerrit.
Sometimes at this point there are regressions in master: the system is
bricked, gdm doesn't start, or gnome-shell crashes for some mysterious
reason.
The bottom line is that it is much more expensive (time consuming) for
me to bisect definitions and attempt to figure out which low level
change introduced the regression, than it would be for the original
committer to have properly validated that their low level change has not
regressed any of the systems we care about before merging to master.
Proposal
~~~~~~~~
In the interest of efficiency, I propose that we enforce a policy when
it comes to pushing definitions changes which can affect systems that
others are actively working on: the committer is ultimately
responsible for having verified that all the systems which A.) we care
about and B.) are in some way affected by a low level stratum change
have been properly tested.
Note that it is not important for the committer to have themselves
booted and tested the systems their commit affects, only that they
ensure those systems were tested against the related patch by somebody.
What are the systems we care about ?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
There are of course way too many systems in the definitions repository
to test each one for every single low level stratum change; instead we
need to restrict this to only a couple of important systems.
I think a very logical way of defining which systems are blessed
for this policy is to require that those systems have an active
maintainer who is already regularly running the given system and using
it for some purpose.
If a given system does not have at least one active maintainer then it
becomes difficult to get the required testing done.
For the sake of this brief proposal, let's propose that our blessed
systems are both:
- genivi-demo-platform-x86_64-generic
- gnome-system-x86_64
How would this new policy work ?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Assuming we have some consensus on this, I will start by creating a wiki
page where we can present a checklist of what user stories need to pass
for each system.
The user stories checklist should take no longer than five minutes to
complete once you have a booted system in a VM or on some hardware;
these user stories are maintained by the respective system maintainers.
So for example, if a system maintainer did not mention in the user
stories for GNOME that one should open gnome-control-center, test the
audio settings and ensure it makes a sound, then functional audio is not
currently a requirement for pushing a new compiler into build-essential,
or whatever the change may be.
Further, a system maintainer can be called upon by the patch submitter
to perform the test themselves. As the current GNOME system maintainer,
I would like to at least be on CC for the gerrit reviews of any patch
which affects any stratum that is included in the GNOME system.
We can see what kind of volume we will have over time but with the way
things are moving in master currently I expect that one GNOME maintainer
and one genivi demo platform maintainer can easily find time to test low
level patches against their respective systems at least once a week.
The expected outcome of this would be that it takes a bit more effort to
get your patches upstreamed in definitions if they are lower in the
stack, as they affect more 'blessed'/'maintained' systems. However, this
effort is partly shared by those who maintain the target systems we care
about, as they should be able to volunteer the time to at least give it a
test run and review some patches once a week.
While it takes a bit more time to push low level changes upstream, we
reduce the cost of bisecting definitions when a regression occurs, which
I am quite convinced is a higher cost than implementing the stricter
review process proposed here.
Thoughts ?
Cheers,
-Tristan
7 years, 3 months
[RFC] Multiple source repositories per-chunk and git submodule handling
by Richard Maw
We currently handle definitions that need to pull multiple source
repositories together in an ad-hoc way.
For gcc we import the gmp, mpc and mpfr source trees in by checking them
into our delta branch directly.
Some upstreams handle importing the sources by adding submodules.
However, we have to make them fetch sources from our Trove for
reproducibility and latency reasons, so we end up having to add a delta
on top to change where the submodules fetch their sources from.
This works against one of our goals for minimising the delta, so we
need a way to encode in our builds how to deal with components that need
sources from multiple repositories to function.
Our current approach to submodules introduces a _lot_ of extra work
when we need to update multiple submodules recursively, so we need a
better solution.
Proposal
========
To solve this, I propose we extend the source repository information from
just (repo, ref) to be a list [(repo, ref, path?, submodule-commit?)].
So rather than:
name: base
⋮
chunks:
- name: foo
  morph: strata/base/foo.morph
  repo: upstream:foo
  ref: cafef00d…
  unpetrify-ref: baserock/morph
  ⋮
  build-depends: []
We extend it to be able to take a "sources" field.
⋮
chunks:
- name: foo
  morph: strata/base/foo.morph
  sources:
  - repo: upstream:foo
    ref: baserock/morph  # used to validate commit is anchored
                         # properly and as a hint for where
                         # you ought to merge changes to
    commit: cafef00d…
  - repo: delta:gnulib
    commit: deadbeef…
    ref: baserock/GNULIB_0_1_2
    path: .gnulib  # where to put the cloned source
    submodule-commit: feedbeef…  # check that this matches, so you
                                 # can be told if you need to update
                                 # your delta
⋮
The `path` field is used to specify that this tree should be placed in
the parent tree at this position.
The `path` field is optional, defaulting to `.`.
If multiple paths clash after normalising, it results in a
build failure at source creation time.
Sub-sources can be placed over an empty directory, when there's no
existing entry at that path, or when there's a git submodule there AND
the commit of the submodule matches the `submodule-commit` field.
If there's a file, symlink, non-empty directory or a submodule that
doesn't match `submodule-commit` there, then it results in a build
failure at build time.
The `submodule-commit` check exists as a safeguard against the parent
repository being updated and requiring a new version, which your specified
commit isn't appropriate for.
If you get a build failure because the submodule isn't appropriate, then
you have to check whether the version you specified works, then update
the commit.
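To make these rules concrete, here is a rough, untested sketch of how a
build tool might enforce the two checks spelled out above (the path
clash and the submodule-commit match); the function names and the dict
shape of a source entry are my assumptions, not part of the proposal.

import os

class SourcePlacementError(Exception):
    pass

def validate_sources(sources, get_submodule_commit):
    '''Enforce the source placement rules described above.

    `sources` is a list of dicts with optional `path` and
    `submodule-commit` keys; `get_submodule_commit(path)` should return
    the commit recorded for a submodule at `path` in the parent tree,
    or None when there is no submodule there.
    '''
    seen = set()
    for source in sources:
        path = os.path.normpath(source.get('path', '.'))
        # Normalised paths must not clash: fail at source creation time.
        if path in seen:
            raise SourcePlacementError('duplicate source path: %s' % path)
        seen.add(path)
        # A sub-source may sit over a submodule only when the recorded
        # submodule commit matches `submodule-commit`; a mismatch means
        # the delta may need updating.
        recorded = get_submodule_commit(path)
        expected = source.get('submodule-commit')
        if recorded is not None and recorded != expected:
            raise SourcePlacementError(
                'submodule at %s is at %s, expected %s' %
                (path, recorded, expected))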
Cache key changes
=================
This shouldn't require any cache key changes to existing definitions,
but builds that make use of multiple source repositories will also hash
the commit tree and path.
Alternative solutions for submodules
====================================
We could continue to use the current model, and deal with the pain of
having to make multiple branches in multiple repositories to satisfy
the change to the repository paths.
We could have a magic look-up table to replace repository urls when we
parse the submodules.
We could use git-replace to magically make git do the url correction, but
then we'd have to handle replacements properly, rather than dropping them.
7 years, 3 months
Baserock 15.47 is released!
by Richard Ipsum
# Baserock 15.47 is released!
For an online version of these release notes, please visit:
<http://wiki.baserock.org/releases/baserock-15.47>
## Definitions compatibility
The Baserock reference systems are defined using version 6 of
the definitions format. In 15.34, version 5 was used.
The baserock-15.47 tag of definitions can be built with Morph from 15.34.
The version of Morph in Baserock 15.47 can build definitions versions 6 and 7.
For more information about these versions, see here: <http://wiki.baserock.org/definitions>
## Changes since 15.34
This release is packed full of awesome, see below!
* GNOME!
Yes, that's right, we have GNOME now :D
This is still work in progress, but the basic components you need
to run a GNOME desktop are there.
* Python 3 by default
(Python 2 will still be selected if you're building any components
that require it.)
* locales! All glibc supported locales are now installed by default!
* Schemas for definitions format
* Rawdisk partitioning support
* install-files: allow definition of manifests in multiple variables
* clang installed by default
* llvm 3.7
* linux-user-chroot v2015.1
* git man pages
* Morph updates:
- Add support for Baserock definitions version 7.
Build systems will soon be defined in a DEFAULTS file in definitions;
when this happens you'll be able to define your own custom build systems!
- It's now an error to define two chunks with the same name within
the same system.
* Other bugfixes and improvements: see the Git logs of [definitions.git](http://git.baserock.org/cgi-bin/cgit.cgi/baserock/basero...
and [morph.git](http://git.baserock.org/cgi-bin/cgit.cgi/baserock/baserock/mor... for a full list.
Thanks to everyone who has provided code, documentation, sponsorship or
feedback for this Baserock release!
# How do I get started?
Start with the following page: <http://wiki.baserock.org/quick-start/>
Those who are up to speed with Baserock already can make use of the
'cache.baserock.org' cache server, with the `artifact-cache-server =
http://cache.baserock.org:8080/` option. All artifacts necessary to
upgrade a 'build' system for x86_32 or x86_64 are present in this
cache. An ARM rootfs is also provided for those wanting to get started
with ARM boards.
# How do I get in contact?
- IRC: freenode #baserock
- Public mailing list: baserock-dev@baserock.org
- See also: <http://wiki.baserock.org/mailinglist/>
If you find a bug in Baserock, we'd like to hear from you using one of
the above methods.
The Baserock project welcomes new participants! We hope you enjoy
experimenting with Baserock and look forward to hearing about any cool
things you do with Baserock.
7 years, 4 months
Proposal: Axing the Stratum - Enter Runtime Dependencies and Build Flavors
by Tristan Van Berkom
Good Monday morning folks,
For some weeks now, you may have noticed an occasional grunt from me on
IRC about how strata are tightly grouped together, or, we may have had
some conversations on the subject in person.
Having random conversations and grunting malcontent is generally
nonproductive, however, so I have been avoiding the subject and working
on a more constructive approach instead, in the spare time afforded me
by extraneously time consuming rebuilds ;-)
I have put a lot of thought into this, so I am sorry for the sheer
length of this proposal, it's around 4,500 words and is best digested
with a fresh coffee.
Hope you enjoy the read :)
Regards,
-Tristan
Axing the Stratum - Enter Runtime Dependencies and Build Flavors
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
There seems to already be clear consensus at this point that the concept
of the "stratum" as a container holding chunks does not withstand our
use cases particularly well.
In this proposal I will start by outlining a strictly limited set of
problems which I think can be solved by the proposed refactor, present a
new model with the intention of addressing this limited problem set, and
provide some example use cases along the way in an attempt to paint a
clear picture of how the proposed model withstands our particular use
cases.
Table of Contents
~~~~~~~~~~~~~~~~~
1 Problem Statement
1.1 System Bloat
1.2 Chunk Duplication
1.3 Intentional Chunk Rebuilds
1.4 Loss of valuable dependency information
1.4.1 Fewer required artifact rebuilds
1.4.2 More predictable results
2 Proposed Model: Runtime Dependencies and Flavors
2.1 Stratum, Chunk and System Morphologies
2.1.1 Morphology
2.1.2 Stratum
2.1.3 Chunk
2.1.4 System
2.2 Dependency Types
2.2.1 Syntax
2.2.2 Example
2.2.3 Implications in Recursion
2.3 Declaring Build Flavors
2.3.1 Syntax
2.3.2 Example
2.4 Depending on Build Flavors
2.4.1 Syntax
2.4.2 Builders
2.4.3 Example
3 Build Planning
3.1 Choosing and Validating Build Flavors
3.2 Constructing the build graph
4 Migration Paths
1 Problem Statement
~~~~~~~~~~~~~~~~~~~
There are quite a few problems which are directly related to the current
model. In the interest of outlining a project that is realistic and
achievable, I will try to restrict this problem set as much as possible,
as we cannot solve everything at the same time.
1.1 System Bloat
~~~~~~~~~~~~~~~~
Baserock shines especially as a build tool for embedded targets, as such
it seems to be important to enable the building of strictly limited
platforms for the various systems we define. It makes no sense to deploy
loads of unused libraries and metadata onto a minimal target host.
Even in the case where disk space and memory are not an issue, it is in
the interest of embedded systems builders to strictly limit the amount
of installed software to those modules which are actually a requirement
for the end product. This eliminates the possibility of unintentionally
running third party software by way of an optional dependency in the
stack or a loaded module; thus reducing points of failure and reducing
the amount of validation required to release a reasonably bug free and
stable system/product developed with Baserock.
Our current model however demands that if for example, I required a low
level component to be compiled against an extra optional dependency for
the purpose of the GNOME system, a sibling IVI system would be impacted
by this new dependency - bloating sibling systems and consequently
compromising their stability.
1.2 Chunk Duplication
~~~~~~~~~~~~~~~~~~~~~
Sometimes, it is found that a given stratum requires a module which is
already present in another, orthogonal stratum.
For various reasons, we end up duplicating the same chunk inside
various strata. This increases the overall maintenance burden of the
project and creates problems later on.
Reasons for the duplication can generally include scenarios where
"System B requires Chunk A be compiled differently than when Chunk A is
included in System A", but more typically it occurs that the duplicate
chunk is exactly the same; and the refactoring required to avoid
duplication is simply too much work.
An example of such a refactor was the recent addition of the icu-common
stratum, which is now included by 4 separate systems; this takes
significant time in testing and consequently building. The process of
simply establishing consensus on where the chunk should live in the
stack is also time consuming in itself; should it be in its own
separate stratum ? Or, for instance, shall we force ICU onto every system
which requires "core" or "foundation" ?
A more serious problem occurs when a dependency implies that the
duplicate chunk be included in the resulting system more than once. At
this point it becomes unclear which of these compiled chunks will be
included in the resulting system. It can be possible that even with
identical chunk definitions; the resulting artifact is different due to
optional dependencies which may, or may not have been present at build
time.
1.3 Intentional Chunk Rebuilds
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In the case of libxml2, for instance, we have a duplication issue
disturbing enough that it deserves a separate point in this list.
libxml2 (along with xslt) is built as a part of the "core" stratum.
libxml2-python2 builds exactly the same module in the "python-core"
stratum, however it does so after python is built.
The reason for this is that libxml2 already sits too low in the stack at
core.morph, where python has not yet built; but libxml2 will build an
additional module only if python2 is already present.
In this case, we install the same module twice in separate artifacts,
because the refactoring work to fix the issue is just too much to ask
for; we simply don't have time to untangle these situations and also get
additional work done.
Rationally, we know that libxml2 is only building something additional
in this case, but this is not strictly true; it can be that
the libxml2.so library is not bit-for-bit identical to the one which was
built with python available. Regardless of which version of the libxml2
build "wins" in the resulting system, this uncertainty is disturbing.
1.4 Loss of valuable dependency information
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
When we add a new chunk to a given stratum, we usually have done our
research and we are aware of the hard and soft direct dependencies of
the given chunk.
However, because our formalism implies that one stratum depends on other
strata, much of the dependency information (maybe half ?) is no longer
encoded in the chunks in our definitions repository.
This makes much needed refactoring, like the libxml2 case mentioned in
the previous point, even more difficult to perform, because we no longer
know which strata depend on "core" to provide libxml2, and the
process for finding out is time consuming.
Not only this, but if we had not lost this dependency information, we
would benefit in other ways:
1.4.1 Fewer required artifact rebuilds
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
We would not have to compile every chunk in GNOME just because systemd
has been changed in some way, but only the chunks which depend directly
on systemd's support libraries.
We would not have to spend 2 hours rebuilding all of GNOME including
WebKit just because gnome-control-center requires samba and cups (which
are added strata dependencies).
1.4.2 More predictable results
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The result of building any given chunk is a combination of the
dependencies and build tools found in the sandbox, the build
instructions in the morphology, and the git repository providing the
build scripts and source code for the given chunk.
The math is simple:
sandbox + morphology + git = artifact
Currently, we need to populate a build sandbox with all dependencies of
a given stratum in order to build a given chunk.
Do we know for certain that adding a new chunk to "foundation" or "core"
does not affect the output of a build in the gnome or qttools strata ?
As extra input has been added to the build recipe, we cannot assert
with absolute certainty that the output of a given artifact would be
identical.
While it is questionable whether this matters or not, I would feel
reassured if we had limited the build input to what was strictly
required, if only to provide a higher level of certainty in what goes
into our artifacts.
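To illustrate the equation above: a hedged sketch of how an artifact
identity could be derived from exactly those three inputs. This is
illustrative only (the function and its inputs are my invention, not how
morph actually computes cache keys); the point is that the fewer things
end up in the sandbox, the fewer unrelated changes can alter the key.

import hashlib
import json

def artifact_cache_key(sandbox_inputs, morphology, git_sha):
    # Any change to the sandbox contents, the build instructions or
    # the source commit yields a different artifact identity.
    blob = json.dumps({'sandbox': sorted(sandbox_inputs),
                       'morphology': morphology,
                       'git': git_sha}, sort_keys=True)
    return hashlib.sha256(blob.encode('utf-8')).hexdigest()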
2 Proposed Model: Runtime Dependencies and Flavors
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Simply put, the proposed model is to build the entire dependency graph
by defining each and every chunk separately, where each chunk declares
its direct dependencies explicitly.
One of the goals here is to ensure that there is only ever a single
chunk definition for a given upstream git module.
Instead of defining duplicate chunks for various compilation flavors,
depending on their optional dependencies or configuration/build
commands, we would encode these as semantics of the chunk definition
itself.
For convenience, we would keep the concept of "stratum" around; however,
strata would not conceptually "contain" chunks. Strata would instead
refer to existing chunks and strata as dependencies, and as such they
would be allowed to overlap, in such a way that a system may include two
strata which refer to some of the same chunks. The build mechanics would
revolve around building an entire dependency tree of chunks, while
strata would only define which chunks and strata come together as
logical groups.
2.1 Stratum, Chunk and System Morphologies
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
For the sake of added clarity, let us consider an OOP approach to
defining the concepts supported by the various Morphology "kinds",
consider that a morphology "kind" is an object class.
The class hierarchy looks like this:
      Morphology
          |
       Stratum
       /     \
   Chunk    System
Which is to say, the Stratum Morphology type captures the common
properties of Chunk and System types.
Before continuing, let's just give a quick run-down of which properties
belong to which type in the class hierarchy.
2.1.1 Morphology
~~~~~~~~~~~~~~~~
The base morphology class owns the following attributes
o kind          Indicates the Morphology subclass/type
o name          A unique symbol for this particular definition
o description   An optional human readable description
2.1.2 Stratum
~~~~~~~~~~~~~
The Stratum class extends the Morphology with the following attributes
o build-depends   Dependencies required before building this stratum
o run-depends     Dependencies this stratum requires at runtime
o flavors         Cascading attributes and augmented dependencies
It's important to note that any Stratum subclass inherits the common
property of being able to refer to any other stratum as a dependency; a
stratum may refer to a chunk as a dependency because, conceptually
speaking, a chunk is also a stratum.
It is possible to refer to a Stratum as a build dependency of a Chunk.
While this situation is generally undesirable as it encourages system
bloat, it can be practical in some cases. I think a good policy
would be that any Chunk sitting above the build-essential Stratum should
depend on strata/build-essential.morph, but no more than this.
Conversely, it becomes possible to extend a Stratum by declaring another
Stratum which depends on the first Stratum and also depends on some
additional Chunks.
Note that this structure is already conceptually very close to what we
have; organizationally we still group Chunks in our defined Strata.
However, a Stratum is not said to contain Chunks; instead a Stratum
depends on Chunks in exactly the same way that Chunks depend on other
Chunks.
We will get into the difference of "build dependencies" and "run
dependencies", and define the meaning of "flavors" in the following
sections.
In the following sections, we will use the word "Stratum" loosely to
define any Morphology which is either a Stratum or a derived class.
2.1.3 Chunk
~~~~~~~~~~~
The Chunk class extends the Stratum class with the remaining usual chunk
attributes
o repo
o ref
o unpetrify-ref
o build-system
o [ configure, build, install -commands ]
o system-integration
o ... other build related attributes I've missed here ...
It's important to note that a chunk here is the only Morphology class
which should be adding any payload to the resulting build.
Further, it should be noted that the 'repo' attribute of the chunk is an
optional attribute. A Chunk may be defined simply for the purpose of
adding static payload to the resulting build (configuration files) or
simply to run system-integration hooks in the resulting system.
2.1.4 System
~~~~~~~~~~~~
The System class extends the Stratum class with the following attributes
o arch
o configure-extensions
Here I should add a note that it may be possible or desirable to abolish
the System type entirely. During the implementation of a refactoring
towards the proposed model, I would leave it up to the implementor to
decide if we can address this sufficiently with Chunks and "flavors"
The advantage of removing the concept of "System" morphologies is that
we could potentially use the same system definition for multiple arches,
via our newly introduced concept of flavors (a system flavor could be
the arch itself). Further, the configure-extensions could be collapsed
into specific Chunk definitions which could be shared across multiple
systems, and conditionally run depending on the flavor of the system
(the arch).
2.2 Dependency Types
~~~~~~~~~~~~~~~~~~~~
Currently the morphology supports only one type of dependency, but to be
extra cautious (and also to improve build times), I believe it wise to
differentiate at least between build time dependencies and runtime
dependencies.
There is also the possibility of differentiating between hard and soft
(optional) dependencies, however soft dependencies are covered
sufficiently by the "flavors" semantics, which I will discuss in the
following section.
With the proposed model there is no need to differentiate between a hard
dependency and a soft dependency, all dependencies are hard; some are
build time and others are run time.
2.2.1 Syntax
~~~~~~~~~~~~
Expanding on the current syntax, I would propose that we add a
'run-depends' keyword to complement the existing 'build-depends'.
This would mean that instead of listing all dependencies under the
'build-depends' group, we would split the runtime dependencies from
there and add them instead to 'run-depends'.
This allows software builders to compile the runtime dependencies
orthogonally to the requiring Stratum, allowing greater parallelism at
build time, avoiding circular dependency pitfalls and avoiding
unnecessary rebuilds.
The syntax in a self contained chunk definition would simply be:
name: chunkname
kind: chunk
...
build-depends:
- chunks/build-dependency1.morph
- chunks/build-dependency2.morph
run-depends:
- chunks/run-dependency1.morph
...
2.2.2 Example
~~~~~~~~~~~~~
Geoclue uses the libsoup API directly, but requires that libsoup be TLS
enabled. This happens under the covers, via a GIO extension point which
is implemented at runtime by glib-networking.
There are a few approaches to handling this, we could either require a
"libsoup with TLS" which could depend on glib-networking, but I believe
we reduce verbosity if we only make this distinction where a module
actually has a requirement, as is the case with geoclue.
With this approach, geoclue's morph file would be written as such:
name: geoclue
kind: chunk
repo: upstream:geoclue
ref: 93241301b9c47a5e7a61978cf5154752c85bd183
unpetrify-ref: master
build-system: autotools
configure-commands:
- ./autogen.sh --disable-nmea-source
build-depends:
- strata/build-essential.morph
- chunks/json-glib.morph
- chunks/ModemManager.morph
- chunks/libsoup.morph
- chunks/avahi.morph
run-depends:
- chunks/glib-networking.morph
2.2.3 Implications in Recursion
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
When constructing a dependency tree for the purpose of laying out a
build plan, it's important to note that any dependencies of a runtime
dependency are classified as runtime dependencies by the referring
Stratum, while only build dependencies of build dependencies get
classified as build time dependencies of the referring Stratum.
To illustrate this more clearly, take the following ASCII art for a
rough example of what parts of the GNOME stratum might look like;
dependencies marked with (b) are build time dependencies and those
marked with (r) are only runtime dependencies.
               GNOME(b)
              /        \           (g-s-d = gnome-settings-daemon)
       g-s-d(b)        g-o-a(b)    (g-o-a = gnome-online-accounts)
          |               |
          |          WebKitGtk(b)
           \             /
            \           /
             \         /
             geoclue(b)
             /         \
            /           \
           /             \
   libsoup(b)        glib-networking(r)
           \            /   \
            \          /     \
             \        /       \
              glib(b)        gnutls(b)
When preprocessing the dependency graph for a build plan, strict build
dependencies will be processed as the main build tree, while runtime
dependencies are to be pruned and processed in a second pass. Runtime
dependencies can be discarded completely when they appear as build
dependencies elsewhere in the build tree, otherwise they are added as a
build dependency of a virtual main target, orthogonal to the specified
build target; as would be the case with glib-networking and gnutls in
the above example:
        Virtual Main Target
           /            \
          /              \
       GNOME              \
       /    \              |
  g-s-d      g-o-a         |
     |          |          |
     |     WebKitGtk   glib-networking
      \       /       /    |
       \     /       /     |
       geoclue      /    gnutls
           \       /
        libsoup   /
            \    /
             \  /
             glib
2.3 Declaring Build Flavors
~~~~~~~~~~~~~~~~~~~~~~~~~~~
Build flavors allow us to express multiple recipes to build a
Stratum.
Most of the time Strata will not need to declare any flavors. While
there is a potentially infinite number of flavors which *can* be
declared, we are only interested in the flavors required to build the
systems which we support; these can in turn settle on a shared build
flavor of a given package most of the time.
The main advantage we achieve with build flavors is that we avoid ever
declaring a separate Chunk file for the same module. We allow some
flexibility so that multiple systems may live safely in the same
definitions repository; brief and concise exceptions can be made to the
build rules for a given Chunk in cases where multiple Systems cannot
agree on a single Chunk recipe.
In this section we discuss only the declaration of a chunk with flavors,
we will explore the referencing of chunks as dependencies in the
following section 2.4.
2.3.1 Syntax
~~~~~~~~~~~~
In the interest of keeping the Stratum definition brief and concise
while supporting build flavors, I propose a cascading attributes
approach.
A Stratum which declares flavors will continue to declare all common
aspects at the root of the Stratum definition, while each flavor
definition will be allowed to override and extend portions of the
definition.
In the case of the "build-depends" and "run-depends" attributes, flavors
extend the common dependencies which may be specified at the root of the
chunk. In all other cases, attributes specified in a Stratum (usually a
Chunk) override whatever may have been declared at the root. This is to
say that the build-commands of a given flavor are never compounded and
appended to commands specified at the root level; they replace the
build-commands at the root entirely.
The "kind", "name", "description" and "repo" attributes are exceptions,
they may not be overridden by a build flavor declaration.
Should a Stratum declare flavors, the first flavor declared in the list
will be considered the default and that flavor will be selected if no
specific flavor was mentioned by any depending Strata.
The syntax of a Chunk which declares flavors would look roughly like so:

name: chunkname
kind: chunk
repo: upstream:chunkname
ref: [ default branch ]
...
common dependencies and overridable attributes
...
flavors:
- flavor1
  build-depends:
  - extra-dependency1
- flavor2
  ref: [ possibly override branch ]
  unpetrify-ref: baserock/special-branch-with-delta
  build-depends:
  - extra-dependency2
  configure-commands:
  - ./autogen.sh --without-extra-dependency1 --with-extra-dependency2
The syntax of a basic Stratum declaring flavors is exactly the same; as
we discussed, a Chunk is a subclass of Stratum. Stratum flavors simply
allow one to declare different combinations of dependencies for a given
flavor.
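To pin down the cascading rule, here is a small hedged sketch of how a
flavor's attributes might be resolved against the root definition; the
plain-dict representation and the function name are mine, not part of
the proposed format.

PROTECTED = {'kind', 'name', 'description', 'repo'}
EXTENDED = {'build-depends', 'run-depends'}

def resolve_flavor(root, flavor):
    '''Apply one flavor's overrides to a root Stratum definition.'''
    resolved = dict(root)
    for key, value in flavor.items():
        if key == 'flavor':
            continue  # the flavor's label, not an attribute override
        if key in PROTECTED:
            raise ValueError('%s may not be overridden by a flavor' % key)
        if key in EXTENDED:
            # Dependency lists extend the common dependencies at the root...
            resolved[key] = root.get(key, []) + list(value)
        else:
            # ...while any other attribute replaces the root's entirely.
            resolved[key] = value
    return resolved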
2.3.2 Example
~~~~~~~~~~~~~
Taking IBus as a convenient example: it can be compiled with the XIM
service, or it can be compiled for wayland. With XIM we expect to
require the GTK+ immodules. It can also be compiled with both options on
an experimental system which includes both the xorg and the wayland
Stratum.
name: ibus
kind: chunk
build-system: autotools
build-depends:
- strata/build-essential.morph
- chunks/dconf.morph
- chunks/gconf.morph
- chunks/iso-codes.morph
- chunks/libnotify.morph
flavors:
- x11
  build-depends:
  - chunks/xorg-lib-libXi.morph
- wayland
  build-depends:
  - chunks/wayland.morph
  configure-commands:
  - ./autogen.sh --disable-gtk2 \
                 --disable-gtk3 \
                 --disable-xim \
                 --enable-wayland
- all
  build-depends:
  - chunks/xorg-lib-libXi.morph
  - chunks/wayland.morph
  configure-commands:
  - ./autogen.sh --enable-wayland
2.4 Depending on Build Flavors
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
When referring to a Stratum as a dependency, it now becomes possible to
specify a flavor for the given Stratum.
There are in fact three ways to refer to a Stratum. One can remain
ambivalent as to which flavor of the dependency is used, one can specify
a single acceptable flavor of the given dependency, or, one can list
multiple flavors of a given dependency in order of preference.
This last case where multiple flavors can be specified is a corner case
which can come in handy where multiple depending Strata depend on the
same Stratum but disagree on the expected flavor; giving a second choice
to the depending Strata allows us to find some agreement on which flavor
can satisfy both depending Strata.
I included the 'all' flavor in the example above (2.3.2) to demonstrate
this. During development it can be desirable to define a GNOME System
with both x11 and wayland installed. Some 'x11' specific Strata may
prefer an 'x11' flavored IBus, and some wayland specific Strata may
prefer a 'wayland' flavored IBus, but in the case of the IBus Chunk, it
is possible to satisfy both options simultaneously by providing an
alternative 'all' flavor.
As a policy, Strata should remain ambivalent as much as possible and
references to specific flavors should only be made when it is a hard
dependency for the referring Strata.
2.4.1 Syntax
~~~~~~~~~~~~
The syntax for specifying a dependency remains the same except that
flavors can now be specified in addition to the chunk's morph as a
parameter.
To specify a dependency where one is ambivalent of the flavor:
build-depends:
- chunks/build-dependency.morph
To specify a specific flavor:
build-depends:
- chunks/build-dependency.morph:flavor
Or using a comma separated list to specify acceptable flavors, starting
with the preferred flavor:
build-depends:
- chunks/build-dependency.morph:flavor1,flavor2
NOTE: I am assuming here that use of the colon separator is acceptable
so long as it is not followed by whitespace, as we use the same
semantics to specify the upstream repositories as values for the 'repo'
attribute.
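For concreteness, a tiny sketch of how a builder might split such a
reference; the function name and return shape are invented for
illustration, not part of the proposal.

def parse_dependency(ref):
    '''Split 'chunks/foo.morph:flavor1,flavor2' into the morph path and
    an ordered list of acceptable flavors (empty list = ambivalent).'''
    morph, sep, flavors = ref.partition(':')
    return morph, flavors.split(',') if sep else []

assert parse_dependency('chunks/ibus.morph') == \
    ('chunks/ibus.morph', [])
assert parse_dependency('chunks/ibus.morph:x11') == \
    ('chunks/ibus.morph', ['x11'])
assert parse_dependency('chunks/ibus.morph:x11,all') == \
    ('chunks/ibus.morph', ['x11', 'all'])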
2.4.2 Builders
~~~~~~~~~~~~~~
In addition to the semantics within the Morphology definitions, builders
need to provide a technique for specifying build flavors.
For example, the following invocation needs also to be supported:
ybd.py --flavor x11 chunks/ibus.morph x86_64
In addition to this, one could have the option of building a Chunk or
Stratum for a specific system build, and have the tooling work out the
flavor agreement in the regular way:
ybd.py --build-graph systems/gnome-system-x86_64.morph \
       chunks/ibus.morph x86_64
The above would use the gnome-system-x86_64.morph System definition to
build the appropriate dependency graph in order to determine the
appropriate flavor of IBus to build (and the appropriate flavor of IBus
dependencies).
2.4.3 Example
~~~~~~~~~~~~~
Let's assume that the GNOME stratum can be built in either x11 or
wayland flavors.
gnome-settings-daemon is ambivalent about the IBus flavor it depends
on; it only really requires the library, and so it expresses this
dependency in the regular way:
name: gnome-settings-daemon
kind: chunk
repo: upstream:gnome-settings-daemon
...
build-depends:
- strata/build-essential.morph
- chunks/colord.morph
- chunks/geoclue.morph
- chunks/geocode-glib.morph
- chunks/gnome-desktop.morph
- chunks/gsettings-desktop-schemas.morph
- chunks/ibus.morph
- chunks/libcanberra.morph
- chunks/libgweather.morph
- chunks/libnotify.morph
- chunks/lcms2.morph
- chunks/upower.morph
In the GNOME stratum, we require gnome-settings-daemon as a base
dependency, but we need to specify that IBus be built in a specific
flavor:
name: gnome
kind: stratum
description: GNOME stratum
build-depends:
...
- chunks/gnome-settings-daemon.morph
...
flavors:
- x11
  build-depends:
  - chunks/xserver.morph
  ...
  - chunks/ibus.morph:x11
- wayland
  build-depends:
  - chunks/wayland.morph
  ...
  - chunks/ibus.morph:wayland
Note that in this case, IBus is already a dependency of
gnome-settings-daemon and so it would not normally need to be specified
as a direct dependency of GNOME.
However our semantics allow us to specify the indirect dependency only
to ensure that the correct flavor of IBus is compiled for the given
flavor.
3 Build Planning
~~~~~~~~~~~~~~~~~~
When a Baserock builder parses a build target it needs to map out the
"build plan" for all of the chunks which need building, and decide on
which flavors are required.
We've already touched on the subject of build planning lightly in
section 2.2.3 Implications in Recursion; here we will draw a vague
outline of what steps need to be taken to construct a proper build plan.
Since most of this has already been solved in the current Morphology
model, there is no need to elaborate extraneously here.
3.1 Choosing and Validating Build Flavors
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Before managing the dependency tree in any way, we should first decide
on which flavor of each individual Stratum and Chunk is to be selected.
To do so, we simply need to construct a flat cache of all chunks which
have been referred to in any way, and follow the rules to determine
which flavor (if any) should be selected, or bail out with an error
message if the build makes no sense because two or more referring
strata cannot agree on a given flavor.
Remember that:
a.) A reference without flavor is ambivalent; it does not care which
    flavor of Stratum is selected
b.) A reference to a single flavor is specific; it can only live with
    that specified flavor
c.) A reference with multiple flavors has a preference for a specific
    flavor, but can find agreement with other referring strata and
    settle on a second choice
d.) In the case that all referring strata have remained ambivalent,
    the first declared flavor is the default and will be selected for
    the build.
Provided that all referring strata have found some agreement on
flavors, we will keep this cache of preselected flavors around for the
remainder of the build plan.
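A hedged sketch of how rules a.) through d.) might be applied for one
chunk; the data shapes and the function name are assumptions made for
illustration only.

def choose_flavor(references, declared_flavors):
    '''Pick one flavor of a chunk given every reference to it.

    `references` is a list of flavor preference lists, one per
    referring stratum; an empty list means that reference is
    ambivalent. `declared_flavors` is the chunk's own flavor list,
    in declaration order.
    '''
    specific = [prefs for prefs in references if prefs]
    if not specific:
        # Rule d.): everyone is ambivalent, so the first declared
        # flavor is the default.
        return declared_flavors[0] if declared_flavors else None
    # Rules b.) and c.): walk one reference's preferences in order
    # and take the first flavor every specific reference accepts.
    for candidate in specific[0]:
        if all(candidate in prefs for prefs in specific):
            return candidate
    raise ValueError('referring strata cannot agree on a flavor')

For example, choose_flavor([['x11', 'all'], ['wayland', 'all']], ...)
settles on 'all', while choose_flavor([['x11'], ['wayland']], ...)
bails out with an error.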
3.2 Constructing the build graph
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Constructing the build graph will take a few passes and will result in
an output similar to the one described in example 2.2.3.
First we need to traverse the originally loaded build tree, following
only the build-depends branches from the top down and recursively
pruning out run-depends branches into separate floating trees, splitting
out all runtime dependencies of other runtime dependencies into separate
orphaned branches in a single pass.
For each orphaned branch, we first check that the runtime dependency is
not in fact already a build dependency in the existing tree, in which
case we can simply discard the branch. Otherwise, we consider the
runtime dependency as an added build dependency of the virtual main
build target, orthogonal to the specified build target.
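Roughly, in code; this untested sketch assumes a simple dict-of-lists
graph shape, which is my own choice, not the proposal's.

def plan_build(target, build_deps, run_deps):
    '''Two-pass planning sketch: `build_deps[x]` and `run_deps[x]`
    list x's direct dependencies by name.'''
    built = set()    # everything that must end up in the build plan
    pruned = set()   # run-depends branches split out in the first pass

    def walk(chunk):
        if chunk in built:
            return
        built.add(chunk)
        for dep in build_deps.get(chunk, []):
            walk(dep)
        pruned.update(run_deps.get(chunk, []))

    walk(target)
    # Second pass: a pruned branch whose root is already a build
    # dependency elsewhere is discarded; the rest become build
    # dependencies of the virtual main target and are walked in turn.
    virtual_deps = set()
    while pruned:
        dep = pruned.pop()
        if dep not in built:
            virtual_deps.add(dep)
            walk(dep)  # may prune further runtime dependencies
    return built, virtual_deps

With the graph from section 2.2.3, plan_build('GNOME', ...) would
return glib-networking in virtual_deps, with gnutls pulled into the
plan as one of its build dependencies, matching the second diagram.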
4 Migration Paths
~~~~~~~~~~~~~~~~~
The proposed changes to the Morphology format represent a significant
amount of work.
Updating the tooling to understand the new format is only the fun part,
untangling build dependencies so as to regain the lost knowledge of what
really depends on what is a longer arduous process. Of course the latter
is where most of the added value will start to come to life.
To perform the migration, we could build a separate definitions
repository from scratch, which would probably be a much cleaner
approach; however, we do have the option to perform the migration "in
tree". As the Stratum concept is not completely abolished, but only
semantically changed (as highlighted in section 2.1.2), it would be
conceivable to apply the new format to our existing definitions
repository using an automated script.
If the second approach were taken, we would still be left with a tangled
mess after the initial migration, but we could then approach the
refactoring work on a Chunk by Chunk basis, moving from the bottom
towards the higher level Strata.
Fixing a given chunk would simply involve removing any dependencies on
Strata that are not build-essential and replacing those dependencies
with the appropriate individual Chunk dependencies, adding the
distinction between runtime and build time dependencies. During the
refactoring, when an occurrence of a legitimately duplicated Chunk is
identified, it would need to be replaced by a flavor definition in a
single unified Chunk.
This refactoring could potentially be done in the same repository
orthogonally to other ongoing work.
7 years, 4 months
question about design pattern for application development
by milad hasanvand
Dear all,
I am completely new to the Baserock environment. I want to build a
complete IVI system based on it.
I have a list of applications that should be developed for my project,
but I don't know where to start. Are there any design patterns which
would help someone like me to start writing web or native apps?
thanx!
7 years, 4 months
[PATCH 0/2] Fix ca-certs to work with Python 3
by Richard Ipsum
repo: git://git.baserock.org/delta/ca-certificates
ref: baserock/richardipsum/debian/20140325
sha: 193eb2042c6be1775f4c41f6297fe5c1521828e0
land: baserock/debian/20140325
Hi,
As mentioned in #baserock, our ca-cert installation has been broken
since we moved python3 into core, because the ca-cert installation depends
on python2. This means that Baserock users cannot currently, for example,
clone a git repo over https.
This series takes the original attempt[1], by upstream, to move the ca-cert
installation towards python3 and fixes it up a bit.
The result should function with both python2 and python3.
I have done quite a bit of reading into unicode handling in python2
and python3 and have verified that the certificate filenames produced
by the python2 version match the certificate filenames produced by the python3
version, which both in turn match the certificate filenames produced
without this patch. I have built a devel system using this patch
and the resulting system can clone over https.
If everything looks okay to folks here then I'll submit this upstream also,
but I'd prefer that we merge this branch into the 'land' branch mentioned
above so that upstream doesn't block Baserock users (myself included! :) ).
Hope this helps,
Richard Ipsum
[1]: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=789753
Michael Shuler (1):
Add Python 3 support to ca-certificates.
Richard Ipsum (1):
Fix unicode conversions
debian/changelog | 109 ++++++++++++++++++++++++++++++++++++++++++++++++
mozilla/certdata2pem.py | 57 ++++++++++++++++++-------
2 files changed, 150 insertions(+), 16 deletions(-)
--
2.5.2
7 years, 4 months
Change in Baserock-Ops staff
by Francisco Redondo Marchena
Good morning folks,
I've had a meeting with the Baserock-Ops team where I've communicated
to them that I'm not going to be part of it any longer.
I was part of the team as a backup member, but given that I've not been
involved in any of the Baserock-Ops work during the last year, I prefer
not to be part of it.
Best regards,
Fran
--
Francisco Redondo Marchena
Software Developer
Codethink Ltd
302 Ducie House
Ducie Street
Manchester
M1 2JW
UK
Codethink delivers cutting edge open source design, development and
integration services - from embedded stack software to advanced ux
http://codethink.co.uk
Office: +44 161 236 5575
7 years, 4 months
Re: Proposal: Axing the Stratum - Enter Runtime Dependencies and Build Flavors
by Tristan Van Berkom
On Thu, 2015-11-05 at 18:30 +0000, Tiago Gomes wrote:
[...]
> I was just stating that there wouldn't be advantages of having
> runtime-depends for this particular case. Like you said below, it
> wouldn't achieve 100% efficiency.
>
> TBH, I don't care much about how runtime-dependencies would be handled.
> As Richard says [1]:
>
> > We can either include runtime dependencies of artifacts we build-depend
> > on, or leave runtime dependencies to being purely for system artifact
> > construction time.
> >
> > There's arguments for either.
> >
> > It makes some amount of sense for runtime dependencies of an artifact
> > you build-depend on to be included in the staging area, as it saves
> > having to define these build-only-runtime-depends as build-depends,
> > which would cause unnecessary build-time dependencies for python modules.
> >
> > However, if you don't include this, then you only need to rebuild the
> > system artifact when build-dependencies change, which has some appeal.
> >
> > The important thing is that you don't depend on _your_ runtime
> > dependencies at build-time, but people who depend on you end up depending
> > on _your_ runtime dependencies.
> >
> > This is sufficiently odd that it bakes my noodle a little.
First, please note that Richard's definition of runtime dependencies is
not the same as my definition; this was clarified elsewhere in this
thread.
Using the same words I used in another message[0]:
A runtime dependency is a dependency on a chunk which need not be
built or added to the staging area in order to build the depending
chunk.
Let's call Richard's alternative definition "install dependencies"; I
don't really care what they are called right now, but it's better to
keep our definitions separate for the sake of conversation.
Now, note that the complexity which results in the baking of noodles is
pretty much due to an attempt of handling install dependencies and
runtime dependencies in the same dependency graph - the whole picture
becomes more clear if you treat the build phase and the distribution
phase as entirely separate problem spaces.
In my last reply to you, I mentioned that I would much prefer using a
hybrid solution of Richard's runtime dependency proposal (or any other
solution) for the purpose of slimming down and choosing what gets
included in a System, after the build phase is complete.
Using an install dependency graph which is orthogonal to the build
dependency graph (or perhaps built on *top* of the build dependency
graph as a fallback) would be an elegant way of handling the System
assembly problem in the distribution phase - a problem which is
intentionally left unsolved by this proposal.
> <snip>
>
> > My feeling is that build time and runtime dependency declarations at the
> > chunk level is the right balance of efficiency vs complexity.
> >
> > Further elaboration in the definitions to declare these dependencies at
> > the artifact level instead of the chunk level would bring us closer to
> > 100% efficiency (without making the reality of circular dependencies
> > disappear) - however that 100% efficiency comes at an unreasonably high
> > cost of complexity on all sides. Implementors would be left with a much
> > more complex format to digest, and more importantly, users would be left
> > with a much more complex and error prone definitions format.
> >
> > Does this sound reasonable to you ?
>
> Why do you think that it would be more complex for the user? The
> definitions wouldn't differ from the model that you proposed, it is just
> how the runtime depends on a staging area are handled. Again, I am not
> advocating one way or another.
To paint a picture:
<picture>
When I originally started brewing this idea, over a month ago, I thought
of various dependency "types": Build time vs Runtime dependencies, and
also Hard vs Soft (optional) dependencies.
Then, I quickly realized that if we were to account for Soft vs Hard
dependencies in our formalism, we would need to provide a separate build
recipe for each possible combination of Soft dependencies that a chunk
might have.
The Flavors (or Variants) approach cuts this exponential complexity off
at the stem: "every possible configuration" is pointless, as we only
care about the variants which are useful for the systems we support and
produce.
</picture>
The reason that I bring this up is that treating dependencies for the
build phase at the artifact level brings this exponential complexity
back.
Remember that for every given chunk, there are at the very least three
resulting artifacts (this is not implemented everywhere *yet*, but to
solve the distribution problem it will have to be).
One "base" artifact, one "dev" artifact and one "docs" artifact. Perhaps
there are 4 artifacts for most chunks if you split out "locale", and
then you can also have more exotic chunks which split their artifacts in
other exotic ways.
Now, obviously we start having different dependency sets, and
consequently different build recipes, for every possible way we could
build a given chunk. If we want to build docs, we require gtk-doc-utils
and we have to pass --enable-gtk-doc to ./configure. If we don't want to
build locale data, we need to pass --disable-nls to ./configure and
maybe we don't require intltool.
Add flavors/variants to the mix, and you need one recipe for each
artifact of each variant of each chunk.
You cannot escape this; if you want artifact level dependencies to be
considered during the build phase, you need build recipes for each of
them.
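To make the combinatorics concrete with invented numbers: a chunk with
three flavors and four artifact splits would need 3 x 4 = 12 distinct
recipes if dependencies were declared per artifact, versus just 3 when
they are declared per chunk.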
This is why build dependencies and runtime dependencies are only
treated at the chunk level, and are only relevant for the build phase.
As a rule, you cannot "only build the artifacts you require for your
system"; if your system does not require the docs artifacts, you must
still build the docs artifacts for each chunk that you build.
After everything is built, you can then proceed to pick and choose which
artifacts of which chunks you mean to include in the system; possibly
using a simple manifest at the system level, or, possibly by
implementing a separate and build-orthogonal "install dependency" graph
explicitly for that purpose.
Make sense ?
Regards,
-Tristan
[0]:http://listmaster.pepperfish.net/pipermail/baserock-dev-baserock.org/2...
7 years, 4 months