My current state of the GENIVI Demo Platform
by Jonathan Maw
Hi all,
Here's the current state of the GENIVI Demo Platform (which will be of
interest to Paul, and has already been cc'ed to him, as I accidentally
sent this mail to the wrong list the first time).
* despite my best efforts, systemd user instances still don't work.
* qtwayland is built twice, because I was working to a short timescale to
have something ready. It should be simple enough to remove the
hacked-up copy and make the normal qtwayland use that branch, but
I haven't tested it. The hacked-up qtwayland exists because I discovered
far too late in the day that we were missing another magic patch.
* gdp-hmi-launcher is very unstable and unreliable, but occasionally
works as intended for the EGLWLMockNavigation example (and possibly
the EGLWLInputEventExample).
It has been pushed to the branch "baserock/jonathanmaw/genivi-demo-jetson"
in definitions. Some repos have a "baserock/jonathanmaw/genivi-demo-jetson"
branch as well.
I start it up with the following:
# Get the environment and dbus set up
export XDG_RUNTIME_DIR=/tmp
export QT_QPA_PLATFORM=wayland-egl
if ! test -e /tmp/dbus-session; then
    dbus-launch --sh-syntax > /tmp/dbus-session
fi
source /tmp/dbus-session
# Start a systemd user instance
/lib/systemd/systemd &
sleep 1
# Start weston in the background, so the rest of the script can continue.
# The fbdev backend needs a framebuffer device, so make sure your kernel
# command-line includes fbdev support. This means your cluster morphology
# must specify "KERNEL_ARGS: vga=788".
weston --backend=fbdev-backend.so --tty=1 --log=/root/weston.log &
# Start the HMI launcher
systemctl --user start gdp-hmi-launcher
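(For reference, a user unit of this kind would look roughly as follows. This
is a sketch only: the unit contents and the ExecStart path are illustrative
guesses, not copied from the gdp-hmi-launcher repo.)
# Sketch of a plausible gdp-hmi-launcher user unit.
cat > ~/.config/systemd/user/gdp-hmi-launcher.service << 'EOF'
[Unit]
Description=GENIVI Demo Platform HMI launcher (sketch)

[Service]
Environment=QT_QPA_PLATFORM=wayland-egl
ExecStart=/usr/bin/gdp-hmi-launcher
Restart=on-failure

[Install]
WantedBy=default.target
EOF
systemctl --user daemon-reload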
In a few seconds, either the launcher appears (with a selection of 6 apps),
only the background (a GENIVI logo) appears, or nothing happens.
Restarting the launcher with `systemctl --user restart gdp-hmi-launcher`
usually makes the launcher come up, eventually.
None of the apps work at all except Mock Navigation and the Input Event
Example. Those two ought to start when clicked, but depending on chance,
the panel beneath them may not be working. If it is working (and isn't
showing a power button), you can click the button to return to the
launcher.
Hope this is of interest,
Jonathan
Re: [RFC] Disambiguate definitions
by Paul Sherwood
On 2015-04-06 12:27, Javier Jardón wrote:
> On 6 Apr 2015 12:12, "Paul Sherwood" <paul.sherwood@codethink.co.uk> wrote:
> >
> > Hi folks,
> > it appears that we have several examples of the same name being used
> > to denote a stratum, and a chunk inside it:
> >
> > ruby
> > swift
> > lorry
> > ansible
> >
> > I think that this is confusing, and therefore propose that we could
> > standardise as:
> >
> > ruby-common, ruby
> > swift-common, swift
> > lorry-common, lorry
> > ansible-common, ansible
>
> Yes please, but even though I used the -common suffix before, I think
> it would be more descriptive to use something like "ruby-stratum".
There was discussion on IRC, which settled on 'foo-group' as the name
for the collection that contains a given chunk foo.
I've finally submitted a patch [1] which I believe achieves that with
the minimum disruption. Sorry for the delay, my original attempt got
tripped up by a problem with my gerrit setup.
Note that this patch is a half-way house, which perhaps demonstrates
the problem I'm seeing. After applying it, a file called
strata/ruby.morph describes the renamed ruby-group stratum. We could
clarify this by renaming the file as well, to strata/ruby-group.morph,
but that would mean additionally editing all the system definitions to
refer to the new filename. Probably clearer for users, but unnecessary
for morph.
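To make the mismatch concrete, after the patch the file and the stratum it
names look roughly like this (a sketch; the field values are illustrative,
not copied from definitions.git):
# The filename still says "ruby", but the stratum inside is now "ruby-group":
head -n 5 strata/ruby.morph
#   name: ruby-group
#   kind: stratum
#   chunks:
#   - name: ruby
#     repo: upstream:ruby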
Anyway, in the meantime:
- Richard Dale and Richard Maw extended this topic to consider deeper
structural changes, such as RDF, JSON-LD and self-describing schemas
[2].
- and Sam wrote separately about a 'Proposed process for changing the
definitions format' [3], which highlighted (for me at least) how complex
this area is and how much work may be involved to properly manage
definitions declaratively.
- and ybd (which led me to notice this in the first place) has been
fixed to warn about 'foo contains foo', but can handle it.
So maybe it would be better to dig deeper into the schema and
definition version questions, and park this patch until we have a clear
agreed approach?
br
Paul
[1] https://gerrit.baserock.org/#/c/391/
[2]
http://listmaster.pepperfish.net/pipermail/baserock-dev-baserock.org/2015...
[3]
http://listmaster.pepperfish.net/pipermail/baserock-dev-baserock.org/2015...
[PATCH] Remove stddef.h include to fix build with musl
by Richard Dale
repo: git://git.baserock.org/delta/syslinux
branch: baserock/rdale/musl
commit: 2b71aa376b6ebf1b99116c9b0c407267e84fb036
target: baserock/morph
---
memdump/argv.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/memdump/argv.c b/memdump/argv.c
index ca41e03..2f36913 100644
--- a/memdump/argv.c
+++ b/memdump/argv.c
@@ -33,7 +33,6 @@
  */
 
 #include <stdint.h>
-#include <stddef.h>
 #include <stdio.h>
 
 #define ALIGN_UP(p,t) ((t *)(((uintptr_t)(p) + (sizeof(t)-1)) & ~(sizeof(t)-1)))
--
1.9.1
[PATCH 0/2] Fix build of openssh with musl
by Richard Dale
These two patches fix building openssh with musl.
repo: git://git.baserock.org/delta/openssh-git
branch: baserock/rdale/musl
commit: 84e25ab50fbcb787bd1e72de1eb920d8590c2ff5
target: baserock/morph
Richard Dale (2):
Fix incorrect use of 'ut' struct
Add a sys/param.h include to compile with musl
loginrec.c | 10 +++++-----
sshd.c | 1 +
2 files changed, 6 insertions(+), 5 deletions(-)
--
1.9.1
[PATCH 00/18] Use OSTree when caching artifacts
by Adam Coldrick
Hi all,
This patch series first adds support for using a union filesystem when
constructing and deploying systems. It then adds an artifact cache which
uses OSTree to store chunk and system artifacts, and modifies the rest of
the code to make use of this cache. System artifacts are stored as a tree
containing only the files modified at system construction time, greatly
reducing the size of the artifact.
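For anyone unfamiliar with OSTree, the storage model can be sketched with
its command line; the repo path and branch name below are invented for
illustration, not what the patches actually use:
# Content is stored by checksum, so identical files across artifacts are
# stored only once in the repo.
ostree --repo=/src/cache/artifacts init --mode=bare
ostree --repo=/src/cache/artifacts commit --branch=example/chunk /tmp/chunk-contents
# Checkouts can hardlink (-H) instead of copying, so "unpacking" is cheap:
ostree --repo=/src/cache/artifacts checkout -H example/chunk /tmp/unpacked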
This provides speedups wherever we were previously unpacking things from
tarballs, especially when doing `morph deploy`. I don't have any numbers to
show the size of the speedups at the moment; I will send some out tomorrow
once I've had time to run some tests.
Note that this patch series requires you to have OSTree and its dependencies
on your system, along with at least one of overlayfs or unionfs-fuse. See
my other patch series (Add overlayfs, FUSE, OSTree and dependencies) for
further information and patches to definitions.
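For reference, the union filesystem part amounts to something like the
following overlayfs mount; the paths are illustrative, not morph's actual
staging layout:
# Build on a writable layer over a read-only stack of unpacked dependencies.
mkdir -p /tmp/stage/deps /tmp/stage/upper /tmp/stage/work /tmp/stage/merged
mount -t overlay overlay \
    -o lowerdir=/tmp/stage/deps,upperdir=/tmp/stage/upper,workdir=/tmp/stage/work \
    /tmp/stage/merged
# After building in /tmp/stage/merged, /tmp/stage/upper holds only the files
# the build created or changed, which is what gets committed to the cache.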
Thanks,
Adam
Repo: git://git.baserock.org/baserock/baserock/definitions
Ref: baserock/adamcoldrick/use-ostree-for-artifact-cache
SHA: 0b2179480920e2ae1ddd1dcd8f12427c8bd4ec78
Target: master
Adam Coldrick (18):
Allow the passing of options to fsutils.mount
Use overlayfs when building systems
Use overlayfs when deploying
Add support for unionfs-fuse
Create device nodes in staging area
Move the chunk cache logic into buildcommand
bins: We no longer want chunks to be tarballs
Add a class to wrap the OSTree API
Add an artifact cache which uses OSTree
RemoteArtifactCache: Support multiple cache methods
Make morph use OSTreeArtifactCache instead of LocalArtifactCache
builder: Use the OSTree artifact cache when building
deploy: Use OSTree to checkout the system for deployment
gc: Make `morph gc` use the OSTree artifact cache
morph-cache-server: Add support for an OSTree artifact cache
yarns: Make the distbuild yarn expose the worker's artifact cache
over HTTP
yarns: Disable the cross-bootstrap yarn
cmdtests: We can no longer meaningfully look at chunks in the cache
morph-cache-server | 115 ++++++-----
morphlib/__init__.py | 2 +
morphlib/app.py | 7 +
morphlib/bins.py | 61 +++---
morphlib/bins_tests.py | 100 +---------
morphlib/buildcommand.py | 32 ++-
morphlib/builder.py | 118 ++++++-----
morphlib/builder_tests.py | 18 +-
morphlib/fsutils.py | 25 ++-
morphlib/ostree.py | 139 +++++++++++++
morphlib/ostreeartifactcache.py | 229 ++++++++++++++++++++++
morphlib/plugins/deploy_plugin.py | 76 +++++--
morphlib/plugins/gc_plugin.py | 10 +-
morphlib/remoteartifactcache.py | 27 ++-
morphlib/stagingarea.py | 58 +++---
morphlib/stagingarea_tests.py | 41 ++--
morphlib/util.py | 6 +-
ostree-repo-server | 15 ++
tests.build/build-chunk-writes-log.script | 38 ----
tests.build/build-stratum-with-submodules.script | 8 +-
tests.build/build-stratum-with-submodules.stdout | 3 -
tests.build/build-system-autotools.script | 7 +-
tests.build/build-system-autotools.stdout | 3 -
tests.build/build-system-cmake.script | 7 +-
tests.build/build-system-cmake.stdout | 2 -
tests.build/build-system-cpan.script | 7 +-
tests.build/build-system-cpan.stdout | 1 -
tests.build/build-system-python-distutils.script | 12 +-
tests.build/build-system-python-distutils.stdout | 6 -
tests.build/build-system-qmake.script | 10 +-
tests.build/build-system-qmake.stdout | 8 -
tests.build/build-system.script | 27 ---
tests.build/build-system.stdout | 5 -
tests.build/cross-bootstrap.script | 5 +-
tests.build/morphless-chunks.script | 7 +-
tests.build/prefix.script | 7 +-
tests.build/prefix.stdout | 8 -
tests.build/rebuild-cached-stratum.script | 9 +-
tests.build/rebuild-cached-stratum.stdout | 22 ---
without-test-modules | 2 +
yarns/architecture.yarn | 16 +-
yarns/implementations.yarn | 16 +-
yarns/morph.shell-lib | 3 +-
43 files changed, 814 insertions(+), 504 deletions(-)
create mode 100644 morphlib/ostree.py
create mode 100644 morphlib/ostreeartifactcache.py
create mode 100755 ostree-repo-server
delete mode 100755 tests.build/build-chunk-writes-log.script
delete mode 100644 tests.build/build-stratum-with-submodules.stdout
delete mode 100644 tests.build/build-system-autotools.stdout
delete mode 100644 tests.build/build-system-cmake.stdout
delete mode 100644 tests.build/build-system-cpan.stdout
delete mode 100644 tests.build/build-system-python-distutils.stdout
delete mode 100644 tests.build/build-system-qmake.stdout
delete mode 100755 tests.build/build-system.script
delete mode 100644 tests.build/build-system.stdout
delete mode 100644 tests.build/morphless-chunks.stdout
delete mode 100644 tests.build/prefix.stdout
delete mode 100644 tests.build/rebuild-cached-stratum.stdout
--
1.7.10.4
Tracking ABI of Baserock build-host systems
by Sam Thursfield
Hello
We have a weakness in the way Baserock systems are bootstrapped, and
today it finally bit me!
Right now Morph ignores the question of whether a chunk built in
'bootstrap' mode will run on other systems. Morph assumes that a chunk
built on an x86_64 Baserock system will function the same on any other
x86_64 Baserock system. So for example, the cache key of stage1-binutils
will be the same on any two Baserock systems, assuming it's the same
architecture, same build instructions and same source code commit.
Anyone who has been in software more than 5 minutes will know this is
fantasy. But it's a very convenient fantasy. I'm not sure how many
people have actually found themselves on the wrong side of this
assumption so far -- it can't be many, or the mailing list would be full
of complaints.
I'll quickly sum up the problem it caused me. On ARM hard-float systems,
the upgrade from EGLIBC 2.15 to GLIBC 2.20 introduced an ABI break:
/lib/ld-linux.so.3 became /lib/ld-linux-armhf.so.3. So dynamically
linked programs built on an ARM hard-float system using EGLIBC 2.15
won't actually run on a GLIBC 2.20 system: they'll give a 'not found'
error. I had an ARM Mason set up to use a shared artifact cache which
was already being used by a much older ARM distbuild network. The older
distbuild had built and uploaded the bootstrap chunks stage1-binutils,
stage1-gcc etc. The new Mason was working fine with these until it had
to rebuild a bootstrap chunk: I think it was stage2-make. At this point
it tried to *run* the tools from stage1-binutils and stage1-gcc to build
make, and they didn't work because they were linked against the wrong
ld.so. Result: a confusing build failure!
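To make the failure concrete, here is roughly what it looks like, together
with the compatibility symlink fix described in the next paragraph (the
binary name here is illustrative):
# Each dynamically linked binary records which loader it wants:
readelf -l ./stage1-binutils-tool | grep interpreter
#   [Requesting program interpreter: /lib/ld-linux.so.3]
# A GLIBC 2.20 armhf system only ships /lib/ld-linux-armhf.so.3, so:
ln -s ld-linux-armhf.so.3 /lib/ld-linux.so.3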
I think we can fix this ABI break by creating a compatibility symlink in
/lib, and I'll try doing that. But it highlights a hole in our story of
'everything is reproducible and it won't randomly break for you'. Here
is a list of things we could do to close that hole.
Note that when I say "host system" below, I'm referring to the Baserock
system that is running `morph build`, not VM hosts.
1) Include the cache key of the host system in the cache key of each
chunk built in 'bootstrap' mode (see the sketch after this list).
- assumes Morph is running on a Baserock system
- makes cache.baserock.org less useful, because unless you're running
the exact same build-system as the Mason that built the artifacts, your
Morph will come up with different cache keys for the same commit of
definitions and will build everything locally.
2) Include the cache keys of certain chunks from the host system in the
cache key of each chunk built in 'bootstrap' mode.
- assumes Morph is running on a Baserock system, and makes assumptions
about the makeup of that system
- non-trivial to implement
At present the list of chunks would be something like: glibc, binutils,
gcc. The list would have to be maintained by a human in definitions.git,
and we'd have to make a judgement call about where to draw the line.
Once you start digging you realise every component *could* affect how
something is built, so I think going down this road is a bit pointless,
but it might be a worthwhile stepping stone while we try to achieve (1).
3) Always build bootstrap chunks locally
- will make getting started slower for everyone (each new user will need
to build GCC twice locally before doing anything else)
4) Statically link bootstrap chunks.
- requires some big changes to the bootstrap; I don't know if it would
actually be possible to statically link everything in stage1 and stage2
- stage1 and stage2 artifacts would be bigger
- still vulnerable to broken versions of GCC/G++/Binutils generating bad
binaries
- will only fix our current example Linux/GNU C/C++/Fortran bootstrap
(the 'build-essential' stratum), will not solve the problem for everyone
(although the principle would remain the same)
5) Make sure we never break our reference bootstrap
- this is our current approach, but it clearly doesn't always work, and
it continually requires developers and reviewers to think about whether
a change might break the bootstrap on any of our platforms
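Sketched in shell, option (1) might amount to something like this; the
metadata path and the hashing are invented for illustration, and morph
would do the equivalent internally:
# Hypothetical: derive a key identifying the host system from its deployment
# metadata, then mix it into the inputs hashed for bootstrap-mode chunks.
host_key=$(cat /baserock/*.meta | sha256sum | cut -d' ' -f1)
source_commit=0123abc          # placeholder for the chunk's source commit
build_instructions=configure   # placeholder for the chunk's build commands
chunk_key=$(printf '%s\n%s\n%s\n' "$host_key" "$source_commit" \
    "$build_instructions" | sha256sum | cut -d' ' -f1)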
Anyone have any more suggestions for how we could solve this? Any
preferred solutions?
In writing this email I've convinced myself that (1) is really the only
option and so we should make it happen sooner rather than later, on the
assumption that the longer we leave it the more painful it will become.
If the Mason systems are always running latest master of definitions,
then as long as you are also running latest master, you'll be able to
use artifacts from cache.baserock.org. Except if you run a devel-system
instead of a build-system you won't, because it'll have a different
cache key. I have a possible idea for how to fix this if we moved to
allowing arbitrary nesting of components, but I think this email has
gone on a while already, and it's not something that is going to get
done next week.
Please let me know your thoughts! This is a bit of a mindbender, so don't
be afraid to ask questions. I know that Emmet has expressed a desire for
(1) multiple times already, but I think that's only been on IRC, so I
haven't tried to hunt out previous discussions from the mailing list.
Thanks
Sam
--
Sam Thursfield, Codethink Ltd.
Office telephone: +44 161 236 5575
My experiences with adding GENIVI Demo Platform to Baserock
by Jonathan Maw
Hi all,
I've been tasked with trying to get the GENIVI Demo Platform working on
Baserock. Below are my experiences.
# Directly pertaining to Baserock:
* we do not provide a dbus session bus, requiring one to be started manually.
Can this be done via a systemd unit? (A sketch of one possibility follows
after this list.)
* AudioManager doesn't work. When I tried running it (with a dbus session
started via the command line), it died with an assertion failure:
AudioManagerDaemon/src/CAmSocketHandler.cpp:283: am::am_Error_e am::CAmSocketHandler::removeTimer(am::sh_timerHandle_t): Assertion `handle!=0' failed.
* genivi-demo-platform-hmi built successfully after a small patch, but it
does not run successfully. Run from the command line, it failed because of
a D-Bus error, which I think was "what(): Process org.freedesktop.systemd1
exited with status 1".
# Other experiences
* General impression: a lot of projects have given little thought to how
they should be installed, and the Yocto project would rather work around
this than engage with the project maintainers to fix it.
* AudioManagerPoC - doesn't really exist - it's a patch that meta-genivi-demo
has implemented.
- It's also an entirely separate project with different dependencies and
build systems.
- Plus, it's hard-coded to put things in /opt.
* Browser PoC - does not have installation instructions.
* Automotive Message Broker: ambctl is written in Python, and so has a
runtime dependency on pygobject-2.0.
Upstream is open to making it compatible with later pygobject.
* Fuel Stop Advisor - spans 3 projects: lbs/positioning, lbs/navigation
and lbs/navigation-application.
- They have insane build systems that involve cloning code from elsewhere.
- lbs/navigation clones wayland-ivi-extension, weston-ivi-shell,
positioning, navit (svn), and wgets a map of Switzerland.
- lbs/navigation-application clones navigation-service and
automotive-message-broker.
- lbs/positioning thankfully does not seem to clone code from elsewhere.
- They also do not have top-level makefiles that recurse; each piece must
be built from its individual subdirectory.
- Plus, there are no install commands.
- Either XSE or Continental is responsible for this monstrosity.
# Summary
In general, there have been three problems:
1) Lack of information on how to get the projects running properly (e.g.
how to connect to dbus).
2) Upstream build systems being awful.
3) Getting audio/AudioManager working at all.
I will be seeing if I can make any noise about the upstream build systems
next, so I suppose the main topics for discussion are:
* How do I get audio working? (Graham Finney has been enlisted to try and
get AudioManager working, but any help is appreciated.)
* How should dbus and systemd be configured for all these user-facing
services?
Thanks,
Jonathan