Hi Alex, hi folks,
On Mon, Nov 14, 2016 at 05:40:23PM +0000, Alex Bradbury wrote:
> On 9 November 2016 at 10:04, Reinoud Zandijk wrote:
> > the title asks the question already: what do you like or dislike of the
> > RISCV priv. spec 1.9.1 as posted. What parts would you do different or
> > what ideas really ought to go but also what parts do you really like in
First of all, I'm pleased to see there is support for trying to make
boards/SoCs self-descriptive. We should learn from the ARM world here, for one!
> This is certainly a very broad question. I've never been a fan of
> config-string (at least, until better motivation than device tree's
> big-endian encoding is given) and have been following recent discussion.
> I'm starting to understand why people are pushing for it, but given the
> requirement for compatibility with the devicetree scheme I worry there is
> so little room to diverge from the established device-tree standard that
> config-string won't really have a significant advantage over DT.
To be honest, I dislike both. I think I get the points both camps make:
both config-string and DT are apparently to be generated automatically at
chip-generation time, and both have the same basic function of telling the
software where to find certain hardware and how it's configured. There are
at least two levels in each description though: the SoC level and the board
level. Both levels need to be merged into one sane description and are
supposed to be fully descriptive. But what about modelling side effects,
dependencies on clocks, etc.? That is a lot of inherent complexity for
software (and thus more bugs/omissions).
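For what it's worth, the two-level merge I mean looks roughly like this in
device-tree terms: a board description includes and overrides an SoC-level
one. All node names, labels, and addresses below are made up for
illustration:

```dts
/* soc.dtsi -- hypothetical SoC-level description, generated at
 * chip-generation time. Names and addresses are invented. */
/ {
    periph_clk: clock {
        compatible = "fixed-clock";
        #clock-cells = <0>;
        clock-frequency = <50000000>;
    };
    uart0: uart@10000000 {
        compatible = "lowrisc,example-uart";
        reg = <0x10000000 0x1000>;
        clocks = <&periph_clk>;  /* the clock dependency software must model */
        status = "disabled";     /* the board decides if this is wired up */
    };
};

/* board.dts -- board-level description merged on top of the SoC one. */
/include/ "soc.dtsi"
&uart0 { status = "okay"; };
```

The merge itself is mechanical; the hard part is exactly the side effects
and dependencies (here, the clock reference) that the description can name
but the software still has to understand.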
For us as lowRISC, it depends on what we want to accomplish and how high we
set the bar. If we set the bar quite low, we're going to be an implementor
of a multi-core RISCV chipset. Not bad per se, and still quite an
achievement, but not really distinguishable from other efforts.
If we however want to incorporate tagged memory, we are going to lose
binary compatibility anyway, unless we go for PUMP-lite-like schemes where
tag handling is implemented not in the instruction stream but as a watchdog
that checks for faulty or corrupted code.
Also, if we want to implement minion CPUs and want to be able to implement
low-level drivers on them, the ideas behind both the config string and the
DT become more and more meaningless.
All in all I think that if we really want to reach our goals of having
minion CPUs that can be programmed like soft hardware, and of having tagged
memory support either through PUMP-lite or explicitly through CHERI (IIRC),
then neither fits well. Either way we are going to lose binary
compatibility for the OS part; userland might still be shared fine if we go
for a PUMP-lite scenario.
As for the 1.9.1 spec, I've tried to make time to read it through, and most
of it is, well, OK, especially the new page tables etc. No problems with
that. What struck me as bothersome from a lowRISC perspective is, yes, the
config string/DT stuff, as well as its suddenly demanding certain choices
that I'd normally attribute to a platform specification: memory maps,
clocks in MMIO, debug memory, etc., but also the SEE with its piggyback
code. It kind of boils down to: mandatory use of BBL and its SEE piggyback,
mandatory use of the PLIC, mandatory use of the debug memory, and so on.
They are also too narrow-minded IMHO about virtualisation; it's not
designed as an integral part of the architecture but more as an attempt to
make some level of virtualisation possible, if you get what I mean. I won't
claim to have the definitive answer either, but my proposals at least try
to facilitate it directly, without the huge page-table penalties shown in
the papers you sent me earlier. If you still have them, please resend; I
can't find them here anymore :-/
My main critique of their virtualisation ideas is also more fundamental:
they use the `classic' hierarchical interpretation of domains owning and
running subdomains, whereas I view virtualisation as equal partner domains,
interconnected by drivers if desired.
> I have wondered whether it would make sense to iteratively approve the
> specification, so that the most important parts to e.g. run Linux can be
> standardised sooner rather than later - e.g. have the Foundation
> standardise the PLIC and page table format asap, then take longer to
> argue about the SBI, efficient virtualisation support etc. Perhaps it
> just can't be split up, but the fact it's so expansive is going to be a
> real test for the RISC-V standards approval process.
Indeed, the page table stuff etc. can be seen as settled now, I presume. I
am not so sure I like the PLIC though; here too, virtualisation is seen as
an afterthought.
> I think it might be worth having more recommendations as part of the
> platform specification, even to the point of having a recommended (but of
> course not required) memory map for a typical
I miss required-vs-recommended flags in the spec; everything is currently
marked as required.
As for my own experiments, I managed to put a few hours into it lately. It
now `boots' the PV6 startup multi-CPU and is at the trap point, capable of
trapping and interrupting; both are still a bit rough, but the basics are
done. I might need to copy some stuff from riscv-pk about emulation etc.,
but I set out to avoid that so that I understand the code well enough. In
my view this machine-mode code is just some blob that can be provided to an
OS to fix e.g. hardware omissions or quirks, or the OS can supply its own.
OK, this email is getting too long. I'll stop now :)