What is the current status of musl on RISC-V?
The message below was the last update I heard, but it has been many months since then.
> Thanks to the folks, I passed the mid-term evaluation. Now it is about
> time to publish the fifth progress report on porting musl to RISC-V.
> Last week, the toolchain itself was built for RISC-V and is running
> on Spike, and libc-test can be executed with it now. I posted the
> results of the tests on . The REPORT.txt file contains all error
> messages of failed tests, both run-time and compile-time ones.
> Some failures are expected since musl on x86_64 also fails the same
> tests (e.g. errors in src/api/fcntl.c), but there are some unexpected
> errors too. I guess that the "warning: <the name of a header> is
> shorter than expected" warning indicates bugs in the arch-dependent
> part of the I/O functions or system calls (or the kernel?) and that it
> causes syntax errors in the same compilation unit.
> Moreover, some tests trigger a "signal 11" error (segmentation fault)
> in libc. I added some logs to . They are bugs in the port,
> obviously. I am working on them.
> The good news is, anyway, some results are *better than x86_64*,
> especially in math functions :-)
> (the cause is probably a difference in floating-point precision,
> though; that is usual in float tests...)
> It took a long, long time to get here, but finally I have a
> (seems-to-be) working test suite for the port. I will continue to
> debug and fix the port using the results. Stay tuned!
> : http://nsz.repo.hu/git/?p=libc-test
> : https://gist.github.com/omasanori/ee828369aea844ac7fdfdc8362953299
> Masanori Ogino
Hi folks, hi Alex,
I've decided to split up my work into different sections. I hope that this will
ease reading and adoption. Not all is done yet, but nevertheless it's fairly
complete if one is familiar with the basic setup. More will follow soon.
Any comments, positive or negative, are most welcome.
I followed this discussion with interest since I'm in the process of designing
an extension to the RISC-V ISA to allow for easier implementation of hypervisors.
On Tue, Jan 03, 2017 at 11:05:10PM -0800, Stefan O'Rear wrote:
> On Mon, Jan 2, 2017 at 8:58 AM, ron minnich <rminnich(a)gmail.com> wrote:
> Channel designs in general address the "one time transfer of data" case, but
> not the "long-lived capabilities" case which primarily matters for
> accelerators. If you want an attached program-executing accelerator to be
> able to access memory with the precise authority of a single user process,
> you need some kind of IOMMU which can walk CPU-format page tables. From the
> documentation I've seen on the Intel-ish IOMMUs, I don't think they can be
> significantly improved for the "accelerator trusted by some processes but
> not system-wide" use case, although the number of hoops is ... high ... for
> many simpler use cases.
> Virtio also does not support grants of the form "read this block of physical
> memory repeatedly until reconfigured", which are relevant for simple video
> interfaces (without a device-side framebuffer).
In short, my current solution is a two-step system with a memory segment
bitmap for each domain and a secondary page table that is queried on the
resulting physical address when the memory segmentation check signals that the
current domain is not allowed to access that physical memory. This secondary
page table, including the memory segmentation bitmap, is exclusively
maintained by a hypervisor. It allows each domain to have fully independent
page mappings that are prevented from accessing anything outside their
allocated memory, EXCEPT for the exceptions in the secondary page table, and
then only for read/write of those shared regions.
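The two-step check described above can be sketched in C. Everything here (segment size, table layout, and all names) is my own illustration under simplifying assumptions, not the author's actual design:

```c
/* Sketch of a two-step physical-memory access check: each domain owns a
 * bitmap with one bit per physical segment, and a hypervisor-maintained
 * secondary table whitelists individual shared page frames outside the
 * domain's own segments (e.g. a shared framebuffer). */
#include <stdbool.h>
#include <stdint.h>

#define SEG_SHIFT  22                 /* 4 MiB segments (assumption) */
#define NSEGS      1024               /* covers 4 GiB of physical memory */
#define PAGE_SHIFT 12

struct domain {
    uint64_t seg_bitmap[NSEGS / 64];  /* segments owned by this domain */
    const uint64_t *shared_pfns;      /* secondary table: shared frames */
    int nshared;
};

/* Step 1: fast per-segment bitmap lookup. */
static bool seg_allowed(const struct domain *d, uint64_t paddr)
{
    uint64_t seg = paddr >> SEG_SHIFT;
    return d->seg_bitmap[seg / 64] >> (seg % 64) & 1;
}

/* Step 2: on a miss, query the hypervisor's secondary table for an
 * explicit read/write exception (linear scan for clarity). */
static bool secondary_allowed(const struct domain *d, uint64_t paddr)
{
    uint64_t pfn = paddr >> PAGE_SHIFT;
    for (int i = 0; i < d->nshared; i++)
        if (d->shared_pfns[i] == pfn)
            return true;
    return false;
}

bool access_allowed(const struct domain *d, uint64_t paddr)
{
    return seg_allowed(d, paddr) || secondary_allowed(d, paddr);
}
```

In hardware the second step would of course be a table walk rather than a scan; the point is only the ordering: the per-domain segment check first, the hypervisor-controlled exception list only on a miss.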
The protocol used on these shared memory regions is free to choose. A polling
one for, say, a framebuffer is fine, but a queue of linked lists or virtio
structures is also OK. Just take note of the memory-ordering semantics. As a
test I've implemented a message-based two-way circular queue for
memory/IO/config, two-way console IO with circular queues, and a (dumb)
single-issue block interface. As extra instrumentation I've added a signalling
function to notify the other side that there is something to process. It now
runs PV6 fine in a multi-CPU setup; writing `drivers' for these was trivial.
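One direction of the circular-queue channel described above can be sketched as a single-producer/single-consumer ring with C11 atomics, which supply the memory-ordering guarantees the text warns about. The layout, names, and slot count here are my illustration, not the author's actual protocol:

```c
/* Sketch: single-producer/single-consumer message ring in a shared
 * memory region, using acquire/release ordering so the consumer never
 * observes a head advance before the slot contents are visible. */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

#define RING_SLOTS 16u                /* power of two (assumption) */

struct msg { uint32_t type; uint32_t payload; };

struct ring {
    _Atomic uint32_t head;            /* next slot the producer fills */
    _Atomic uint32_t tail;            /* next slot the consumer reads */
    struct msg slots[RING_SLOTS];
};

/* Producer: write the slot first, then publish it with a release store. */
bool ring_put(struct ring *r, const struct msg *m)
{
    uint32_t head = atomic_load_explicit(&r->head, memory_order_relaxed);
    uint32_t tail = atomic_load_explicit(&r->tail, memory_order_acquire);
    if (head - tail == RING_SLOTS)
        return false;                 /* full */
    r->slots[head % RING_SLOTS] = *m;
    atomic_store_explicit(&r->head, head + 1, memory_order_release);
    return true;
}

/* Consumer: acquire-load head so the matching slot write is visible. */
bool ring_get(struct ring *r, struct msg *out)
{
    uint32_t tail = atomic_load_explicit(&r->tail, memory_order_relaxed);
    uint32_t head = atomic_load_explicit(&r->head, memory_order_acquire);
    if (head == tail)
        return false;                 /* empty */
    *out = r->slots[tail % RING_SLOTS];
    atomic_store_explicit(&r->tail, tail + 1, memory_order_release);
    return true;
}
```

A two-way channel is then just two such rings, one per direction, and the signalling function mentioned above would be raised after `ring_put` to wake the other side instead of having it poll.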
On 31 December 2016 at 10:25, Stefan Wallentowitz
<stefan at wallentowitz.de> wrote:
> On 30.12.2016 20:42, Brad Walker wrote:
>> I've noticed there is a port, with bitstreams, of the lowRISC processor
>> to the Nexys Artix-7 FPGA board.
>>
>> Has anyone done any work to get the processor instantiated on the Xilinx
>> Zynq family (i.e. Zedboard/MicroZed) of FPGAs?
>
> Hi Brad,
>
> the first release of lowRISC was actually moving away from the Zynq
> platform, because the original Rocket chip was (is?) tethered to the ARM
> on that. For that we moved to a logic-only FPGA. I think moving back to
> the Zynq and using the AXI (for example with Xilibus or so) to communicate
> with the debug interface instead of the Rocket's HTIF is a neat project.
> If you or anyone else wants to undertake it, I would be happy to assist.
I would tend to agree that the Zynq-based FPGA boards (i.e. the Zedboard,
etc.) have programmable logic that might be a little bit on the low side.
So my thinking was something a little more beefy like the ECP5 from
Lattice, or maybe even doing a custom board with a beefier FPGA.
Does anyone have a utilization report from Vivado so that I can see what the
"final" bitstream usage looks like?