I want to get an FPGA board to use for lowrisc and I'm considering getting
a Nexys Video rather than a Nexys 4 DDR, partly for future-proofing, since
I saw somewhere that untethered lowrisc already uses ~70% of the LUTs of
the Nexys 4 DDR. Would I be making it much harder for myself to get it
running if I go with a Nexys Video? Are there any other drawbacks?
I've started hacking on the pv6 result from GSoC to try to get it running,
since the public Git repository I've found isn't as far along as I had hoped
and still has tons of work on its TODO list. Thanks for the start though, Jeff Rogers!
(Changed the subject to better reflect the current discussion topic)
On 19 July 2016 at 21:55, Wesley Terpstra <wesley(a)sifive.com> wrote:
> I am not saying that we should not collaborate with software
> developers. Certainly we want to reach a solution that is good for
> both the hardware and software implementations. Rather, I don't think
> we should be making a decision based on the fear of not getting
> whatever support is necessary into the kernel. If the software is
> needed for the platform, it will get merged.
Yes, I think we're in agreement here. I totally get the danger of
making decisions that are easy in the short term with regard to the
software work, but that may turn out to bite us further down the road.
>>> Yes I saw that. It's not cumbersome to add it for any given device,
>>> but it does seem a shame if we're now going to end up with three
>>> configuration codepaths for every platform device that might want to
>>> be used on RISC-V as well as other devicetree supporting platforms.
> Right. I just question how many such devices there will be.
Yes, a quick grep of the kernel tree shows ~3k of_property_read_*
instances (and that count hasn't been stripped of references in docs or
comments). A patch to add support for reading config-string values as
well as device-tree values would probably be at least 6k-9k lines. Not
insanely huge, but big enough that you might consider whether there's a
better way of doing this that doesn't require adding lots of
admittedly trivial but likely untested codepaths to the kernel tree.
As you point out, it may be that only a subset of devices are
interesting, which would reduce the amount of necessary work, though
the fact that device-tree is used by a number of SPI and I2C devices
means the number isn't going to be tiny.
> When I first saw the config string, my response was also: why did we
> not use an existing standard. All I want is that we don't end up with
> two orthogonal description languages in a new platform.
I think this is one of the key points. The major criticism of
devicetree in the privileged spec seems to be the binary encoding -
the fact it's big-endian and 32-bit. This doesn't seem like a big deal
on any system capable of running Linux, and I suspect that on smaller
systems runtime config parsing is one of the first things developers
will strip out when they need to save ROM size, but I can understand
not wanting to standardise a format seen as ugly. If RISC-V decides to
reject the flattened device tree encoding, does it really need to
reject the device-tree schema as well? If the schema is the same (or
has a simple mapping) and the formats are trivially convertible, then it
just becomes an implementation detail whether the bootloader or the
Linux kernel itself converts the config-string to a flattened device
tree (arguably the preferred in-memory config representation in the
kernel).
> Ultimately, though, you need to convince the actors who advocate for
> the config string. They argue that DT does not include the fields
> needed for RISC-V and would need to be extended, anyway.
I hadn't seen any arguments that weren't based on the FDT encoding;
I'd certainly be interested in them.
Although my emails might read otherwise, I'd like to be clear that I'm
not really advocating that config-string be dropped in favour of
device-tree. My main concern is maximum compatibility with existing
device drivers in the Linux kernel. I'd argue any new system has a
certain complexity/weirdness budget before people start to really push
back. I'd personally much rather spend this budget on more interesting
user-visible features than on something that seems more of an
implementation detail, which is why converting config-string to
devicetree and exposing that to the kernel seems promising as an
approach that requires minimal driver changes.
> I wonder if one could address most of the concerns about the config
> string by implementing DT API-compatible methods for the config string?
Yes, I've thought this too. Arguably it would be better if there were a
greater abstraction for devices querying config values, i.e.
get_config_val rather than of_property_read. This would also handle the
case of FEX on Allwinner devices (provided the same config keys were
used!). Could a RISC-V system ever want the device-tree versions of
these symbols as well, though? If so, just providing the same API under
duplicate symbol names would be insufficient.
Perhaps someone can comment on the config-string schema and how/if/why
it differs from the device-tree spec?
I'm using the untethered version 0.2 of the lowrisc-chip.
I'm wondering: how can I access (read and write) the PCR registers from
within a Linux application? (For example, to remap the address space or
read the timer.) In bare metal you use a syscall (in syscall.c) to
access those registers in machine mode (set up in crt.S). I don't see
anything similar in the bbl code.
Thanks for your help,
| Francesco Viggiano, Columbia University, Staff associate researcher at
Computer Science Building |
| Phone: +1 646-9829-535, Skype ID: francesco.vgg |
| Department of Computer Science 1214 Amsterdam Avenue |
| Columbia University Mail Code 0401 | New York, NY 10027-7003 |
I understand. There is a lot to do, and I guess one of the primary goals with the FPGA is verification, so that would kind of focus the effort. The same question applies to integration of other Xilinx IP (I'm thinking of Ethernet).
I would be interested to try myself but my time is also limited and I need time to learn Chisel.
I'm currently trying to get a PoC of /another/ software simulation of RISC-V working, for security-mitigation research. I was thinking of basing it on Spike but decided to go down another path with an alternative design. I'm working against the new 1.9 privileged spec and am implementing the tagged TLB, caches, and page walker. Interrupts are a way off... and the spec is getting bigger...
Sent from my iPhone
> On 1/08/2016, at 9:23 PM, Alex Bradbury <asb(a)asbradbury.org> wrote:
> We discussed this idea with Ron Minnich some while back, and I really
> like it. For now it's not a high priority, and as Wei says we have to
> be very selective about where we put our development effort - it's
> definitely something I'd like to see though, and we'd be happy to
> advise anyone who wanted to have a go at implementing it.
Hello lowRISC developers,
currently the SPI bus on the Nexys 4 DDR board is exposed through an
explicit controller block that software can talk (Q)SPI over.
I think that lowRISC could do the following instead:
- After reset, the SPI flash (whose primary purpose is to store a
bitstream) is mapped into the physical address space. Reads from this
area are translated to reads from the flash, writes are either
discarded or cause a trap.
- The existing SPI controller block is made available after a special
control register is written, potentially only after the memory-mapped
flash controller has been disabled. As I understand it, it would be
risky to access the SPI bus through two different controllers more or
less at the same time, because the commands and responses might mix.
- An alternate boot ROM could simply jump into the SPI flash, without
performing any hardware initialization.
The flash part on the Nexys 4 DDR is 16 MiB, of which about 4 MiB are
used for the bitstream, leaving plenty of space for boot firmware and,
for example, a kernel and an initrd.
Separating the boot firmware from the FPGA bitstream has the advantage
that the bitstream doesn't need to be recompiled during boot firmware
development, even if early initialization code is changed. A
disadvantage of using the SPI flash is that it isn't as easily
accessible as the µSD card slot. I also haven't tested whether Vivado
can program the upper 12 MiB of the flash.
What do you think about memory-mapping the flash?