Handling RISC-V config-string in the Linux kernel (Re: [sw-dev] Is a de facto standard memory map helpful or harmful?)
by Alex Bradbury
(Changed the subject to better reflect the current discussion topic)
On 19 July 2016 at 21:55, Wesley Terpstra <wesley(a)sifive.com> wrote:
> I am not saying that we should not collaborate with software
> developers. Certainly we want to reach a solution that is good for
> both the hardware and software implementations. Rather, I don't think
> we should be making a decision based on the fear of not getting
> whatever support is necessary into the kernel. If the software is
> needed for the platform, it will get merged.
Yes, I think we're in agreement here. I totally get the danger of
making decisions that are easy in the short term with regard to
software work but that may turn out to bite us further down the road.
>>> Yes I saw that. It's not cumbersome to add it for any given device,
>>> but it does seem a shame if we're now going to end up with three
>>> configuration codepaths for every platform device that might want to
>>> be used on RISC-V as well as other devicetree supporting platforms.
>
> Right. I just question how many such devices there will be.
Yes, a quick grep of the kernel tree shows ~3k of_property_read_*
instances (and that count hasn't been stripped of hits in docs or
comments). A patch to add support for reading config-string values as
well as device-tree would probably be at least 6k-9k lines. Not
insanely huge, but big enough that you might ask whether there's a
better way of doing this that doesn't require adding lots of
admittedly trivial but likely untested code paths to the kernel tree.
As you point out, it may be that only a subset of devices are
interesting, which would reduce the amount of necessary work - though
the fact that device-tree is used by a number of SPI and I2C drivers
means the number isn't going to be tiny.
> When I first saw the config string, my response was also: why did we
> not use an existing standard. All I want is that we don't end up with
> two orthogonal description languages in a new platform.
I think this is one of the key points. The major criticism of
devicetree in the privileged spec seems to be the binary encoding -
the fact it's big-endian and 32-bit. This doesn't seem like a big deal
on any system capable of running Linux and I suspect that on smaller
systems runtime config parsing is one of the first things developers
will strip out when they need to save ROM size, but I can understand
not wanting to standardise a format seen as ugly. If RISC-V decides to
reject the flattened device tree encoding, does it really need to
reject the device-tree schema as well? If the schema is the same (or
has a simple mapping) and the formats are trivially convertible, then
it just becomes an implementation detail whether the bootloader or the
Linux kernel itself converts the config-string to a flattened device
tree (arguably the preferred in-memory config representation in the
kernel).
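For what it's worth, the bootloader-side conversion really is a small
amount of code. Here's a minimal sketch using libfdt's sequential-write
API, assuming a hypothetical earlier parse step has already turned the
config-string into flat key/value pairs (real node hierarchy and
bindings would obviously need more care):

#include <string.h>
#include <libfdt.h>

struct cfg_pair {
	const char *key;
	const char *val;
};

/* Pack already-parsed key/value pairs into a flattened device tree
 * blob in buf.  Returns 0 or a negative libfdt error code. */
static int build_fdt(void *buf, int bufsize,
		     const struct cfg_pair *pairs, int npairs)
{
	int i, err;

	err = fdt_create(buf, bufsize);
	if (!err)
		err = fdt_finish_reservemap(buf);
	if (!err)
		err = fdt_begin_node(buf, "");	/* root node */
	for (i = 0; !err && i < npairs; i++)
		err = fdt_property_string(buf, pairs[i].key, pairs[i].val);
	if (!err)
		err = fdt_end_node(buf);
	if (!err)
		err = fdt_finish(buf);
	return err;
}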
> Ultimately, though, you need to convince the actors who advocate for
> the config string. They argue that DT does not include the fields
> needed for RISC-V and would need to be extended, anyway.
I hadn't seen any arguments that weren't based on the FDT encoding;
I'd certainly be interested in hearing them.
Although my emails might read otherwise, I'd like to be clear I'm not
really advocating that config-string be dropped in favour of
device-tree. My main concern is maximum compatibility with existing
device drivers in the Linux kernel. I'd argue any new system has a
certain complexity/weirdness budget before people start to really push
back. I'd personally much rather spend this budget on more interesting
user-visible features rather than something that seems more of an
implementation detail, which is why converting config-string to
devicetree and exposing that to the kernel seems promising as an
approach that requires minimal driver changes.
> I wonder if one could address most of the concerns about the config
> string by implementing DT API-compatible methods for the config
> string.
Yes, I've thought this too. Arguably it would be better if there were
greater abstraction for devices querying config values - i.e.
get_config_val rather than of_property_read. This would handle the
case of FEX on Allwinner devices (provided the same config keys were
used!). Is it the case, though, that a RISC-V system would never also
want the device-tree versions of these symbols? If not, just providing
the same API under duplicate symbol names seems insufficient.
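Something like the following is what I have in mind for the
get_config_val abstraction - a rough sketch only, where
config_string_read_u32() is an entirely hypothetical backend that
doesn't exist anywhere today:

#include <linux/of.h>

/* Hypothetical config-string accessor; nothing like this exists in
 * the kernel at the moment. */
extern int config_string_read_u32(const char *path, const char *key,
				  u32 *out);

/* One call for drivers to use, dispatching to whichever hardware
 * description the platform actually provides. */
static int get_config_u32(struct device_node *np, const char *path,
			  const char *key, u32 *out)
{
	if (np)		/* a devicetree node is available */
		return of_property_read_u32(np, key, out);
	/* otherwise fall back to the config-string backend */
	return config_string_read_u32(path, key, out);
}

That sidesteps the duplicate-symbol problem, at the cost of touching
each driver we care about once.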
Perhaps someone can comment on the config-string schema and how/if/why
it differs from the device-tree spec?
http://www.devicetree.org/specifications-pdf
Best,
Alex
7 years
Memory-mapped SPI flash on Nexys4?
by Jonathan Neuschäfer
Hello lowRISC developers,
currently the SPI bus on the Nexys 4 DDR board is exposed through an
explicit controller block that software can talk (Q)SPI over.
I think that lowRISC could do the following instead:
- After reset, the SPI flash (whose primary purpose is to store a
bitstream) is mapped into the physical address space. Reads from this
area are translated to reads from the flash, writes are either
discarded or cause a trap.
- The existing SPI controller block is made available after a special
control register is written, potentially only after the memory-mapped
flash controller has been disabled. As I understand it, it would be
risky to access the SPI bus through two different controllers more or
less at the same time, because the commands and responses might get
mixed up.
- An alternate boot ROM could simply jump into the SPI flash, without
performing any hardware initialization.
The flash part on the Nexys 4 DDR is 16 MiB, of which about 4 MiB are
used for the bitstream, leaving plenty of space for boot firmware and,
for example, a kernel and an initrd.
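To make the proposal a bit more concrete, here is a rough sketch of the
intended flow in C. All addresses are hypothetical placeholders; I
don't know what the real memory map or mode-switch register would look
like:

#include <stdint.h>

#define SPI_FLASH_BASE	0x40000000UL	/* hypothetical flash mapping */
#define SPI_MODE_REG	0x41000000UL	/* hypothetical mode-switch register */
#define FIRMWARE_OFFSET	0x00400000UL	/* skip the ~4 MiB bitstream */

/* The alternate boot ROM would simply jump past the bitstream into
 * firmware stored in the memory-mapped flash. */
static void boot_from_flash(void)
{
	void (*entry)(void) =
		(void (*)(void))(SPI_FLASH_BASE + FIRMWARE_OFFSET);
	entry();
}

/* Later, firmware disables the memory-mapped window and enables the
 * explicit (Q)SPI controller, so the two never drive the bus at once. */
static void switch_to_spi_controller(void)
{
	*(volatile uint32_t *)SPI_MODE_REG = 1;
}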
Separating the boot firmware from the FPGA bitstream has the advantage
that the bitstream doesn't need to be recompiled during boot firmware
development, even if early initialization code is changed. A
disadvantage of using the SPI flash is that it isn't as easily
accessible as the µSD card slot. I also haven't tested whether Vivado
can program the upper 12 MiB of the flash.
What do you think about memory-mapping the flash?
Best regards,
Jonathan Neuschäfer
7 years, 1 month
Re: Is a de facto standard memory map helpful or harmful?
by Richard W.M. Jones
My 2c on this. This is mostly obvious & some points are covered in
the thread already.
(1) Having a standard serial port that is present on all hardware,
located at the same address everywhere, same model everywhere, easy to
drive, available as soon as possible after boot -- just that one thing
would be a massive improvement over ARM.
(2) A single OS image[*] should be able to boot on every piece of
hardware. Like on PCs. I'm not going to say how you should achieve
this, but again taking ideas from PCs may not be a bad idea. I'm
thinking: some tables describing memory (like E820), discoverable and
self-describing hardware (PCIe, USB, etc), ACPI tables[**]. For the
server case, a UEFI bootloader that loads the kernel and provides
services, probably rendering the whole memory map problem moot.
(3) It's a nice idea to reuse infrastructure that Linux, BSD, Windows
already have. Those OSes can read ACPI tables, the code already
exists to parse PCI config space, etc. Reusing that code means you
don't need to write something else and argue why your thing should go
upstream instead of the thing which is already upstream.
There was some discussion about a "config string", but that sounds
like a terrible idea - see point #3. The kernel developers absolutely
do have a say in this when the architecture is brand new and you can
choose not to do something.
Rich.
[*] I mean a single OS image compiled for a specific bit width variant
of RISC-V.
[**] I would say device tree, but ACPI is an actual standard, cross
platform, and also includes a way to run interpreted code which is
useful. MSFT and Red Hat have pushed hard for ACPI in the ARM server
space -- and implemented it -- with at least some (limited) degree of
success.
--
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
virt-top is 'top' for virtual machines. Tiny program with many
powerful monitoring features, net stats, disk stats, logging, etc.
http://people.redhat.com/~rjones/virt-top
7 years, 1 month
PCR Interrupts
by Francesco Viggiano
Hello guys,
I'm using the Untethered version of the lowrisc-chip. Are the interrupt
sources going to the PCR edge-sensitive or level-sensitive? Will the
actual interrupt be sampled and remain pending even if the signal goes
down, or does it need to stay high until the CPU resolves it?
Thank you for your support,
Francesco
--
| Francesco Viggiano, Columbia University, Staff associate researcher at
Computer Science Building |
| Phone: +1 646-9829-535, Skype ID: francesco.vgg |
| Department of Computer Science 1214 Amsterdam Avenue |
| Columbia University Mail Code 0401 | New York, NY 10027-7003 |
7 years, 1 month
Is a de facto standard memory map helpful or harmful?
by Alex Bradbury
One issue that's come up while chatting to people here at the RISC-V
workshop in Boston is avoiding needless arbitrary differences between
platforms. Having different memory maps is an example of this (e.g.
having the PLIC instantiated with a different base offset). However,
there also seems to be the view that some RISC-V implementers will be
unable or unwilling to conform to a shared standard memory layout.
Given this is the case, do we risk actually increasing fragmentation
in the community by exposing memory map details to OS porters and
low-level system programmers as they may come to rely on them? Are
there any other advantages in retaining full flexibility for hardware
implementers to modify their memory map at will?
Should it be RISC-V best practice that any OS porting work doesn't
rely on a fixed memory map, and instead discovers it at boot time
through a device tree or device tree-like description? In cases where
this doesn't make sense, an appropriate C header could be generated from that
description, but the principle that the map isn't hardcoded in the OS
codebase remains. I'm of course thinking beyond Linux here - it would
seem a shame if it ended up that the seL4, FreeRTOS, ... ports did not
work out of the box with any RISC-V implementation.
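As a strawman, the Linux side of this already exists - something like
the sketch below, where the "riscv,plic0" compatible string is just a
guess at what a binding might be called:

#include <linux/ioport.h>
#include <linux/of.h>
#include <linux/of_address.h>

/* Discover the PLIC base address from the devicetree at boot rather
 * than hardcoding it in the port. */
static phys_addr_t find_plic_base(void)
{
	struct device_node *np;
	struct resource res;
	int ret;

	np = of_find_compatible_node(NULL, NULL, "riscv,plic0");
	if (!np)
		return 0;
	ret = of_address_to_resource(np, 0, &res);
	of_node_put(np);
	if (ret)
		return 0;	/* caller must handle "not found" */
	return res.start;
}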
Best,
Alex
7 years, 1 month
ERROR: [Synth 8-439] module 'axi_quad_spi_0' not found [/home/rjones/d/lowrisc-chip/src/main/verilog/chip_top.sv:295]
by Richard W.M. Jones
Probably an elementary problem, but when compiling the FPGA
demo ("make bitstream") for Nexys 4, I get:
ERROR: [Synth 8-439] module 'axi_quad_spi_0' not found [/home/rjones/d/lowrisc-chip/src/main/verilog/chip_top.sv:295]
ERROR: [Synth 8-285] failed synthesizing module 'chip_top' [/home/rjones/d/lowrisc-chip/src/main/verilog/chip_top.sv:6]
Is axi_quad_spi a piece of Xilinx IP? I'm afraid this is where my
knowledge of what's going on gets hazy.
I'm using:
- Fedora 24 on x86_64 host
- Vivado v2016.2 (64 bit of course)
- untether-v0.2 (not for any particular reason, I could try 0.3)
Everything else up to this point has worked pretty much fine.
Rich.
--
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
libguestfs lets you edit virtual machines. Supports shell scripting,
bindings from many languages. http://libguestfs.org
7 years, 1 month
Uncachable data
by Francesco Viggiano
Hello guys,
I need to write some data directly to main memory, without the risk
that some parts remain in the caches. Is there a way to invalidate the
caches with an ASM instruction, or to set a region as uncacheable?
Unfortunately, I cannot map DDR into I/O space as you do for copying
bbl from the SD card to main memory.
Thank you so much,
Francesco
--
| Francesco Viggiano, Columbia University, Staff associate researcher at
Computer Science Building |
| Phone: +1 646-9829-535, Skype ID: francesco.vgg |
| Department of Computer Science 1214 Amsterdam Avenue |
| Columbia University Mail Code 0401 | New York, NY 10027-7003 |
7 years, 1 month
lowRISC versus the rest of the RISCV world (rant?)
by Reinoud Zandijk
Dear folks,
with horror I've read the current DRAM/MMIO/devtree/etc discussion.
We're really missing the point of what lowRISC can be and, in a sense,
what RISC-V can become.
The current discussion boils down to defining a new mode of transportation,
complete with infrastructure, but at the same time demanding that it have 4
wheels, run on petrol, have a customized Ford engine and seat 4. If you get
my drift, that's not really designing a new mode of transportation but
defining yet another car. And if it's again just another car, why bother
developing it when the only thing that distinguishes it from the competition
is the fact that it has a good service manual?
We are now in the unique position of being able to create a new software
and hardware ecosystem, so why settle for the fluke of the day? I can't
speak for the rest of the RISC-V community, and they might have good
reasons to do it their way - who am I to question that - but for lowRISC
we ought to aim a LOT higher.
(shameless plug here :)
Hence my proposal I posted earlier; it deals with virtualisation, device
discovery, extended debugging, hypervisor, security, strict memory
isolation, minions, software devices, etc. It could very well be extended
with NOMA, etc.
Less drastic solutions might be fine too, and lots of variants are
possible, so feel free to comment on it or try to shoot it down - but at
least it's an effort to create something new and not follow the beaten
track of, say, ARM.
With regards,
Reinoud
7 years, 2 months