It looks like you missed specifying the KERNEL=hw argument.
For a quick start, I recommend getting your toes wet with Qemu. E.g., build and run the simple log scenario at repos/base/run/log.run via
build/riscv$ make run/log KERNEL=hw BOARD=virt_qemu_riscv
To enable Genode on a new hardware platform, please consider giving the Genode Platforms book a read. Even though it might not be directly applicable to RISC-V, it will point you to the right places and provide a guard rail of steps to follow.
I can build core and bootstrap for virt_qemu_riscv, as well as migv, successfully using the referenced genodelabs/genode-riscv repository that is enabled within my build configuration. Maybe you’ve used incompatible branches of genode and genode-riscv? Or what exactly do you mean by:
@bog_dan_ro: Yes, RISC-V support needs a little love. When I started this project ten years ago, RISC-V looked very promising. As of today, we still have not come across a real-life use case where RISC-V has been required or asked for. So in case you want to move RISC-V forward on Genode, I am all ears and willing to help where I can.
@bog_dan_ro: Okay, I have installed a Debian 13 VM and tried your genode-docker-build. For me, there were two show-stoppers in your build_genode.sh file:
1. I could not clone https://github.com/genodelabs/genode.git into the root directory (/) of the Docker image; there was always some corruption. I therefore cloned everything to /home/test (this might work for you, though).
2. https://github.com/ssumpf/genode-riscv.git is my development branch from 5 years ago and will not work. You need to change that to the official Genode repository https://github.com/genodelabs/genode-riscv.git, which is where all the results of the development work go.
After 1. and 2., make core bootstrap worked for me.
I probably did something very stupid, but I can’t figure out what; any help would be highly appreciated.
BTW, on RISC-V, U-Boot passes the hart ID in register a0 and the FDT (DTB) pointer in register a1.
Isn’t it better to parse the FDT in bootstrap and obtain all the needed info (e.g., RAM base, RAM size, CPU count, interrupt controller, etc.) instead of hard-coding it?
Some RISC-V boards even support RAM sticks (e.g., the Milk-V Titan), whose configuration can’t be hard-coded…
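Since the FDT blob layout is fixed by the devicetree specification (all header fields big-endian, magic value 0xd00dfeed), a first sanity check on the pointer handed over in a1 could be as small as the following sketch. The names are illustrative only, not Genode API:

```cpp
#include <cstdint>

/* Hypothetical helper for validating a flattened device tree (FDT)
 * handed over by the boot loader. Header fields are big-endian
 * 32-bit words per the devicetree specification. */
namespace Fdt {

	inline uint32_t be32(uint8_t const *p)
	{
		return (uint32_t(p[0]) << 24) | (uint32_t(p[1]) << 16)
		     | (uint32_t(p[2]) <<  8) |  uint32_t(p[3]);
	}

	enum { MAGIC = 0xd00dfeed };

	/* return the blob's total size in bytes, or 0 if 'dtb' does not
	 * point to a valid FDT header */
	inline uint32_t valid_blob_size(void const *dtb)
	{
		uint8_t const *p = static_cast<uint8_t const *>(dtb);
		if (be32(p) != MAGIC)
			return 0;
		return be32(p + 4);  /* 'totalsize' is the second header field */
	}
}
```

Such a check costs a handful of instructions and lets bootstrap fall back to compiled-in defaults when no (valid) DTB is present.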
Congratulations on already passing the bootstrap part and finally executing within the kernel.
That address is of minor interest. But the ip (instruction pointer) in the above kernel panic’s register dump is probably interesting (0xffffffc000029b8e), as it led to the page fault.
When skimming through your enabling commit, I noticed that your RAM description in src/include/hw/spec/riscv/milkv_duo_board.h does not match the board’s RAM definition in the device-tree sources that you’ve committed, too. I think it should be:
RAM_BASE = 0x80000000,
RAM_SIZE = 0x3f40000,
...
Or are there other restrictions that forced you to use this small chunk of ~28 MiB?
It would obviously be more convenient to parse FDTs in bootstrap. Until now, we have decided against it. For the few pieces of information that we need in the kernel (RAM, timer, IRQ controller, CPUs), which were always fixed (till now), the trade-off between adding parsing routines and having a few lines of address definitions was answered by following the second path.
I wouldn’t be reluctant to support it in general, as long as the FDT parser is very simple, not a moving target, and doesn’t affect the TCB much. Anyway, when comparing this convenience enhancement with other open issues, I’m hesitant to make any promises yet.
Autoboot in 5 seconds
=> run genode
ethernet@50400000 Waiting for PHY auto negotiation to complete..... done
BOOTP broadcast 1
DHCP client bound to address 192.168.0.159 (7 ms)
Using ethernet@50400000 device
TFTP from server 192.168.0.31; our IP address is 192.168.0.159
Filename '/image-hw.elf'.
Load address: 0x84000000
Loading: #################################################################
#################################################################
###############
9.6 MiB/s
done
Bytes transferred = 2114608 (204430 hex)
Automatic boot of image at addr 0x84000000 ...
Wrong Image Format for bootp command
ERROR: can't get kernel image!
## Starting application at 0x81000000 ...
Kernel: Hello from boostrap !
void Kernel::main_initialize_and_handle_kernel_entry()
kernel initialized
Genode 25.11-56-g8e6112c491 <local changes>
32760 MiB RAM and 64536 caps assigned to init
[init -> test-log] hex range: [0e00,1680)
[init -> test-log] empty hex range: [0abc0000,0abc0000) (empty!)
[init -> test-log] hex range to limit: [f8,ff]
[init -> test-log] invalid hex range: [f8,08) (overflow!)
[init -> test-log] negative hex char: 0xfe
[init -> test-log] positive hex char: 0x02
[init -> test-log] Alloc_error value: OUT_OF_RAM
[init -> test-log] floating point: 1.70
[init -> test-log] multiarg string: "parent -> child.7"
[init -> test-log] String(Hex(3)): 0x3
[init -> test-log] Very long messages:
[init -> test-log -> log] 1.................................................................................................................................................................................................................................
....2
[init -> test-log] 3.....................................................................................................................................................................................................................................4
[init -> test-log] 5.....................................................................................................................................................................................................................................6
[init -> test-log]
[init -> test-log] Test done.
There are a few problems that I’m going to investigate this weekend:
It only works with one CPU. My hunch is that it hangs because the code assumes the boot hart is always hart 0, which is not true; it can be any of the available harts.
I need to figure out where TIMER_HZ is used and how important it is. timer_accuracy hangs at startup; the same goes for test-init:
## Starting application at 0x81000000 ...
Kernel: Hello from boostrap !
void Kernel::main_initialize_and_handle_kernel_entry()
kernel initialized
Genode 25.11-56-g8e6112c491 <local changes>
32760 MiB RAM and 64536 caps assigned to init
[init -> test-timer_accuracy]
## Starting application at 0x81000000 ...
Kernel: Hello from boostrap !
void Kernel::main_initialize_and_handle_kernel_entry()
kernel initialized
Genode 25.11-56-g8e6112c491 <local changes>
32760 MiB RAM and 64536 caps assigned to init
[init -> test -> test-init] step 0 (sleep)
Regarding the FDT: IMHO, parsing the FDT in bootstrap will make the “core” part super portable; it will not need anything hard-coded, and it will also save people like me a lot of pain and tears…
The FDT format should be quite simple; I’ll give it a try and come back to you with a merge request.
Last but not least, is it possible to create a more complex image with some command-line tools (e.g., a shell)?
Yes, support for several CPUs was not yet addressed when doing the initial port to this architecture. At that time, if I remember correctly, the FPGA-based hardware only had one CPU core.
TIMER_HZ is used in the kernel-timer implementation for RISC-V, which uses the rdtime instruction. The timer frequency is hard-coded via the TIMER_HZ variable. If its value is wrong, kernel time will be skewed, and that time is also used via syscall by the userland timer.
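To illustrate why the constant matters: converting raw rdtime ticks to wall-clock time divides by this frequency, so a wrong TIMER_HZ scales every timeout in the system. A minimal sketch, assuming a hypothetical 25 MHz timebase (the real value is the board's /cpus/timebase-frequency):

```cpp
#include <cstdint>

/* Illustrative only: TIMER_HZ must match the board's
 * /cpus/timebase-frequency; 25 MHz is an assumed example value. */
constexpr uint64_t TIMER_HZ = 25'000'000;

/* Convert raw rdtime ticks to microseconds. Splitting the computation
 * into quotient and remainder avoids overflowing the intermediate
 * product for large tick counts. */
inline uint64_t ticks_to_us(uint64_t ticks)
{
	return (ticks / TIMER_HZ) * 1'000'000
	     + (ticks % TIMER_HZ) * 1'000'000 / TIMER_HZ;
}
```

If TIMER_HZ were off by, say, a factor of two, a requested 1-second sleep would last half a second or two seconds, which matches the hanging timer_accuracy symptom.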
Wow, more memory reserved than available
Well, in general yes. Our libc and lwIP library ports are riscv-ready and tested nightly. For the qemu riscv “board” we also have a bunch of drivers ready to run interactive and network scenarios. But for your board target you will need either some drivers for input and framebuffer, or at least a network driver to support more sophisticated scenarios.
I just tried the bash.run and vim.run scripts using Qemu, which in general work. (Although I noticed that the “cap” quota for the terminal is too low in both examples on this platform.) For both examples you need a drivers_interactive-[board] depot package, which spawns a sub-system running the input and framebuffer drivers. You can have a look at similarly labeled packages as a blueprint.
I thought it was the CPU frequency, but it’s the timer frequency (/cpus/timebase-frequency from the DTS/FDT). I set it to the correct value and all the tests work now! Thanks!
AFAIK, Qemu has had multicore support for years.
Last weekend I created a simple FDT parser that extracts all the info (boot hart ID, CPU count, memory regions, timer Hz & interrupt-controller memory region) that is currently hard-coded. I need help with two things:
1. Do you have a .clang-format file for your code style? I’d like to run it over my code before I submit the patch, as my IDE (Qt Creator) doesn’t know the code style used in Genode, so it broke most of the new/changed lines.
2. I need some help passing the extracted info to the core part and making use of it.
Probably; I was referring to the FPGA-based hardware we used when we initially supported RISC-V. But anyway, we just have not addressed multi-core on RISC-V yet.
I do not want to slow down your enthusiasm, but I want to prevent you from being disappointed by expecting too much of this.
To be clear: I think it would be good to provide the possibility of DTB parsing when enabling new hardware in general. There are probably good targets for this, like configurable Qemu targets or unknown RAM configurations. Therefore, I like your idea of providing a simple parser - as a library/tool - and potentially your board target(s) as a blueprint for using it. Then everybody who wants to benefit from it when enabling new hardware can borrow it from there. And maybe we’ll use it in mainline at some point, e.g., to ease the adaptation of the Qemu target.
On the other hand, for a bunch of hardware and scenarios it is enough to keep the most simple, low-complexity variant of having 3-4 constant values compiled in. We also have scenarios where the board’s peripherals and memory are strictly split into a (TrustZone) secure Genode enclave and a “non-secure” other-OS side, where memory values derived from the DTB would even get in the way. We might have booting environments without a DTB at all. Providing the ability to configure (here in the form of a DTB) can also mislead the user and result in frustration when adapting the configuration does not lead to the expected result. Remember the example from above: when you add additional CPU cores although the implementation doesn’t support that, users will likely think that all other definitions within the DTB are evaluated as well. We already have a bunch of ARM hardware where we would need to test and adapt the codebase if DTB evaluation became a mandatory part of the Genode framework. For all of these reasons, we will not make it a mandatory part of the framework, at least for now.
Not that I know of.
Well, memory can already be added within the bootstrap code in a dynamic fashion, as can CPUs; please have a look at the x86 architecture as a potential blueprint. There, the memory ranges are provided via multiboot information, and the concrete CPU cores are parsed from ACPI information. Regarding the timer frequency, you might have a look at the ARM generic_timer in repos/base-hw/src/core/spec/arm/generic_timer.*, which also reads out the frequency at runtime.
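For the DTB route, the RAM base and size ultimately come from the memory node's "reg" property. A minimal decoding sketch, assuming #address-cells = 2 and #size-cells = 2 as is typical on 64-bit RISC-V (cells are big-endian 32-bit values; the example values echo the Milk-V Duo numbers discussed earlier, and all names are illustrative, not Genode API):

```cpp
#include <cstdint>

struct Region { uint64_t base, size; };

/* Read a big-endian pair of 32-bit cells (8 bytes) as one 64-bit value. */
inline uint64_t be_cells(uint8_t const *p)
{
	uint64_t v = 0;
	for (int i = 0; i < 8; i++)
		v = (v << 8) | p[i];
	return v;
}

/* Decode one (base, size) pair from a memory node's "reg" property,
 * assuming #address-cells = 2 and #size-cells = 2. */
inline Region reg_to_region(uint8_t const *reg)
{
	return { be_cells(reg), be_cells(reg + 8) };
}
```

A "reg" property may carry several such pairs back to back (multiple RAM banks), so a real parser would iterate over the property value in 16-byte steps.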