Radxa Rock 5C #41

Open

geerlingguy opened this issue Apr 4, 2024 · 33 comments

@geerlingguy commented Apr 4, 2024

[Photo: radxa-rock-5c-sbc]

Basic information

  • Board URL (official): https://radxa.com/products/rock5/5c/
  • Board purchased from: Arace
  • Board purchase date: April 4, 2024
  • Board specs (as tested): 4GB RAM
  • Board price (as tested): $59.90

Linux/system information

# output of `neofetch`
       _,met$$$$$gg.          radxa@rock-5c 
    ,g$$$$$$$$$$$$$$$P.       ------------- 
  ,g$$P"     """Y$$.".        OS: Debian GNU/Linux 12 (bookworm) aarch64 
 ,$$P'              `$$$.     Host: Radxa ROCK 5C 
',$$P       ,ggs.     `$$b:   Kernel: 6.1.43-7-rk2312 
`d$$'     ,$P"'   .    $$$    Uptime: 1 min 
 $$P      d$'     ,    $$P    Packages: 1830 (dpkg) 
 $$:      $$.   -    ,d$$'    Shell: bash 5.2.15 
 $$;      Y$b._   _,d$P'      Terminal: /dev/pts/0 
 Y$$.    `.`"Y$$$$P"'         CPU: (8) @ 1.800GHz 
 `$$b      "-.__              Memory: 341MiB / 3921MiB 
  `Y$$
   `Y$$.                                              
     `$$b.                                            
       `Y$$b.
          `"Y$b._
              `"""

# output of `uname -a`
Linux rock-5c 6.1.43-7-rk2312 #a5661faa3 SMP Thu May 16 08:29:55 UTC 2024 aarch64 GNU/Linux

Benchmark results

CPU

Power

  • Idle power draw (at wall): 1.6 W (2W with HDMI and keyboard plugged in)
  • Maximum simulated power draw (stress-ng --matrix 0): 9.5 W
  • During Geekbench multicore benchmark: 10 W
  • During top500 HPL benchmark: 12.4 W
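For reference, the load behind the "maximum simulated power draw" figure is a stress-ng one-liner (a minimal sketch; --matrix 0 spawns one matrix worker per CPU core):

# synthetic all-core CPU load used for the max power draw measurement
stress-ng --matrix 0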

Disk

Samsung Pro Plus 512GB A2 microSD card

| Benchmark | Result |
| --- | --- |
| iozone 4K random read | 23.62 MB/s |
| iozone 4K random write | 7.24 MB/s |
| iozone 1M random read | 88.42 MB/s |
| iozone 1M random write | 63.65 MB/s |
| iozone 1M sequential read | 88.39 MB/s |
| iozone 1M sequential write | 63.66 MB/s |

Pinedrive 256GB 2242 NVMe via Pi 5 PCIe HAT

| Benchmark | Result |
| --- | --- |
| iozone 4K random read | 51.84 MB/s |
| iozone 4K random write | 160.88 MB/s |
| iozone 1M random read | 371.43 MB/s |
| iozone 1M random write | 366.07 MB/s |
| iozone 1M sequential read | 352.27 MB/s |
| iozone 1M sequential write | 365.62 MB/s |

curl https://raw.githubusercontent.com/geerlingguy/pi-cluster/master/benchmarks/disk-benchmark.sh | sudo bash

Run the benchmark on any attached storage device (e.g. eMMC, microSD, NVMe, SATA) and add results under an additional heading. Download the script with curl -o disk-benchmark.sh [URL_HERE] and run sudo DEVICE_UNDER_TEST=/dev/sda DEVICE_MOUNT_PATH=/mnt/sda1 ./disk-benchmark.sh (assuming the device is sda).
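A minimal sketch of that two-step flow (the device and mount path below are placeholders; substitute your own):

# download the script once, then run it against the device under test
curl -o disk-benchmark.sh https://raw.githubusercontent.com/geerlingguy/pi-cluster/master/benchmarks/disk-benchmark.sh
chmod +x disk-benchmark.sh
sudo DEVICE_UNDER_TEST=/dev/sda DEVICE_MOUNT_PATH=/mnt/sda1 ./disk-benchmark.sh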

Also consider running the PiBenchmarks.com script.

Network

iperf3 results:

  • iperf3 -c $SERVER_IP: 937 Mbps
  • iperf3 --reverse -c $SERVER_IP: 881 Mbps
  • iperf3 --bidir -c $SERVER_IP: 930 Mbps up, 419 Mbps down

(Be sure to test all interfaces, noting any that are non-functional.)
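For anyone reproducing these numbers, a minimal sketch of the three runs (start iperf3 in server mode on another machine first; $SERVER_IP is that machine's address):

# on the other end of the link
iperf3 -s

# on the board under test
iperf3 -c $SERVER_IP             # transmit: board -> server
iperf3 --reverse -c $SERVER_IP   # receive: server -> board
iperf3 --bidir -c $SERVER_IP     # both directions simultaneously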

GPU

glmark2-es2 results:

arm_release_ver: g13p0-01eac0, rk_so_ver: 10
=======================================================
    glmark2 2023.01
=======================================================
    OpenGL Information
    GL_VENDOR:      ARM
    GL_RENDERER:    Mali-G610
    GL_VERSION:     OpenGL ES 3.2 v1.g13p0-01eac0.68603db295fbf2c59ac6b927fdfb1c32
    Surface Config: buf=32 r=8 g=8 b=8 a=8 depth=24 stencil=0 samples=0
    Surface Size:   800x600 windowed
=======================================================
[build] use-vbo=false: FPS: 287 FrameTime: 3.488 ms
[build] use-vbo=true: FPS: 321 FrameTime: 3.116 ms
[texture] texture-filter=nearest: FPS: 332 FrameTime: 3.014 ms
[texture] texture-filter=linear: FPS: 324 FrameTime: 3.087 ms
[texture] texture-filter=mipmap: FPS: 325 FrameTime: 3.081 ms
[shading] shading=gouraud: FPS: 296 FrameTime: 3.380 ms
[shading] shading=blinn-phong-inf: FPS: 298 FrameTime: 3.360 ms
[shading] shading=phong: FPS: 280 FrameTime: 3.582 ms
[shading] shading=cel: FPS: 291 FrameTime: 3.442 ms
[bump] bump-render=high-poly: FPS: 223 FrameTime: 4.490 ms
[bump] bump-render=normals: FPS: 336 FrameTime: 2.985 ms
[bump] bump-render=height: FPS: 332 FrameTime: 3.018 ms
[effect2d] kernel=0,1,0;1,-4,1;0,1,0;: FPS: 311 FrameTime: 3.219 ms
[effect2d] kernel=1,1,1,1,1;1,1,1,1,1;1,1,1,1,1;: FPS: 251 FrameTime: 3.987 ms
[pulsar] light=false:quads=5:texture=false: FPS: 328 FrameTime: 3.050 ms
[desktop] blur-radius=5:effect=blur:passes=1:separable=true:windows=4: FPS: 190 FrameTime: 5.285 ms
[desktop] effect=shadow:windows=4: FPS: 282 FrameTime: 3.558 ms
[buffer] columns=200:interleave=false:update-dispersion=0.9:update-fraction=0.5:update-method=map: FPS: 165 FrameTime: 6.069 ms
[buffer] columns=200:interleave=false:update-dispersion=0.9:update-fraction=0.5:update-method=subdata: FPS: 156 FrameTime: 6.452 ms
[buffer] columns=200:interleave=true:update-dispersion=0.9:update-fraction=0.5:update-method=map: FPS: 182 FrameTime: 5.519 ms
[ideas] speed=duration: FPS: 245 FrameTime: 4.092 ms
[jellyfish] <default>: FPS: 247 FrameTime: 4.062 ms
[terrain] <default>: FPS: 87 FrameTime: 11.625 ms
[shadow] <default>: FPS: 272 FrameTime: 3.687 ms
[refract] <default>: FPS: 124 FrameTime: 8.117 ms
[conditionals] fragment-steps=0:vertex-steps=0: FPS: 312 FrameTime: 3.210 ms
[conditionals] fragment-steps=5:vertex-steps=0: FPS: 302 FrameTime: 3.321 ms
[conditionals] fragment-steps=0:vertex-steps=5: FPS: 309 FrameTime: 3.244 ms
[function] fragment-complexity=low:fragment-steps=5: FPS: 314 FrameTime: 3.186 ms
[function] fragment-complexity=medium:fragment-steps=5: FPS: 281 FrameTime: 3.565 ms
[loop] fragment-loop=false:fragment-steps=5:vertex-steps=5: FPS: 484 FrameTime: 2.070 ms
[loop] fragment-steps=5:fragment-uniform=false:vertex-steps=5: FPS: 318 FrameTime: 3.146 ms
[loop] fragment-steps=5:fragment-uniform=true:vertex-steps=5: FPS: 307 FrameTime: 3.263 ms
=======================================================
                                  glmark2 Score: 275 
=======================================================

IMPORTANT NOTE: This test was run using the newest test release of Debian from Rockchip after the board was released. Apparently the GPU is better supported right now in Armbian builds with some custom patches applied. I have not run the board with that configuration yet, but it may be better to run Armbian if you want GPU acceleration.

TODO: See this issue for discussion about a full suite of standardized GPU benchmarks.

Memory

tinymembench results:

tinymembench v0.4.10 (simple benchmark for memory throughput and latency)

==========================================================================
== Memory bandwidth tests                                               ==
==                                                                      ==
== Note 1: 1MB = 1000000 bytes                                          ==
== Note 2: Results for 'copy' tests show how many bytes can be          ==
==         copied per second (adding together read and writen           ==
==         bytes would have provided twice higher numbers)              ==
== Note 3: 2-pass copy means that we are using a small temporary buffer ==
==         to first fetch data into it, and only then write it to the   ==
==         destination (source -> L1 cache, L1 cache -> destination)    ==
== Note 4: If sample standard deviation exceeds 0.1%, it is shown in    ==
==         brackets                                                     ==
==========================================================================

 C copy backwards                                     :  12007.0 MB/s (2.9%)
 C copy backwards (32 byte blocks)                    :  11959.5 MB/s
 C copy backwards (64 byte blocks)                    :  11977.8 MB/s
 C copy                                               :  12239.5 MB/s
 C copy prefetched (32 bytes step)                    :  12564.9 MB/s
 C copy prefetched (64 bytes step)                    :  12588.2 MB/s
 C 2-pass copy                                        :   5405.8 MB/s (0.3%)
 C 2-pass copy prefetched (32 bytes step)             :   9733.2 MB/s
 C 2-pass copy prefetched (64 bytes step)             :  10402.0 MB/s
 C fill                                               :  29606.2 MB/s (0.2%)
 C fill (shuffle within 16 byte blocks)               :  29573.4 MB/s (0.2%)
 C fill (shuffle within 32 byte blocks)               :  29646.4 MB/s (0.2%)
 C fill (shuffle within 64 byte blocks)               :  29508.5 MB/s (0.2%)
 NEON 64x2 COPY                                       :  12423.9 MB/s
 NEON 64x2x4 COPY                                     :  12449.7 MB/s
 NEON 64x1x4_x2 COPY                                  :  12495.1 MB/s
 NEON 64x2 COPY prefetch x2                           :  11621.7 MB/s
 NEON 64x2x4 COPY prefetch x1                         :  11957.2 MB/s
 NEON 64x2 COPY prefetch x1                           :  11673.7 MB/s
 NEON 64x2x4 COPY prefetch x1                         :  11954.6 MB/s
 ---
 standard memcpy                                      :  12507.5 MB/s
 standard memset                                      :  29477.5 MB/s
 ---
 NEON LDP/STP copy                                    :  12537.3 MB/s
 NEON LDP/STP copy pldl2strm (32 bytes step)          :  12434.3 MB/s
 NEON LDP/STP copy pldl2strm (64 bytes step)          :  12462.2 MB/s
 NEON LDP/STP copy pldl1keep (32 bytes step)          :  12568.9 MB/s
 NEON LDP/STP copy pldl1keep (64 bytes step)          :  12571.8 MB/s
 NEON LD1/ST1 copy                                    :  12451.5 MB/s
 NEON STP fill                                        :  29455.0 MB/s (0.2%)
 NEON STNP fill                                       :  29431.4 MB/s
 ARM LDP/STP copy                                     :  12513.3 MB/s
 ARM STP fill                                         :  29506.0 MB/s (0.2%)
 ARM STNP fill                                        :  29458.7 MB/s

==========================================================================
== Memory latency test                                                  ==
==                                                                      ==
== Average time is measured for random memory accesses in the buffers   ==
== of different sizes. The larger is the buffer, the more significant   ==
== are relative contributions of TLB, L1/L2 cache misses and SDRAM      ==
== accesses. For extremely large buffer sizes we are expecting to see   ==
== page table walk with several requests to SDRAM for almost every      ==
== memory access (though 64MiB is not nearly large enough to experience ==
== this effect to its fullest).                                         ==
==                                                                      ==
== Note 1: All the numbers are representing extra time, which needs to  ==
==         be added to L1 cache latency. The cycle timings for L1 cache ==
==         latency can be usually found in the processor documentation. ==
== Note 2: Dual random read means that we are simultaneously performing ==
==         two independent memory accesses at a time. In the case if    ==
==         the memory subsystem can't handle multiple outstanding       ==
==         requests, dual random read has the same timings as two       ==
==         single reads performed one after another.                    ==
==========================================================================

block size : single random read / dual random read
      1024 :    0.0 ns          /     0.0 ns 
      2048 :    0.0 ns          /     0.0 ns 
      4096 :    0.0 ns          /     0.0 ns 
      8192 :    0.0 ns          /     0.0 ns 
     16384 :    0.0 ns          /     0.0 ns 
     32768 :    0.0 ns          /     0.0 ns 
     65536 :    0.0 ns          /     0.0 ns 
    131072 :    1.1 ns          /     1.5 ns 
    262144 :    2.2 ns          /     2.8 ns 
    524288 :    3.5 ns          /     4.0 ns 
   1048576 :   10.2 ns          /    13.2 ns 
   2097152 :   14.0 ns          /    15.9 ns 
   4194304 :   58.6 ns          /    92.5 ns 
   8388608 :  142.7 ns          /   198.5 ns 
  16777216 :  188.1 ns          /   233.3 ns 
  33554432 :  212.1 ns          /   245.3 ns 
  67108864 :  225.3 ns          /   251.5 ns 

sbc-bench results

https://sprunge.us/5pv8oh

Phoronix Test Suite

Results from pi-general-benchmark.sh:

  • pts/encode-mp3: 12.496 sec
  • pts/x264 4K: 4.22 fps
  • pts/x264 1080p: 20.90 fps
  • pts/phpbench: 413840
  • pts/build-linux-kernel (defconfig): 1787.702 sec
@ThomasKaiser

Radxa put a Rock-5-ITX and a Rock 5C Lite (RK3582, with some SoC cores disabled) in the mail. Interestingly, idle consumption of the 5C Lite is slightly higher than on my 5B (with the same RPi USB-C power brick on the same Netio powermeter):

root@rock-5c:/home/radxa# Netio=192.168.83.72/4 sbc-bench.sh -m
Power monitoring on socket 4 of powerbox-1 (Netio 4KF, FW v3.2.0, XML API v2.4, 235.23V @ 50.02Hz)

Rockchip RK3582 / 35 82 12 fe 21 41  32 47 41 31 00 00 00 00, Kernel: aarch64, Userland: arm64

CPU sysfs topology (clusters, cpufreq members, clockspeeds)
                 cpufreq   min    max
 CPU    cluster  policy   speed  speed   core type
  0        0        0      408    1800   Cortex-A55 / r2p0
  1        0        0      408    1800   Cortex-A55 / r2p0
  2        0        0      408    1800   Cortex-A55 / r2p0
  3        0        0      408    1800   Cortex-A55 / r2p0
  4        1        4      408    2400   Cortex-A76 / r4p0
  5        1        4      408    2400   Cortex-A76 / r4p0

Thermal source: /sys/devices/virtual/thermal/thermal_zone0/ (soc-thermal)

Time       big.LITTLE   load %cpu %sys %usr %nice %io %irq   Temp      mW
20:28:01:  408/ 600MHz  0.06   1%   1%   0%   0%   0%   0%  34.2°C     1640
20:28:06:  408/1800MHz  0.06   2%   1%   0%   0%   0%   0%  35.2°C     1640
20:28:11:  408/ 600MHz  0.05   0%   0%   0%   0%   0%   0%  35.2°C     1630
20:28:16:  408/ 600MHz  0.05   1%   1%   0%   0%   0%   0%  35.2°C     1630
20:28:21:  408/ 600MHz  0.04   1%   1%   0%   0%   0%   0%  34.2°C     1640
20:28:26:  600/ 600MHz  0.12   2%   1%   0%   0%   0%   0%  35.2°C     1660
20:28:31:  408/ 600MHz  0.11   1%   0%   0%   0%   0%   0%  35.2°C     1670
20:28:37:  408/ 600MHz  0.10   1%   1%   0%   0%   0%   0%  35.2°C     1660
20:28:42:  408/ 600MHz  0.09   2%   1%   0%   0%   0%   0%  35.2°C     1680
20:28:47:  408/1800MHz  0.08   1%   0%   0%   0%   0%   0%  35.2°C     1690
20:28:52:  408/1800MHz  0.08   1%   0%   0%   0%   0%   0%  35.2°C     1700
20:28:57:  408/ 600MHz  0.07   1%   1%   0%   0%   0%   0%  35.2°C     1710
20:29:02:  408/ 408MHz  0.23   1%   1%   0%   0%   0%   0%  35.2°C     1720
20:29:07:  408/1800MHz  0.21   1%   1%   0%   0%   0%   0%  35.2°C     1720
20:29:13:  408/ 600MHz  0.19   1%   1%   0%   0%   0%   0%  35.2°C     1720

And since you seem to have the RK3588S2 variant, can you please post the NVMEM contents from your board once you start testing?

For my RK3582 it looks like this:

root@rock-5c:/home/radxa# RK_NVMEM_FILE="$(find /sys/bus/nvmem/devices/rockchip*/* -name nvmem 2>/dev/null | head -n1)"
root@rock-5c:/home/radxa# hexdump -C <"${RK_NVMEM_FILE}"
00000000  52 4b 35 82 12 fe 21 41  32 47 41 31 00 00 00 00  |RK5...!A2GA1....|
00000010  00 00 00 00 18 15 04 07  07 09 20 0d 08 00 04 00  |.......... .....|
00000020  00 00 00 00 00 00 00 00  07 0a 00 00 00 c1 37 9b  |..............7.|
00000030  4e e6 07 00 00 a2 06 b3  06 00 00 91 03 00 00 00  |N...............|
00000040  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|

@geerlingguy

@ThomasKaiser - Just got shipment notification today, so I'll test when it gets here!

@geerlingguy commented May 16, 2024

...and it finally arrived today. I have RS131-D4R26, Radxa ROCK 5C 4GB, and I went to the Getting Started link on the box (https://rock.sh/5c), but can't find any place to download an image for the board. The downloads section of the page just has:

[Screenshot 2024-05-16 at 9 02 00 AM]

I've asked over on the Radxa forums where I can find an image to boot the board and test it. I also don't see anything Armbian-wise for the Rock 5C yet, so I'm not sure where to go to get something to boot!

The downloads section of Radxa's site also doesn't have any images available, just an SPL Loader.

@geerlingguy

It's also interesting that the packaging says "Radxa Compute Module" and "ROCKPI 5C", whereas the board itself says "Radxa ROCK 5C V1.1" and the official name seems to be Rock 5C. Maybe an older packaging design made it through to production, but it did make me do a double take, like I had ordered a weird model or something.

@geerlingguy

@ThomasKaiser - Thanks! It'd be nice if they linked to those anywhere on the Radxa site / Wiki :)

@ThomasKaiser

Well, to arrive at Radxa's correct download locations, it's quite a journey through various GitHub/wiki pages that contain deprecation notices, further links, and so on. Also a bit worrying that the device is shipped to customers now and still only images labeled 'for internal testing' are available :)

BTW: I would either choose the t3 build from https://github.com/radxa-build/rock-5c-6_1/releases or the Armbian flavour of your choice. But I have no idea whether the 6.1 builds work correctly aside from headless use (that's all I've tried so far with my 5C Lite).

@geerlingguy commented May 16, 2024

Also a bit worrying that the device is shipped to customers now and still only images labeled 'for internal testing' are available.

Radxa seems to still be in the 'ship hardware, then customers help us get the software side running' stage. I like their hardware, but it's impossible to recommend them over something like the Pi until they get out of that mindset. At minimum, a fully tested build should be ready before the first board ships (and it should be linked from the product page, downloads, wiki, etc.), or they should very prominently mark the board as a 'devkit' or 'beta/alpha' (kinda like what Lichee has done with the RISC-V boards).

The hardware-first, software-later approach works better for accessories, like their Penta SATA HAT, than for full SBCs/computers.

@geerlingguy

@ThomasKaiser - That image download did work; I'm running kernel 6.1.x and things work okay. It seems like there's no GPU acceleration, though; things are a little choppy, UI-wise.

HDMI output works fine, though, unlike on CM5 with the CM4 IO Board.

@geerlingguy

Just a quick test of the UI: it's a little stuttery (enough to be a little jarring), like when resizing windows or dragging them around (I'd rather that decoration be turned off if it's not buttery smooth). YouTube plays back okay in 1080p, with a little stuttering at 4K.

When I shut down the board, it actually powers off to a < 1 W (near 0 W) poweroff state.

Pressing the power button boots it right back up, and it consumes around 2 W idle (with or without HDMI plugged in).

@geerlingguy

Geekbench 6 power draw:

[Screenshot 2024-05-16 at 4 02 12 PM]

@ginkage commented May 16, 2024

My experience is a bit different. I have full HW acceleration, and I'm successfully using an NVMe HAT created for the Pi 5...

[Screenshot from 2024-05-16 18-30-26]

But I'm used to building my own system images and kernels, so I'm really non-representative; I'm the kind of person this board was made for. Happy to try and assist you in achieving the best possible experience with the board (although I'm not affiliated with Radxa in any way, I'm afraid, so I'm just doing some work for free, as you've rightly pointed out).

@geerlingguy

@ginkage - What I don't get is: if you can get that going as an individual in the community, how is it that Radxa can't get it going on their own OS image? Where is it even documented? I'm not sure if Armbian has a better default image for the Rock 5C, or if you have a custom concoction of kernel patches.

@ginkage commented May 16, 2024

I'm using a custom kernel, yes. And a custom image. And then I put some more magic on top, so I had this board running perfectly just a couple of hours after it was delivered...
But I'm a wizard: I've been preparing in advance for a few weeks, ordering the right accessories and figuring out the kernel quirks, and I've had my own custom build scripts for the rk3588 family for a while.
That's what I'm saying: I'm non-representative. Radxa works mostly with the vanilla Rockchip kernel (which is a bad baseline from the start), and I'm using a slightly more recent Rockchip kernel, with Armbian patches on top, plus some of my bleeding-edge findings to boot. I will do my best to figure out the minimal set of changes to make these things work out of the box with the next Armbian release (that's roughly two weeks from now), but all in all, it's indeed really tricky, as we all depend on the upstream work from Rockchip, and they themselves are lagging behind heavily.
If it weren't for Rockchip, we'd probably have Vulkan support on rk3588 in Linux a year ago.

@geerlingguy

If it weren't for Rockchip, we'd probably have Vulkan support on rk3588 in Linux a year ago.

😭 why can't we have nice things! :D

Thanks for your work though! Is there anywhere I could follow along? Like, do you keep build logs or a repo with patches or anything like that? It seems like that would be handy if they're not centralized anywhere else.

@ginkage commented May 16, 2024

I only received my 5C a few hours ago, so no real repo yet. :)
The kernel I keep at: https://github.com/ginkage/mirrors/
And I build Armbian images from trunk with a bunch of customizations we've gathered up here: https://github.com/StonedEdge/Retro-Lite-CM5/tree/main/armbian — it's for a DIY portable console build, but we're using desktop Armbian as a base, plus just a few custom scripts and blobs to have GPU acceleration working out of the box (much like the "amazingfated" builds).
So, with all those custom patches, there's just one extra change you need to build the image with my kernel (should you want to go with that one; that would make two of us):

diff --git a/config/sources/families/rockchip-rk3588.conf b/config/sources/families/rockchip-rk3588.conf
index 25c669c74..cbd3059b3 100644
--- a/config/sources/families/rockchip-rk3588.conf
+++ b/config/sources/families/rockchip-rk3588.conf
@@ -32,8 +32,8 @@ case $BRANCH in
                BOOTDIR='u-boot-rockchip64'
                declare -g KERNEL_MAJOR_MINOR="6.1"    # Major and minor versions of this kernel.
                declare -g -i KERNEL_GIT_CACHE_TTL=120 # 2 minutes; this is a high-traffic repo
-               KERNELSOURCE='https://github.com/armbian/linux-rockchip.git'
-               KERNELBRANCH='branch:rk-6.1-rkr1'
+               KERNELSOURCE='https://github.com/ginkage/mirrors.git'
+               KERNELBRANCH='branch:kernel-6.1'
                KERNELPATCHDIR='rk35xx-vendor-6.1'
                LINUXFAMILY=rk35xx
                ;;

One thing I don't have is a readily built and uploaded image you can flash, though; sorry about that. And I'm having difficulties with the wireless module. As in, I made it work, but I had to compile some extra stuff from https://github.com/radxa-pkg/aic8800
There is probably a better way to do all of this, but again, I'm only a few hours in; give it a few days or a couple of weeks (look out for the Armbian 24.05 release!), and it's going to be way different.
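For anyone who wants to try the same route, the usual Armbian entry point is ./compile.sh. A rough sketch, with the caveat that the BOARD/BRANCH/RELEASE values below are my assumptions for this board family, not ginkage's exact invocation:

# hedged sketch: build an Armbian image after applying the kernel-source
# change from the diff above; parameter values here are assumptions
git clone https://github.com/armbian/build.git
cd build
# edit config/sources/families/rockchip-rk3588.conf as per the diff, then:
./compile.sh BOARD=rock-5c BRANCH=legacy RELEASE=bookworm BUILD_DESKTOP=no KERNEL_CONFIGURE=no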

@geerlingguy

Using the Raspberry Pi M.2 HAT+ with the Pinedrive 2242 256GB NVMe SSD:

0004:41:00.0 Non-Volatile memory controller: Phison Electronics Corporation PS5013 E13 NVMe Controller (rev 01) (prog-if 02 [NVM Express])
	Subsystem: Phison Electronics Corporation PS5013-E13 PCIe3 NVMe Controller (DRAM-less)
	Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0
	Interrupt: pin A routed to IRQ 102
	Region 0: Memory at f4200000 (64-bit, non-prefetchable) [size=16K]
	Capabilities: [80] Express (v2) Endpoint, MSI 00
		DevCap:	MaxPayload 256 bytes, PhantFunc 0, Latency L0s unlimited, L1 unlimited
			ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 0W
		DevCtl:	CorrErr- NonFatalErr- FatalErr- UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+ FLReset-
			MaxPayload 128 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #1, Speed 8GT/s, Width x4, ASPM L1, Exit Latency L1 unlimited
			ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 5GT/s (downgraded), Width x1 (downgraded)
			TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR+
			 10BitTagComp- 10BitTagReq- OBFF Not Supported, ExtFmt+ EETLPPrefix-
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- TPHComp- ExtTPHComp-
			 AtomicOpsCap: 32bit- 64bit- 128bitCAS-
		DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis- LTR+ 10BitTagReq- OBFF Disabled,
			 AtomicOpsCtl: ReqEn-
		LnkCap2: Supported Link Speeds: 2.5-8GT/s, Crosslink- Retimer- 2Retimers- DRS-
		LnkCtl2: Target Link Speed: 8GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance Preset/De-emphasis: -6dB de-emphasis, 0dB preshoot
		LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete- EqualizationPhase1-
			 EqualizationPhase2- EqualizationPhase3- LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [d0] MSI-X: Enable+ Count=9 Masked-
		Vector table: BAR=0 offset=00002000
		PBA: BAR=0 offset=00003000
	Capabilities: [e0] MSI: Enable- Count=1/8 Maskable+ 64bit+
		Address: 0000000000000000  Data: 0000
		Masking: 00000000  Pending: 00000000
	Capabilities: [f8] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
		Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [100 v1] Latency Tolerance Reporting
		Max snoop latency: 0ns
		Max no snoop latency: 0ns
	Capabilities: [110 v1] L1 PM Substates
		L1SubCap: PCI-PM_L1.2+ PCI-PM_L1.1+ ASPM_L1.2+ ASPM_L1.1+ L1_PM_Substates+
			  PortCommonModeRestoreTime=10us PortTPowerOnTime=220us
		L1SubCtl1: PCI-PM_L1.2- PCI-PM_L1.1- ASPM_L1.2- ASPM_L1.1-
			   T_CommonMode=0us LTR1.2_Threshold=0ns
		L1SubCtl2: T_PwrOn=10us
	Capabilities: [200 v2] Advanced Error Reporting
		UESta:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
		UEMsk:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
		UESvrt:	DLP+ SDES- TLP- FCP+ CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP+ ECRC- UnsupReq- ACSViol-
		CESta:	RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr-
		CEMsk:	RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr+
		AERCap:	First Error Pointer: 00, ECRCGenCap- ECRCGenEn- ECRCChkCap+ ECRCChkEn-
			MultHdrRecCap- MultHdrRecEn- TLPPfxPres- HdrLogCap-
		HeaderLog: 00000000 00000000 00000000 00000000
	Capabilities: [300 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Kernel driver in use: nvme

It's running at PCIe Gen 2 x1 speed. Benchmarking now and I'll update the post above.
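To spot that downgrade without reading the whole dump, compare the link capability against the negotiated link status (the device address is taken from the lspci output above):

# LnkCap shows 8GT/s x4; LnkSta shows the negotiated 5GT/s x1 (Gen 2 x1)
sudo lspci -s 0004:41:00.0 -vv | grep -E 'LnkCap:|LnkSta:'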

@ThomasKaiser

@geerlingguy could you please provide the output of cat /etc/radxa_image_fingerprint from the Radxa image?

@geerlingguy

radxa@rock-5c:~$ cat /etc/radxa_image_fingerprint
FINGERPRINT_VERSION='2'
RSDK_BUILD_DATE='Thu, 16 May 2024 10:21:34 +0000'
RSDK_REVISION=''
RSDK_CONFIG='/etc/rsdk/config.yaml'

@ThomasKaiser

Thank you. So they changed their metadata format. After three weeks in rural France with only MacBooks as ARM thingies around, I'm about to get back to the SBC zoo soon, so I'll investigate later on my own :)

geerlingguy added a commit that referenced this issue May 21, 2024
@geerlingguy

The Rock 5C gets a couple of mentions in today's video on the LattePanda Mu.

@cweickhmann

A small question on the side: Did you have a look at using the eMMC on your Rock 5Cs? I've obtained the 5C Lite and I'm sort of lost because I cannot find an SPI loader for it.

@geerlingguy

@cweickhmann - I didn't, sorry!

@ginkage commented May 28, 2024

Do you really need a special SPI loader? I've simply flashed an Armbian image to my eMMC on the 5C, and it just worked.

@cweickhmann

No worries. The wiki says you have to, and for the 5A and 5B there are instructions and images. And so far I've only had a blank screen when starting it up without a working µSD card (same on the Zero 3E I got in parallel, btw).

I'll check if your idea just works, @ginkage. Have you had success with this on other 5s? That would be great, and I'll let you know what I find. But I have a hunch that changing the SPI loader is necessary to tell it to boot from eMMC instead of µSD.

@ThomasKaiser commented May 29, 2024

SPI loader is necessary to tell it to boot from eMMC instead of µSD.

No, it's not (at least when the TF card is missing, the SoC will boot from eMMC), and you'll have a hard time flashing an 'SPI loader' to a device lacking any SPI flash anyway. Wrt Radxa's 'documentation': welcome to this funny world called 'Linux on ARM'.

@cweickhmann

If that is the case, it's unclear to me why the docs describe the process like that. rkdeveloptool does more than just flash data to the eMMC, afaik.

I think this is not a 'Linux on ARM' issue; it's an unclear-docs issue. In particular, for whatever reason there are two things, it seems: U-Boot files labelled "... SPI ..." and one labelled "...spl..." (yeah, lower case, and actually not an i but an L).
I'll go through the process and give you an update.

@ThomasKaiser commented May 29, 2024

for whatever reason there are two things, it seems: U-Boot files labelled "... SPI ..." and one labelled "...spl..."

Yeah, those are two different things: the Serial Peripheral Interface (SPI) (totally irrelevant here, since SPI NOR flash is missing on the Rock 5C) and the secondary program loader (SPL). On some devices the latter can be accessed via the former (protocol), but not on the Rock 5C, unless you buy an adapter and then can't use the eMMC any more.

And of course this is a 'Linux on ARM' issue, since documentation over here sucks a lot (or, to be more precise, this affects the 'Linux on Android e-waste' world that all the SoCs on these cheap ARM thingies originate from). Devices relying on ARM SoCs from vendors who take Linux seriously (NXP, TI, Renesas, and a few others) come with proper documentation and are at least three times as expensive.

@cweickhmann

All right. Let's not get into this debate here. Maybe elsewhere ;-)

So, here's my feedback so far:

  • My configuration
    • Rock 5C Lite 16GB
    • additional 32GB eMMC module
    • intended use: add the Penta SATA HAT to make it a NAS

Radxa's wiki is a bit weird*: when you get to "Installing the Operating System" and select eMMC, it guides you through a process where the eMMC is flashed using rkdeveloptool through the OTG USB port (top-right when facing the ports, with the LAN port on the right). Without explanation, it asks the user to use rkdeveloptool to flash the secondary program loader and a system image. Turns out that, at least for me, this doesn't work: rkdeveloptool quits with the famous and likewise unexplained error message "Creating Comm Object failed!".

So, I've done what many others have: flash a small system to a µSD card, put the image file on it, boot into that system, and dd the desired image onto the eMMC (see the sketch below).

Now, it does in fact boot. At least with the Radxa-provided Debian Bullseye.

*) Others may find this choice of words amusing. I know.
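For anyone following along, a minimal sketch of that µSD-then-dd workaround (the eMMC device name and the image filename below are placeholders; verify the eMMC device with lsblk before writing anything):

# boot from the µSD card, then write the image straight to the eMMC;
# /dev/mmcblk1 and the image filename are placeholders -- check lsblk first!
lsblk
xzcat radxa-image.img.xz | sudo dd of=/dev/mmcblk1 bs=4M status=progress conv=fsync
sync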

@ginkage commented May 29, 2024

rkdeveloptool is useful when you don't have an eMMC-to-USB adapter; otherwise you can simply flash the eMMC directly.

@cweickhmann commented May 29, 2024 via email

@Altirix commented Jul 13, 2024

BTW, Rockchip is not fusing off the disabled cores. If you're lucky, only some of the disabled blocks will actually be faulty, and the rest can be re-enabled with a u-boot patch.

The Armbian rolling release currently has the patch: https://github.com/armbian/build/blob/main/patch/u-boot/legacy/u-boot-radxa-rk35xx/board_rock-5c/reopen_disabled_nodes.patch

I have two Rock 5C Lite 4GB boards: one only had a bad GPU, and the other only a bad encoder core.

You can also check what ends up being re-enabled with this script, provided by a user on the Radxa forums: https://forum.radxa.com/t/rk3582-soc-broken-ip-node-check/21562

Its output is in Chinese, but the blocks it checks are:
CPU
GPU
Enc0
Enc1
Dec0
Dec1
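Independent of that script, you can also peek at the live device tree to see which of these nodes ended up enabled (a rough sketch; node names vary between kernels and device trees, so the globs below are only illustrative):

# enabled nodes report status "okay", disabled ones "disabled";
# adjust the globs to match your device tree's node names
for f in /sys/firmware/devicetree/base/*gpu*/status \
         /sys/firmware/devicetree/base/*vdec*/status \
         /sys/firmware/devicetree/base/*venc*/status; do
  [ -e "$f" ] && printf '%s: %s\n' "$f" "$(tr -d '\0' < "$f")"
done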

@cweickhmann, @ThomasKaiser: since you both mentioned having Lites, you might want to try this.

Also, in case anyone doesn't know: the Orange Pi eMMC module is compatible with the Rock 5C, and in my experience it was cheaper (less than half the price) for the same storage.

@ThomasKaiser

since you both mentioned having Lites, you might want to try this.

My Lite has been running with all CPU cores available for months, thanks to Jianfeng's u-boot patch.
