
StarFive VisionFive 2 #10

Open
geerlingguy opened this issue Jan 26, 2023 · 43 comments
@geerlingguy
Owner

geerlingguy commented Jan 26, 2023

[Photo: DSC00545]

Basic information

Linux/system information

# output of `neofetch`
       _,met$$$$$gg.          user@starfive 
    ,g$$$$$$$$$$$$$$$P.       ------------- 
  ,g$$P"     """Y$$.".        OS: Debian GNU/Linux bookworm/sid riscv64 
 ,$$P'              `$$$.     Host: StarFive VisionFive V2 
',$$P       ,ggs.     `$$b:   Kernel: 5.15.0-starfive 
`d$$'     ,$P"'   .    $$$    Uptime: 6 mins 
 $$P      d$'     ,    $$P    Packages: 1035 (dpkg) 
 $$:      $$.   -    ,d$$'    Shell: bash 5.1.16 
 $$;      Y$b._   _,d$P'      Resolution: 1920x1080 
 Y$$.    `.`"Y$$$$P"'         Terminal: /dev/pts/0 
 `$$b      "-.__              CPU: (4) @ 1.500GHz 
  `Y$$                        Memory: 262MiB / 7927MiB 
   `Y$$.
     `$$b.                                            
       `Y$$b.                                         
          `"Y$b._
              `"""

# output of `uname -a`
Linux starfive 5.15.0-starfive #1 SMP Mon Dec 19 07:56:37 EST 2022 riscv64 GNU/Linux

Benchmark results

CPU

Power

  • Idle power draw (at wall): 3.1 W
  • Maximum simulated power draw (stress-ng --matrix 0; see the sketch after this list): 5.3 W
  • During Geekbench multicore benchmark: 5.2 W
  • During top500 HPL benchmark: TODO W
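
For reference, the simulated load above comes from stress-ng; a minimal sketch of the invocation (the five-minute timeout is my own addition):

# One matrix-math worker per CPU core; '--matrix 0' starts as many workers as there are online cores.
sudo apt install -y stress-ng
stress-ng --matrix 0 --timeout 5m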

Disk

SanDisk Extreme 128GB microSD

| Benchmark | Result |
|-----------|--------|
| fio 1M sequential read | 23.6 MB/s |
| iozone 1M random read | 21.02 MB/s |
| iozone 1M random write | 19.40 MB/s |
| iozone 4K random read | 5.82 MB/s |
| iozone 4K random write | 2.83 MB/s |

KIOXIA XG6 1TB NVMe SSD

| Benchmark | Result |
|-----------|--------|
| fio 1M sequential read | 149 MB/s |
| iozone 1M random read | 237.35 MB/s |
| iozone 1M random write | 242.66 MB/s |
| iozone 4K random read | 27.87 MB/s |
| iozone 4K random write | 72.73 MB/s |

curl https://raw.githubusercontent.com/geerlingguy/pi-cluster/master/benchmarks/disk-benchmark.sh | sudo bash

Run the benchmark on any attached storage device (e.g. eMMC, microSD, NVMe, SATA) and add results under an additional heading. Download the script with `curl -o disk-benchmark.sh [URL_HERE]` and run `sudo DEVICE_UNDER_TEST=/dev/sda DEVICE_MOUNT_PATH=/mnt/sda1 ./disk-benchmark.sh` (assuming the device is sda).
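
For example, a sketch of those two invocations against this board's NVMe drive (the URL is the one from the curl-pipe command above; the device name and mount path are assumptions, so verify with lsblk first):

curl -o disk-benchmark.sh https://raw.githubusercontent.com/geerlingguy/pi-cluster/master/benchmarks/disk-benchmark.sh
chmod +x disk-benchmark.sh
# Assumes the NVMe drive is nvme0n1 and mounted at /mnt/nvme - check lsblk!
sudo DEVICE_UNDER_TEST=/dev/nvme0n1 DEVICE_MOUNT_PATH=/mnt/nvme ./disk-benchmark.sh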

Also consider running the PiBenchmarks.com script.

Network

iperf3 results:

  • iperf3 -c $SERVER_IP: 937 Mbps
  • iperf3 --reverse -c $SERVER_IP: 774 Mbps
  • iperf3 --bidir -c $SERVER_IP: 941 Mbps up / 262 Mbps down

I tested both 1 Gbps interfaces; both worked with similar results, and I could connect to two different IPs at once.
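
For anyone reproducing these numbers, the client commands above assume an iperf3 server is already listening on another machine on the LAN (a sketch; note --bidir requires iperf3 3.7 or newer):

# On the other machine:
iperf3 -s
# On the VisionFive 2 (substitute the server's address for $SERVER_IP):
iperf3 -c $SERVER_IP
iperf3 --reverse -c $SERVER_IP
iperf3 --bidir -c $SERVER_IP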

GPU

  • TODO: Haven't determined standardized benchmark yet. See Issue #2.

Memory

  • TODO: Haven't determined standardized benchmark yet. See Issue #2.
@ThomasKaiser

ThomasKaiser commented Feb 7, 2023

A few thoughts on methodology with a new platform like RISC-V:

Crypto performance:

Here are two JH7110 boards (Star64 and StarFive VisionFive V2) running exactly the same OS image but with different kernel config / device-tree settings:

| Clockspeed | Kernel | Distro | 7-zip multi | 7-zip single | AES | memcpy | memset |
|---|---|---|---|---|---|---|---|
| 1750 MHz | 5.15 | Sid riscv64 | 4820 | 1396 | 28970 | 1170 | 1120 |
| 1500 MHz | 5.15 | Sid riscv64 | 4040 | 1182 | 6830 | 1150 | 860 |

The 1st is slightly overclocked, which ends up with better 7-zip scores, better memset numbers and way better AES scores (`openssl speed -elapsed -evp aes-${bytelength}-cbc`). At 1750 MHz (17% higher cpufreq) the crypto score gain is an impressive 424%! Or is it the hardware, Star64 vs. StarFive VisionFive V2 (both boards being almost the same)?

Is that even possible? Of course not: it's 'OpenSSL 3.0.5, built on 5 Jul 2022' vs. 'OpenSSL 1.1.1f, built on 31 Mar 2020' (and that's why `sbc-bench -j` will always print the OpenSSL version, since it might explain differing numbers).
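
So before comparing crypto numbers it's worth checking which OpenSSL actually ran; a minimal sketch (aes-128-cbc is just one of the block lengths the benchmark loops through):

openssl version
openssl speed -elapsed -evp aes-128-cbc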

On every 'new' platform, software needs to be optimized. Even if the platform isn't that new any more – see this refreshing bit about a few lines of NEON optimization needed on ARM to really outperform x86, where this level of optimization had been standard for a decade already (link to cnx-software).

Same with crypto stuff: on x86 and ARM we have had optimized assembler routines for ages, or even AES-NI and the ARMv8 Crypto Extensions.

On RISC-V, in contrast, we had only generic C routines; the "scalar crypto" extension has been ratified in the meantime (of course it's not part of JH7110's silicon, since that is based on old U74 21G1 cores which predate the ratification). So my assumption is that OpenSSL 3.0.5 now makes use of optimized (assembler) code and/or the SHA and AES hardware accelerators that these U74 cores provide.

Still being a RISC-V noob, I'm asking @brucehoult for clarification (since your review will get a massive target audience compared to other RISC-V reviews).

While your target audience, wrt 'SBC performance', most probably just wants a single score with 'less is better' or 'more is better' as the only information, they deserve better. IMO it's important when reviewing stuff to explain that new hardware usually gets faster over time, since software matures and gradually makes use of hardware features. That's why listing a crypto score of 6830 is a bit unfair when comparing with ARM or x86: those platforms are way older and the software side of things has already received much more love. Even if the JH7110 will never compete with recent x86 or ARMv8/ARMv9 CPUs due to its lack of real crypto extensions.

Benchmarking the benchmark

For whatever reason, people blindly trust the numbers spat out by a specific benchmark tool. And consumers, as usual, don't care about individual scores but want one single score, or at most one for single-threaded and another for multi-threaded (IMO a horrible choice, since those combined scores hide too much and are pretty much worthless).

Let's check Geekbench against another benchmark. Measuring the JH7110 at 1.5 GHz with 7-zip, it achieves a score of ~4000. 7-zip is all about integer performance and memory (latency). Choosing a quad-core ARM board with a similar score, we end up at a Khadas VIM1S, for example (based on the Amlogic S905Y4: quad A35 @ 2.0 GHz).

Geekbench, looking at total scores, tells us the JH7110 is only at 60% of the S905Y4's performance: https://browser.geekbench.com/v5/cpu/compare/19322441?baseline=17155111

Looking at the individual results view, the boards' 'Integer Score' difference is 96 vs. 140 (68%) and the 'Crypto Score' is listed as 11 vs. 167 (6.5%). I'm fine with the crypto score but highly doubt the others, especially when looking through the individual scores.

Do these benchmark scores represent real-world performance? Or does the rule 'software needs to mature on a new hardware platform' also apply to benchmark software? Most probably yes.

Maybe this is also a hint in the same direction: GB not running on a bunch of RISC-V CPUs due to a problematic instruction.

IMO your target audience deserves these explanations: software needs time, and this also applies to benchmarks (maybe once the Primatelabs guy gets a bit more familiar with RISC-V, the benchmark scores will magically improve?). And this fact makes maintaining a 'table of benchmark scores' so problematic, since software optimization efforts happening in the meantime automagically outdate such a table...

@geerlingguy
Owner Author

> IMO it's important when reviewing stuff to explain that new hardware usually gets faster over time, since software matures and gradually makes use of hardware features.

This is quite true—and any RISC-V board will get a lot more of that treatment / extra caveats in my reviews, so don't worry about that! Even with the Pi, every few months some random bug gets squashed that can boost a number here or there (like kernel PTP timing... few people had even tested it until a few months ago), though most of the basics are pretty stable and well-tested at this point.

With the VisionFive 2, very few people have really built anything with it outside of the hardcore tinkerers, so it's going to need a lot of caveats.

@geerlingguy
Owner Author

geerlingguy commented Feb 7, 2023

For bringup, I used these instructions: flash the simpler (800 MB) buildroot image, scp over the appropriate firmware files, and flash them using flashcp; then I could run the latest (-69) version of the official OS release.
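
Roughly, the flash step looks like this (a sketch from memory; the exact file names and mtd targets come from the firmware release notes, so treat both as placeholders):

# On the buildroot system, after scp'ing the firmware files over:
flashcp -v u-boot-spl.bin.normal.out /dev/mtd0    # SPL (placeholder target)
flashcp -v visionfive2_fw_payload.img /dev/mtd1   # U-Boot payload (placeholder target)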

I should note that for the -69 version, I was going to follow the getting started guide, which says to SSH in as the root user:

[Screenshot: the guide's SSH instructions]

But that never worked with the password starfive; I got "Permission denied" every time:

[Screenshot: the "Permission denied" errors]

Instead, I used user for the username and starfive for the password... and that actually did work. Thanks to this blog post for helping me figure that out!

@geerlingguy
Owner Author

For my Kioxia XG6 NVMe SSD, I see it appear as an x1 device at 5.0 GT/sec (so presumably PCIe gen 2):

0001:01:00.0 Non-Volatile memory controller: Toshiba Corporation XG6 NVMe SSD Controller (prog-if 02 [NVM Express])
	Subsystem: Toshiba Corporation XG6 NVMe SSD Controller
	Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0
	Interrupt: pin A routed to IRQ 59
	Region 0: Memory at 38000000 (64-bit, non-prefetchable) [size=16K]
	Capabilities: [40] Express (v2) Endpoint, MSI 00
		DevCap:	MaxPayload 256 bytes, PhantFunc 0, Latency L0s unlimited, L1 unlimited
			ExtTag- AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 0.000W
		DevCtl:	CorrErr- NonFatalErr- FatalErr- UnsupReq-
			RlxdOrd- ExtTag- PhantFunc- AuxPwr- NoSnoop- FLReset-
			MaxPayload 128 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #0, Speed 8GT/s, Width x4, ASPM L1, Exit Latency L1 <32us
			ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk-
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 5GT/s (downgraded), Width x1 (downgraded)
			TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		DevCap2: Completion Timeout: Range AB, TimeoutDis+ NROPrPrP- LTR+
			 10BitTagComp- 10BitTagReq- OBFF Not Supported, ExtFmt+ EETLPPrefix-
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- TPHComp- ExtTPHComp-
			 AtomicOpsCap: 32bit- 64bit- 128bitCAS-
		DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis+ LTR- OBFF Disabled,
			 AtomicOpsCtl: ReqEn-
		LnkCap2: Supported Link Speeds: 2.5-8GT/s, Crosslink- Retimer- 2Retimers- DRS-
		LnkCtl2: Target Link Speed: 8GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -3.5dB, EqualizationComplete- EqualizationPhase1-
			 EqualizationPhase2- EqualizationPhase3- LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [80] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
		Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [90] MSI: Enable- Count=1/32 Maskable+ 64bit+
		Address: 0000000000000000  Data: 0000
		Masking: 00000000  Pending: 00000000
	Capabilities: [b0] MSI-X: Enable+ Count=33 Masked-
		Vector table: BAR=0 offset=00002000
		PBA: BAR=0 offset=00003000
	Capabilities: [100 v2] Advanced Error Reporting
		UESta:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
		UEMsk:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
		UESvrt:	DLP+ SDES- TLP- FCP+ CmpltTO- CmpltAbrt- UnxCmplt- RxOF+ MalfTLP+ ECRC- UnsupReq- ACSViol-
		CESta:	RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr-
		CEMsk:	RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr+
		AERCap:	First Error Pointer: 00, ECRCGenCap- ECRCGenEn- ECRCChkCap- ECRCChkEn-
			MultHdrRecCap- MultHdrRecEn- TLPPfxPres- HdrLogCap-
		HeaderLog: 00000000 00000000 00000000 00000000
	Capabilities: [260 v1] Latency Tolerance Reporting
		Max snoop latency: 0ns
		Max no snoop latency: 0ns
	Capabilities: [300 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [400 v1] L1 PM Substates
		L1SubCap: PCI-PM_L1.2+ PCI-PM_L1.1- ASPM_L1.2+ ASPM_L1.1- L1_PM_Substates+
			  PortCommonModeRestoreTime=60us PortTPowerOnTime=100us
		L1SubCtl1: PCI-PM_L1.2- PCI-PM_L1.1- ASPM_L1.2- ASPM_L1.1-
			   T_CommonMode=0us LTR1.2_Threshold=0ns
		L1SubCtl2: T_PwrOn=10us
	Kernel driver in use: nvme

But in benchmarks (see OP, and also: https://pibenchmarks.com/benchmark/66979/), I'm only seeing about 200 MB/sec sequential read/write, much less than the 350-400 MB/sec I'd expect.

@geerlingguy
Owner Author

On my first attempt at running the top500 benchmark (HPL), I ran into this error during pip3 install ansible:

Collecting cryptography
  Downloading cryptography-39.0.1.tar.gz (603 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 603.6/603.6 kB 5.7 MB/s eta 0:00:00
  Installing build dependencies ... done
  Getting requirements to build wheel ... error
  error: subprocess-exited-with-error
  
  × Getting requirements to build wheel did not run successfully.
  │ exit code: 1
  ╰─> [29 lines of output]
      
              =============================DEBUG ASSISTANCE==========================
              If you are seeing an error here please try the following to
              successfully install cryptography:
      
              Upgrade to the latest pip and try again. This will fix errors for most
              users. See: https://pip.pypa.io/en/stable/installing/#upgrading-pip
              =============================DEBUG ASSISTANCE==========================
      
      Traceback (most recent call last):
        File "/usr/lib/python3/dist-packages/pip/_vendor/pep517/in_process/_in_process.py", line 363, in <module>
          main()
        File "/usr/lib/python3/dist-packages/pip/_vendor/pep517/in_process/_in_process.py", line 345, in main
          json_out['return_val'] = hook(**hook_input['kwargs'])
        File "/usr/lib/python3/dist-packages/pip/_vendor/pep517/in_process/_in_process.py", line 130, in get_requires_for_build_wheel
          return hook(config_settings)
        File "/usr/lib/python3/dist-packages/setuptools/build_meta.py", line 162, in get_requires_for_build_wheel
          return self._get_build_requires(
        File "/usr/lib/python3/dist-packages/setuptools/build_meta.py", line 143, in _get_build_requires
          self.run_setup()
        File "/usr/lib/python3/dist-packages/setuptools/build_meta.py", line 158, in run_setup
          exec(compile(code, __file__, 'exec'), locals())
        File "setup.py", line 18, in <module>
          from setuptools_rust import RustExtension
        File "/tmp/pip-build-env-o89jx04a/overlay/local/lib/python3.10/dist-packages/setuptools_rust/__init__.py", line 1, in <module>
          from .build import build_rust
        File "/tmp/pip-build-env-o89jx04a/overlay/local/lib/python3.10/dist-packages/setuptools_rust/build.py", line 23, in <module>
          from setuptools.command.build import build as CommandBuild  # type: ignore[import]
      ModuleNotFoundError: No module named 'setuptools.command.build'
      [end of output]
  
  note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error

× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> See above for output.

note: This error originates from a subprocess, and is likely not a problem with pip.

I tried pip3 install --upgrade pip and installed Ansible again... now I'm getting another error about a missing Rust compiler:

      running build_rust
      
          =============================DEBUG ASSISTANCE=============================
          If you are seeing a compilation error please try the following steps to
          successfully install cryptography:
          1) Upgrade to the latest pip and try again. This will fix errors for most
             users. See: https://pip.pypa.io/en/stable/installing/#upgrading-pip
          2) Read https://cryptography.io/en/latest/installation/ for specific
             instructions for your platform.
          3) Check our frequently asked questions for more information:
             https://cryptography.io/en/latest/faq/
          4) Ensure you have a recent Rust toolchain installed:
             https://cryptography.io/en/latest/installation/#rust
      
          Python: 3.10.5
          platform: Linux-5.15.0-starfive-riscv64-with-glibc2.33
          pip: n/a
          setuptools: 67.2.0
          setuptools_rust: 1.5.2
          rustc: n/a
          =============================DEBUG ASSISTANCE=============================
      
      error: can't find Rust compiler
      
      If you are using an outdated pip version, it is possible a prebuilt wheel is available for this package but pip is not able to install from it. Installing from the wheel would avoid the need for a Rust compiler.
      
      To update pip, run:
      
          pip install --upgrade pip
      
      and then retry package installation.
      
      If you did intend to build this package from source, try installing a Rust compiler from your system package manager and ensure it is on the PATH during installation. Alternatively, rustup (available at https://rustup.rs) is the recommended way to download and update the Rust compiler toolchain.
      
      This package requires Rust >=1.48.0.
      [end of output]
  
  note: This error originates from a subprocess, and is likely not a problem with pip.
  ERROR: Failed building wheel for cryptography
  Building wheel for MarkupSafe (setup.py) ... done
  Created wheel for MarkupSafe: filename=MarkupSafe-2.1.2-cp310-cp310-linux_riscv64.whl size=24451 sha256=7fbd78192601d02b42ecc28c0b76d24ecfcefac99a36accc953ec7249179a3e0
  Stored in directory: /home/user/.cache/pip/wheels/54/8e/98/3c8a462676f35ac84e7e2a886d2e90c50c796b73f4cd1f351f
Successfully built PyYAML MarkupSafe
Failed to build cryptography
ERROR: Could not build wheels for cryptography, which is required to install pyproject.toml-based project

Putting that on pause for now.

@geerlingguy
Owner Author

@ThomasKaiser - I ran your sbc-bench script (downloaded the latest revision from GitHub) and it looks like it's not able to pull the CPU temps:

user@starfive:~$ sudo /bin/bash ./sbc-bench.sh -j 
sudo: unable to resolve host starfive: Name or service not known
Starting to examine hardware/software for review purposes...

Average load and/or CPU utilization too high (too much background activity). Waiting...

Too busy for benchmarking: 22:36:25 up 19 min,  2 users,  load average: 0.13, 0.36, 0.31,  cpu: 8%
Too busy for benchmarking: 22:36:30 up 19 min,  2 users,  load average: 0.12, 0.36, 0.30,  cpu: 0%
Too busy for benchmarking: 22:36:35 up 20 min,  2 users,  load average: 0.11, 0.35, 0.30,  cpu: 0%
Too busy for benchmarking: 22:36:40 up 20 min,  2 users,  load average: 0.10, 0.34, 0.30,  cpu: 0%
Too busy for benchmarking: 22:36:45 up 20 min,  2 users,  load average: 0.09, 0.34, 0.30,  cpu: 0%

sbc-bench v0.9.13

Installing needed tools: apt -f -qq -y install lm-sensors sysstat mbw p7zip, tinymembench, ramlat, mh (can't build cpuminer) Done.
Checking cpufreq OPP. Done.
Executing RAM latency tester. Done.
Executing OpenSSL benchmark. Done.
Checking cpufreq OPP again. Done (2 minutes elapsed).

It seems neither throttling occured nor too much background activity.

Full results uploaded to http://ix.io/4npK

# StarFive VisionFive V2

Tested on Tue, 07 Feb 2023 22:40:33 +0000. Full info: [http://ix.io/4npK](http://ix.io/4npK)

## General information:

    StarFive JH7110, Kernel: riscv64, Userland: riscv64
    
    CPU sysfs topology (clusters, cpufreq members, clockspeeds)
                     cpufreq   min    max
     CPU    cluster  policy   speed  speed   core type
      0       -1        0      375    1500   sifive,u74-mc
      1       -1        0      375    1500   sifive,u74-mc
      2       -1        0      375    1500   sifive,u74-mc
      3       -1        0      375    1500   sifive,u74-mc

## Governors (tradeoff between performance and idle consumption):

Original settings:

    cpufreq-policy0: ondemand / 1500 MHz (conservative ondemand userspace powersave performance schedutil)

Tuned settings:

    cpufreq-policy0: performance / 1500 MHz

## Clockspeeds:

Before:

    cpu0 (sifive,u74-mc): OPP: 1500, Measured: 1499 

After:

    cpu0 (sifive,u74-mc): OPP: 1500, Measured: 1499 

## Software versions:

  * Debian GNU/Linux bookworm/sid
  * Compiler: /usr/bin/gcc (Debian 11.3.0-3) 11.3.0 / riscv64-linux-gnu
  * OpenSSL 1.1.1f, built on 31 Mar 2020
  * Kernel 5.15.0-starfive / CONFIG_HZ=100

Kernel 5.15.0 is not latest 5.15.91 LTS that was released on 2023-02-01.

Please check https://endoflife.date/linux for details. It is somewhat likely
that a lot of exploitable vulnerabilities exist for this kernel as well as
many unfixed bugs. Better upgrade to a supported version ASAP.

All known settings adjusted for performance. System now ready for benchmarking.
Once finished stop with [ctrl]-[c] to get info about throttling, frequency cap
and too high background activity all potentially invalidating benchmark scores.

Time        CPU    load %cpu %sys %usr %nice %io %irq   Temp
22:40:33: 1500MHz  1.10  10%   1%   8%   0%   1%   0%      °C
22:41:33: 1500MHz  0.45   0%   0%   0%   0%   0%   0%      °C
22:42:33: 1500MHz  0.16   0%   0%   0%   0%   0%   0%      °C
22:43:33: 1500MHz  0.60  13%   0%  11%   0%   1%   0%      °C
22:44:33: 1500MHz  0.80  24%   0%  24%   0%   0%   0%      °C
22:45:33: 1500MHz  0.93  23%   0%  23%   0%   0%   0%      °C
22:46:33: 1500MHz  0.94  23%   0%  23%   0%   0%   0%      °C
22:47:33: 1500MHz  0.98  24%   0%  24%   0%   0%   0%      °C
22:48:33: 1500MHz  1.06  25%   0%  25%   0%   0%   0%      °C
22:49:33: 1500MHz  1.02  25%   0%  25%   0%   0%   0%      °C
22:50:34: 1500MHz  1.00  25%   0%  25%   0%   0%   0%      °C
22:51:34: 1500MHz  1.00  24%   0%  23%   0%   0%   0%      °C
22:52:34: 1500MHz  1.04  23%   0%  23%   0%   0%   0%      °C
22:53:34: 1500MHz  1.04  25%   0%  25%   0%   0%   0%      °C
22:54:34: 1500MHz  1.01  25%   0%  25%   0%   0%   0%      °C
22:55:34: 1500MHz  1.00  24%   0%  24%   0%   0%   0%      °C
22:56:34: 1500MHz  1.00  24%   0%  24%   0%   0%   0%      °C
22:57:35: 1500MHz  2.69  92%   0%  92%   0%   0%   0%      °C
22:58:37: 1500MHz  3.01  85%   0%  85%   0%   0%   0%      °C
22:59:37: 1500MHz  3.80  88%   0%  87%   0%   0%   0%      °C
23:00:40: 1500MHz  3.80  93%   0%  93%   0%   0%   0%      °C
23:01:41: 1500MHz  2.84  51%   0%  50%   0%   0%   0%      °C
23:02:42: 1500MHz  3.37  91%   0%  91%   0%   0%   0%      °C
23:03:43: 1500MHz  3.77  98%   0%  97%   0%   0%   0%      °C
23:04:46: 1500MHz  3.92  99%   0%  99%   0%   0%   0%      °C
23:05:49: 1500MHz  3.72  86%   0%  85%   0%   0%   0%      °C
23:06:52: 1500MHz  3.89  96%   0%  96%   0%   0%   0%      °C
23:07:55: 1500MHz  3.99  96%   0%  96%   0%   0%   0%      °C
23:08:55: 1500MHz  1.73  14%   0%  13%   0%   0%   0%      °C
23:09:55: 1500MHz  0.67   0%   0%   0%   0%   0%   0%      °C
23:10:55: 1500MHz  0.28   0%   0%   0%   0%   0%   0%      °C
23:11:55: 1500MHz  0.14   0%   0%   0%   0%   0%   0%      °C
^C

Cleaning up. Done.
Checking cpufreq OPP again. Done.

Clockspeeds now:

    cpu0 (sifive,u74-mc): OPP: 1500, Measured: 1499 

I also re-ran Geekbench 5 after the script was up and running (just as a point of reference).

@ThomasKaiser

ThomasKaiser commented Feb 7, 2023

> I ran your sbc-bench script (downloaded the latest revision from GitHub) and it looks like it's not able to pull the CPU temps

Can you please provide the output of `cat /sys/devices/virtual/thermal/thermal_zone?/type`?

BTW: I've got 15 JH7110 results collected, but the only one where a thermal sensor below /sys/devices/virtual/thermal/ could be determined was @Icenowy's submission: Thermal source: /sys/devices/virtual/thermal/thermal_zone0/ (cpu-thermal). But since she's a wizard, I would assume she hacked the DT or kernel prior to doing any testing.

Though on all of them, sensors listed the same 120e0000.tmon-isa-0000 node with something like temp1: +45.2 C.

As such, please also provide the output of `grep . /sys/class/hwmon/hwmon?/* 2>/dev/null`.

(Based on some guesswork, sbc-bench 0.9.14 might already contain a fix for the thermal readouts with StarFive's kernel. When running a mainline kernel in the future, the problem shouldn't exist any more, since the SoC temperature will appear as the standard cpu-thermal node below /sys/devices/virtual/thermal/.)

@ThomasKaiser

As for the identical Geekbench scores 'before/after'... in general, the GB scores since Nov 2022 all look pretty similar: https://browser.geekbench.com/search?q=5.15.0-starfive

That could be an indication that this kernel keeps the CPU cores at the highest clockspeed all the time regardless of the governor (something an `sbc-bench -m` run in idle could confirm – that's a simple monitoring mode reporting sysfs clockspeeds without changing anything). If that's the case, consumption (monitoring) in idle is flawed.
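
That check is non-destructive; a sketch:

sudo ./sbc-bench.sh -m   # monitoring mode: reports sysfs clockspeeds, changes nothing
# or poll cpufreq directly while the board sits idle:
watch -n1 cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq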

Talking about clockspeeds, we need to keep in mind that early benchmark results were done with settings limiting the CPU cores to 1250 MHz; 1.5 GHz was only enabled later. Unfortunately Geekbench doesn't measure clockspeeds but reports only sysfs values, which might be off by a little or a lot.

Here are all the results collected in sbc-bench standard mode (ignore the 1st entry, since that was @Icenowy doing reliability testing):

| Date | Clockspeed | 7-zip multi | memcpy | memset |
|---|---|---|---|---|
| 28 Aug 2022 | 1750 MHz | 4820 | 1170 | 1120 |
| 18 Oct 2022 | ~1250 MHz | 3490 | 860 | 790 |
| 18 Oct 2022 | ~1250 MHz | 3470 | 830 | 790 |
| 09 Jan 2023 | 1500 MHz | 4040 | 1150 | 860 |
| 05 Jan 2023 | 1500 MHz | 4040 | 900 | 830 |
| 05 Jan 2023 | 1500 MHz | 4060 | 910 | 790 |
| 30 Jan 2023 | 1500 MHz | 4040 | 900 | 780 |
| 30 Jan 2023 | 1500 MHz | 4040 | 880 | 770 |
| 31 Jan 2023 | 1500 MHz | 4140 | 1140 | 850 |
| 31 Jan 2023 | 1500 MHz | 4080 | 1020 | 820 |
| 31 Jan 2023 | 1500 MHz | 4110 | 970 | 780 |
| 31 Jan 2023 | 1500 MHz | 4040 | 990 | 780 |

@Icenowy

Icenowy commented Feb 8, 2023 via email

@brucehoult

> Still being a RISC-V noob, I'm asking @brucehoult for clarification (since your review will get a massive target audience compared to other RISC-V reviews).

I don't really know exactly what is implemented in this version of the U74 core. The B extension, I'm pretty sure, is there. Scalar crypto I don't know.

Unlike prominent YouTube reviewers such as Jeff, Christopher, and Gary, I don't yet have a board. I put in my order on Kickstarter in the first hour or so it was open, back in August. As backer #14 I have a "Super Early Bird 4 GB" coming (supposedly in November, but it wasn't), and as backer #18 (I made a new account with a different email) I have an "Early Bird 8 GB" coming (supposedly in February). I don't have either yet, but they both shipped together (I mean the same day, different packages) on... well, tracking was created on Jan 6, "shipment arrived at facility on Jan 21". They are both now in NZ with status "in transit to local depot". All timestamps are within 5 minutes of each other (usually the same minute) for both packages. Maybe I'll have them tomorrow, but surely sometime next week.

And then I can try to get them going and do some probing. I'm probably going to have to write boot loader code to give them a proper probe in their back doors and see what is really there. No promises it will be right away, especially as I'm preparing for a business trip on the 18th-26th.

As for the point about individual Geekbench benchmarks: I agree. With early boards and early CPU cores such as these, you get far more insight from looking at each result individually, not the headline number.

If you don't have AES and SHA instructions then, yes, those benchmarks are going to really really suck. RISC-V as an ISA has those instructions, but they won't be in chips on boards until probably next year.

Is it fair to include AES and SHA benchmarks in your evaluation of this board? Well, if what you do all day is SHA because you're mining bitcoin, then yes, sure. Otherwise, it's only really fair to look at the effect that slow AES or whatever has on things that normal people actually do, such as ssh/scp/https. Which might well be quite minor. Why not benchmark those?

The same for benchmarks that use SIMD, which again we need to wait 12 months or so for RISC-V chips to arrive with. If your use-case is heavy media processing then, yeah, buy something else. (although maybe there will eventually be support for the GPU doing that ... does PowerVR even support GPGPU stuff? I don't know.)

The benchmark I personally care about is basically ... compiling code. autoconf -> cmake -> ninja -> gcc -> as -> ld etc.

My two year old HiFive Unmatched does pretty well on that against a Pi 4, and I expect this board to also.

Also, I have my own stupid little primes benchmark, which I created to compare various x86 and Arm machines before I'd heard of RISC-V and before any RISC-V chips existed. Like all micro-benchmarks it's kind of crappy, but I think it's just as representative of a CPU core's performance (out to L1 cache, not including RAM or disk etc.) as the similar Dhrystone and Coremark, with the benefit of being smaller and simpler code: easy to build, and deliberately quite resistant to compiler optimisation tricks. I've collected results for quite a few machines:

https://hoult.org/primes.txt

Note, a few results:

 11.190 sec Pi4 Cortex A72 @ 1.5 GHz T32          232 bytes  16.8 billion clocks
 11.445 sec Odroid XU4 A15 @ 2 GHz T32            204 bytes  22.9 billion clocks
 12.115 sec Pi4 Cortex A72 @ 1.5 GHz A64          300 bytes  18.2 billion clocks
 12.605 sec Pi4 Cortex A72 @ 1.5 GHz A32          300 bytes  18.9 billion clocks
 14.111 sec Beagle-X15 A15 @ 1.5 GHz A32          348 bytes  21.2 billion clocks
 14.341 sec Beagle-X15 A15 @ 1.5 GHz T32          224 bytes  21.5 billion clocks
 15.298 sec HiFive Unmatched RISC-V U74 @ 1.5 GHz 250 bytes  22.9 billion clocks
 19.500 sec Odroid C2 A53 @ 1.536 GHz A64         276 bytes  30.0 billion clocks
 23.940 sec Odroid C2 A53 @ 1.536 GHz T32         204 bytes  36.8 billion clocks
 27.196 sec Teensy 4.0 Cortex M7 @ 960 MHz        228 bytes  26.1 billion clocks
 27.480 sec HiFive Unleashed RISCV U54 @ 1.45 GHz 228 bytes  39.8 billion clocks
 30.420 sec Pi3 Cortex A53 @ 1.2 GHz T32          204 bytes  36.5 billion clocks
 36.652 sec Allwinner D1 C906 RV64 @ 1.008 GHz    224 bytes  36.9 billion clocks
 39.840 sec HiFive Unl RISCV U54 @ 1.0 GHz        228 bytes  39.8 billion clocks
 43.516 sec Teensy 4.0 Cortex M7 @ 600 MHz        228 bytes  26.1 billion clocks
 47.910 sec Pi2 Cortex A7 @ 900 MHz T32           204 bytes  42.1 billion clocks
112.163 sec HiFive1 RISCV E31 @ 320 MHz           178 bytes  35.9 billion clocks

It's quite interesting to me how divergent the three different Arm ISAs are on the same chip. But taking A64 as the thing to compare to RV64, the HiFive Unmatched is 0.792 times as fast as a Pi 4, and 1.99 times as fast as a Pi 3 (sadly I don't have an A64 time for the Pi 3). It's 1.27 times as fast as an Odroid C2 (a much better board than the Pi 3).

I'd love to have figures for an A55 board, which is the most similar Arm microarchitecture, but don't have one.

@ThomasKaiser

> But in benchmarks (see OP, and also: https://pibenchmarks.com/benchmark/66979/), I'm only seeing about 200 MB/sec sequential read/write

What does `/sys/module/pcie_aspm/parameters/policy` look like?
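
That is, the readout plus (hypothetically) a policy switch to test against:

cat /sys/module/pcie_aspm/parameters/policy   # the active policy is shown in [brackets]
# Hypothetical experiment - disable ASPM power savings, then re-run the disk benchmark:
echo performance | sudo tee /sys/module/pcie_aspm/parameters/policy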

@brucehoult

> But in benchmarks (see OP, and also: https://pibenchmarks.com/benchmark/66979/), I'm only seeing about 200 MB/sec sequential read/write
>
> What does `/sys/module/pcie_aspm/parameters/policy` look like?

Note that the RAM speed might not be much more than 200 MB/sec; it's not on the HiFive Unmatched. L2 cache stream detection & prefetch were implemented (from memory) in the December 2021 version of the U74, while we believe this board uses the March 2021 version.

The Allwinner D1 is a slower CPU, but actually has a decent DRAM interface, and manages 1100 MB/s on large RAM-to-RAM memcpy().

Well, hopefully I can try it soon. Current status: "With courier for delivery 08:07am, 10 February 2023, Whangarei". The rural delivery usually comes by here around 4 PM, so... (it's 11 AM now)

@ThomasKaiser

> Note that the RAM speed might not be much more than 200 MB/sec.

At least tinymembench shows this wrt memory bandwidth on this board at 1.5 GHz (see above):

standard memcpy                                      :   1154.1 MB/s (0.8%)
standard memset                                      :    858.7 MB/s (1.2%)

Unfortunately, not a single run on the HiFive Unmatched has been submitted to my sbc-bench results list.

@brucehoult

I can't figure out from http://ix.io/4kHv what the buffer size is for those copies.

@ThomasKaiser

ThomasKaiser commented Feb 10, 2023

> I can't figure out from http://ix.io/4kHv what the buffer size is for those copies.

Me neither; I can only link to the project page / sources: https://github.com/ssvb/tinymembench

Do you have a tool recommendation for quick and reliable bandwidth measurements? tinymembench is orphaned and takes ages...
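
In the meantime, a rough large-block number can be had from sysbench's memory test (just a quick sanity check, not a tinymembench replacement):

# Sequential 1 MiB block writes through memory; add --memory-oper=read for the read side.
sysbench memory --memory-block-size=1M --memory-total-size=10G run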

@geerlingguy
Owner Author

@brucehoult - Have you gotten your board yet? After my initial stab, I've been following some of the progress on hardware video encode/decode support, among other things, and I may pull the board out for some more testing next week.

I necessarily have to unplug and stash away the board when I'm trying to get other work done; otherwise I get sucked in for a day or two at a time :D

@geerlingguy
Owner Author

I'm also trying to get HDMI working. Right now my monitor doesn't show anything at all (it doesn't even flash to indicate something is being output or tested), and I get the following with modetest:

user@starfive:~$ sudo modetest -M starfive -c
sudo: unable to resolve host starfive: Name or service not known
Connectors:
id	encoder	status		name		size (mm)	modes	encoders
116	115	connected	HDMI-A-1       	510x290		3	115
  modes:
	index name refresh (Hz) hdisp hss hse htot vdisp vss vse vtot
  #0 1920x1080 60.00 1920 2008 2052 2200 1080 1084 1089 1125 148500 flags: phsync, pvsync; type: preferred, driver
  #1 1280x720 60.00 1280 1390 1430 1650 720 725 730 750 74250 flags: phsync, pvsync; type: driver
  #2 640x480 59.94 640 656 752 800 480 490 492 525 25175 flags: nhsync, nvsync; type: driver
  props:
	1 EDID:
		flags: immutable blob
		blobs:

		value:
			00ffffffffffff0022f0643001010101
			0918010380331d782edd45a3554fa027
			125054a1080081c081809500a9c0b300
			d1c001010101023a801871382d40582c
			4500fd1e1100001e000000fd00324c18
			5e14000a202020202020000000fc0048
			5020453233310a2020202020000000ff
			00334351343039315959430a202000d2
	2 DPMS:
		flags: enum
		enums: On=0 Standby=1 Suspend=2 Off=3
		value: 3
	5 link-status:
		flags: enum
		enums: Good=0 Bad=1
		value: 0
	6 non-desktop:
		flags: immutable range
		values: 0 1
		value: 0
	4 TILE:
		flags: immutable blob
		blobs:

		value:

Reading through this post, I will try to debug it and get HDMI working.
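
One thing I may try first (an untested sketch: connector ID 116 comes from the dump above, and modetest should pick a CRTC itself, but `modetest -M starfive -p` would confirm one if not):

# Try forcing the preferred mode on the connected HDMI-A-1 connector:
sudo modetest -M starfive -s 116:1920x1080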

@geerlingguy
Owner Author

Created a forum post on RVSpace: Can't get HDMI Out on Debian image (modetest either)

@geerlingguy
Owner Author

Re-testing things with the -69 (latest) image, it seems there are more annoyances, like lspci not even being present; I had to run sudo apt install pciutils to get it.

@geerlingguy
Owner Author

geerlingguy commented Feb 26, 2023

Going to attempt to get an AMD Radeon HD 7470 working on this board using these instructions. Supposedly no patches are required.

Neofetch shows it if I have it connected via my PCIe x1-to-x16 M.2-to-slot adapter:

GPU: AMD ATI Radeon HD 7470/8470 / R5 235/310 OEM

Recompiling the kernel

# Install necessary tools and clone linux fork at devel
user@starfive:~$ sudo apt install -y git build-essential libncurses-dev flex bison libssl-dev bc
user@starfive:~$ git clone -b JH7110_VisionFive2_devel --depth 1 https://github.com/starfive-tech/linux.git

# Configure the linux build
user@starfive:~$ cd linux
user@starfive:~/linux$ cp /boot/boot/config-5.15.0-starfive .config
user@starfive:~/linux$ make menuconfig

# In menuconfig, select:
# Device Drivers > Graphics support > ATI Radeon

# Build kernel packages:
user@starfive:~/linux$ make -j4 bindeb-pkg  # Takes a long time (1 hour+)

# Unpack built .deb image package:
user@starfive:~/linux$ cd ..
user@starfive:~$ mkdir tmp
user@starfive:~$ dpkg-deb -R linux-image-5.15.0_5.15.0-1_riscv64.deb tmp

# Replace vmlinuz file and copy built modules (NOTE: kernel name is `5.15.0-starfive`, not `5.15.0`):
user@starfive:~$ sudo cp /boot/boot/vmlinuz-5.15.0-starfive /boot/boot/vmlinuz-5.15.0-starfive.bak
user@starfive:~$ sudo cp tmp/boot/vmlinuz-5.15.0 /boot/boot/vmlinuz-5.15.0-starfive
user@starfive:~$ cd linux
user@starfive:~/linux$ sudo make KERNELRELEASE=$(uname -r) modules_install

# Blacklist the radeon driver (so we can load it as needed for testing for now).
user@starfive:~/linux$ sudo nano /etc/modprobe.d/blacklist-radeon.conf
# Put the text 'blacklist radeon' inside and save

# Install AMD radeon graphics firmware:
user@starfive:~/linux$ sudo apt install -y firmware-amd-graphics  # DIDN'T WORK - Mesa needed?

# Reboot and see what happens:
user@starfive:~/linux$ sudo reboot

@geerlingguy
Owner Author

lspci now shows Kernel modules: radeon, so I ran sudo modprobe radeon and got an error (failed to load firmware):

[  256.739177] [drm] radeon kernel modesetting enabled.
[  256.744687] pci 0001:00:00.0: enabling device (0000 -> 0002)
[  256.750477] radeon 0001:01:00.0: enabling device (0000 -> 0002)
[  256.757305] [drm] initializing kernel modesetting (CAICOS 0x1002:0x6778 0x1028:0x2120 0x00).
[  256.765940] [drm:radeon_device_init [radeon]] *ERROR* Unable to find PCI I/O BAR
[  257.018200] [drm:radeon_atombios_init [radeon]] *ERROR* Unable to find PCI I/O BAR; using MMIO for ATOM IIO
[  257.031557] ATOM BIOS: C26411
[  257.034668] [drm] GPU not posted. posting now...
[  257.049191] radeon 0001:01:00.0: VRAM: 1024M 0x0000000000000000 - 0x000000003FFFFFFF (1024M used)
[  257.058129] radeon 0001:01:00.0: GTT: 1024M 0x0000000040000000 - 0x000000007FFFFFFF
[  257.065873] [drm] Detected VRAM RAM=1024M, BAR=256M
[  257.070785] [drm] RAM width 64bits DDR
[  257.074712] [drm] radeon: 1024M of VRAM memory ready
[  257.079772] [drm] radeon: 1024M of GTT memory ready.
[  257.084867] [drm] Loading CAICOS Microcode
[  257.089208] radeon 0001:01:00.0: Direct firmware load for radeon/CAICOS_pfp.bin failed with error -2
[  257.098443] ni_cp: Failed to load firmware "radeon/CAICOS_pfp.bin"
[  257.104668] [drm:evergreen_init [radeon]] *ERROR* Failed to load firmware!
[  257.115135] radeon 0001:01:00.0: Fatal error during GPU init
[  257.120863] [drm] radeon: finishing device.
[  257.132377] [drm] radeon: ttm finalized
[  257.136664] radeon: probe of 0001:01:00.0 failed with error -2
[  257.136678] Unable to handle kernel access to user memory without uaccess routines at virtual address 0000000000000190
[  257.153238] Oops [#1]
[  257.155516] Modules linked in: radeon drm_ttm_helper ttm
[  257.160843] CPU: 2 PID: 408 Comm: X Not tainted 5.15.0 #1
[  257.166248] Hardware name: StarFive VisionFive V2 (DT)
[  257.171390] epc : radeon_driver_open_kms+0x34/0x13c [radeon]
[  257.178159]  ra : radeon_driver_open_kms+0x2e/0x13c [radeon]
[  257.184904] epc : ffffffff01594700 ra : ffffffff015946fa sp : ffffffd00451ba70
[  257.192132]  gp : ffffffff81503480 tp : ffffffe0c1e9d000 t0 : ffffffe0c12b5800
[  257.199359]  t1 : 0000000000000002 t2 : 0000000000000000 s0 : ffffffd00451bac0
[  257.206585]  s1 : ffffffe0c7580000 a0 : 0000000000000001 a1 : 0000000200000022
[  257.213812]  a2 : ffffffff815132c8 a3 : 0000000000000000 a4 : 0000000000000002
[  257.221039]  a5 : 0000000000000000 a6 : ffffffe0c12b5400 a7 : 00000000000004d5
[  257.228266]  s2 : 0000000000000001 s3 : ffffffe0c12b5400 s4 : 0000000000000000
[  257.235493]  s5 : 0000000000000000 s6 : ffffffe0c7580000 s7 : ffffffffc4c85028
[  257.242719]  s8 : ffffffffdead5000 s9 : 00000000001e8498 s10: 0000000000000000
[  257.249946]  s11: ffffffd00451bd38 t3 : ffffffe0c12b5800 t4 : 0000000000000001
[  257.257173]  t5 : 0000000000000000 t6 : 0000000000000000
[  257.262482] status: 0000000200000120 badaddr: 0000000000000190 cause: 000000000000000d
[  257.270406] [<ffffffff01594700>] radeon_driver_open_kms+0x34/0x13c [radeon]
[  257.278400] [<ffffffff8041b81c>] drm_file_alloc+0x14e/0x1f0
[  257.283981] [<ffffffff8041b96c>] drm_open+0xae/0x1e8
[  257.288945] [<ffffffff8041eff2>] drm_stub_open+0x82/0xf0
[  257.294256] [<ffffffff8012cbb4>] chrdev_open+0x94/0x1f0
[  257.299484] [<ffffffff80125114>] do_dentry_open+0xda/0x27c
[  257.304971] [<ffffffff80126504>] vfs_open+0x1e/0x26
[  257.309850] [<ffffffff80135f2c>] path_openat+0x71e/0xa7c
[  257.315162] [<ffffffff80136c7c>] do_filp_open+0x68/0xe2
[  257.320389] [<ffffffff80126758>] do_sys_openat2+0x7e/0x112
[  257.325875] [<ffffffff80126ace>] sys_openat+0x3a/0x7c
[  257.330924] [<ffffffff800030a2>] ret_from_syscall+0x0/0x2
[  257.336381] ---[ end trace c0514edde63c53d9 ]---

Message from syslogd@starfive at Feb 27 00:37:07 ...
 kernel:[  257.153238] Oops [#1]

That forum post says:

> The tricky part is mostly over. You now need to install Mesa packages for userspace GPU drivers, connect your GPU using an M.2 → PCIe riser, and it should work.

I didn't see any instructions, and I'm less familiar with Mesa than I'd like to admit, so I'll do a little digging.
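
My first guess (untested, and the package names are my assumption): on Debian the userspace side for these older Radeons lives in Mesa's DRI driver package, so something like this might be the missing piece:

# libgl1-mesa-dri ships the r600 Gallium driver; mesa-utils provides glxinfo.
sudo apt install -y libgl1-mesa-dri mesa-utils
glxinfo | grep "OpenGL renderer"   # confirm which renderer X actually uses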

@geerlingguy
Owner Author

geerlingguy commented Feb 27, 2023

Trying to manually copy the firmware into place:

git clone --depth 1 --filter=blob:none --sparse https://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
cd linux-firmware
git sparse-checkout set radeon
sudo cp -R radeon /lib/firmware

And after a reboot (and sudo modprobe radeon, or just removing the blacklist entry), I got the console!

[Photo: IMG_4561]

I'm not sure why, but I can't get any window manager. Looks like this might be a known issue:

> There are some issues concerning the preinstalled distro, namely the PVR GPU driver conflicts in one way or another, so the guy who tested my approach simply replaced the Debian userland with Arch. Any further research is welcome.

@geerlingguy
Owner Author

I switched from my HP EliteDisplay E231 to an Atomos Ninja V, and now I can get the internal GPU to output to the display, but it's quite slow: almost all frames are dropped trying to watch a YouTube video at 1080p.

It seems like OpenGL is not supported at all; Vulkan may be, but I haven't been able to get any demos running yet.

@geerlingguy
Owner Author

geerlingguy commented Feb 27, 2023

I also compiled nouveau and tried running a GTX 750 Ti, but got:

[   56.255130] pci 0001:00:00.0: enabling device (0000 -> 0002)
[   56.260897] nouveau 0001:01:00.0: enabling device (0000 -> 0002)
[   56.267413] nouveau 0001:01:00.0: NVIDIA GM107 (117000a2)
[   56.273024] pcie_plda 2c000000.pcie: msi#0 address_hi 0x0 address_lo 0x190
[   56.555215] nouveau 0001:01:00.0: bios: version 82.07.55.00.29
[   56.705497] pcie_plda 2c000000.pcie: AXI fetch error
[   56.967109] pcie_plda 2c000000.pcie: AXI post error
[   57.227714] pcie_plda 2c000000.pcie: AXI post error
[   57.358036] pcie_plda 2c000000.pcie: AXI fetch error
[   57.618622] pcie_plda 2c000000.pcie: AXI post error
[   57.623499] pcie_plda 2c000000.pcie: AXI fetch error
[   57.748937] pcie_plda 2c000000.pcie: AXI fetch error
[   58.009532] pcie_plda 2c000000.pcie: AXI post error
[   58.270138] pcie_plda 2c000000.pcie: AXI post error
[   58.530744] pcie_plda 2c000000.pcie: AXI post error
[   58.791350] pcie_plda 2c000000.pcie: AXI post error
... [repeating, sometimes with 'fetch error' instead of 'post error']
[   77.685312] rcu: INFO: rcu_sched self-detected stall on CPU
[   77.695761] rcu: 	2-....: (171 ticks this GP) idle=399/1/0x4000000000000002 softirq=4399/4399 fqs=77 
[   77.704988] 	(t=2105 jiffies g=2173 q=3919)
[   77.709176] Task dump for CPU 2:
[   77.712403] task:modprobe        state:R  running task     stack:    0 pid:  590 ppid:   589 flags:0x00000008
[   77.722328] Call Trace:
[   77.724775] [<ffffffff800048ac>] dump_backtrace+0x1c/0x24
[   77.730183] [<ffffffff8002c826>] sched_show_task+0x152/0x172
[   77.735848] [<ffffffff8099375e>] dump_cpu_task+0x42/0x4c
[   77.741167] [<ffffffff8099436e>] rcu_dump_cpu_stacks+0xd2/0x10e
[   77.747097] [<ffffffff800663a6>] rcu_sched_clock_irq+0x4c8/0x6ba
[   77.753109] [<ffffffff8006c866>] update_process_times+0xa2/0xca
[   77.759039] [<ffffffff8007a4ba>] tick_sched_timer+0x74/0xd6
[   77.764619] [<ffffffff8006cefa>] __hrtimer_run_queues+0x126/0x18a
[   77.770721] [<ffffffff8006db8a>] hrtimer_interrupt+0xce/0x1da
[   77.776471] [<ffffffff8072051e>] riscv_timer_interrupt+0x30/0x3a
[   77.782485] [<ffffffff8005a69e>] handle_percpu_devid_irq+0x80/0x108
[   77.788761] [<ffffffff8005592c>] handle_domain_irq+0x58/0x88
[   77.794423] [<ffffffff803675a6>] riscv_intc_irq+0x36/0x5e
[   77.799827] [<ffffffff800030b0>] ret_from_exception+0x0/0xc
[   77.805407] [<ffffffff015a6c14>] init_wr32+0x20/0x8e [nouveau]
[   77.942870] pcie_plda 2c000000.pcie: AXI post error
[   78.203473] pcie_plda 2c000000.pcie: AXI post error
... [repeating again]

The system went into some sort of lockup loop at this point—over on the main screen, if I moved the mouse it would move for a fraction of a second every three seconds or so. Typing was impossible: it would erratically accept a few letters and discard others, so typing "starfive" produced things like "sterve", "sf", and "ve".

@geerlingguy
Owner Author

Here's a detailed guide from @Opvolger on how he got a Radeon 5450 running Quake II! https://github.com/Opvolger/Opvolger/blob/master/starfiveVisionFive2/FedoraATIRadeon5450.md

@Opvolger

I had the same error, "pcie_plda 2c000000.pcie: AXI post error". My problem was the power supply to the PCIe board: it was not powerful enough, so I put a complete ATX power supply on it.

@Opvolger

For the errors with the AMD video card: you need to add the firmware blobs to the kernel (see my readme on GitHub). It took me a minimum of 2 hours to figure that out.

@geerlingguy
Owner Author

> pcie_plda 2c000000.pcie: AXI post error

Ah... that makes sense—I only had my little spare 20W adapter hooked up, and the 750 Ti can draw up to 60W, I think. I'll have to have another go.

And for AMD, I did put the firmware in place (see my comment a few above), but I'm running into conflicts with the default xorg setup in the -69 Debian image. Apparently others have just switched to another userland to get around it (like you did with Fedora!). I just haven't had time to dig in further yet.

Plenty to test in the future :D

@brucehoult

> @brucehoult - Have you gotten your board yet?

Yes! On Friday the 10th. And then on the 12th, Tropical Cyclone Gabrielle arrived and I was without electricity or internet for four days. And then on the 19th-26th I was away from home at a company "all hands" conference. Now I'm trying to catch up on everything. I hope to have some time to look at the VF2 over the weekend. And a new image dropped today...

This probably doesn't match your schedule :-(

@geerlingguy
Owner Author

@brucehoult - Ah, just noticed the forum post (https://forum.rvspace.org/t/visionfive-2-debian-image-202302-released/2132/12) when I glanced at the forums. Looks like a few little annoyances are fixed (like /boot/boot now being /boot, and shipping a minimal image instead of an 8 GB (!) ISO). But it's still not getting hardware acceleration in Firefox, and it looks like there are still some GPU issues.

Also, the fact that the little boot mode switch now works could throw a lot of people off guard (having to set the DIP switches to the right configuration for SD, eMMC, etc. boot).

@brucehoult

Yes, this image is under a 700 MB download.

A lot of people will be happy about the "bonus" WIFI dongle they sold us for 8 (?) SGD now working.

The "What's Next - WIP" section in https://rvspace.org/en/project/VisionFive2_Debian_Wiki_202302_Release lists Firefox hardware acceleration and other GUI / GPU improvements.

Give them a month for the next image with improvements in those areas? That would be quick work.

@Opvolger

Opvolger commented Mar 1, 2023

The status of patches for mainline Linux support: https://rvspace.org/en/project/JH7110_Upstream_Plan

Give them some time and it will all be mainline.

@geerlingguy
Owner Author

Today I posted a video and blog post about the VisionFive 2.

@morphykuffour

Check out https://github.com/zhaofengli/nixos-riscv64; he is able to run more software on NixOS.

@igorpecovnik

Armbian on the VF2: https://www.armbian.com/visionfive2/. Currently only two variants, with Ubuntu userland. HW-support-wise, similar to stock.

@Opvolger

Opvolger commented Mar 9, 2023

I see that my solution with a PCIe VGA card is not working on the latest JH7110_VisionFive2_devel branch. I have to find out why... commit 59cf9af678dbfa3d73f6cb86ed1ae7219da9f5c9 (from before last week's update) is still working.

@Opvolger

It is working again on the latest commit.
Nvidia GTX 950: text working, no X11.
Nvidia GTX 770 and 580: working, but fewer frames than with my ATI Radeon 5450 in Quake2. X11 feels a bit better. Eduke32 gives a black screen... Minetest is flickering and very slow on the 580, and gets 5 fps on the 770. Firefox crashed after a couple of minutes on YouTube. (Browsing is a little better than with the ATI Radeon, but less stable.)
AMD Radeon RX 6600: nothing.

So maybe find an AMD Radeon 200 Series, the last chipset with the old ATI kernel drivers? I'll keep an eye open :)

@Opvolger

Opvolger commented Apr 1, 2023

                                     ......            [email protected] 
     .,cdxxxoc,.               .:kKMMMNWMMMNk:.        -------------------- 
    cKMMN0OOOKWMMXo. ;        ;0MWk:.      .:OMMk.     OS: openSUSE Tumbleweed riscv64 
  ;WMK;.       .lKMMNM,     :NMK,             .OMW;    Host: StarFive VisionFive V2 
 cMW;            'WMMMN   ,XMK,                 oMM'   Kernel: 5.15.0-dirty 
.MMc               ..;l. xMN:                    KM0   Uptime: 1 hour, 33 mins 
'MM.                   'NMO                      oMM   Packages: 2279 (rpm) 
.MM,                 .kMMl                       xMN   Shell: bash 5.2.15 
 KM0               .kMM0. .dl:,..               .WMd   Resolution: 2560x1440 
 .XM0.           ,OMMK,    OMMMK.              .XMK    DE: Plasma 5.27.2 
   oWMO:.    .;xNMMk,       NNNMKl.          .xWMx     WM: kwin 
     :ONMMNXMMMKx;          .  ,xNMWKkxllox0NMWk,      Theme: [Plasma], Breeze [GTK2/3] 
         .....                    .:dOOXXKOxl,         Icons: [Plasma], breeze [GTK2/3] 
                                                       Terminal: konsole 
                                                       CPU: (4) @ 1.500GHz 
                                                       GPU: AMD ATI Radeon R9 290/390 
                                                       Memory: 1524MiB / 7896MiB 

                                                                               
                                                                               

The hostname is ubuntu, but I am running openSUSE, with an AMD ATI Radeon R9 290/390, using the amdgpu driver, not radeon (the radeon driver gives errors). Browsing (Chromium) and Quake2 run much better now. This comment was written on a RISC-V machine :)

@github-actions

github-actions bot commented Aug 3, 2023

This issue has been marked 'stale' due to lack of recent activity. If there is no further activity, the issue will be closed in another 30 days. Thank you for your contribution!

Please read this blog post to see the reasons why I mark issues as stale.

@IOOI-SqAR

You should do a retest (and maybe a new video) once this is all green (i.e. everything has been upstreamed):

https://rvspace.org/en/project/JH7110_Upstream_Plan

Also, watch out for this series of patches, as those enable the GPU of the JH7110:

https://lore.kernel.org/lkml/?q=%40imgtec.com

@Headcrabed

Headcrabed commented Sep 18, 2023

> Also, watch out for this series of patches, as those enable the GPU of the JH7110:
>
> https://lore.kernel.org/lkml/?q=%40imgtec.com

That series would add support for the A-Series GPUs first, then the B-Series used on the JH7110. Besides, any retest should be performed with GCC 13 or newer; a great performance boost was noticed (about a 1/7 improvement in Coremark score).

@platima
Contributor

platima commented Feb 3, 2024

@geerlingguy - GB v6 results, if you want to add them: https://browser.geekbench.com/v6/cpu/3784701

@geerlingguy
Owner Author

@platima - Excellent, thank you!
