JsonRpcExecutorHandler | Error streaming JSON-RPC response #4398
Also seeing this:
execution_1 | 2022-09-15 14:10:39.575+00:00 | vert.x-worker-thread-0 | INFO | EngineForkchoiceUpdated | VALID for fork-choice-update: head: 0x3033e6cdbafa6907b10fc704d38b4a2177b34926b15a79af4ced9165ff799b8b, finalized: 0x9aa91ed7cbb4327905efde1f89d8b86e77f1db3a4ac99236748dd28abf1cf25d, safeBlockHash: 0xcdf9ed89b0c43cda17398dc4da9cfc505e5ccd19f7c39e3b43474180f1051e01
The distribution is not even:
At the same time the beacon client (Lighthouse) complains:
Experiencing the exact same issue with a Besu + Lighthouse combo running on a server (Debian 11, no VM, no Docker).
+1 Besu + Lighthouse
Seeing this a lot on Rocket Pool nodes. Affects all CLs. Powerful servers and small machines alike.
+1 Besu/Lighthouse on Rocket Pool. Seems to happen every few hours.
Same with Besu + Lighthouse on an RP node using the latest openjdk Docker image.
+1 Besu/Lighthouse - been experiencing this since the Merge, with occasional missed attestations, possibly when they coincide with the error.
Seeing this as well with Besu/Nimbus. I was synchronizing fine this morning and then it suddenly began spitting out the fork-choice-update logs. The variables within that log change with each iteration. I have restarted Besu multiple times without any change in behavior. After restarting both Besu and Nimbus, I'm now seeing the following on start:
...and it cycles through the fork-choice-update logs, never making any progress.
Same problem here - Besu with Lighthouse. One missed attestation slot so far (slot 4701535 at block 15538896), but the VC and BN don't show any errors; Besu had the JSON error a few minutes before the missed attestation. Didn't have any issues on Besu until a few hours after the Merge, when the JSON streaming errors started popping up in Besu. A restart didn't prevent more errors, but no more missed attestations yet. CONFIG: NUC11PAHi5 with 32GB RAM, 2TB Samsung 970 Evo Plus NVMe, 1 Gbps high-quality fibre internet. Besu at the time:
BN at the time:
VC at the time:
ADDITIONAL ERRORS:
Lighthouse sometimes complains about "Error Execution engine call failed" and "Error during execution engine upcheck":
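For anyone trying to narrow down the "Error during execution engine upcheck" side of this, here is a minimal sketch of probing Besu's JSON-RPC endpoint directly with a short timeout. The endpoint (localhost:8545) and the choice of eth_syncing are assumptions for illustration; the exact call and timeout Lighthouse uses may differ.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

// Minimal JSON-RPC health probe against a Besu node.
// Endpoint and method are assumptions: localhost:8545 with eth_syncing.
public class RpcUpcheck {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newBuilder()
                .connectTimeout(Duration.ofSeconds(2))
                .build();

        String body = "{\"jsonrpc\":\"2.0\",\"method\":\"eth_syncing\",\"params\":[],\"id\":1}";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8545"))
                .timeout(Duration.ofSeconds(2))   // short timeout, similar to a CL health check
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        long start = System.nanoTime();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;

        // A slow or dropped response here is the same symptom the CL reports
        // as "Error during execution engine upcheck".
        System.out.printf("status=%d elapsed=%dms body=%s%n",
                response.statusCode(), elapsedMs, response.body());
    }
}
```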
From a Discord message: Note: no attestation was actually missed at this time by validator xxxxxx, but it had a possibly related invalid head/target vote which effectively resulted in a missed attestation. As for env details: i7 NUC with 32GB RAM and 2TB Samsung EVO 970 SSD. All metrics were well in the green at the time of the error, with no suspicious bumps of any kind.
https://discord.com/channels/905194001349627914/938504958909747250/1019905431054864434
No negative impact on metrics and no missed attestations.
@SomerEsat can you please explain how you came to the conclusion that the JSON-RPC error leads to missed attestations? Some users have this error but no missed attestations.
@ibootstrapper @JanKalin @ronaldjmaas @lfinbob @risasoft @gamell Can you please confirm whether you are getting frequent missed attestations, and how your chains are progressing generally?
No missed attestations on my side, but I switched to a Geth/Lighthouse fallback node as a precaution for now, so I only have ~10h of logs/attesting experience with this issue.
I went from 99% to 83% effectiveness from pre- to post-Merge. No changes to the setup, latest software versions, fully synced prior to the Merge. Beaconcha.in also confirmed it via alerting. I'm not sure what the actual cause is, but it seems to happen when I see this (Besu):
And then this (Lighthouse beacon):
Note the timing: around that time, if an attestation is scheduled, it tends to fail. Having said that, the last attestation to fail (across multiple validators) was approx. 1 hour ago. Edit: just got alerts for 2 missed attestations (1 each from 2 separate validators).
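One way to sanity-check that correlation is to map the Besu error timestamps onto beacon chain slots. A minimal sketch, assuming mainnet parameters (genesis time 1606824023, 12-second slots); the timestamp below is just the one from the log at the bottom of this issue, used as a placeholder.

```java
import java.time.Instant;

// Map an execution-client log timestamp to a beacon chain slot, to see whether
// a JSON-RPC error falls inside the attestation window of that slot.
// Assumes mainnet constants: genesis 1606824023 (2020-12-01 12:00:23 UTC), 12s slots.
public class SlotFromTimestamp {
    static final long GENESIS_TIME = 1_606_824_023L;
    static final long SECONDS_PER_SLOT = 12L;

    public static void main(String[] args) {
        // Example timestamp taken from a Besu log line (placeholder value).
        Instant errorTime = Instant.parse("2022-09-15T14:03:47.422Z");

        long sinceGenesis = errorTime.getEpochSecond() - GENESIS_TIME;
        long slot = sinceGenesis / SECONDS_PER_SLOT;
        long offsetInSlot = sinceGenesis % SECONDS_PER_SLOT;

        // Attestations are due roughly 4 seconds into the slot, so an error early
        // in a slot is the most likely to coincide with a missed or late vote.
        System.out.printf("error at slot %d, %ds into the slot%n", slot, offsetInSlot);
    }
}
```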
I've personally only missed 1 attestation per validator since the Merge, despite 126 JSON-RPC errors:
It's a fast server: Ryzen 5950X, 128GB RAM, 2x 4TB NVMe. Many other Besu users in the Rocket Pool Discord #support see more missed attestations, or no attestations at all, especially on weaker machines. My rated.network effectiveness still seems fine as well:
Just after the Merge I had to restart Besu, as many others did. I had one missed attestation today, but that was before the restart. From the Merge until now (3:27 CEST) I've had 257 JSON-RPC errors and 9 errors. Except for this error, both the execution and consensus layers are following the chain in real time. HW is an AMD Ryzen 5 5600G, 32GB RAM at 3.2GHz and a Samsung SSD 980 Pro 2TB, with load hovering a bit below 1.0 and loads of free RAM. I added
EDIT: the consensus layer client is Lighthouse.
Yes, missing many today. I'm running on an HP server with 512GB RAM and 4x10 cores (80 threads).
Chain is progressing, but this results in one missed attestation every few hours. Here is an example miss:
mainnet-nightly-fast-sync:
@silado Since the Merge I have missed 13 attestations. Although I see a lot of JSON-RPC related errors in both the Besu and Lighthouse BN logs, I do not see any errors in the Lighthouse VC. The Lighthouse VC log always shows 'Successfully published attestations' regardless of whether attestations were missed or not. The Besu + Lighthouse clients are successfully synced to the chain; no issues there. Note that before the Merge I only missed 1 or 2 attestations, during monthly upgrades / system reboots. The node had been running rock solid for many months until today.
same |
We added logging and can see that Lighthouse (which I think most users on this issue are using) is sending
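(Not the actual logging that was added to Besu, but for reference, this kind of diagnostic can be approximated with a generic Vert.x sketch that records when the peer closes the connection before a response has been written. Vert.x 4.x on the classpath, port 8552 and the 5-second delay are assumptions for illustration.)

```java
import io.vertx.core.Vertx;
import io.vertx.core.http.HttpServer;

// Generic sketch (not Besu's handler code): log when the client drops the
// connection before a response is written, the situation that later surfaces
// as a ClosedChannelException on the write path.
public class CloseLoggingServer {
    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();
        HttpServer server = vertx.createHttpServer();

        server.requestHandler(request -> {
            long received = System.currentTimeMillis();

            // Fires if the underlying connection goes away, e.g. the CL timing out
            // and dropping the request before the EL answers.
            request.connection().closeHandler(v ->
                System.out.printf("connection closed by peer %dms after request %s%n",
                        System.currentTimeMillis() - received, request.uri()));

            // Simulate a slow JSON-RPC response (placeholder delay).
            vertx.setTimer(5_000, id -> {
                if (!request.response().closed()) {
                    request.response()
                            .putHeader("content-type", "application/json")
                            .end("{\"jsonrpc\":\"2.0\",\"id\":1,\"result\":null}");
                } else {
                    System.out.println("skipped write: response already closed");
                }
            });
        });

        server.listen(8552).onSuccess(s -> System.out.println("listening on 8552"));
    }
}
```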
There is something odd in the Lighthouse VC logs for the published attestations that miss; not sure if it is relevant or not. Missed attestations: there is a mismatch between the slot # and head_block. So if I understand correctly, the head_block values should have been:
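One way to confirm such a mismatch is to compare the head_block from the VC log against the canonical block root at that slot via the standard Beacon API. A rough sketch below; the beacon node URL (localhost:5052), the slot and the logged root are placeholders, and a skipped slot would return 404 rather than a root.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Compare the head_block a validator client logged for an attestation with the
// canonical block root at that slot, using the standard Beacon API.
// Assumptions: a beacon node API at localhost:5052, placeholder slot/root values.
public class HeadVoteCheck {
    public static void main(String[] args) throws Exception {
        long attestationSlot = 4701535L;      // slot from the VC log (example value)
        String loggedHeadBlock = "0x...";     // head_block from the VC log (placeholder)

        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:5052/eth/v1/beacon/blocks/"
                        + attestationSlot + "/root"))
                .GET()
                .build();

        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());

        // Response body looks like {"data":{"root":"0x..."}}; a mismatch with the
        // logged head_block means the attestation voted for a stale head.
        System.out.println("canonical root at slot " + attestationSlot + ": " + response.body());
        System.out.println("head_block logged by the VC: " + loggedHeadBlock);
    }
}
```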
Here is an update from the team.
Still an issue with the new update.
I started receiving the error below after the Merge:
execution_1 | 2022-09-15 14:03:47.422+00:00 | vert.x-worker-thread-1 | ERROR | JsonRpcExecutorHandler | Error streaming JSON-RPC response
execution_1 | io.netty.channel.StacklessClosedChannelException
execution_1 | at io.netty.channel.AbstractChannel$AbstractUnsafe.write(Object, ChannelPromise)(Unknown Source)
Frequency: every few minutes
Besu: besu/v22.7.2/linux-x86_64/openjdk-java-11 via eth-docker
Ubuntu 22.04
Docker: 20.10.12
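For anyone who wants to see the suspected mechanism on a test node, the sketch below issues a JSON-RPC call and abandons it after a very short timeout, so the connection is already gone when the node streams its response; that is the situation the ClosedChannelException above points at. The endpoint, method and 50ms timeout are illustrative assumptions, not the exact behaviour of any consensus client.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.net.http.HttpTimeoutException;
import java.time.Duration;

// Rough reproduction sketch: abandon a JSON-RPC call after a very short timeout.
// If the node is still building the response when the connection drops, its write
// may fail with a ClosedChannelException like the one in the log above.
// The endpoint (localhost:8545) and the 50ms timeout are illustrative assumptions.
public class AbandonedRpcCall {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // A heavier call makes a slow response (and therefore a late write) more likely.
        String body = "{\"jsonrpc\":\"2.0\",\"method\":\"eth_getBlockByNumber\","
                + "\"params\":[\"latest\",true],\"id\":1}";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8545"))
                .timeout(Duration.ofMillis(50))           // give up almost immediately
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        try {
            HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println("got response in time: " + response.statusCode());
        } catch (HttpTimeoutException e) {
            // The client walks away here; the server may only notice when it writes.
            System.out.println("request abandoned: " + e.getMessage());
        }
    }
}
```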