QBFT Bonsai 24.3.3 private network halted #7642

Open
carlossrp opened this issue Sep 19, 2024 · 0 comments
Description

Hi guys, we have a production private QBFT PoA network running Besu 24.3.3 that halted this week. The network architecture is:

  • 4 RPC Bonsai Full nodes
  • 1 Archive Forest Full node
  • 4 Validator Bonsai Full nodes
  • 4 more RPC Bonsai Full nodes that act as “standby validators”.

The nodes are distributed across different AZs and regions: each validator sits in its own AZ, with two validators in one region and the other two in another. The same layout applies to the “standby validators” and the RPC nodes.
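For context, each validator is launched roughly like the sketch below (paths and flag values are placeholders, not our exact production command):

  # QBFT genesis shared by all nodes; Bonsai storage on every validator.
  # ADMIN is in the RPC API list so the log level can be changed at runtime.
  besu \
    --genesis-file=/etc/besu/genesis.json \
    --data-path=/var/lib/besu \
    --data-storage-format=BONSAI \
    --sync-mode=FULL \
    --rpc-http-enabled \
    --rpc-http-api=ETH,NET,QBFT,ADMIN \
    --p2p-port=30303
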
This week, the network simply halted. The validators, which were running at INFO log level, stopped producing blocks with no error message and eventually stopped logging anything at all.
To recover quickly, we manually restarted the 4 validators, bringing the network back online. Validator logs are attached.
The network had been running for about 4 months without any issues.
Curiously, some RPC nodes came back and some did not. We restarted 2 of the main RPC nodes, the ones the applications connect to, and they started syncing blocks again, while the other 2 recovered on their own, without intervention.
There are still “standby validators” that we didn’t restart and that remain halted. We changed their log level to DEBUG and TRACE but didn’t get much useful information from the output. Both logs are attached as well.
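For reference, Besu can change a node’s log level at runtime, without a restart, via the admin_changeLogLevel JSON-RPC method; a minimal sketch, assuming the ADMIN API is enabled and the default HTTP RPC port:

  # Raise the node's log level to TRACE at runtime (no restart needed).
  curl -X POST -H "Content-Type: application/json" \
    --data '{"jsonrpc":"2.0","method":"admin_changeLogLevel","params":["TRACE"],"id":1}' \
    http://127.0.0.1:8545
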
One last important piece of information: near the time the network froze, we were deploying an application contract to the network. The smart contract targets Solidity 0.8.24 and was deployed with Hardhat.
This contract had previously been deployed to one of our development/QA networks without any issues, and after the crash and the restart we were able to deploy it to the network again. Still, it is the major event that happened near the crash time, besides the “business as usual” load on other contracts. The applications kept running normally throughout this deploy.
Our guess is that the deploy may have triggered a bug, or that it is simply a bug in 24.3.3 itself. We are already planning to upgrade to 24.9.1, but we want to share this case with the community because it could be a bug, and we’re afraid it may be related to the combination of Bonsai and QBFT.
Thank you

Steps to Reproduce

We’re unable to reproduce the issue in our test/QA environments. We have a QA network that has been running for about the same length of time with no issues.

Versions

  • Software Version: Besu 24.3.3
  • Docker image: official hyperledger/besu:24.3.3 (arm image)
  • AWS EKS: 1.29
  • EC2: m6g.xlarge

Smart contract information

  • Solidity version: 0.8.24
  • ethers v6
  • hardhat 2.22.3
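
The deploy itself was a standard Hardhat script, essentially along the lines of the sketch below (MyContract and the network name are placeholders, not our actual code):

  // scripts/deploy.ts: run with `npx hardhat run scripts/deploy.ts --network besu`
  import { ethers } from "hardhat";

  async function main() {
    // "MyContract" is a placeholder for the actual application contract.
    const factory = await ethers.getContractFactory("MyContract");
    const contract = await factory.deploy();
    await contract.waitForDeployment(); // ethers v6: wait for the deploy tx to be mined
    console.log("Deployed at:", await contract.getAddress());
  }

  main().catch((err) => {
    console.error(err);
    process.exitCode = 1;
  });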

Additional information

validator-1_-_Copy
validator-2_-_Copy
validator-4_-_Copy
