getHistoryInterpreter: AcquireFailurePointTooOld #434
Currently, when we hit this issue, db-sync exits with a fatal error.
@erikd I am using:

db-sync (`git log -1`):

```
commit e9474143ba6ce07c2f6d498ee80178c007d68a28 (HEAD -> release/7.1.x, origin/release/7.1.x)
Author: Duncan Coutts <[email protected]>
Date:   Fri Dec 11 16:07:07 2020 +0000

    Version 7.1.2
```

node: 1.24.2

Steps for building/running:

node:

```
git clone https://github.com/input-output-hk/cardano-node
cd cardano-node
git checkout tags/1.24.2
nix-build -A scripts.mainnet.node -o mainnet-node-local
# Start node:
./mainnet-node-local
```

db-sync:

```
git clone https://github.com/input-output-hk/cardano-db-sync
cd cardano-db-sync
git checkout release/7.1.x
nix-build -A cardano-db-sync-extended -o db-sync-node-extended
# Create DB:
PGPASSFILE=config/pgpass-mainnet scripts/postgresql-setup.sh --createdb
```

All good!

```
# Start db-sync:
PGPASSFILE=config/pgpass-mainnet db-sync-node-extended/bin/cardano-db-sync-extended \
  --config config/mainnet-config.yaml \
  --socket-path ../cardano-node/state-node-mainnet/node.socket \
  --schema-dir schema/ \
  --state-dir ledger-state/mainnet
```

This is how it ends for me after running:

```
[db-sync-node:Info:43] [2020-12-14 10:15:48.36 UTC] insertByronBlock: slot 380000, block 379948, hash d32a3695c15ff1a87fccae2100948f4ff19423136f98cc1a6aeaaf206c5f069b
[db-sync-node:Info:43] [2020-12-14 10:16:17.93 UTC] insertByronBlock: slot 385000, block 384948, hash 5266a9ca0a5195dcb4368968f35900bcf8d1f81a6fb53a93a115bd1e87eefe1b
[db-sync-node:Error:46] [2020-12-14 10:16:19.52 UTC] recvMsgRollForward: FatalError {fatalErrorMessage = "getHistoryInterpreter: AcquireFailurePointTooOld"}
[db-sync-node:Error:43] [2020-12-14 10:16:19.53 UTC] runDBThread: AsyncCancelled
[db-sync-node:Error:38] [2020-12-14 10:16:19.53 UTC] ChainSyncWithBlocksPtcl: FatalError {fatalErrorMessage = "getHistoryInterpreter: AcquireFailurePointTooOld"}
[db-sync-node.Mux:Info:35] [2020-12-14 10:16:19.53 UTC] Bearer on ConnectionId {localAddress = LocalAddress "", remoteAddress = LocalAddress "state-node-mainnet/node.socket"} event: State: Dead
[db-sync-node.Mux:Notice:35] [2020-12-14 10:16:19.53 UTC] Bearer on ConnectionId {localAddress = LocalAddress "", remoteAddress = LocalAddress "state-node-mainnet/node.socket"} event: Miniprotocol MiniProtocolNum 5 InitiatorDir terminated with exception FatalError {fatalErrorMessage = "getHistoryInterpreter: AcquireFailurePointTooOld"}
[db-sync-node.Subscription:Error:33] [2020-12-14 10:16:19.53 UTC] [String "Application Exception: LocalAddress \"../cardano-node/state-node-mainnet/node.socket\" FatalError {fatalErrorMessage = \"getHistoryInterpreter: AcquireFailurePointTooOld\"}",String "SubscriptionTrace"]
cardano-db-sync-extended: FatalError {fatalErrorMessage = "getHistoryInterpreter: AcquireFailurePointTooOld"}
[db-sync-node.ErrorPolicy:Error:4] [2020-12-14 10:16:19.53 UTC] [String "ErrorPolicyLocalNodeError (ApplicationExceptionTrace (FatalError {fatalErrorMessage = \"getHistoryInterpreter: AcquireFailurePointTooOld\"}))",String "ErrorPolicyTrace",String "LocalAddress \"../cardano-node/state-node-mainnet/node.socket\""]
artur@artur-desktop:~/Projects/db-sync-7-1/cardano-db-sync$
```

However, when I built a Docker image locally using this version of …
Looking at the logs above, I'm not sure if there is any way to actually fix the race condition without changes at …
It seems there is another race condition here: first it checks whether the queue is full, and only then writes to it. These two steps are not atomic, so the queue may fill up in between and the writing thread will block anyway. I need to take a closer look at exactly how these issues trigger the error. A minimal sketch of that interleaving is below.
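The following is an illustrative STM sketch of the interleaving described above, assuming a `TBQueue`-style bounded queue; the function names are made up for the example and are not the actual db-sync code.

```haskell
import Control.Concurrent.STM
import Control.Monad (unless)

-- Racy pattern: the fullness check and the write run in two separate
-- STM transactions, so another producer can fill the queue in between,
-- and the write below blocks even though the check said "not full".
racyWrite :: TBQueue a -> a -> IO ()
racyWrite q x = do
  full <- atomically (isFullTBQueue q)
  unless full $
    atomically (writeTBQueue q x)  -- queue may have filled meanwhile

-- Atomic alternative: a single transaction. writeTBQueue itself retries
-- until there is space, so there is no window between check and write.
atomicWrite :: TBQueue a -> a -> IO ()
atomicWrite q x = atomically (writeTBQueue q x)
```

In the single-transaction version the producer either enqueues or retries atomically, so the "checked not-full, then slept anyway" window disappears (the producer may still block, but only inside one STM transaction rather than after a stale check).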
If the node state is deleted and then the node and db-sync are started together, we sometimes get an error: `getHistoryInterpreter: AcquireFailurePointTooOld`.
It would be nice to find a neat workaround for this.
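One possible shape for such a workaround, as a sketch only (the names, the `Either`-based interface, and the retry policy are all assumptions, not db-sync's actual API): instead of treating the first `AcquireFailurePointTooOld` as fatal, retry the acquisition a few times while the freshly started node settles.

```haskell
import Control.Concurrent (threadDelay)
import Control.Exception (throwIO)

-- Hypothetical retry wrapper: 'acquire' stands in for whatever action asks
-- the node for the history interpreter and reports acquire failures (such
-- as AcquireFailurePointTooOld) as a Left value.
retryAcquire :: Int -> IO (Either failure interp) -> IO interp
retryAcquire retries acquire = go retries
  where
    go 0 = throwIO (userError "getHistoryInterpreter: giving up after retries")
    go n = do
      r <- acquire
      case r of
        Right interp -> pure interp
        Left _       -> threadDelay 1000000 >> go (n - 1)  -- wait 1s, retry
```

Whether retrying is enough depends on why the point became too old: if db-sync keeps asking for a point far behind the node's current window, it would have to re-acquire at a newer point rather than simply retry the same one.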