Archive node often resync when restarting on docker #10248
Comments
I observe the same problem with the ETH Ropsten network. When updating from version
Edit: Configuration:
|
@grittibaenz Do you mean that your problem happens only when you try to upgrade the Parity version? |
@yxliang01: That is indeed a good question. I am currently resyncing the blockchain and will do a snapshot once done. After that I will do some further testing and report back. It might take another 1-2 days. |
@grittibaenz Hmm Okay :) |
I have the same problem with the latest 2.2.8 version: I had deployed and fully synced a Parity Ropsten node version 2.1.10 with the It finished syncing a few days ago and is currently at the latest Ropsten block - the same block that ropsten.etherscan.io shows (currently 4936368). When I upgrade the node to version 2.2.8, it starts syncing from the genesis block instead of reusing the existing state. In the past when I upgraded the Parity version it would reuse the old state and keep syncing - in the worst case it would re-org the bad blocks, but with the latest version it seems to create a new state trie and starts syncing from the beginning. |
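For reference, a quick way to check which block the node is actually at before and after an upgrade or restart is to query its JSON-RPC endpoint. A minimal sketch, assuming the HTTP RPC interface is enabled and reachable on the default port 8545:

```bash
# Current head block, returned as a hex-encoded block number.
curl -s -X POST -H "Content-Type: application/json" \
  --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}' \
  http://localhost:8545

# Returns false when fully synced, or currentBlock/highestBlock while a
# (re)sync is in progress, which makes a resync-from-genesis easy to spot.
curl -s -X POST -H "Content-Type: application/json" \
  --data '{"jsonrpc":"2.0","method":"eth_syncing","params":[],"id":1}' \
  http://localhost:8545
```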
@nostdm @joshua-mir Actually no, I don't think this is a duplicate of #10275. The problem I had is on a node that was synced from the very beginning using 2.2.7; no upgrading was performed. #10275 describes a similar outcome, but there it happened right after upgrading from an older version. The main difference is that in my case, the originally synced head comes back after some number of restarts, and resets to genesis again after some more restarts. @nostdm @grittibaenz Would you like to try restarting your node multiple times to see whether the synced head comes back? If yes, then it's a duplicate; if not, it's not. |
Someone on Gitter suggested renaming the |
@nostdm Solved the issue as in, after renaming, you can continue syncing from where you were? |
(issue #10160 for reference, it might be what @grittibaenz is experiencing, but not what you are @yxliang01) |
@joshua-mir I think you wrongly mentioned @grittibaenz (should be @nostdm I guess) |
Both actually. |
@joshua-mir Hmm just wondering, is this issue being investigated? Or, is this backlogged? |
I don't believe anyone is actively working on this issue at the moment; I haven't been able to reproduce it locally. Are you still seeing this often? |
@joshua-mir Yes, this is still reproducible. Are you running Parity in a Docker container? I believe this is a bug because it happens on two different, independent nodes with the same version. |
Whoops. Haven't tried in docker yet actually, let me try that now 🙏 are you mounting any volumes? |
I am mounting /home/parity/.local/share/io.parity.ethereum read-write. I terminate Parity via Ctrl+C; Parity receives SIGINT and says "finishing work". File permissions look fine. |
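For anyone trying to reproduce this, a minimal sketch of the setup described above, assuming the official parity/parity image; the image tag, host path, container name, and chain flag are examples, while the in-container data path is the one mentioned in this thread:

```bash
# Run Parity with the data directory mounted read-write on the host,
# so the chain database should survive container restarts.
docker run -ti --name parity-node \
  -v /srv/parity-data:/home/parity/.local/share/io.parity.ethereum \
  parity/parity:v2.2.7 --chain ropsten --pruning archive

# Stop it the way described above: Ctrl+C in the attached terminal, or send
# SIGINT so Parity can "finish work" and close the database cleanly.
docker kill --signal SIGINT parity-node
```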
Sorry for never getting back to you. I've had a few busy weeks, and reporting and testing issues with almost every Geth, Parity and IOTA IRI release is tedious. Because I didn't update our ETH Ropsten nodes due to this bug, we obviously ended up on the wrong side of the fork. This means I am currently resyncing the full Ropsten blockchain again with one node. The second node now has a garbage chain, but I can do some testing with it:
When updating from
Please let me know what I can test for you. This problem is(!) reproducible. |
@grittibaenz Thanks for your update. However, from what you said, it looks like this problem occurs right after an upgrade. Would you like to try restarting the upgraded node to see whether a resync is performed? My problem occurs every time I restart, without any upgrading. |
I just upgraded Parity from v2.3.0 to v2.3.2. Upon restart, it began resyncing the Ropsten chain from genesis. I took a look, and this is occurring because Parity changed the Ropsten directory name:
By renaming |
@brandoncurtis Does the change of directory name happen upon exiting Parity or upon starting it? Also, if you restart multiple times, does it recover automatically? |
Upon restarting Parity, it creates the new directory. To fix the problem, I deleted this new directory and renamed the existing |
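For anyone hitting the directory rename, a sketch of the workaround described above. The directory names below are placeholders, since the exact names depend on the Parity versions involved; check which directory the node actually opens (Parity prints the DB path at startup) before moving anything:

```bash
# Stop the node first. Then replace the empty directory Parity just created
# with the previously synced one. <new_dir> and <old_dir> are placeholders.
cd ~/.local/share/io.parity.ethereum/chains
rm -rf <new_dir>        # the freshly created, empty chain directory
mv <old_dir> <new_dir>  # reuse the already-synced database
```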
We're getting a lot of reports on this issue that should be in #10160, sorry @yxliang01 |
@joshua-mir Yeah. Confirm that there is no |
Haven't been able to. I suspect it's something to do with the blocks nearer to the chain head, so I'll try again with a node I'll keep running for longer. |
@joshua-mir Hmm, as stated in the first post, it has happened ever since around block 2700000. So maybe you want to try reaching that height. |
Is there any update on the above? We had a fully synced archive node (mainnet) on 2.0.7. We upgraded it to 2.2.7 and had the same issue: the node synced from the genesis block after the upgrade instead of catching up from the latest block. We left the node for two weeks and the sync was very slow, about 40000 blocks per day. We then updated the node from 2.2.7 to 2.2.11 and the sync started from the genesis block again; it was fast until about 2.6 million blocks and after that became very slow, about 11000 blocks per day. Any help is much appreciated; it's been three to four weeks with no solution! |
@sammisetty I think the slowdown after 2.6 million blocks is related to the DoS attacks on Ethereum (FYI, https://www.reddit.com/r/ethereum/comments/5es5g4/a_state_clearing_faq/ ). (My node is also syncing very slowly around both 2.6 million and 4 million blocks, and is now stuck.) As far as I know, there has been no progress on this issue since they can't reproduce it. As this is not a major problem for me at the moment, I am working on something with higher priority and trying not to shut the node down... If you have time, maybe you can start your node with |
@yxliang01 I will start the node with -l trace and post the log soon. I also have another question: the db/chains/ethereum/db/uniqueId/snapshot folder has two sub-folders
Is there any reason you know of for the two folders? |
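For capturing the trace log mentioned above, a minimal sketch, assuming the node is run directly rather than through Docker; the log file name is an example, and the -l value is the one used in this thread:

```bash
# Start the node with trace-level logging and keep the output for the issue.
# Add --chain <name> if the node is not syncing mainnet.
parity --pruning archive -l trace 2>&1 | tee parity-trace.log
```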
Hmm, I haven't encountered this problem recently and this issue is stale (nobody new has reported the same issue recently). I suggest closing it. If anyone (including me) encounters this again, we can reopen or create a new issue :) . @joshua-mir Sounds good? |
This happens on two separate nodes with the same setup. Both are archive nodes with tracing on, running on HDD; fat-db is on for one and off for the other. Currently one is at around block 5000000 and the other at around block 2700000.
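For reference, a sketch of the flags that correspond to this setup; these are standard Parity CLI options, though the exact invocation used by the reporter is not shown in the issue:

```bash
# Node one: archive pruning, tracing on, fat-db on
parity --pruning archive --tracing on --fat-db on

# Node two: same setup, but with fat-db off
parity --pruning archive --tracing on --fat-db off
```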
actual behavior
After shutting down the node, whether gracefully or not, starting it again will often cause it to sync from block 0. However, since at times it can resume from the highest head, the db doesn't seem to be corrupted.
expected behavior
After every restart, the node should continue syncing from the highest block it has synced so far.
Thanks