
Add "ArchivalNode" capability #2346

Open
roman-khimov opened this issue Feb 16, 2021 · 6 comments
Labels: Discussion (Initial issue state - proposed but not yet accepted)
Milestone: v3.8.0

Comments

@roman-khimov
Contributor

Summary or problem description
We have a MaxTraceableBlocks protocol parameter that controls how far back a smart contract can reach with its requests for blocks and transactions, which in turn allows nodes to drop blocks/transactions older than that. I expect nodes to implement this tail-cutting option in some form (neo-go already has RemoveUntraceableBlocks for that), so in the end we'll have a network where some nodes store the whole block history and some don't (while still being full nodes).
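For illustration, here is a minimal Go sketch of the tail-cutting condition such a node might apply; the helper is hypothetical, not actual neo-go code, and its parameters just mirror the MaxTraceableBlocks/RemoveUntraceableBlocks settings mentioned above:

```go
// isDroppable reports whether a stored block has fallen out of the traceable
// window and can be removed by a node running with the tail-cutting option on.
// Hypothetical helper for this discussion, not part of any existing API.
func isDroppable(blockHeight, currentHeight, maxTraceableBlocks uint32, removeUntraceable bool) bool {
	if !removeUntraceable {
		return false // archival behaviour: keep the whole history
	}
	// A block becomes untraceable once it is more than MaxTraceableBlocks behind the tip.
	return currentHeight > maxTraceableBlocks && blockHeight < currentHeight-maxTraceableBlocks
}
```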

The problem is that if a new node joins this network and tries to synchronize from the genesis, it might only have partial-history nodes among its neighbors, and even though it might think it has enough connections, it won't be able to synchronize simply because no neighbor can provide it with block number 1. Even if we store old blocks on NeoFS in the future, or add a P2P state exchange mechanism, some nodes should still keep the whole archive available and accessible via regular P2P exchange. But we need to be able to find them.

Do you have any solution you want to propose?
We can introduce a new NodeCapability, ArchivalNode, signifying that the node has the whole block archive available for synchronization (so neo-go with RemoveUntraceableBlocks set to false will announce ArchivalNode, and with it set to true won't). Nodes can then take this capability data into account when synchronizing.
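To make the proposal concrete, here's a rough Go sketch of how the capability could be announced and then used for peer selection. The type codes, names and helpers below are illustrative assumptions for this issue, not the actual Neo/neo-go payload API:

```go
package capability

// NodeCapabilityType enumerates capability codes carried in the version message.
// TCPServer and FullNode mimic existing capabilities; the ArchivalNode value
// is an assumption made up for this sketch.
type NodeCapabilityType byte

const (
	TCPServer    NodeCapabilityType = 0x01
	FullNode     NodeCapabilityType = 0x10
	ArchivalNode NodeCapabilityType = 0x11 // proposed: whole block archive is kept
)

// NodeCapability is a single capability entry announced by a node.
type NodeCapability struct {
	Type NodeCapabilityType
	Data []byte // type-specific payload (port, start height, ...)
}

// Announce builds the capability list for the version message.
// removeUntraceable mirrors neo-go's RemoveUntraceableBlocks setting:
// only nodes that keep the full history announce ArchivalNode.
func Announce(removeUntraceable bool) []NodeCapability {
	caps := []NodeCapability{{Type: FullNode}}
	if !removeUntraceable {
		caps = append(caps, NodeCapability{Type: ArchivalNode})
	}
	return caps
}

// ArchivalPeers filters known peers down to those able to serve any historic
// block, which is what a node synchronizing from the genesis would prefer.
func ArchivalPeers(peers map[string][]NodeCapability) []string {
	var out []string
	for addr, caps := range peers {
		for _, c := range caps {
			if c.Type == ArchivalNode {
				out = append(out, addr)
				break
			}
		}
	}
	return out
}
```

A syncing node could then request old blocks only from ArchivalPeers while using any connected neighbor for blocks inside the traceable window.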

Neo Version

  • Neo 3

Where in the software does this update apply?

  • P2P (TCP)
roman-khimov added the Discussion label on Feb 16, 2021
@erikzhang
Member

I think that by the time we cut the blocks, the data is already very large. Synchronization via the P2P network is no longer realistic. If you really need to synchronize the complete history, you should download the offline synchronization package.

@roman-khimov
Contributor Author

  1. It depends on the MaxTraceableBlocks setting; we plan to use relatively low values for the NeoFS network.
  2. P2P synchronization is always realistic, it's just a matter of time, but IMO it is important to have all data available on the network. Chain dumps are nice, but the network should still keep this data one way or another (in a properly decentralized manner).
  3. At the moment the default MaxTraceableBlocks is ~2M (though Lower MaxValidUntilBlockIncrement limit #2026 suggests that it can be lower); 2M is not a lot of blocks, one can get to 5M blocks on Neo 2 in less than 12 hours.

@doubiliu
Contributor

It can be understood as a concept of pruned nodes. Ordinary nodes only need to keep the contract state area, and only need to synchronize that state during synchronization, which reduces synchronization time and improves efficiency.
Historical blocks and transactions can be stored in NeoFS storage, and storage nodes can be encouraged to keep historical block data by giving them incentives. This is equivalent to introducing the concept of storage-as-mining to Neo. It is a very interesting experiment and could attract more users with a certain amount of capital to join Neo.

@roman-khimov
Contributor Author

BTW, we're past our current MaxTraceableBlocks setting on mainnet now (even though #2026 is still relevant), which means that there could already be nodes that strip old data.

@shargon
Member

shargon commented Sep 6, 2022

Maybe we should download the offline package automatically when the node starts from 0.

@cschuchardt88
Member

cschuchardt88 commented Dec 23, 2023

> Maybe we should download the offline package automatically when the node starts from 0.

That would be centralizing the network.

After reviewing the core code for the P2P network, I believe one big problem lies in not responding fast enough to requests, due to HeaderCache, TaskManager and task queueing. I've been doing testing with my own node talking to the core node logic, and with the right responses at the time of the request, downloading blocks over the network can be about as fast as downloading the offline archive. Processing of the blocks and the HeaderCache can still slow it down a bit, but it's all the queueing and task handling that slows the network down to where you see it today.

roman-khimov added this to the v3.8.0 milestone on Feb 27, 2024