Falsely reported invalid transaction? #1668
Hmm, interesting. Do you know which transaction status message subxt decodes as invalid? Maybe we removed those subscription message logs from the jsonrpsee client, but it would be interesting to know here... (it should be quite trivial to find out with Wireshark.) EDIT: I haven't been able to reproduce this myself, but if you can apply my PR and re-run it, then we will be able to see the transaction status submitted by the node. Another thing to assert is that the nonce is unique when the same account is used to send transactions concurrently.
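To illustrate the nonce point: when several tasks send from the same account, they must draw nonces from one shared counter rather than each fetching the on-chain nonce independently. A minimal std-only sketch (no subxt involved; `reserve_nonce` is a hypothetical helper):

```rust
use std::collections::HashSet;
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::Arc;
use std::thread;

// Atomically takes the current nonce and bumps it for the next sender,
// so two concurrent submissions can never reuse the same nonce.
fn reserve_nonce(counter: &AtomicU64) -> u64 {
    counter.fetch_add(1, Ordering::SeqCst)
}

fn main() {
    let counter = Arc::new(AtomicU64::new(0));
    let mut handles = Vec::new();
    for _ in 0..8 {
        let c = Arc::clone(&counter);
        handles.push(thread::spawn(move || reserve_nonce(&c)));
    }
    let nonces: Vec<u64> = handles.into_iter().map(|h| h.join().unwrap()).collect();
    let unique: HashSet<u64> = nonces.iter().copied().collect();
    assert_eq!(unique.len(), nonces.len(), "nonces must be unique");
    println!("all {} nonces unique", nonces.len());
}
```

In a real test, the reserved nonce would then be passed explicitly when signing each transaction.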
Thanks for the debug patch! Caught it in the very first run:
I also have my suspicions about the culprit. I noticed in the node logs that there's a reorg around block 73, where the "invalid" transaction was included (most probably due to paritytech/polkadot-sdk#3205, which still affects parachains running with the lookahead collator; otherwise I cannot explain forks on a single-collator chain):
Now, I don't know how fork-aware subxt (or its backends) is. If it is not, that may be a natural failure when subxt sees the block being thrown away while the transaction gets finalized on another fork. WDYT?
Gotcha, so yes, this would be nice to support for sure. For subxt it depends which RPC backend is used:

So, you could try the unstable backend meanwhile, but no guarantees:

```rust
use subxt::backend::unstable::UnstableBackend;
use subxt::OnlineClient;

let (backend, mut driver) = UnstableBackend::builder().build(RpcClient::new(rpc.clone()));
tokio::spawn(async move {
    while let Some(val) = driver.next().await {
        if let Err(e) = val {
            eprintln!("Error driving unstable backend: {e}; terminating client");
        }
    }
});
let api: OnlineClient<PolkadotConfig> = OnlineClient::from_backend(Arc::new(backend)).await?;
```

cc @lexnv @jsdw Do you know whether the unstable backend is fork-aware currently?
IIUC, the issue happens with the legacy backend (although I expect similar behavior with the "unstable" backend):

In subxt, we rely on the RPCs exposed by the node: subxt/subxt/src/backend/legacy/mod.rs Lines 331 to 346 in b076f4c

For the legacy backend, we rely entirely on those RPCs. I expect the issue is coming from Substrate's transaction pool, and this is happening because the tx pool is not fork-aware currently. Offhand, I don't think there's something we can do from the subxt perspective to guard against this case:
```rust
let mut sub = subscribe_finalized_block_hashes();
let mut blocks: HashMap<Block::Hash, Block::Hash> = HashMap::new(); // parent hash -> block hash
while let Ok(Some(hash)) = sub.next().await {
    let parent_hash = hash.parent_hash();
    match blocks.entry(parent_hash) {
        Entry::Occupied(_) => {
            println!("Fork detected on parent: {parent_hash}");
        }
        Entry::Vacant(entry) => {
            entry.insert(hash);
        }
    }
}
```

And then users can wrap the subxt block subscription in an enum:

```rust
enum BlockEvent {
    Produced(Block),
    Forked(Parent, Block),
}
```

TL;DR: we need paritytech/polkadot-sdk#4775
I'd say: paritytech/polkadot-sdk#4639.
I'm trying the unstable backend, and it fails. Full log just in case: revenue-test2.log
Weird, do you know which method call caused that? I couldn't find it in the logs, but I would guess that the polkadot-parachain may not have the RPC v2 implementation enabled?
I'm not totally sure either, but according to the INFO logs, the only possible place where it could happen is
Oh, I'm no expert here :) How do you enable/disable it in a Cumulus-based node? Anyway, I'm developing workaround behavior for the tests, and it seems to work with the legacy RPC, so I hope it's enough for now, and we're all waiting for the fork-aware txpool! 🤞
The chain spec RPC stuff is implemented as an RPC extension, and if it's not enabled, as in https://github.com/paritytech/polkadot-sdk/blob/master/polkadot/rpc/src/lib.rs#L146, then it's not available on the node. I checked locally right now, and the polkadot-parachain (cumulus) doesn't provide these RPCs:

```diff
➜ jsonrpsee (with-client-sub-logs) ✗ diff polkadot.json polkadot_parachain.json
15,18d14
< "babe_epochAuthorship",
< "beefy_getFinalizedHead",
< "beefy_subscribeJustifications",
< "beefy_unsubscribeJustifications",
28,30d23
< "chainSpec_v1_chainName",
< "chainSpec_v1_genesisHash",
< "chainSpec_v1_properties",
57,64d49
< "grandpa_proveFinality",
< "grandpa_roundState",
< "grandpa_subscribeJustifications",
< "grandpa_unsubscribeJustifications",
< "mmr_generateProof",
< "mmr_root",
< "mmr_verifyProof",
< "mmr_verifyProofStateless",
95d79
< "sync_state_genSyncSpec",
```

Thus, chainSpec_v1 needs to be added here for it to work, but no idea why it wasn't enabled... Maybe @lexnv knows, or it was simply missed ^^
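The same comparison can be done programmatically against the method lists each node reports (a sketch; `missing_methods` is a hypothetical helper, and the real lists would come from each node's `rpc_methods` response):

```rust
use std::collections::HashSet;

// Returns the methods the relay-chain node exposes that the parachain node lacks,
// preserving the relay node's ordering.
fn missing_methods<'a>(relay: &[&'a str], parachain: &[&'a str]) -> Vec<&'a str> {
    let have: HashSet<&str> = parachain.iter().copied().collect();
    relay.iter().copied().filter(|m| !have.contains(m)).collect()
}

fn main() {
    let relay = ["chainSpec_v1_chainName", "chainSpec_v1_genesisHash", "system_health"];
    let parachain = ["system_health"];
    let missing = missing_methods(&relay, &parachain);
    assert_eq!(missing, vec!["chainSpec_v1_chainName", "chainSpec_v1_genesisHash"]);
    println!("missing on parachain: {missing:?}");
}
```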
I think I've missed that, indeed. It might be a good idea to also enable it there.
Holy s***, I've been working on an issue for two days and it's the exact same thing! I had a bash script that worked, and I'm currently porting it to a proper test, and was suffering from precisely the same issue:
I didn't check the parachain page, but I did check all the logs multiple times and nothing struck me as odd; I see now that it was because there was nothing wrong! This happened consistently around the same blocks: I'd submit 3 extrinsics and the third almost always failed (>90%); with a sleep of 4 seconds or longer, it worked! I did try the unstable backend and it happened anyway. Does anyone have ideas for a workaround? Aside from sleeping between transactions 😅
I ended up having a separate finality watcher. So basically, I created a separate task that follows finalized parablocks.
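A rough shape of such a watcher, with plain std types standing in for subxt's finalized-block subscription (all names here are hypothetical; a real watcher would drive this from `blocks().subscribe_finalized()`):

```rust
use std::collections::HashSet;

// Scans each finalized block's events for the set we expect, returning true
// once every expected event has been seen, regardless of which fork's status
// stream the submitter observed.
fn watch_for_events(finalized_blocks: &[Vec<&str>], expected: &[&str]) -> bool {
    let mut pending: HashSet<&str> = expected.iter().copied().collect();
    for events in finalized_blocks {
        for ev in events {
            pending.remove(ev);
        }
        if pending.is_empty() {
            return true; // everything we waited for was finalized
        }
    }
    false
}

fn main() {
    let blocks = vec![vec!["Balances.Transfer"], vec!["System.ExtrinsicSuccess"]];
    assert!(watch_for_events(&blocks, &["Balances.Transfer", "System.ExtrinsicSuccess"]));
    assert!(!watch_for_events(&blocks, &["Staking.Rewarded"]));
    println!("watcher ok");
}
```

The point of the design is that finalized blocks are the source of truth, so a spurious `Invalid` status on the submission stream doesn't fail the test.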
As I understand it, this issue occurs because we subscribe to the transaction status, and when a reorg occurs, we get back an Invalid status. We close the subscription on such an event, because it is specified to do so in the "new" APIs (i.e. UnstableBackend): https://paritytech.github.io/json-rpc-interface-spec/api/transactionWatch_v1_submitAndWatch.html#invalid. It's less clear via the "old" APIs (i.e. LegacyBackend), but e.g. PJS also closes on seeing an Invalid event (i.e. https://github.com/polkadot-js/api/blob/87bee8ac29bc4b1d882307db556032a1860b9de6/packages/api/src/submittable/Result.ts#L66). Hopefully paritytech/polkadot-sdk#4639 will resolve the issue (it would be great to test this if possible!). The only other thing on our side that would resolve it is implementing the suggested approach (2) in #1769, which we will probably do eventually, but it isn't currently a high priority.
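The close-on-Invalid behavior described above can be modelled in a few lines (a simplified sketch, not subxt's actual types):

```rust
// Simplified stand-in for subxt's transaction status stream.
#[derive(Debug, PartialEq)]
enum TxStatus {
    InBlock,
    Invalid,
    Finalized,
}

// Consumes statuses until a terminal one. Per the transactionWatch_v1 spec,
// Invalid ends the subscription, so a later Finalized on another fork is
// never delivered to the caller.
fn final_status(stream: &[TxStatus]) -> Option<&TxStatus> {
    for s in stream {
        match s {
            TxStatus::Invalid | TxStatus::Finalized => return Some(s),
            _ => continue,
        }
    }
    None
}

fn main() {
    // A reorg can surface Invalid before the fork that includes the tx finalizes.
    let reorged = [TxStatus::InBlock, TxStatus::Invalid, TxStatus::Finalized];
    assert_eq!(final_status(&reorged), Some(&TxStatus::Invalid));
    println!("subscription closes on: {:?}", final_status(&reorged).unwrap());
}
```

This is exactly why the watcher-on-finalized-blocks workaround helps: the caller never sees the `Finalized` that arrives after the stream has already terminated.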
I'm working on a test that uses zombienet v2 and subxt. Basically, it submits some transaction, waits for its finalization, and sends the next transaction based on the events generated by the previous one, thus proving the correctness of the expected behavior.
Sometimes, the entire test runs through successfully, but it's rare. More often, some transaction fails with `TxStatus::Invalid { message: "Transaction is invalid (eg because of a bad nonce, signature etc)" }`. Most often, this one fails, although sometimes it happens with other transactions. Here's what that looks like in the logs:
The interesting thing is that the transaction indeed gets successfully finalized, as can be seen in PJS:
Moreover, if I leave the network running after the failure, my event watcher sees the events I was expecting in the finalized block:
I'm using subxt 0.37. Any ideas on what's happening here would be very appreciated.