Confirmation of transaction each second #1239

Closed
hameleonick opened this issue Dec 20, 2017 · 26 comments

@hameleonick

Hi all, I have the following problem with version 1.0.0-beta.26.

When I use "methods.myMethod.send" on a contract, I subscribe to the following events: transactionHash, confirmation, error, etc.
For some reason I receive a confirmation every second, with a new confirmationNumber counting from 0 to 24, even though the block number has not actually changed. Could you help me with this?

Please let me know if you need more details.
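
For reference, a minimal sketch of the call pattern being described (the contract instance, method, and account are placeholders, assuming a web3 1.0 contract object):

myContract.methods.myMethod().send({ from: account })
  .on('transactionHash', (hash) => {
    // fires once, as soon as the transaction has been sent
    console.log('tx hash:', hash);
  })
  .on('confirmation', (confirmationNumber, receipt) => {
    // expected roughly once per new block; observed here every second over HTTP
    console.log('confirmation #' + confirmationNumber, 'block', receipt.blockNumber);
  })
  .on('error', (error) => {
    console.error(error);
  });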

@frozeman
Contributor

Are you using an HTTP connection?

@hameleonick
Author

I have this problem on the client side. I'm also using the MetaMask browser plugin, and I re-initialise the web3 instance like this:

window.web3 = new Web3Local(window.web3.currentProvider || Web3Local.givenProvider);

I also checked with new Web3Local(Web3Local.givenProvider) and had the same result.

Thx for the response.

@kazaff

kazaff commented Jan 14, 2018

@hameleonick me too..

In my environment (a private chain), even after I stopped the miner this confirmation event was still being emitted. So what does this event mean?

@frozeman yep, I'm on an HTTP connection.

@marwand

marwand commented Feb 18, 2018

I can confirm this issue occurs only with HTTP RPC.
WebSocket works normally (returns only one confirmation).

Setup:
[email protected]
[email protected] (Local private rpc server, one node)

JS Code:

return MyContract.methods.MyMethod().send({ from: this.accounts[1] })
  .on('confirmation', (confirmationNumber, receipt) => {
    console.log("%c Confirmation #" + confirmationNumber + " Receipt: ", "color: #0000ff");
    console.log(receipt);
  })
  .on('error', (error) => {
    console.log("%c Error:: " + error.message, "color: #ff0000");
    return false;
  });

Output:
The same receipt is received 25 times:

Confirmation #0 Receipt: {blockHash: "0x96c23b505bffebf034c648d809f0612d649e1023318bba1fe00bf7f9b2dd22cb", blockNumber: 22, contractAddress: null, cumulativeGasUsed: 225822, gasUsed: 225822, from: "0x069c8dc5839356fa40a4763d5deb9c6221ca22fc", to: "0x32eaa84f5ffb08f519ba0603f9a415066849a105", status: "0x1", transactionHash: "0xe4b776183172e1068e0b02b5e45a234e240818fb063e1a14f77a6e4dc807b81e", transactionIndex: 0, events: {LogNewCampaign: {…}}, …}
.
.
.
Confirmation #24 Receipt: identical to the above (same blockHash, same blockNumber 22, same transactionHash).

@fdouglis

I just started hitting this problem as well. It isn't just that it reports a new confirmation on the same block repeatedly; it's that the deploy call doesn't return until web3 gives up by hitting its internal limit, so it takes many seconds to deploy. The weird thing is that this started happening on one system in particular -- I had deployed on EC2, turned that into an AMI, launched a new instance from it, and hit this problem. I'm wondering if the message is a total red herring and it is actually some security policy or something else at play.

I will add that I am using Ganache (it happens with ganache-cli as well).

@fdouglis

Follow-up: I just came across #1393 ... it seems to suggest that the new behavior may indeed be a recent, intentional change to the library. I'm wondering if my application isn't keeping up with the latest and greatest interface.

@fdouglis

Another follow-up: I saw that things ran fine on a VM where I had been running beta 30, but not on a new one running 33. I removed node_modules, pinned 30 specifically in my package.json, and confirmed that the behavior I'd been used to came back. So it's definitely a recent change to web3 that broke my script.

@fdouglis

Surprised by the radio silence on this. If the change from 30 to 33 was unintended, I'd hope someone would identify it and agree, with the assumption that the regression would be fixed at some point. If it was intentional, as #1393 suggests, can someone explain why, and suggest what the right model is so callers don't have to wait around?

Is the issue that in a real deployment you'd want multiple confirmations, or something along those lines? If that is the case, then there should be a flag to limit the number needed, for use in test setups like Ganache.

@cgewecke
Collaborator

cgewecke commented Mar 30, 2018

@fdouglis Hi. (I wrote #1393)

I tried to reproduce this on ganache-cli with beta 31 and it seemed to work as expected. When .then() is called on the PromiEvent, it immediately resolves with a receipt. The confirmation handler then fires repeatedly and is also passed the receipt. The two can be used in combination. I will double-check this behavior against geth --dev shortly.
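
A minimal sketch of the combination described above (the contract object, bytecode, and account are placeholders):

MyContract.deploy({ data: bytecode })
  .send({ from: account, gas: 5000000 })
  .on('confirmation', (confirmationNumber, receipt) => {
    // keeps firing as confirmations arrive, always passed the same receipt
    console.log('confirmation #' + confirmationNumber);
  })
  .then((instance) => {
    // resolves as soon as the receipt is available
    console.log('deployed at', instance.options.address);
  });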

If you have a chance, could you post a small example deployment call that fails against ganache-cli so I can debug?

@fdouglis

OK, here is a pretty minimal js file that reproduces the problem. Note, I said it worked in 30 and not in 33, so I don't know whether 31 is the one that broke it, or 32, or 33.

The Solidity file is a trivial contract that takes a single uint32 as a parameter.

@fdouglis

BTW, there may be some misunderstanding on our part of how to handle the deploy, and maybe it's a question of documentation rather than a bug.

@cgewecke
Collaborator

cgewecke commented Mar 30, 2018

@fdouglis Nice script.

With these changes (using v33):

  • the default gas value set to 0x4C4B40 (5 million)
  • the config:
wsAddrGeth: 'ws://localhost:8546',
rpcAddr: 'http://localhost:8545',

Using rpcAddr and running the latest ganache-cli, the output looks correct:

...etc...
Ethereum transaction 0x18a54c1ef5460d21d8238f768b580cb1bece99a9d4334885fa87392231947bc6 successfully posted.
txHash:  0x18a54c1ef5460d21d8238f768b580cb1bece99a9d4334885fa87392231947bc6
Addr:  0x636b064Cfd708114E6A906eE5F2227A96668802A
hit -then- clause of deploy
0x636b064Cfd708114E6A906eE5F2227A96668802A
confirmation callback ignored
confirmation callback ignored
...etc...

Same result using wsAddrGeth and running dockerized geth in dev mode with WebSockets enabled:

docker run -it -p 8546:8546 -p 30303:30303 \
    ethereum/client-go \
    --ws --wsaddr="0.0.0.0" --wsorigins "*" \
    --dev --dev.period 2

Output:

... etc ...
Ethereum transaction 0x98061db4e759ce32ebbbfdcb68037379a149be7cee6b25a37e1abe5619eedc92 successfully posted.
txHash:  0x98061db4e759ce32ebbbfdcb68037379a149be7cee6b25a37e1abe5619eedc92
confirmation callback ignored # <-- note (different than ganache) 
Addr:  0xC52C92F5e9C40d81B5Ca2547DF1878Eb8D4d53E8
hit -then- clause of deploy
0xC52C92F5e9C40d81B5Ca2547DF1878Eb8D4d53E8
confirmation callback ignored
confirmation callback ignored

Same result using rpcAddr and running dockerized geth in dev mode with rpc enabled:

docker run -it -p 8545:8545 -p 30303:30303 \
    ethereum/client-go \
    --rpc --rpcaddr "0.0.0.0" \
    --dev --dev.period 2

Does this work on your end?

@fdouglis

@cgewecke, thanks. When you say "the output looks correct", what do you mean? With v0.30 I would see a single confirmation and then it would return. Now I see many, and it doesn't return until they stop. This slows down interactions that deploy the contract and wait for it to be deployed.

It may be that this is an artifact of how we developed the app: if you deploy from something that keeps running other threads and doing useful work, you can receive these confirmations, drop them on the floor, and not notice.

@cgewecke
Collaborator

cgewecke commented Mar 30, 2018

@fdouglis The promise is resolving and the emitter keeps emitting. Isn't this correct behavior?

hit -then- clause of deploy

[Edit] What is your execution context? The above seems like it would be unproblematic in node or the browser.

@fdouglis

Well, this is what I've been trying to figure out. It appears you explicitly wanted to change things to have this behavior, but it was unexpected from our perspective. Why would the same deploy keep getting the same confirmation many times? I could see it if, for instance, I deployed into a chain with many nodes and each one somehow confirmed it. Here, it's just Ganache. In the past I used to see that it was deployed -- I'm actually not sure I got the confirmation debug message at all, but certainly not many times.

And, to be clear, the fact that I see the callback repeatedly wouldn't be a huge issue under some circumstances. I don't have to log each occurrence (and in fact at some point I tweaked my code to print the 2nd and suppress any further). It's the delay that is the problem. We run this in a somewhat synchronous form: invoke deploy, wait for it to finish, and save the contract address in a file. Subsequent runs of nodejs use the address from the file. So if there is a lag because the deploy isn't really done until it hits the limit on these confirmations, there is a problem.

If the answer is that this is the expected behavior, even though we happened to use a version that didn't have it and got away with it, we need to decide how to deal with it -- for instance, by detecting the successful deployment and returning immediately even if more confirmations arrive in the background.
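
A minimal sketch of that last idea, i.e. resolving as soon as the receipt arrives instead of waiting out the confirmation counter (the names are placeholders, not the actual script):

function deployAndReturnEarly(MyContract, bytecode, account) {
  return new Promise((resolve, reject) => {
    MyContract.deploy({ data: bytecode })
      .send({ from: account, gas: 5000000 })
      .on('receipt', (receipt) => {
        // the contract is mined at this point; don't wait for the remaining confirmations
        resolve(receipt.contractAddress);
      })
      .on('error', reject);
  });
}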

@cgewecke
Collaborator

cgewecke commented Mar 30, 2018

@fdouglis Ah, I understand. There's some discussion of the necessity of repeated confirmations from a chain-architecture perspective here. Additionally, there are use cases relating to Infura or any other load-balanced public node cluster, nicely articulated here.

@fdouglis

fdouglis commented Apr 2, 2018

OK @cgewecke, so this gets back to my comment about testing versus the real world. I could find no documentation suggesting that the number of confirmations is configurable. I fully agree you would want to count your confirmations in real deployments. But when you are talking to Ganache, you're talking to a single blockchain, so you're simply repeating the same confirmation, right? A parameter in truffle.js would handle this case. Perhaps it's already there, just not easily found, or perhaps it could be added?
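
As an aside on configurability: later web3.js 1.x releases expose the confirmation count as web3.eth.transactionConfirmationBlocks (default 24); whether the betas discussed in this thread support it is not established here. A minimal sketch assuming a release that does:

const Web3 = require('web3');
const web3 = new Web3('http://localhost:8545');

// lower the number of confirmations waited for before the PromiEvent stops emitting
// (assumes a web3.js version that supports this setting)
web3.eth.transactionConfirmationBlocks = 1;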

@fdouglis

fdouglis commented Apr 2, 2018

P.S. I could see it if there were some benefit to knowing not just how many nodes mined a contract but also how many blocks were written afterwards on one node, the idea being that the deeper you get, the more reliable the transaction. However, in my test I was deploying the contract, waiting for the receipt, and then moving on. Nothing else was happening, hence the appearance that the same confirmation was being repeated numerous times (which sounds like a bug from where I sit). And since the full deploy sequence didn't return until enough confirmations were hit, it just needlessly slowed down deployment.

To summarize,

  • Configurable number of confirmations before proceeding
  • Is it a bug to get the same confirmation multiple times from the same node when no new blocks have been mined?

@cgewecke
Collaborator

cgewecke commented Apr 2, 2018

@fdouglis Fair points!

Is it a bug to get the same confirmation multiple times from the same node when no new blocks have been mined?

Yes, I believe that is the bug noted in this issue, caused by polling at 1-second intervals. It affects non-WebSocket client connections.
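
Until that is fixed, a minimal client-side workaround (a sketch, not a web3 feature) is to act only on the first confirmation and ignore the repeated ones:

let handled = false;

myContract.methods.myMethod().send({ from: account })
  .on('confirmation', (confirmationNumber, receipt) => {
    // over HTTP this fires every polling interval with the same receipt,
    // so act on the first confirmation only
    if (handled) return;
    handled = true;
    console.log('mined in block', receipt.blockNumber);
  });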

@fdouglis

fdouglis commented Apr 2, 2018

I see. I think I was confused because when you showed your output with all the confirmations, you seemed to be suggesting it was working as expected. Thanks.

@dpurhar27

dpurhar27 commented Apr 2, 2018

Sorry @fdouglis, have you found a way to fix this? My DApp was functioning correctly until 2 days ago, and now my front-end's event.watch() appears not to see the emitted event until I run the function again (by clicking the button on the GUI again). As soon as I push the button, event.watch() sees the event from the previous turn. Is this related to the problem you are/were experiencing? I also run this in a synchronous fashion: I click the button, wait for the event, then go on to compare what the event returns. So this is completely breaking the DApp's functionality now.

@fdouglis

fdouglis commented Apr 3, 2018

@dpurhar27 Not sure if it's the same issue or not, sorry. I solved it by either pinning an earlier beta or more actively ignoring the extra confirmations (I'd been printing a message for each one, and stopped). The latter is quieter but still has the performance issue.

I've asked that they add a parameter to override this, but they also seemed to say it's a known bug with the HTTP interface, if that's what you're using.

@CryptoKiddies

I can confirm this issue when connecting via HTTP. As soon as the contract is mined, I get a confirmation fired every second, up to 24. Is there a proposed fix?

@HugoPeters1024

@GeeeCoin I encounter exactly the same issue.

@frozeman
Contributor

frozeman commented Jun 15, 2018 via email

@nivida
Contributor

nivida commented Nov 29, 2018

Yes the "newHeads subscription" gets fired each second with the HTTP provider because it does not support subscriptions. If you're using and WebSocket or IPC provider it uses the real "newHeads" subscription and then it will be triggered on each head. But yes it's wrong that it increases the confirmation counter.

This should be fixed with the PR #2000
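
For completeness, a minimal sketch of switching to a WebSocket provider so confirmations follow real new heads (the endpoint URL is an assumption about the local node):

const Web3 = require('web3');

// a WebSocket (or IPC) provider supports real eth_subscribe subscriptions,
// so 'confirmation' fires per new block rather than per HTTP poll
const web3 = new Web3(new Web3.providers.WebsocketProvider('ws://localhost:8546'));

web3.eth.subscribe('newBlockHeaders', (error, header) => {
  if (!error) console.log('new head:', header.number);
});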

nivida closed this as completed Nov 29, 2018