Channel fee optimization #94

Merged
merged 6 commits into from
Oct 27, 2021
Conversation

@bitromortac (Owner) commented Oct 20, 2021

Add an update-fee command, which is meant to be invoked periodically (roughly once a week) to optimize channel fees.

After user confirmation, the command adjusts the fee rates and optionally the base fees (set to zero by default), taking each channel's demand for liquidity into account. A liquidity reserve is kept for excess demand, which is charged a fee premium.

The command aims for a weekly throughput of 100k sat per channel (configurable). In future work, the weekly throughput will be optimized for maximal fee earnings. See the readme changes for further details.
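To illustrate the idea for reviewers, here is a minimal sketch of the adjustment logic in Python. The constants, the clamping factors, and the function name `suggest_fee_rate` are illustrative assumptions, not the code in this PR:

```python
# Illustrative sketch only -- not lndmanage's actual implementation.
# Assumed inputs: current fee rate (ppm), satoshis forwarded out over the
# last week, the weekly throughput target, and the channel's local balance
# ratio. The bounds and factors below are made-up placeholders.

TARGET_WEEKLY_SAT = 100_000   # configurable weekly throughput target
RESERVE_RATIO = 0.25          # keep a liquidity reserve for excess demand
PREMIUM_FACTOR = 1.5          # fee premium when dipping into the reserve
MIN_FEE_RATE_PPM = 1
MAX_FEE_RATE_PPM = 5_000


def suggest_fee_rate(fee_rate_ppm: int, out_sat_last_week: int,
                     local_balance_ratio: float) -> int:
    """Scale the fee rate with demand relative to the throughput target."""
    demand = out_sat_last_week / TARGET_WEEKLY_SAT
    # more demand than targeted -> raise the fee rate, less demand -> lower it
    new_rate = fee_rate_ppm * max(0.5, min(2.0, demand))
    # charge a premium while the local liquidity reserve is being consumed
    if local_balance_ratio < RESERVE_RATIO:
        new_rate *= PREMIUM_FACTOR
    return int(min(MAX_FEE_RATE_PPM, max(MIN_FEE_RATE_PPM, new_rate)))


if __name__ == "__main__":
    # a channel that forwarded 250k sat last week and is low on local liquidity
    print(suggest_fee_rate(fee_rate_ppm=200, out_sat_last_week=250_000,
                           local_balance_ratio=0.15))
```

The actual command additionally tracks fee history and sets fees on a per-peer basis (see the commits below).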

Todo:

  • add description to the readme
  • change type annotations
  • make global throughput configurable
  • exempt channels from optimization via config file
  • comment on initial fee setting

@C-Otto (Contributor) commented Oct 20, 2021

I don't think that once a week is enough; I'd say once every few hours is more realistic. Furthermore, I believe that channels are wildly different, which is why the throughput definition should be channel-specific (if this approach is used at all).

@bitromortac (Owner, Author) commented Oct 20, 2021

I don't think that once a week is enough; I'd say once every few hours is more realistic.

This is not going to be possible for the whole network. If everybody does this, it puts a lot of load on gossip and nodes will start to rate-limit aggressively. Right now, fee optimization is done over all channels in each interval, but this could be torn apart and done on a per-channel basis, which is the goal once I have a watcher daemon ready.

Furthermore, I believe that channels are wildly different, which is why the throughput definition should be channel-specific (if this approach is used at all).

Very true, but the wide differences between channels are taken into account: have a look at the suggested output the command gives you and compare it with your desired fee changes (you don't have to apply them; you will be asked whether you want to or not). I agree that the global throughput is just a guess here, but at least it should be configurable. I wanted to reduce the number of parameters one has to set to a minimum. As I mentioned above, the optimal throughput has to be learned over some optimization periods, and can then be incorporated into a per-channel throughput limit.
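To make the "learned over some optimization periods" part concrete, one possible shape is an exponential moving average of the amount forwarded per period; the names and the smoothing constant below are assumptions for illustration, not part of this PR:

```python
# Rough sketch of the follow-up idea: learn a per-channel throughput target
# from observed forwarding amounts instead of using one global number.
from typing import Optional

GLOBAL_TARGET_SAT = 100_000  # global guess, used until data is available
ALPHA = 0.3                  # weight given to the most recent period


def update_channel_target(previous_target: Optional[float],
                          observed_sat_this_period: int) -> float:
    """Exponential moving average of per-period forwarded amounts."""
    if previous_target is None:
        return float(GLOBAL_TARGET_SAT)  # start from the global guess
    return ALPHA * observed_sat_this_period + (1 - ALPHA) * previous_target
```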

@bitromortac force-pushed the fee-optimization branch 6 times, most recently from c36bd06 to 5327d91 on October 21, 2021 13:40
@bitromortac changed the title from "Fee optimization: economic rebalancing and throughput constraining" to "Channel fee optimization: economic throughput constraining with liquidity buffer" on Oct 21, 2021
@bitromortac changed the title from "Channel fee optimization: economic throughput constraining with liquidity buffer" to "Channel fee optimization" on Oct 21, 2021
@bitromortac force-pushed the fee-optimization branch 2 times, most recently from e19fd3c to 9a08888 on October 26, 2021 11:36
bitromortac and others added 6 commits October 26, 2021 13:51
* adapt base fees
* adapt fee rates
* add method for tracking fee history
* set fees on a per peer basis
If a channel is present in the section of ignored channels, all channels
with the peer are ignored for fee updates.
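For context on that last commit, here is a sketch of how such an exclusion could be expressed and read. It assumes an ini-style config with a hypothetical [excluded-channels] section; the section name, option name, and channel IDs are illustrative, not lndmanage's actual config keys:

```python
# Sketch of the exclusion behavior described in the commit message above.
from configparser import ConfigParser

SAMPLE_CONFIG = """
[excluded-channels]
channels = 701673x1317x1, 693174x1445x0
"""


def excluded_peers(config_text: str, chan_to_peer: dict) -> set:
    """Peers whose channels should be skipped during fee updates.

    If any channel of a peer is listed, all channels with that peer
    are excluded (as described in the commit message above).
    """
    parser = ConfigParser()
    parser.read_string(config_text)
    listed = parser.get("excluded-channels", "channels", fallback="")
    chan_ids = {c.strip() for c in listed.split(",") if c.strip()}
    return {chan_to_peer[c] for c in chan_ids if c in chan_to_peer}


if __name__ == "__main__":
    mapping = {"701673x1317x1": "peer_a", "693174x1445x0": "peer_b"}
    print(excluded_peers(SAMPLE_CONFIG, mapping))
```

Listing a single channel ID is enough to exclude every channel shared with that peer, which matches the behavior the commit message describes.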