Node Placement in the Bitcoin Lightning Network
Sponsored by:
National Action Council for Minorities in Engineering (NACME), Google Applied Machine Learning Intensive (AMLI) at the University of Kentucky
Developed by:
- Ndizeye Tschesquis, Berea College
- Renato Diaz, University of Central Florida
- Tony Ramirez, University of Kentucky
- Daniela Rodriguez, Florida International University
- Vincent Davis, University of Kentucky
A service that helps users take advantage of the Lightning Network to its full potential.
A reinforcement learning agent is used to find the best node placement in the Bitcoin Lightning Network so that the placed node lies on the greatest number of cheapest payment paths. To accomplish this, a graph convolutional network (GCN) aggregates the features of each node's neighbors up to three layers deep. OpenAI Gym provides the environment in which the reinforcement learning agent, an Actor-Critic, interacts based on the given inputs. In this way, mBIB-BOLT may offer a viable path toward helping the Lightning Network grow and strengthening Bitcoin as a trusted worldwide cryptocurrency. By improving the Lightning Network, we benefit from lower overall fees and faster transactions compared to the main Bitcoin blockchain, which also helps decrease the average routing cost of the network.
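As a small illustration of the objective (not the project's code), the sketch below uses NetworkX to compute weighted betweenness centrality on a toy graph; this is the quantity the agent tries to maximize for its node, since it measures how many cheapest paths pass through it. The graph and weights are made up for the example.

```python
import networkx as nx

# Toy payment graph; edge weights stand in for routing fees (made-up values).
G = nx.Graph()
G.add_weighted_edges_from([
    ("A", "B", 1), ("B", "C", 1), ("A", "C", 5),
    ("C", "D", 1), ("B", "D", 4),
])

# Weighted betweenness centrality: the fraction of cheapest paths that pass
# through each node. The agent's goal is to place its node (and choose its
# channels) so that this value is as large as possible for that node.
print(nx.betweenness_centrality(G, weight="weight"))
```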
Check out the LIVE Lightning Network here!
Three months prior to the project, a node was activated on the Lightning Network and hourly snapshots of the network were taken. However, cleaning the data was critical to increase efficiency. The following criteria determine which nodes and edges pass the filter:
Nodes | Edges |
---|---|
Must have been active at least 30 days since the snapshot was taken | Capacity must be greater than 1,000,000 satoshi |
Must have a degree greater than two | Must not be disabled nor have null policies |
The link below contains 100 filtered snapshots of the Lightning Network. To view one file at a time, download the Dadroit JSON Viewer. https://drive.google.com/file/d/17w4L2yzb2KgSOLRrqQbH84N2UjBytpvu/view?usp=sharing
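The sketch below shows one way the filter above could be applied when loading a snapshot. It is a hedged example, not the project's loader: the field names (`nodes`, `edges`, `capacity`, `node1_policy`, `node2_policy`, `node1_pub`, `node2_pub`, `pub_key`, `last_update`) are assumptions based on the lnd `describegraph` JSON layout, and the 30-day activity check is one possible reading of that criterion.

```python
import json
import networkx as nx

THIRTY_DAYS = 30 * 24 * 60 * 60  # seconds

def load_filtered_graph(path, snapshot_time):
    """Load one snapshot and keep only nodes/edges that pass the filter."""
    with open(path) as f:
        snap = json.load(f)

    g = nx.Graph()
    for edge in snap["edges"]:
        p1, p2 = edge.get("node1_policy"), edge.get("node2_policy")
        # Edge filter: capacity > 1,000,000 sat, not disabled, no null policies.
        if int(edge["capacity"]) <= 1_000_000:
            continue
        if p1 is None or p2 is None or p1["disabled"] or p2["disabled"]:
            continue
        g.add_edge(edge["node1_pub"], edge["node2_pub"],
                   capacity=int(edge["capacity"]))

    # Node filter: degree greater than two and recent activity
    # (here interpreted as a last update within 30 days of the snapshot).
    last_update = {n["pub_key"]: int(n["last_update"]) for n in snap["nodes"]}
    keep = [n for n in g.nodes
            if g.degree[n] > 2
            and snapshot_time - last_update.get(n, 0) <= THIRTY_DAYS]
    return g.subgraph(keep).copy()
```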
The environment allows the reinforcement learning agent to learn from different scenarios and, for each move the agent makes, calculates the change in the maximum betweenness centrality, which serves as the reward. As in any game, there are multiple ways to begin: the agent can learn from an entire snapshot, which contains thousands of nodes, or from a subgraph, which speeds up the agent's learning. The user can also increase the number of episodes; the more episodes are run, the more the agent learns from the data. The picture below gives a general idea of how the agent interacts with the environment.
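The outline below sketches how such an environment might look with a Gym-style interface; the class, method bodies, and observation are illustrative placeholders rather than the repository's actual environment. The key point is that the reward for each step is the change in the placed node's betweenness centrality.

```python
import gym
import networkx as nx

class NodePlacementEnv(gym.Env):
    """Illustrative environment: each action opens a channel from our node
    to an existing node; the reward is the resulting change in our node's
    betweenness centrality."""

    def __init__(self, graph, our_node):
        super().__init__()
        self.base_graph = graph
        self.our_node = our_node

    def reset(self):
        self.graph = self.base_graph.copy()
        self.graph.add_node(self.our_node)   # new node starts with no channels
        self.prev_bc = 0.0
        return self._observation()

    def step(self, target_node):
        self.graph.add_edge(self.our_node, target_node)
        bc = nx.betweenness_centrality(self.graph)[self.our_node]
        reward = bc - self.prev_bc           # change in betweenness centrality
        self.prev_bc = bc
        done = False                         # e.g. stop once the budget is spent
        return self._observation(), reward, done, {}

    def _observation(self):
        # In the real project the observation is the GCN node features;
        # here we just return our node's current degree as a stand-in.
        return self.graph.degree[self.our_node]
```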
The GCN is our model. It is built with DGL, a Python package, and allows the program to train the same agent (the Actor-Critic) on graphs of varying size. The model is initially fed four features per node: betweenness centrality, degree centrality, closeness centrality, and an edge vector. These are passed through the message-passing layers that make up the model, which collect feature information from each node's neighbors. The result is a feature vector that characterizes each node in the context of its location within the network, which is then passed through the GCN for training. The neighbor features are aggregated by summation, and the output is passed into a linear layer with a ReLU activation.
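The model below is a hedged sketch of such a network written with DGL's `GraphConv` and PyTorch; the layer sizes and class name are illustrative and may differ from the project's model. `norm='none'` makes each layer aggregate neighbor messages by plain summation, and the output passes through a linear layer with ReLU, as described above.

```python
import torch.nn as nn
import torch.nn.functional as F
from dgl.nn import GraphConv

class GCN(nn.Module):
    """Three message-passing layers followed by a linear layer with ReLU."""

    def __init__(self, in_feats=4, hidden_size=16, out_size=8):
        super().__init__()
        # norm='none' -> neighbor features are aggregated by summation.
        self.conv1 = GraphConv(in_feats, hidden_size, norm='none')
        self.conv2 = GraphConv(hidden_size, hidden_size, norm='none')
        self.conv3 = GraphConv(hidden_size, hidden_size, norm='none')
        self.linear = nn.Linear(hidden_size, out_size)

    def forward(self, g, features):
        # features: (num_nodes, 4) tensor holding betweenness, degree and
        # closeness centrality plus the edge-vector feature for each node.
        h = F.relu(self.conv1(g, features))
        h = F.relu(self.conv2(g, h))
        h = F.relu(self.conv3(g, h))
        return F.relu(self.linear(h))
```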
This is our agent, which plays the game. The Actor part of the agent chooses an action, and the Critic part judges whether that action was good or bad based on the reward it produced. The Critic calculates a TD error (temporal-difference error), the difference between the expected and actual reward. The TD error is passed to both the Actor and the Critic, which enables them to learn.
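The snippet below is a minimal sketch of that update, assuming separate actor and critic networks with PyTorch optimizers; the function and argument names are illustrative, not the repository's. It shows the TD error as the difference between the observed reward plus the discounted value of the next state and the Critic's current estimate, and how both parts learn from it.

```python
import torch

def actor_critic_update(critic, actor_opt, critic_opt,
                        state, action_log_prob, reward, next_state, gamma=0.99):
    """One illustrative update: TD error = (r + gamma * V(s')) - V(s)."""
    value = critic(state)
    next_value = critic(next_state).detach()
    td_error = torch.as_tensor(reward) + gamma * next_value - value

    # The Critic learns by shrinking the TD error (its value-prediction error).
    critic_loss = td_error.pow(2).mean()
    critic_opt.zero_grad()
    critic_loss.backward()
    critic_opt.step()

    # The Actor is pushed toward actions whose TD error is positive.
    actor_loss = (-action_log_prob * td_error.detach()).mean()
    actor_opt.zero_grad()
    actor_loss.backward()
    actor_opt.step()
    return td_error.detach()
```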
Make sure you have at least 8 GB of RAM in order to run the minimum of 10,000 episodes (iterations/plays of our agent).
- Fork this repo
- Change directories into your project
- Download the snapshot zipfile and place it in your repo.
- Manually edit main.py to configure how the agent interacts with the environment (see the sketch after this list):
- Snapshot vs. Subgraph
- Number of episodes
- Node ID (an already existing node in the network), else None (a new node)
- load_model (reload a previously trained model)
- Budget (how many connections to add to the node to maximize its betweenness centrality)
- Run the program: 'python3 main.py'
- Analyze the results
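The block below sketches what those settings might look like inside main.py; the variable names and values are placeholders for this example, not the repository's actual identifiers.

```python
# Illustrative settings; edit these in main.py before running.
use_subgraph = True    # snapshot vs. subgraph: True trains on a smaller subgraph
num_episodes = 10_000  # more episodes -> more learning (and more time/RAM)
node_id = None         # pub key of an existing node, or None for a new node
load_model = False     # reload a previously trained model
budget = 5             # number of connections the agent may add to the node
```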
Please Feel Free To Contact:
- Vincent Davis
- Tony Ramirez : [email protected]
- Ndizeye Tschesquis : [email protected]
- Renato Diaz: [email protected]
Helpful forum for any additional questions.