
mBIP-BOLT:

Maximum Betweenness Improvement Problem Basis of Lightning Technology

Node Placement in the Bitcoin Lightning Network

Sponsored by:

National Action Council for Minorities in Engineering (NACME) and the Google Applied Machine Learning Intensive (AMLI) at the University of Kentucky

Developed by:

Introducing FlashNet

A service that helps the Lightning Network be used to its full potential.

Description

A reinforcement learning agent is used to find the best node placement in the Bitcoin Lightning Network so that our node lies on the greatest number of cheapest paths. To accomplish this, a graph convolutional network aggregates the features of each node's neighbors up to three layers deep. OpenAI Gym provides the environment in which the reinforcement agent, an Actor Critic, interacts based on the given inputs. In short, mBIP-BOLT may offer a viable way to help the Lightning Network grow and strengthen Bitcoin as a trusted worldwide cryptocurrency. By improving the Lightning Network, users benefit from lower fees and faster transactions than on the main Bitcoin blockchain, which also helps decrease the average routing cost of the network.
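To make the objective concrete, here is a minimal sketch (using NetworkX on a toy graph; an illustration of the metric, not the project's reward code) of how adding well-placed channels raises a node's betweenness centrality:

```python
import networkx as nx

# Toy payment graph; edge weights stand in for routing fees.
G = nx.Graph()
G.add_weighted_edges_from([
    ("A", "B", 1), ("B", "C", 1), ("C", "D", 1),
    ("D", "E", 1), ("E", "A", 1),
])

# Betweenness centrality before adding our node.
print(nx.betweenness_centrality(G, weight="weight"))

# Add a new node "X" with two channels and recompute:
# X now lies on many of the cheapest paths, so its centrality rises.
G.add_weighted_edges_from([("X", "A", 1), ("X", "C", 1)])
print(nx.betweenness_centrality(G, weight="weight")["X"])
```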

Check out the LIVE Lightning Network here!


Process Overview:

1. Obtaining the dataset for our model to train on:

Three months prior to the project, a node was activated on the Lightning Network and hourly snapshots of the network were taken. Cleaning the data was critical to increase efficiency. The following criteria determine which nodes and edges pass the filter:

| Nodes | Edges |
| --- | --- |
| Must have been active for at least 30 days when the snapshot was taken | Capacity must be greater than 1,000,000 satoshi |
| Must have a degree greater than two | Must not be disabled nor have null policies |

The link below contains 100 filtered snapshots of the Lightning Network. To view one file at a time you may want to download the Dadroit JSON Viewer. https://drive.google.com/file/d/17w4L2yzb2KgSOLRrqQbH84N2UjBytpvu/view?usp=sharing
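For illustration, here is a minimal sketch of the filtering criteria above, assuming an lnd-style describegraph JSON snapshot with "nodes" and "edges" lists (the field names are assumptions and may differ from the actual snapshot schema):

```python
import json
from collections import Counter

MIN_CAPACITY = 1_000_000  # satoshi


def filter_snapshot(path):
    """Filter a snapshot to the nodes and edges described in the table above.

    The 30-day activity check would compare each node's last_update timestamp
    with the snapshot time; it is omitted here for brevity.
    """
    with open(path) as f:
        snapshot = json.load(f)

    # Keep edges with enough capacity whose policies exist and are not disabled.
    edges = [
        e for e in snapshot["edges"]
        if int(e.get("capacity", 0)) > MIN_CAPACITY
        and e.get("node1_policy") and e.get("node2_policy")
        and not e["node1_policy"].get("disabled")
        and not e["node2_policy"].get("disabled")
    ]

    # Degree filter: a node must appear in more than two of the remaining edges.
    degree = Counter()
    for e in edges:
        degree[e["node1_pub"]] += 1
        degree[e["node2_pub"]] += 1
    nodes = [n for n in snapshot["nodes"] if degree[n["pub_key"]] > 2]

    return nodes, edges
```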

2. Making an environment for our model:

The environment allows the reinforcement agent to learn from different scenarios and calculates the change in maximum betweenness centrality for each move made by the agent, which serves as the reward. As in any game, there are multiple ways to begin: the agent can learn from an entire snapshot, which contains thousands of nodes, or from a subgraph, which speeds up the agent's learning. The user can also increase the number of episodes; the more episodes are run, the more the agent learns from the data. The sketch below gives a general idea of how the agent interacts with the environment.
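Below is a minimal sketch of such an environment, assuming the classic gym reset/step API; the observation, candidate set, and class name are illustrative rather than the project's actual environment:

```python
import gym
import networkx as nx
import numpy as np


class NodePlacementEnv(gym.Env):
    """Toy environment: each action connects our node to one candidate
    neighbor; the reward is the change in our node's betweenness centrality."""

    def __init__(self, graph, our_node="new_node", budget=5):
        super().__init__()
        self.base_graph = graph
        self.our_node = our_node
        self.budget = budget
        self.candidates = list(graph.nodes())
        self.action_space = gym.spaces.Discrete(len(self.candidates))
        self.observation_space = gym.spaces.Box(
            low=0.0, high=1.0, shape=(len(self.candidates),), dtype=np.float32)

    def reset(self):
        self.graph = self.base_graph.copy()
        self.graph.add_node(self.our_node)
        self.steps = 0
        self.prev_score = 0.0
        return self._obs()

    def step(self, action):
        self.graph.add_edge(self.our_node, self.candidates[action])
        score = nx.betweenness_centrality(self.graph)[self.our_node]
        reward = score - self.prev_score   # change in centrality is the reward
        self.prev_score = score
        self.steps += 1
        done = self.steps >= self.budget   # stop once the budget is spent
        return self._obs(), reward, done, {}

    def _obs(self):
        # 1.0 for candidates we are already connected to, else 0.0
        return np.array(
            [float(self.graph.has_edge(self.our_node, c)) for c in self.candidates],
            dtype=np.float32)
```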

3. Graph Convolutional Network and Deep Graph Library:

Our model is a GCN built with DGL, a Python package, which lets us train the same agent (the Actor Critic) on graphs of varying sizes. The model is initially fed four features per node: betweenness centrality, degree centrality, closeness centrality, and an edge vector. These are passed through a message-passing system that makes up the network and collects feature information about each node's neighbors. The result is a feature vector that characterizes each node in the context of its location in the network. Neighbor features are aggregated by summation, and the output is passed into a linear layer with a ReLU activation.
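A minimal sketch of such a model, assuming DGL's GraphConv layer with a PyTorch backend; the layer sizes and the choice of GraphConv as the message-passing layer are illustrative rather than the project's exact architecture:

```python
import torch.nn as nn
import torch.nn.functional as F
from dgl.nn import GraphConv


class GCN(nn.Module):
    """Three graph-convolution layers gather neighbor features up to three
    hops, followed by a linear layer with ReLU, as described above."""

    def __init__(self, in_feats=4, hidden_feats=32, out_feats=16):
        super().__init__()
        self.conv1 = GraphConv(in_feats, hidden_feats)
        self.conv2 = GraphConv(hidden_feats, hidden_feats)
        self.conv3 = GraphConv(hidden_feats, hidden_feats)
        self.linear = nn.Linear(hidden_feats, out_feats)

    def forward(self, g, features):
        # features: one row per node with the four centrality/edge features
        h = F.relu(self.conv1(g, features))
        h = F.relu(self.conv2(g, h))
        h = F.relu(self.conv3(g, h))
        return F.relu(self.linear(h))
```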


4. ActorCritic:

This is the agent that plays our game. The Actor part of the agent makes a decision, and the Critic part determines whether that action was good or bad based on the reward it produced. The Critic calculates a TD error (Temporal Difference error), the difference between the expected and actual reward. The TD error is passed to both the Actor and the Critic, which is what enables them to learn.
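A minimal sketch of the TD-error update, assuming PyTorch; the tiny actor and critic networks and the hyperparameters are illustrative, not the project's actual implementation:

```python
import torch
import torch.nn as nn

GAMMA = 0.99  # discount factor (illustrative)

actor = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 8))   # action logits
critic = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 1))  # state value
optimizer = torch.optim.Adam(
    list(actor.parameters()) + list(critic.parameters()), lr=1e-3)


def update(state, action, reward, next_state, done):
    """One actor-critic learning step driven by the TD error."""
    value = critic(state)
    next_value = torch.zeros(1) if done else critic(next_state).detach()
    td_error = reward + GAMMA * next_value - value   # expected vs. actual reward

    log_prob = torch.log_softmax(actor(state), dim=-1)[action]
    actor_loss = -log_prob * td_error.detach()       # actor learns from the TD error
    critic_loss = td_error.pow(2)                    # critic learns from the TD error

    optimizer.zero_grad()
    (actor_loss + critic_loss).backward()
    optimizer.step()
```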

Usage instructions

Make sure you have at least 8 GB of RAM in order to run at minimum 10,000 episodes (iterations/plays of our agent).

  1. Fork this repo
  2. Change directories into your project
  3. Download the snapshot zipfile and place it in your repo
  4. Manually change how main.py interacts with the environment (see the sketch after this list):
    • Snapshot vs. subgraph
    • Number of episodes
    • Node ID (an already existing node in the network), else None (a new node)
    • load_model (reload a saved model)
    • Budget (how many connections to add to the node to maximize its betweenness centrality)
  5. Run the program: python3 main.py
  6. Analyze the results
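As a hypothetical example of the settings described in step 4 (the variable names are illustrative; the actual names in main.py may differ):

```python
# Hypothetical settings block; the actual names in main.py may differ.
USE_SUBGRAPH = True    # Snapshot vs. subgraph
NUM_EPISODES = 10_000  # Number of episodes
NODE_ID = None         # Existing node's ID, or None for a new node
LOAD_MODEL = False     # Reload a previously saved model
BUDGET = 5             # Connections to add to maximize betweenness centrality
```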

Questions?

Please feel free to contact:

Helpful forum for any additional questions.

Start Investing in Bitcoin Now!
