Ayush Mangal*, Jitesh Jain*, Keerat Kaur Guliani*, Omkar Bhalerao*
This repo contains the code for the paper DEAP Cache: Deep Eviction Admission and Prefetching for Cache.
You can set up the repo by running the following commands:
$ git clone https://github.com/vlgiitr/deep_cache_replacement.git
$ cd deep_cache_replacement
$ pip install -r requirements.txt
The repository contains the following modules:
checkpoints/
- Contains the pretrained embeddings and a trained version of the DeepCache model.

dataset/
- Dataset folder

address_pc_files/
- Contains csv files with addresses and PCs with their corresponding future frequency and recency

misses/
- Contains csv files with the missed (separately calculated for LRU and LFU) addresses and PCs with their corresponding future frequency and recency

runs/
- Contains the tensorboard logs stored during DeepCache's training

utils/
- Contains various utility files such as the .py scripts for the various baselines, etc.

cache_lecar.py
- Script for the modified LeCaR that evicts based on the future frequencies and recencies

cache_model_train.py
- Script for training the DeepCache model.

create_train_dataset.py
- Script for creating the dataloader for training DeepCache

embed_lstm_32.py
- Script for training the byte embeddings

generate_binary_permutations.py
- Script for generating a csv file with the binary representations of all numbers up to 255 for the global vocabulary (see the sketch after this list)

get_misses.py
- Script for storing the missed addresses and PCs in csv files

requirements.txt
- Contains all the dependencies required for running the code

standard_algo_benchmark.py
- Script for calculating hit-rates on the dataset using all the baseline algorithms

test_sim.py
- Script for running the online test simulation
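For reference, the global vocabulary mentioned for generate_binary_permutations.py amounts to the 8-bit binary strings of the numbers 0-255. Below is a minimal sketch of that idea, not the repository's script; the output filename and column names are illustrative assumptions.

```python
# Minimal sketch (not the repository's generate_binary_permutations.py):
# write the 8-bit binary representation of every number from 0 to 255 to a
# csv file. The filename and column names here are illustrative assumptions.
import csv

with open("binary_permutations.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["number", "binary"])
    for n in range(256):
        writer.writerow([n, format(n, "08b")])  # e.g. 5 -> "00000101"
```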
- To train the byte embeddings, run the following command:
$ python embed_lstm_32.py
- To train DeepCache, run the following command:
$ python cache_model_train.py
- To run the online test simulation, run the following command:
$ python test_sim.py
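For intuition about what the simulation decides at eviction time, the idea described for cache_lecar.py is to evict based on predicted future frequencies and recencies. The toy sketch below illustrates one such decision; the scoring function, names, and example predictions are illustrative assumptions, not the repository's implementation.

```python
# Toy illustration (not the repository's cache_lecar.py): evict the cached
# address whose predicted future frequency and recency suggest the least
# future use. Summing the two predictions is an illustrative assumption.
def choose_victim(cache, predicted_freq, predicted_recency):
    """cache: iterable of cached addresses.
    predicted_freq / predicted_recency: dicts mapping address -> prediction."""
    return min(
        cache,
        key=lambda addr: predicted_freq.get(addr, 0.0) + predicted_recency.get(addr, 0.0),
    )

# Example usage with made-up predictions:
cache = {0x1A2B, 0x3C4D, 0x5E6F}
freq = {0x1A2B: 0.9, 0x3C4D: 0.1, 0x5E6F: 0.4}
rec = {0x1A2B: 0.8, 0x3C4D: 0.2, 0x5E6F: 0.5}
print(hex(choose_victim(cache, freq, rec)))  # -> 0x3c4d (lowest combined score)
```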
The hit-rates for various baselines and our approach are given in the table below:
Method | Mean Hit-Rate |
---|---|
LRU | 0.42 |
LFU | 0.43 |
FIFO | 0.36 |
LIFO | 0.03 |
BELADY | 0.54 |
Ours | 0.48 |
Our method comes closest to the optimal hit-rate obtained from Belady's algorithm (the oracle), demonstrating the validity of our approach.
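As an aside, a baseline row such as LRU can be measured with a simple trace simulation. The sketch below is illustrative only, not the repository's standard_algo_benchmark.py; the cache capacity and toy trace are assumptions.

```python
# Minimal sketch (not the repository's standard_algo_benchmark.py): measure the
# hit-rate of an LRU cache over a sequence of addresses.
from collections import OrderedDict

def lru_hit_rate(trace, capacity):
    cache = OrderedDict()  # cached addresses, ordered from least to most recently used
    hits = 0
    for addr in trace:
        if addr in cache:
            hits += 1
            cache.move_to_end(addr)        # mark as most recently used
        else:
            if len(cache) >= capacity:
                cache.popitem(last=False)  # evict the least recently used address
            cache[addr] = True
    return hits / len(trace)

# Toy trace and capacity, chosen only for illustration:
print(lru_hit_rate([1, 2, 1, 3, 2, 4, 1, 2], capacity=2))
```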
The code is released under the MIT License.