Omniocular is a framework for building deep learning models on code, implemented in PyTorch by the Data Systems Group at the University of Waterloo. Various modules in Omniocular are heavily inspired by (and are compatible with) Hedwig, a framework for document classification. The following models and embedding methods are included:
- Reg-CNN: Convolutional networks with regularization
- Reg-LSTM: Regularized LSTM for token sequence classification
- HR-CNN: Hierarchical Convolutional Networks with regularization
- Token2vec: Word2vec-based embeddings for programming language tokens
- Code2vec: Distributed representations for code from collections of AST paths
Each model directory has a README.md with further details.
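To give a flavour of what these models look like, here is a minimal PyTorch sketch of a regularized LSTM for token sequence classification in the spirit of Reg-LSTM. It is illustrative only: the class name, layer sizes, dropout rate, and max-pooling choice are assumptions, not Omniocular's actual implementation.

```python
import torch
import torch.nn as nn

class RegLSTMSketch(nn.Module):
    """Illustrative regularized LSTM classifier for token sequences
    (not the Reg-LSTM implementation shipped with Omniocular)."""

    def __init__(self, vocab_size, embed_dim=300, hidden_dim=256,
                 num_classes=2, dropout=0.5):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim,
                            batch_first=True, bidirectional=True)
        self.dropout = nn.Dropout(dropout)  # dropout as the regularizer
        self.fc = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, tokens):
        # tokens: LongTensor of shape (batch, seq_len)
        embedded = self.embed(tokens)          # (batch, seq_len, embed_dim)
        outputs, _ = self.lstm(embedded)       # (batch, seq_len, 2 * hidden_dim)
        pooled, _ = outputs.max(dim=1)         # max-pool over the time dimension
        return self.fc(self.dropout(pooled))   # (batch, num_classes)

# Example forward pass on random token ids
model = RegLSTMSketch(vocab_size=10000)
logits = model(torch.randint(0, 10000, (8, 50), dtype=torch.long))
print(logits.shape)  # torch.Size([8, 2])
```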
Omniocular is designed for Python 3.6 and PyTorch 0.4. PyTorch recommends Anaconda for managing your environment. We recommend creating a custom environment as follows:
$ conda create --name omniocular python=3.6
$ source activate omniocular
Then install PyTorch as follows:
$ conda install pytorch=0.4.1 cuda92 -c pytorch
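To verify the installation, you can check the PyTorch version and whether CUDA is available:
$ python -c "import torch; print(torch.__version__, torch.cuda.is_available())"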
Other Python packages we use can be installed via pip:
$ pip install -r requirements.txt
Clone the Omniocular repository, and then clone the omniocular-data repository, which contains the datasets and embeddings:
$ git clone https://github.com/omniocular/omniocular.git
$ git clone https://git.uwaterloo.ca/arkeshav/omniocular-data.git
The datasets and embeddings should be placed in the omniocular-data folder, so that the directory structure looks as follows:
.
├── omniocular
└── omniocular-data
├── embeddings
└── datasets
After cloning the omniocular-data repo, verify that you have the text file containing the pre-trained Java embeddings:
$ cd omniocular-data/embeddings/
$ ls java1k_size300_min10.bin.txt
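As an optional further sanity check, you can peek inside the file from Python. The snippet below assumes the standard word2vec text format (a header line with the vocabulary size and vector dimensionality, followed by one token and its vector per line); if the file uses a different layout, adjust accordingly.

```python
# Optional sanity check on the Java embeddings file. Assumes the standard
# word2vec text format: "vocab_size dim" on the first line, then one token
# followed by its vector components on each subsequent line.
path = "omniocular-data/embeddings/java1k_size300_min10.bin.txt"
with open(path, encoding="utf-8") as f:
    first = f.readline().split()
    second = f.readline().split()
print("fields on first line:", len(first))    # expect 2 if it is a header
print("fields on second line:", len(second))  # expect token + 300 components
```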