# Keras to Realtime Audio
This project demonstrates the path from a Python Keras program that trains a deep neural net on audio to realtime deployment, both as a SuperCollider plugin written in C++ and as Web Audio API JavaScript for a webpage. The neural net processes spectral data frame by frame.
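As an illustration of the training side, the sketch below extracts magnitude-spectrum frames with librosa and fits a small dense Keras net on them. It is a minimal, hypothetical example, not the project's actual model: the layer sizes, the next-frame prediction target, and the file name `input.wav` are placeholder assumptions, and a recent tf.keras install is assumed.

```python
# Hypothetical training sketch: dense net over magnitude-spectrum frames.
# Layer sizes, target task and file paths are placeholders.
import numpy as np
import librosa
from tensorflow import keras

# Load audio and compute magnitude spectral frames (FFT size 1024, hop 512).
y, sr = librosa.load('input.wav', sr=44100)
spectrum = np.abs(librosa.stft(y, n_fft=1024, hop_length=512)).T  # (frames, 513)

# Toy target: predict the next spectral frame from the current one.
x_train, y_train = spectrum[:-1], spectrum[1:]

model = keras.Sequential([
    keras.layers.Dense(256, activation='relu', input_shape=(513,)),
    keras.layers.Dense(513, activation='relu'),
])
model.compile(optimizer='adam', loss='mse')
model.fit(x_train, y_train, epochs=10, batch_size=32)
```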
The SuperCollider route adapts the kerasify library for model saving and loading, paired with a streamlined C++ neural net implementation; the web route saves and loads the model in the ONNX file format and uses MMLL for audio processing.
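A trained Keras model can then be exported once for each route, roughly as sketched below. This is a hedged example rather than the project's exact export script: the output file names are placeholders, and it assumes the standard kerasify `export_model` helper and the onnxmltools Keras converter.

```python
# Hypothetical export sketch for the two deployment routes.
from kerasify import export_model   # kerasify's Keras-to-binary exporter
import onnxmltools

# Binary weights file read by the C++ plugin's streamlined net implementation.
export_model(model, 'model.kerasify')

# ONNX graph for the web route, loaded in the browser alongside MMLL features.
onnx_model = onnxmltools.convert_keras(model)
onnxmltools.utils.save_model(onnx_model, 'model.onnx')
```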
The original Python code depends on the librosa MIR audio library, Keras, and the aforementioned kerasify and onnxmltools.
Building the SuperCollider plugin requires the SuperCollider source code (LINK) and CMake.
The code was developed by Nick Collins as part of the AHRC-funded MIMIC project (Musically Intelligent Machines Interacting Creatively). The Python and JavaScript parts are released under an MIT license; the SuperCollider source code is under the GNU GPL, and so the plugin is as well.