NNtrainer


NNtrainer is a software framework for training neural network models on devices.

Overview

NNtrainer is an open source project. The aim of NNtrainer is to develop a software framework to train neural network models on embedded devices which have relatively limited resources. Rather than training all the layers of a network, NNtrainer trains only one or a few layers added on top of a feature extractor.

Even though NNTrainer can be used to train sub-models, it requires the implementation of additional functionality to train models obtained from other machine learning and deep learning libraries. In the current version, various machine learning algorithms such as k-Nearest Neighbor (k-NN), neural networks, logistic regression, and reinforcement learning algorithms are implemented. We also provide examples for various tasks such as transfer learning of models. In some of these examples, deep learning models such as MobileNet V2, trained with TensorFlow Lite, are used as feature extractors. All of these were tested on a Galaxy S8 running Android and on a PC (Ubuntu 16.04).

Official Releases

|         | Tizen              | Ubuntu    | Android/NDK Build |
|:--------|:-------------------|:----------|:------------------|
| Version | 6.0M2 and later    | 18.04     | 9/P               |
| arm     | Available (armv7l) | Available | Ready             |
| arm64   | Available (aarch64)| Available | Available         |
| x64     | Available          | Available | Ready             |
| x86     | Available          | N/A       | N/A               |
| Publish | Tizen Repo         | PPA       |                   |
| API     | C (Official)       | C/C++     | C/C++             |
  • Ready: the CI system ensures build-ability and unit testing. Users may easily build and execute. However, we do not have an automated release & deployment system for this instance.
  • Available: binary packages are released and deployed automatically and periodically along with CI tests.
  • Daily Release
  • SDK Support: Tizen Studio (6.0 M2+)

Maintainer

Reviewers

Components

Supported Layers

This component defines the layers which compose a neural network model. Each layer has its own properties to be set.

| Keyword | Layer Name | Description |
|:--------|:-----------|:------------|
| conv2d | Convolution 2D | Convolution 2-Dimensional Layer |
| pooling2d | Pooling 2D | Pooling 2-Dimensional Layer. Supports average / max / global average / global max pooling |
| flatten | Flatten | Flatten Layer |
| fully_connected | Fully Connected | Fully Connected Layer |
| input | Input | Input Layer. This is not always required. |
| batch_normalization | Batch Normalization | Batch Normalization Layer |
| loss layer | loss layer | hidden from users |
| activation | activation layer | set by layer property |
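As a rough illustration of how layers and their properties are declared, the sketch below uses the Tizen Machine Learning Training C API; the ml_train_* names and the input_shape / unit / activation property keys follow that API as we understand it, but the header name and exact keys may differ by version, so treat this as a non-authoritative sketch.

```c
#include <nntrainer.h> /* NNTrainer C API header (name assumed; check your platform) */

/* Add an input layer and a fully connected layer with softmax activation
 * to an already constructed model handle. Property strings are
 * NULL-terminated "key=value" pairs. */
int build_layers(ml_train_model_h model) {
  ml_train_layer_h input_layer, fc_layer;
  int status;

  status = ml_train_layer_create(&input_layer, ML_TRAIN_LAYER_TYPE_INPUT);
  if (status != ML_ERROR_NONE)
    return status;
  /* input_shape is channel:height:width; 1:1:62720 is an illustrative value */
  status = ml_train_layer_set_property(input_layer, "input_shape=1:1:62720", NULL);
  if (status != ML_ERROR_NONE)
    return status;
  status = ml_train_model_add_layer(model, input_layer);
  if (status != ML_ERROR_NONE)
    return status;

  status = ml_train_layer_create(&fc_layer, ML_TRAIN_LAYER_TYPE_FC);
  if (status != ML_ERROR_NONE)
    return status;
  status = ml_train_layer_set_property(fc_layer, "unit=10", "activation=softmax", NULL);
  if (status != ML_ERROR_NONE)
    return status;
  return ml_train_model_add_layer(model, fc_layer);
}
```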

Supported Optimizers

NNTrainer provides the following optimizers.

| Keyword | Optimizer Name | Description |
|:--------|:---------------|:------------|
| sgd | Stochastic Gradient Descent | - |
| adam | Adaptive Moment Estimation | - |
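A minimal sketch of selecting and configuring one of these optimizers through the same C API; the hyper-parameter keys (learning_rate, beta1, beta2, epsilon) are the usual Adam parameters and are assumptions here, so verify them against the installed headers.

```c
#include <nntrainer.h> /* NNTrainer C API header (name assumed) */

/* Create an Adam optimizer, set its hyper-parameters, and attach it to a model. */
int set_adam_optimizer(ml_train_model_h model) {
  ml_train_optimizer_h opt;
  int status;

  status = ml_train_optimizer_create(&opt, ML_TRAIN_OPTIMIZER_TYPE_ADAM);
  if (status != ML_ERROR_NONE)
    return status;
  status = ml_train_optimizer_set_property(opt, "learning_rate=0.0001",
                                           "beta1=0.9", "beta2=0.999",
                                           "epsilon=1e-7", NULL);
  if (status != ML_ERROR_NONE)
    return status;
  return ml_train_model_set_optimizer(model, opt);
}
```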

Supported Loss Functions

NNTrainer provides the following loss functions.

| Keyword | Loss Name | Description |
|:--------|:----------|:------------|
| mse | Mean Squared Error | - |
| cross | Cross Entropy - sigmoid | used if the activation of the last layer is sigmoid |
| cross | Cross Entropy - softmax | used if the activation of the last layer is softmax |
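For reference, these are the standard textbook forms of the two losses (general definitions, not excerpted from the NNTrainer source), where sigma denotes the sigmoid function:

```latex
\mathrm{MSE}(y, \hat{y}) = \frac{1}{N} \sum_{i=1}^{N} (y_i - \hat{y}_i)^2

\mathrm{CE}_{\text{sigmoid}}(y, x) = -\frac{1}{N} \sum_{i=1}^{N}
  \left[ y_i \log \sigma(x_i) + (1 - y_i) \log\left(1 - \sigma(x_i)\right) \right]

\mathrm{CE}_{\text{softmax}}(y, x) = -\sum_{i} y_i \log \mathrm{softmax}(x)_i
```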

Supported Activation Functions

NNTrainer provides the following activation functions.

| Keyword | Activation Name | Description |
|:--------|:----------------|:------------|
| tanh | tanh function | set as layer property |
| sigmoid | sigmoid function | set as layer property |
| relu | relu function | set as layer property |
| softmax | softmax function | set as layer property |
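The standard definitions of these four functions are (general math, not taken from the NNTrainer source):

```latex
\tanh(x) = \frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}, \qquad
\mathrm{sigmoid}(x) = \frac{1}{1 + e^{-x}}, \qquad
\mathrm{relu}(x) = \max(0, x), \qquad
\mathrm{softmax}(x)_i = \frac{e^{x_i}}{\sum_{j} e^{x_j}}
```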

Tensor

Tensor is responsible for the calculations of a layer. It executes several operations such as addition, division, multiplication, dot product, data averaging, and so on. In order to accelerate calculation speed, CBLAS (C Basic Linear Algebra Subprograms, CPU) and cuBLAS (CUDA Basic Linear Algebra Subroutines) for PCs (especially NVIDIA GPUs) are implemented for some of the operations. Later, these calculations will be optimized. Currently, we support a lazy calculation mode to reduce the complexity of copying tensors during calculations.

| Keyword | Description |
|:--------|:------------|
| 4D Tensor | B, C, H, W |
| Add/sub/mul/div | - |
| sum, average, argmax | - |
| Dot, Transpose | - |
| normalization, standardization | - |
| save, read | - |
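As a conceptual illustration only (this is not the NNTrainer Tensor class, which additionally offers dot product, transpose, normalization, and CBLAS/cuBLAS-accelerated paths), a 4D B x C x H x W tensor and an element-wise operation can be pictured as follows:

```c
#include <stddef.h>

/* Conceptual sketch of a 4D tensor laid out as B x C x H x W in row-major
 * order, with an element-wise add. This illustrates the data model only;
 * it is not the NNTrainer Tensor API. */
struct tensor4d {
  size_t b, c, h, w;
  float *data; /* length b * c * h * w */
};

/* Flatten a (b, c, h, w) index into the row-major offset. */
static size_t idx(const struct tensor4d *t, size_t b, size_t c, size_t h, size_t w) {
  return ((b * t->c + c) * t->h + h) * t->w + w;
}

/* out = x + y, assuming all three tensors have identical shapes. */
static void tensor_add(const struct tensor4d *x, const struct tensor4d *y,
                       struct tensor4d *out) {
  size_t n = x->b * x->c * x->h * x->w;
  for (size_t i = 0; i < n; ++i)
    out->data[i] = x->data[i] + y->data[i];
}
```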

Others

NNTrainer additionally provides the following features.

| Keyword | Name | Description |
|:--------|:-----|:------------|
| weight_initializer | Weight Initialization | Xavier (Normal/Uniform), LeCun (Normal/Uniform), HE (Normal/Uniform) |
| weight_regularizer | Weight Decay | L2Norm only; weight_regularizer_constant & type need to be set |
| learning_rate_decay | Learning Rate Decay | the decay step needs to be set |
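In the usual formulation (the exact NNTrainer semantics and property names should be checked against its documentation), L2 weight decay adds a penalty term to the loss, and learning rate decay shrinks the step size every fixed number of iterations; here lambda corresponds to weight_regularizer_constant, r to the decay rate, and s to the decay step:

```latex
L_{\text{total}} = L_{\text{data}} + \lambda \, \lVert W \rVert_2^2,
\qquad
\eta_t = \eta_0 \cdot r^{\lfloor t / s \rfloor}
```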

APIs

Currently, we provide C APIs for Tizen. A C++ API will also be provided soon.
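A minimal end-to-end sketch with the C API, assuming a model description file whose path here ("model.ini") is only a placeholder; the ml_train_* function names follow the Tizen Machine Learning Training API as we understand it, so check the exact signatures and property keys against the installed headers.

```c
#include <stdio.h>
#include <nntrainer.h> /* NNTrainer C API header (name assumed) */

int main(void) {
  ml_train_model_h model;
  int status;

  /* "model.ini" is a placeholder path to a model description file;
   * the file is expected to define the network, optimizer and dataset. */
  status = ml_train_model_construct_with_conf("model.ini", &model);
  if (status != ML_ERROR_NONE) {
    fprintf(stderr, "model construction failed: %d\n", status);
    return 1;
  }

  /* Compile finalizes the graph; run starts training.
   * Properties are NULL-terminated "key=value" strings. */
  status = ml_train_model_compile(model, NULL);
  if (status == ML_ERROR_NONE)
    status = ml_train_model_run(model, "epochs=10", NULL);

  ml_train_model_destroy(model);
  return status == ML_ERROR_NONE ? 0 : 1;
}
```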

Examples for NNTrainer

A demo application which enables a user-defined custom shortcut on Galaxy Watch.

An example that trains on the MNIST dataset. It consists of two convolution 2D layers, two pooling 2D layers, a flatten layer, and a fully connected layer.

A reinforcement learning example with the CartPole game. It uses the Deep Q-learning algorithm.

Transfer learning examples for image classification using the CIFAR-10 dataset and for OCR. TFLite is used as the feature extractor, and only the last layer (a fully connected layer) of the network is modified.

Tizen CAPI Example

An example to demonstrate the C API for Tizen. It is the same transfer learning example, but written with the Tizen C API. (This example has been deleted and moved to a test.)

A transfer learning example for image classification using the CIFAR-10 dataset. TFLite is used as the feature extractor, and the result is compared with k-NN.

A logistic regression example using NNTrainer.

Instructions for installing NNTrainer.

Instructions for preparing NNTrainer for execution.

Open Source License

NNTrainer is an open source project released under the terms of the Apache License, Version 2.0.

Contributing

Contributions are welcome! Please see our Contributing Guide for more details.
