CppFlow

Run TensorFlow models in C++ without Bazel, without a TensorFlow installation, and without compiling TensorFlow.

    // Read the graph
    Model model{"graph.pb"};
    model.init();
    
    // Prepare inputs and outputs
    Tensor input{model, "input"};
    Tensor output{model, "output"};
    
    // Run
    model.run(input, output);

CppFlow uses the TensorFlow C API to run the models, meaning you can use it without installing TensorFlow and without compiling the whole TensorFlow repository with Bazel; you just need to download the C API. With this project you can manage and run your models in C++ without worrying about void*, malloc, or free. With CppFlow you can easily:

  • Open .pb models created with Python
  • Restore checkpoints
  • Save new checkpoints
  • Feed new data to your inputs
  • Retrieve data from the outputs

How To Run It

Since CppFlow uses the TensorFlow C API, you just have to download it.

You can either install the library system-wide by following the tutorial on the TensorFlow page, or place the contents of the archive in a folder called libtensorflow in your home directory.

Afterwards, you can run the examples:

    git clone git@github.com:serizba/cppflow.git
    cd cppflow/examples/load_model
    mkdir build
    cd build
    cmake ..
    make
    ./example

Usage

Suppose we have a saved graph defined by the following TensorFlow Python code (examples/load_model/create_model.py):

    # TensorFlow 1.x API
    import tensorflow as tf

    # Two simple inputs
    a = tf.placeholder(tf.float32, shape=(1, 100), name="input_a")
    b = tf.placeholder(tf.float32, shape=(1, 100), name="input_b")
    # Output
    c = tf.add(a, b, name='result')

Create Model

You need the graph definition in a .pb file to create a model (examples/load_model/model.pb). Then you can either init it or restore it from a checkpoint:

Model model("graph.pb");
// Initialize the variables...
model.init();
// ... or restore from checkpoint
model.restore("train.ckpt")

Define Inputs and Outputs

You can create the Tensors by the name of the operations (if you don't know the names, use model.get_operations()):

    Tensor input_a{model, "input_a"};
    Tensor input_b{model, "input_b"};
    Tensor output{model, "result"};
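
If you are not sure which operation names the graph contains, you can list them first. A minimal sketch, assuming model.get_operations() returns an iterable collection of operation names (e.g. std::vector<std::string>):

    // Print every operation name in the loaded graph so you can pick
    // the right inputs and outputs.
    for (const auto& op : model.get_operations()) {
        std::cout << op << std::endl;
    }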

Feed new data to the inputs

The expected inputs have shape=(1, 100), therefore we have to supply 100 elements:

    // Create a vector data = {0, 1, 2, ..., 99}
    std::vector<float> data(100);
    std::iota(data.begin(), data.end(), 0);

    // Feed data to the inputs
    input_a.set_data(data);
    input_b.set_data(data);

Run and get the outputs

    // Run!
    model.run({input_a, input_b}, output);

    // Print the output: 0, 2, 4, 6, ..., 198
    for (float f : output.get_data<float>()) {
        std::cout << f << " ";
    }
    std::cout << std::endl;
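
Putting all of the steps together, a complete program might look like the sketch below. The header names Model.h and Tensor.h are assumptions about how the library's classes are included; adjust them to match your setup.

    #include <iostream>
    #include <numeric>
    #include <vector>

    #include "Model.h"   // assumed header exposing the Model class
    #include "Tensor.h"  // assumed header exposing the Tensor class

    int main() {
        // Load the frozen graph and initialize its variables
        Model model("graph.pb");
        model.init();

        // Bind tensors to graph operations by name
        Tensor input_a{model, "input_a"};
        Tensor input_b{model, "input_b"};
        Tensor output{model, "result"};

        // Fill both inputs with {0, 1, ..., 99}
        std::vector<float> data(100);
        std::iota(data.begin(), data.end(), 0);
        input_a.set_data(data);
        input_b.set_data(data);

        // Run the graph and print the result: 0, 2, 4, ..., 198
        model.run({input_a, input_b}, output);
        for (float f : output.get_data<float>()) {
            std::cout << f << " ";
        }
        std::cout << std::endl;

        return 0;
    }

This mirrors examples/load_model from the repository and assumes graph.pb was produced by the create_model.py script above.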
