Default MaxPoolingOp only supports NHWC on device type CPU but I don't have MaxPooling in this model #262

Open
DemO-O-On opened this issue May 31, 2024 · 1 comment


DemO-O-On commented May 31, 2024

Hello, I want to make a prediction with this code:

// Build the input tensor from the point data and its shape.
auto input = cppflow::tensor(points_teeth, shape);

// List every operation in the loaded SavedModel.
for (auto s : model_points.get_operations()) {
    std::cout << s << std::endl;
}

// Run the serving signature on the input tensor.
output = model_points({{"serving_default_input_3:0", input}}, {"StatefulPartitionedCall:0"});

Here's what this code prints:

count/Read/ReadVariableOp
total
total/Read/ReadVariableOp
count_1
count_1/Read/ReadVariableOp
total_1
total_1/Read/ReadVariableOp
count_2
count_2/Read/ReadVariableOp
total_2
total_2/Read/ReadVariableOp
Adam/v/conv1d_20/bias
Adam/v/conv1d_20/bias/Read/ReadVariableOp
Adam/m/conv1d_20/bias
Adam/m/conv1d_20/bias/Read/ReadVariableOp
Adam/v/conv1d_20/kernel
Adam/v/conv1d_20/kernel/Read/ReadVariableOp
Adam/m/conv1d_20/kernel
Adam/m/conv1d_20/kernel/Read/ReadVariableOp
Adam/v/batch_normalization_35/beta
Adam/v/batch_normalization_35/beta/Read/ReadVariableOp
Adam/m/batch_normalization_35/beta
Adam/m/batch_normalization_35/beta/Read/ReadVariableOp
Adam/v/batch_normalization_35/gamma
Adam/v/batch_normalization_35/gamma/Read/ReadVariableOp
Adam/m/batch_normalization_35/gamma
Adam/m/batch_normalization_35/gamma/Read/ReadVariableOp
Adam/v/conv1d_19/bias
Adam/v/conv1d_19/bias/Read/ReadVariableOp
Adam/m/conv1d_19/bias
Adam/m/conv1d_19/bias/Read/ReadVariableOp
Adam/v/conv1d_19/kernel
Adam/v/conv1d_19/kernel/Read/ReadVariableOp
Adam/m/conv1d_19/kernel
Adam/m/conv1d_19/kernel/Read/ReadVariableOp
Adam/v/batch_normalization_34/beta
Adam/v/batch_normalization_34/beta/Read/ReadVariableOp
Adam/m/batch_normalization_34/beta
Adam/m/batch_normalization_34/beta/Read/ReadVariableOp
Adam/v/batch_normalization_34/gamma
Adam/v/batch_normalization_34/gamma/Read/ReadVariableOp
Adam/m/batch_normalization_34/gamma
Adam/m/batch_normalization_34/gamma/Read/ReadVariableOp
Adam/v/conv1d_transpose_17/bias
Adam/v/conv1d_transpose_17/bias/Read/ReadVariableOp
Adam/m/conv1d_transpose_17/bias
Adam/m/conv1d_transpose_17/bias/Read/ReadVariableOp
Adam/v/conv1d_transpose_17/kernel
Adam/v/conv1d_transpose_17/kernel/Read/ReadVariableOp
Adam/m/conv1d_transpose_17/kernel
Adam/m/conv1d_transpose_17/kernel/Read/ReadVariableOp
Adam/v/batch_normalization_33/beta
Adam/v/batch_normalization_33/beta/Read/ReadVariableOp
Adam/m/batch_normalization_33/beta
Adam/m/batch_normalization_33/beta/Read/ReadVariableOp
Adam/v/batch_normalization_33/gamma
Adam/v/batch_normalization_33/gamma/Read/ReadVariableOp
Adam/m/batch_normalization_33/gamma
Adam/m/batch_normalization_33/gamma/Read/ReadVariableOp
Adam/v/conv1d_transpose_16/bias
Adam/v/conv1d_transpose_16/bias/Read/ReadVariableOp
Adam/m/conv1d_transpose_16/bias
Adam/m/conv1d_transpose_16/bias/Read/ReadVariableOp
Adam/v/conv1d_transpose_16/kernel
Adam/v/conv1d_transpose_16/kernel/Read/ReadVariableOp
Adam/m/conv1d_transpose_16/kernel
Adam/m/conv1d_transpose_16/kernel/Read/ReadVariableOp
Adam/v/batch_normalization_32/beta
Adam/v/batch_normalization_32/beta/Read/ReadVariableOp
Adam/m/batch_normalization_32/beta
Adam/m/batch_normalization_32/beta/Read/ReadVariableOp
Adam/v/batch_normalization_32/gamma
Adam/v/batch_normalization_32/gamma/Read/ReadVariableOp
Adam/m/batch_normalization_32/gamma
Adam/m/batch_normalization_32/gamma/Read/ReadVariableOp
Adam/v/conv1d_transpose_15/bias
Adam/v/conv1d_transpose_15/bias/Read/ReadVariableOp
Adam/m/conv1d_transpose_15/bias
Adam/m/conv1d_transpose_15/bias/Read/ReadVariableOp
Adam/v/conv1d_transpose_15/kernel
Adam/v/conv1d_transpose_15/kernel/Read/ReadVariableOp
Adam/m/conv1d_transpose_15/kernel
Adam/m/conv1d_transpose_15/kernel/Read/ReadVariableOp
Adam/v/batch_normalization_31/beta
Adam/v/batch_normalization_31/beta/Read/ReadVariableOp
Adam/m/batch_normalization_31/beta
Adam/m/batch_normalization_31/beta/Read/ReadVariableOp
Adam/v/batch_normalization_31/gamma
Adam/v/batch_normalization_31/gamma/Read/ReadVariableOp
Adam/m/batch_normalization_31/gamma
Adam/m/batch_normalization_31/gamma/Read/ReadVariableOp
Adam/v/conv1d_transpose_14/bias
Adam/v/conv1d_transpose_14/bias/Read/ReadVariableOp
Adam/m/conv1d_transpose_14/bias
Adam/m/conv1d_transpose_14/bias/Read/ReadVariableOp
Adam/v/conv1d_transpose_14/kernel
Adam/v/conv1d_transpose_14/kernel/Read/ReadVariableOp
Adam/m/conv1d_transpose_14/kernel
Adam/m/conv1d_transpose_14/kernel/Read/ReadVariableOp
Adam/v/batch_normalization_30/beta
Adam/v/batch_normalization_30/beta/Read/ReadVariableOp
Adam/m/batch_normalization_30/beta
Adam/m/batch_normalization_30/beta/Read/ReadVariableOp
Adam/v/batch_normalization_30/gamma
Adam/v/batch_normalization_30/gamma/Read/ReadVariableOp
Adam/m/batch_normalization_30/gamma
Adam/m/batch_normalization_30/gamma/Read/ReadVariableOp
Adam/v/conv1d_transpose_13/bias
Adam/v/conv1d_transpose_13/bias/Read/ReadVariableOp
Adam/m/conv1d_transpose_13/bias
Adam/m/conv1d_transpose_13/bias/Read/ReadVariableOp
Adam/v/conv1d_transpose_13/kernel
Adam/v/conv1d_transpose_13/kernel/Read/ReadVariableOp
Adam/m/conv1d_transpose_13/kernel
Adam/m/conv1d_transpose_13/kernel/Read/ReadVariableOp
Adam/v/batch_normalization_29/beta
Adam/v/batch_normalization_29/beta/Read/ReadVariableOp
Adam/m/batch_normalization_29/beta
Adam/m/batch_normalization_29/beta/Read/ReadVariableOp
Adam/v/batch_normalization_29/gamma
Adam/v/batch_normalization_29/gamma/Read/ReadVariableOp
Adam/m/batch_normalization_29/gamma
Adam/m/batch_normalization_29/gamma/Read/ReadVariableOp
Adam/v/conv1d_transpose_12/bias
Adam/v/conv1d_transpose_12/bias/Read/ReadVariableOp
Adam/m/conv1d_transpose_12/bias
Adam/m/conv1d_transpose_12/bias/Read/ReadVariableOp
Adam/v/conv1d_transpose_12/kernel
Adam/v/conv1d_transpose_12/kernel/Read/ReadVariableOp
Adam/m/conv1d_transpose_12/kernel
Adam/m/conv1d_transpose_12/kernel/Read/ReadVariableOp
Adam/v/batch_normalization_28/beta
Adam/v/batch_normalization_28/beta/Read/ReadVariableOp
Adam/m/batch_normalization_28/beta
Adam/m/batch_normalization_28/beta/Read/ReadVariableOp
Adam/v/batch_normalization_28/gamma
Adam/v/batch_normalization_28/gamma/Read/ReadVariableOp
Adam/m/batch_normalization_28/gamma
Adam/m/batch_normalization_28/gamma/Read/ReadVariableOp
Adam/v/conv1d_18/bias
Adam/v/conv1d_18/bias/Read/ReadVariableOp
Adam/m/conv1d_18/bias
Adam/m/conv1d_18/bias/Read/ReadVariableOp
Adam/v/conv1d_18/kernel
Adam/v/conv1d_18/kernel/Read/ReadVariableOp
Adam/m/conv1d_18/kernel
Adam/m/conv1d_18/kernel/Read/ReadVariableOp
Adam/v/batch_normalization_27/beta
Adam/v/batch_normalization_27/beta/Read/ReadVariableOp
Adam/m/batch_normalization_27/beta
Adam/m/batch_normalization_27/beta/Read/ReadVariableOp
Adam/v/batch_normalization_27/gamma
Adam/v/batch_normalization_27/gamma/Read/ReadVariableOp
Adam/m/batch_normalization_27/gamma
Adam/m/batch_normalization_27/gamma/Read/ReadVariableOp
Adam/v/conv1d_17/bias
Adam/v/conv1d_17/bias/Read/ReadVariableOp
Adam/m/conv1d_17/bias
Adam/m/conv1d_17/bias/Read/ReadVariableOp
Adam/v/conv1d_17/kernel
Adam/v/conv1d_17/kernel/Read/ReadVariableOp
Adam/m/conv1d_17/kernel
Adam/m/conv1d_17/kernel/Read/ReadVariableOp
Adam/v/batch_normalization_26/beta
Adam/v/batch_normalization_26/beta/Read/ReadVariableOp
Adam/m/batch_normalization_26/beta
Adam/m/batch_normalization_26/beta/Read/ReadVariableOp
Adam/v/batch_normalization_26/gamma
Adam/v/batch_normalization_26/gamma/Read/ReadVariableOp
Adam/m/batch_normalization_26/gamma
Adam/m/batch_normalization_26/gamma/Read/ReadVariableOp
Adam/v/conv1d_16/bias
Adam/v/conv1d_16/bias/Read/ReadVariableOp
Adam/m/conv1d_16/bias
Adam/m/conv1d_16/bias/Read/ReadVariableOp
Adam/v/conv1d_16/kernel
Adam/v/conv1d_16/kernel/Read/ReadVariableOp
Adam/m/conv1d_16/kernel
Adam/m/conv1d_16/kernel/Read/ReadVariableOp
Adam/v/batch_normalization_25/beta
Adam/v/batch_normalization_25/beta/Read/ReadVariableOp
Adam/m/batch_normalization_25/beta
Adam/m/batch_normalization_25/beta/Read/ReadVariableOp
Adam/v/batch_normalization_25/gamma
Adam/v/batch_normalization_25/gamma/Read/ReadVariableOp
Adam/m/batch_normalization_25/gamma
Adam/m/batch_normalization_25/gamma/Read/ReadVariableOp
Adam/v/conv1d_15/bias
Adam/v/conv1d_15/bias/Read/ReadVariableOp
Adam/m/conv1d_15/bias
Adam/m/conv1d_15/bias/Read/ReadVariableOp
Adam/v/conv1d_15/kernel
Adam/v/conv1d_15/kernel/Read/ReadVariableOp
Adam/m/conv1d_15/kernel
Adam/m/conv1d_15/kernel/Read/ReadVariableOp
Adam/v/batch_normalization_24/beta
Adam/v/batch_normalization_24/beta/Read/ReadVariableOp
Adam/m/batch_normalization_24/beta
Adam/m/batch_normalization_24/beta/Read/ReadVariableOp
Adam/v/batch_normalization_24/gamma
Adam/v/batch_normalization_24/gamma/Read/ReadVariableOp
Adam/m/batch_normalization_24/gamma
Adam/m/batch_normalization_24/gamma/Read/ReadVariableOp
Adam/v/conv1d_14/bias
Adam/v/conv1d_14/bias/Read/ReadVariableOp
Adam/m/conv1d_14/bias
Adam/m/conv1d_14/bias/Read/ReadVariableOp
Adam/v/conv1d_14/kernel
Adam/v/conv1d_14/kernel/Read/ReadVariableOp
Adam/m/conv1d_14/kernel
Adam/m/conv1d_14/kernel/Read/ReadVariableOp
learning_rate
learning_rate/Read/ReadVariableOp
iteration
iteration/Read/ReadVariableOp
conv1d_20/bias
conv1d_20/bias/Read/ReadVariableOp
conv1d_20/kernel
conv1d_20/kernel/Read/ReadVariableOp
batch_normalization_35/moving_variance
batch_normalization_35/moving_variance/Read/ReadVariableOp
batch_normalization_35/moving_mean
batch_normalization_35/moving_mean/Read/ReadVariableOp
batch_normalization_35/beta
batch_normalization_35/beta/Read/ReadVariableOp
batch_normalization_35/gamma
batch_normalization_35/gamma/Read/ReadVariableOp
conv1d_19/bias
conv1d_19/bias/Read/ReadVariableOp
conv1d_19/kernel
conv1d_19/kernel/Read/ReadVariableOp
batch_normalization_34/moving_variance
batch_normalization_34/moving_variance/Read/ReadVariableOp
batch_normalization_34/moving_mean
batch_normalization_34/moving_mean/Read/ReadVariableOp
batch_normalization_34/beta
batch_normalization_34/beta/Read/ReadVariableOp
batch_normalization_34/gamma
batch_normalization_34/gamma/Read/ReadVariableOp
conv1d_transpose_17/bias
conv1d_transpose_17/bias/Read/ReadVariableOp
conv1d_transpose_17/kernel
conv1d_transpose_17/kernel/Read/ReadVariableOp
batch_normalization_33/moving_variance
batch_normalization_33/moving_variance/Read/ReadVariableOp
batch_normalization_33/moving_mean
batch_normalization_33/moving_mean/Read/ReadVariableOp
batch_normalization_33/beta
batch_normalization_33/beta/Read/ReadVariableOp
batch_normalization_33/gamma
batch_normalization_33/gamma/Read/ReadVariableOp
conv1d_transpose_16/bias
conv1d_transpose_16/bias/Read/ReadVariableOp
conv1d_transpose_16/kernel
conv1d_transpose_16/kernel/Read/ReadVariableOp
batch_normalization_32/moving_variance
batch_normalization_32/moving_variance/Read/ReadVariableOp
batch_normalization_32/moving_mean
batch_normalization_32/moving_mean/Read/ReadVariableOp
batch_normalization_32/beta
batch_normalization_32/beta/Read/ReadVariableOp
batch_normalization_32/gamma
batch_normalization_32/gamma/Read/ReadVariableOp
conv1d_transpose_15/bias
conv1d_transpose_15/bias/Read/ReadVariableOp
conv1d_transpose_15/kernel
conv1d_transpose_15/kernel/Read/ReadVariableOp
batch_normalization_31/moving_variance
batch_normalization_31/moving_variance/Read/ReadVariableOp
batch_normalization_31/moving_mean
batch_normalization_31/moving_mean/Read/ReadVariableOp
batch_normalization_31/beta
batch_normalization_31/beta/Read/ReadVariableOp
batch_normalization_31/gamma
batch_normalization_31/gamma/Read/ReadVariableOp
conv1d_transpose_14/bias
conv1d_transpose_14/bias/Read/ReadVariableOp
conv1d_transpose_14/kernel
conv1d_transpose_14/kernel/Read/ReadVariableOp
batch_normalization_30/moving_variance
batch_normalization_30/moving_variance/Read/ReadVariableOp
batch_normalization_30/moving_mean
batch_normalization_30/moving_mean/Read/ReadVariableOp
batch_normalization_30/beta
batch_normalization_30/beta/Read/ReadVariableOp
batch_normalization_30/gamma
batch_normalization_30/gamma/Read/ReadVariableOp
conv1d_transpose_13/bias
conv1d_transpose_13/bias/Read/ReadVariableOp
conv1d_transpose_13/kernel
conv1d_transpose_13/kernel/Read/ReadVariableOp
batch_normalization_29/moving_variance
batch_normalization_29/moving_variance/Read/ReadVariableOp
batch_normalization_29/moving_mean
batch_normalization_29/moving_mean/Read/ReadVariableOp
batch_normalization_29/beta
batch_normalization_29/beta/Read/ReadVariableOp
batch_normalization_29/gamma
batch_normalization_29/gamma/Read/ReadVariableOp
conv1d_transpose_12/bias
conv1d_transpose_12/bias/Read/ReadVariableOp
conv1d_transpose_12/kernel
conv1d_transpose_12/kernel/Read/ReadVariableOp
batch_normalization_28/moving_variance
batch_normalization_28/moving_variance/Read/ReadVariableOp
batch_normalization_28/moving_mean
batch_normalization_28/moving_mean/Read/ReadVariableOp
batch_normalization_28/beta
batch_normalization_28/beta/Read/ReadVariableOp
batch_normalization_28/gamma
batch_normalization_28/gamma/Read/ReadVariableOp
conv1d_18/bias
conv1d_18/bias/Read/ReadVariableOp
conv1d_18/kernel
conv1d_18/kernel/Read/ReadVariableOp
batch_normalization_27/moving_variance
batch_normalization_27/moving_variance/Read/ReadVariableOp
batch_normalization_27/moving_mean
batch_normalization_27/moving_mean/Read/ReadVariableOp
batch_normalization_27/beta
batch_normalization_27/beta/Read/ReadVariableOp
batch_normalization_27/gamma
batch_normalization_27/gamma/Read/ReadVariableOp
conv1d_17/bias
conv1d_17/bias/Read/ReadVariableOp
conv1d_17/kernel
conv1d_17/kernel/Read/ReadVariableOp
batch_normalization_26/moving_variance
batch_normalization_26/moving_variance/Read/ReadVariableOp
batch_normalization_26/moving_mean
batch_normalization_26/moving_mean/Read/ReadVariableOp
batch_normalization_26/beta
batch_normalization_26/beta/Read/ReadVariableOp
batch_normalization_26/gamma
batch_normalization_26/gamma/Read/ReadVariableOp
conv1d_16/bias
conv1d_16/bias/Read/ReadVariableOp
conv1d_16/kernel
conv1d_16/kernel/Read/ReadVariableOp
batch_normalization_25/moving_variance
batch_normalization_25/moving_variance/Read/ReadVariableOp
batch_normalization_25/moving_mean
batch_normalization_25/moving_mean/Read/ReadVariableOp
batch_normalization_25/beta
batch_normalization_25/beta/Read/ReadVariableOp
batch_normalization_25/gamma
batch_normalization_25/gamma/Read/ReadVariableOp
conv1d_15/bias
conv1d_15/bias/Read/ReadVariableOp
conv1d_15/kernel
conv1d_15/kernel/Read/ReadVariableOp
batch_normalization_24/moving_variance
batch_normalization_24/moving_variance/Read/ReadVariableOp
batch_normalization_24/moving_mean
batch_normalization_24/moving_mean/Read/ReadVariableOp
batch_normalization_24/beta
batch_normalization_24/beta/Read/ReadVariableOp
batch_normalization_24/gamma
batch_normalization_24/gamma/Read/ReadVariableOp
conv1d_14/bias
conv1d_14/bias/Read/ReadVariableOp
conv1d_14/kernel
conv1d_14/kernel/Read/ReadVariableOp
serving_default_input_3
StatefulPartitionedCall
NoOp
Const
saver_filename
StatefulPartitionedCall_1
StatefulPartitionedCall_2
terminate called after throwing an instance of 'std::runtime_error'
  what():  Default MaxPoolingOp only supports NHWC on device type CPU
         [[{{function_node __inference__wrapped_model_1115831}}{{node model_2/max_pooling1d_2/MaxPool}}]]

I loaded another model earlier in this file and made predictions with it, but I don't understand how or why the two models could interact. I also noticed that the model does have max pooling in Python, yet the operation list printed from C++ doesn't show it. Here's the Python code for the model:

input = Input(shape=(num_points, 3))
x1 = conv_block(input, 128)
x = conv_block(x1, 256)
x2 = conv_block(x, 1024)
x = conv_block(x2, 1024)
x3 = conv_block(x, 1024)
x7 = deconv_block(x3, 1024)
x = deconv_block(x7, 1024)
x4 = deconv_block(x, 1024)
x = deconv_block(x4, 512)
x5 = deconv_block(x, 256)
x = deconv_block(x5, 128)
x6 = MaxPool1D(4, data_format = 'channels_first')(x)
#x6 = tf.tile(x6, [1, num_points, 1])
segmentation_input = Concatenate()(
    [
        x1,
        x2,
        x4,
        x5,
        x6
    ]
)

segmentation_features = conv_block(segmentation_input, 256)

output = Conv1D(num_classes, 1, activation="sigmoid")(segmentation_features)
return Model(input, output)
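
For what it's worth, the error points at model_2/max_pooling1d_2/MaxPool and says the CPU MaxPool kernel only supports NHWC (channels-last), so I suspect the data_format='channels_first' MaxPool1D above is the culprit: it seems to be exported as a channels-first pooling op the CPU kernel can't run. A rough, untested sketch of what I could try instead, assuming Permute is imported from keras.layers and that the goal is to pool with window 4 across the feature axis as above:

# Untested sketch: keep everything channels-last and swap the axes around the
# pooling instead, so the MaxPool op itself runs in the NHWC layout the CPU
# kernel supports. Assumes Permute comes from keras.layers.
x6 = Permute((2, 1))(x)      # (batch, num_points, channels) -> (batch, channels, num_points)
x6 = MaxPool1D(4)(x6)        # default data_format='channels_last'
x6 = Permute((2, 1))(x6)     # back to (batch, num_points, channels // 4)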
DemO-O-On (Author) commented:

Update: I re-saved the model in Google Colab using CPU only and tried to make a prediction with it. Here's the new error (the input tensor is printed first):

(tensor: shape=[3 32 3], dtype=TF_FLOAT, data=
[[[0.0123270601 0.355330735 0.547858953]
  [0.000193346394 0.366971523 0.547208309]
  [0.0271116234 0.350225359 0.546478629]
  ...
  [0.119589187 0.348327279 0.536324918]
  [0.0871538371 0.366260797 0.535787404]
  [0.0218100566 0.398851484 0.535461068]]

 [[0.173270345 0.320599109 0.53522557]
  [0.058475405 0.380493224 0.5350824]
  [0.221432179 0.282036275 0.535037518]
  ...
  [0.236433044 0.289447248 0.529477835]
  [0.720739186 0.13403447 0.52921474]
  [0.0364344493 0.41110763 0.528939784]]

 [[0.732707858 0.136695519 0.528835058]
  [0.227762714 0.299189806 0.528707266]
  [0.092518039 0.384792328 0.528624833]
  ...
  [0.665200353 0.13419053 0.524711549]
  [0.686733603 0.139694497 0.524705172]
  [0.156475887 0.3657493 0.524685383]]])
terminate called after throwing an instance of 'std::runtime_error'
  what():  Could not deduce type! type_name: St6vectorIN7cppflow6tensorESaIS1_EE

The error appears after all of the operations inside my predict function have run.
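
Demangling the type_name gives std::vector<cppflow::tensor, std::allocator<cppflow::tensor>>, so I suspect a std::vector of cppflow::tensor objects is being passed where cppflow expects a flat data vector it can deduce a TF dtype from. A rough sketch of how I think the input should be built instead (the model path and the zero-filled data are placeholders, not my real values):

#include <cstdint>
#include <vector>
#include "cppflow/cppflow.h"

int main() {
    // Sketch: build ONE tensor from a flat float buffer plus a shape, instead of
    // wrapping cppflow::tensor objects in a std::vector -- cppflow deduces the TF
    // dtype from the element type of the data vector, and there is no dtype for
    // std::vector<cppflow::tensor>.
    std::vector<int64_t> shape = {3, 32, 3};           // matches the printed shape
    std::vector<float> flat_points(3 * 32 * 3, 0.0f);  // placeholder for the real point data
    auto input = cppflow::tensor(flat_points, shape);

    cppflow::model model_points("path/to/saved_model");  // placeholder path
    auto output = model_points({{"serving_default_input_3:0", input}},
                               {"StatefulPartitionedCall:0"});
    return 0;
}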
