My Python version is 3.7, and whenever I get to the quantization step, I get this error:
Generating the quantization table:
Constant is not supported on esp-dl yet
LogSoftmax is not supported on esp-dl yet
My layers are simple, as shown below:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense

model = Sequential()
model.add(Conv2D(32, (5, 5), activation='relu', input_shape=(96, 96, 1)))
model.add(MaxPooling2D((2, 2)))
model.add(Dropout(0.2))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D((2, 2)))
model.add(Dropout(0.2))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D((2, 2)))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dense(6, activation='softmax'))
I have checked that Softmax is supported in the layer list. I am using the calibrator.so file on Linux. Should I be using convert.py as a workaround?
github-actions bot changed the title from "Quantization Layer Not supported: Softmax and constant" to "Quantization Layer Not supported: Softmax and constant (AIV-702)" on Jul 25, 2024.
For the Constant operator, you can add the 'extract_constant_to_initializer' pass to the onnxoptimizer.optimize() call inside the optimize_fp_model function. Refer to this commit for the modification: 020231e.
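A minimal sketch of what that change looks like, assuming the pass list is built the way onnxoptimizer expects (the surrounding code in esp-dl's optimize_fp_model may differ from this):

```python
# Hypothetical pass list for onnxoptimizer.optimize(); the two fuse_* passes
# are illustrative defaults, the added line is the one the fix needs.
passes = [
    "fuse_bn_into_conv",
    "fuse_pad_into_conv",
    "extract_constant_to_initializer",  # folds Constant nodes into graph initializers
]

# Inside optimize_fp_model, the passes would then be applied roughly as:
#   model = onnxoptimizer.optimize(onnx.load(model_path), passes)
```

With Constant nodes turned into initializers, the quantizer no longer encounters a standalone Constant operator in the graph.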
For the LogSoftmax operator, esp-dl does not currently support it. Your Keras model ends in a plain Softmax, so a LogSoftmax node in the exported graph suggests the conversion introduced it; check your ONNX file again.
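For context on why the two can get conflated: LogSoftmax(x) is just log(Softmax(x)), computed as a single fused node for numerical stability, which is why some export paths emit it in place of a Softmax followed by a log. A quick stdlib-only check of the equivalence:

```python
import math

def softmax(xs):
    # Numerically stable softmax: shift by the max before exponentiating.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def log_softmax(xs):
    # Direct form: x_i - logsumexp(x), avoiding an explicit log(softmax).
    m = max(xs)
    lse = m + math.log(sum(math.exp(x - m) for x in xs))
    return [x - lse for x in xs]

xs = [1.0, 2.0, 3.0]
direct = log_softmax(xs)
via_softmax = [math.log(p) for p in softmax(xs)]
# The two agree to floating-point precision.
```

If the exported graph contains a LogSoftmax node, replacing it with a plain Softmax changes only the log scaling of the output, not the argmax of the class scores.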
For reference, I am following the tutorial https://blog.espressif.com/hand-gesture-recognition-on-esp32-s3-with-esp-deep-learning-176d7e13fd37 and have been getting this issue at the quantization step.