On GPU hardware, in production scripts, post-processing takes about twice as long as the neural network inference itself.

On an NVIDIA GeForce GTX 1080 Ti (12GB RAM), for one full-size Sentinel-2 tile, with the CRGA OS2 UNet model and a tile size of 1024:

- With post-processing: ~6 minutes
- Without post-processing: ~2 minutes

A temporary workaround for a speedup would be to skip post-processing by setting `nodatavalues` to `None` in `inference.py`. However, this would introduce artifacts, especially on images containing NoData pixels.
A longer-term solution would be to include the post-processing inside the Keras model, so it runs on the GPU together with inference.
Starting from otbtf >= 3.3.0, this would be easy to implement thanks to `otbtf.model.ModelBase`: we just have to implement the post-processing steps in the `postprocess_outputs()` method.
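As a rough illustration of what that method would have to do, here is a NumPy sketch of NoData masking, i.e. propagating the NoData marker from the input bands into the prediction instead of keeping meaningless values there. The function name, shapes, and NoData value are assumptions for illustration; in an actual `ModelBase` subclass the same logic would be written with TensorFlow ops (e.g. `tf.where`, `tf.reduce_all`) inside `postprocess_outputs()` so it stays in the GPU graph:

```python
import numpy as np

NODATA_VALUE = 0.0  # hypothetical NoData marker; the real value depends on the product


def postprocess_outputs(outputs, inputs, nodata=NODATA_VALUE):
    """Mask predictions where the source image has NoData.

    Sketch only: `outputs` and `inputs` are assumed to be (H, W, C) arrays.
    Inside the Keras model, the equivalent would use tf.reduce_all / tf.where.
    """
    # A pixel is considered NoData if every input band carries the NoData value
    nodata_mask = np.all(inputs == nodata, axis=-1, keepdims=True)
    # Propagate NoData into the prediction instead of keeping garbage values
    return np.where(nodata_mask, nodata, outputs)
```

Doing this inside the model means the mask is computed once per batch on the GPU, which is precisely what would remove the CPU post-processing bottleneck measured above.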