We notice that there are some benefits to processing with a smaller tile size.
We previously tried to increase it, but only because data loading was faster with large tiles.
For the rest of the processing, we see that smaller tiles result in fewer memory issues, so perhaps we can now use a 64px tile size as the default where possible, or else derive it from the apply_neighborhood parameters.
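As a minimal sketch of what deriving it from apply_neighborhood could look like, assuming the usual openEO `size` argument (a list of per-dimension entries with `value` and `unit`); the helper name and the 64/256 bounds are illustrative, not existing driver code:

```python
# Hypothetical helper: pick a tile size from the spatial window of an
# apply_neighborhood call, falling back to a 64px default.
def derive_tile_size(size_param, default=64, maximum=256):
    # size_param example:
    #   [{"dimension": "x", "value": 112, "unit": "px"},
    #    {"dimension": "y", "value": 112, "unit": "px"},
    #    {"dimension": "t", "value": "P1D"}]
    spatial = [
        d["value"] for d in size_param
        if d.get("dimension") in ("x", "y") and d.get("unit") == "px"
    ]
    if not spatial:
        return default
    # Round the largest spatial window up to the next power of two (capped),
    # so a single neighborhood spans as few tiles as possible.
    tile_size = default
    while tile_size < max(spatial) and tile_size < maximum:
        tile_size *= 2
    return tile_size

# e.g. a 112px spatial window would map to a 128px tile size.
```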
We can do something rather general in layercatalog.py:
```python
elif get_backend_config().default_reading_strategy == "load_per_product":
    datacubeParams.setLoadPerProduct(True)
    if "tilesize" not in feature_flags:
        # With load_per_product, the tile size does not affect read performance,
        # and smaller chunks are better for memory usage.
        getattr(datacubeParams, "tileSize_$eq")(128)
```
Not committing this now, as it requires some follow-up. Maybe even better would be to make this 'default' chunk size a parameter in the backend config, or to expose it as a job option rather than a custom feature flag.
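A rough sketch of that idea: resolve the tile size from a job option, the existing feature flag, and a backend-config default, in that order. The names `tile_size` (job option) and `default_tile_size` (config field) are assumptions, not existing options:

```python
# Hypothetical resolution order: explicit job option > feature flag > backend default.
def resolve_tile_size(job_options: dict, feature_flags: dict, config_default: int = 64) -> int:
    if "tile_size" in job_options:
        return int(job_options["tile_size"])
    if "tilesize" in feature_flags:
        return int(feature_flags["tilesize"])
    return config_default
```

The result would then be applied as before, e.g. `getattr(datacubeParams, "tileSize_$eq")(resolve_tile_size(job_options, feature_flags, config_default))`, with `config_default` read from the (assumed) `default_tile_size` backend config field.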