Why not just stick with this for the resampling part (at least until we find a faster or more feature-rich implementation)?
- scipy: 10 loops, best of 3: 25.4 ms per loop
- jax on device: 10 loops, best of 3: 72.8 ms per loop
- torch with Tesla T4: 10000 loops, best of 3: 149 µs per loop
- opencv cpu: 1000 loops, best of 3: 1.3 ms per loop
- cupy on device: 100 loops, best of 3: 4.48 ms per loop
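For context, a minimal sketch of the kind of dense resampling being timed here, using scipy.ndimage.map_coordinates on an identity coordinate grid. The volume shape and interpolation order are assumptions, not the exact benchmark setup:

```python
import numpy as np
from scipy.ndimage import map_coordinates

# Assumed setup: a 3D volume and a dense coordinate grid (identity transform here).
volume = np.random.rand(128, 128, 128).astype(np.float32)
coords = np.meshgrid(
    np.arange(volume.shape[0]),
    np.arange(volume.shape[1]),
    np.arange(volume.shape[2]),
    indexing="ij",
)
grid = np.stack(coords).astype(np.float32)  # shape (3, D, H, W)

# Resample the volume at the grid coordinates (order=1 -> trilinear interpolation).
resampled = map_coordinates(volume, grid, order=1, mode="nearest")
```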
For the record, there is an interesting related discussion thread here: pytorch/pytorch#24870
Stacked spatial transforms such as #107 #108 #109 #110 #111 could be implemented by composing multiple operations on a single coordinate grid and then doing one resampling step.
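A rough sketch of what that one-step resampling could look like: compose several affine transforms into a single matrix, apply it once to the output coordinate grid, and call map_coordinates a single time. The helper functions and the specific transforms below are illustrative, not taken from the linked issues:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def rotation_z(theta):
    """Hypothetical helper: 4x4 homogeneous rotation about the z axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]])

def scaling(sx, sy, sz):
    """Hypothetical helper: 4x4 homogeneous scaling."""
    return np.diag([sx, sy, sz, 1.0])

volume = np.random.rand(64, 64, 64).astype(np.float32)

# Compose the stacked transforms into one matrix (applied right-to-left).
# For simplicity the rotation is about voxel (0, 0, 0), not the volume centre.
affine = rotation_z(np.pi / 8) @ scaling(1.2, 0.9, 1.0)

# Homogeneous identity grid of output coordinates, shape (4, N).
idx = np.indices(volume.shape).reshape(3, -1).astype(np.float32)
ones = np.ones((1, idx.shape[1]), dtype=np.float32)
grid = np.concatenate([idx, ones])

# Map output coordinates back to input coordinates and resample exactly once.
src_coords = (np.linalg.inv(affine) @ grid)[:3].reshape((3,) + volume.shape)
resampled = map_coordinates(volume, src_coords, order=1, mode="nearest")
```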
This ticket looks for a "vanilla", non-GPU implementation:
https://docs.scipy.org/doc/scipy/reference/generated/scipy.ndimage.map_coordinates.html
https://pytorch.org/docs/stable/nn.functional.html#grid-sample
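For comparison with the scipy route, a minimal sketch of the same kind of resampling with torch.nn.functional.grid_sample, which expects a batched NCDHW tensor and normalized coordinates in [-1, 1]; the shapes below are assumptions:

```python
import torch
import torch.nn.functional as F

# Assumed input: a single-channel 3D volume, batched as (N, C, D, H, W).
volume = torch.rand(1, 1, 64, 64, 64)

# Identity sampling grid in normalized [-1, 1] coordinates, shape (N, D, H, W, 3).
theta = torch.eye(3, 4).unsqueeze(0)          # identity affine, shape (N, 3, 4)
grid = F.affine_grid(theta, volume.shape, align_corners=True)

# Resample at the grid locations (mode="bilinear" means trilinear for 5D inputs).
resampled = F.grid_sample(volume, grid, mode="bilinear", align_corners=True)
```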