`uncertainties` seems to have a very large performance impact when creating mid- to large-size arrays. For example, creating a 2048x2048 array with uncertainties, which is common for astronomical images, took around 24 s per array. According to the profiler, the overhead comes almost entirely from the creation of the `UFloat` instances:

Is there any way to handle arrays of uncertainties without paying this overhead? Is subclassing numpy `ndarray`, instead of creating an `ndarray` of `UFloats`, a viable workaround to speed up the code?
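For reference, a minimal sketch of the kind of construction I mean (the exact code in my pipeline differs; the values here are just stand-ins for a CCD frame and its per-pixel errors):

```python
import numpy as np
from uncertainties import unumpy

# Hypothetical stand-ins for a 2048x2048 image and its per-pixel standard deviations.
nominal = np.random.normal(1000.0, 50.0, size=(2048, 2048))
std_dev = np.full((2048, 2048), 5.0)

# Each element becomes its own UFloat object, which is where the profiler
# shows the time going.
frame = unumpy.uarray(nominal, std_dev)
```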
Thank you for the feedback. "Large" arrays with uncertainties are not fast with this package.
There is a similar issue on the subject, so I'll close this one after this comment, but feel free to re-open it: #57. Handling fully correlated uncertainties in arrays in a fast way requires some thinking. For example, if you invert a matrix with 4 million elements (the size in your example), each of the 4 million output elements depends on all 4 million input elements in a specific way: that is a huge amount of data (of the order of 10^13 derivative terms), and therefore requires a lot of computation.
One option (to be defined more precisely) would be to handle separately, and in a fast way, some special cases like yours (initialization) and some simple operations (those that leave each array element depending on only a few variables).
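As a very rough illustration of that "special case" idea (nothing like this exists in the package today; `FastUncertainArray` and its methods are made-up names): one could keep the nominal values and standard deviations as two plain NumPy arrays for uncorrelated element-wise operations, and only fall back to full `UFloat` objects when correlation tracking is actually needed.

```python
import numpy as np
from uncertainties import unumpy

class FastUncertainArray:
    """Sketch of an uncorrelated fast path: two plain float arrays."""

    def __init__(self, nominal, std_dev):
        self.nominal = np.asarray(nominal, dtype=float)
        self.std_dev = np.asarray(std_dev, dtype=float)

    def __add__(self, other):
        # Element-wise sum of independent quantities: variances add.
        return FastUncertainArray(
            self.nominal + other.nominal,
            np.hypot(self.std_dev, other.std_dev),
        )

    def to_uarray(self):
        # Fall back to full UFloat objects only when correlations matter.
        return unumpy.uarray(self.nominal, self.std_dev)
```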
Any idea is welcome at this stage (probably best as comments in the other issue linked above). Thanks!