The current code for reading stat data is really slow. Instead, create an index on the `index` column of each 5min and daily h5 file using `h5.root.data.cols.index.createIndex()`. This is a one-time operation (but also fix `update_archive.py` for the path where it creates a stat file fresh).
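The one-time indexing step might look like the sketch below. It uses the modern PyTables snake_case API (`create_index()`, equivalent to the older `createIndex()`); the file name and table dtype here are stand-ins for the real archive files, not their actual layout:

```python
import numpy as np
import tables  # PyTables

# Stand-in for one 5min/daily stats file; the dtype is illustrative only.
dat = np.zeros(1000, dtype=[("index", "i8"), ("mean", "f8")])
dat["index"] = np.arange(1000)

with tables.open_file("demo_5min.h5", mode="w") as h5:
    h5.create_table("/", "data", obj=dat)

# One-time operation: index the 'index' column so queries on it use a
# binary search over the column index instead of a full-table scan.
with tables.open_file("demo_5min.h5", mode="a") as h5:
    col = h5.root.data.cols.index
    if not col.is_indexed:
        col.create_index()
```

Guarding on `is_indexed` makes the operation safe to re-run, which matters for the `update_archive.py` path that creates stat files fresh.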
After this update, turn the lookup around: compute `index_start` and `index_stop` from `tstart` and `tstop`, then get the required rows with `readWhere(...)`. This appears to reduce read times for short queries to less than 1 microsec, vs. 225 microsec now.
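The turned-around query could be sketched as follows. The 328 s sample spacing, the `index = time // dt` relationship, and the `read_stats` helper are all assumptions for illustration, not the archive's actual code:

```python
import numpy as np
import tables  # PyTables

# Assumed sample spacing (secs) of the 5min stats files; daily files
# would use 86400.  This value is an assumption for illustration.
DT_5MIN = 328.0

def read_stats(filename, tstart, tstop, dt=DT_5MIN):
    """Read stat rows covering tstart..tstop (hypothetical helper).

    Assumes each row's 'index' column equals int(row_time // dt), so the
    row range can be computed arithmetically instead of scanning times.
    """
    index_start = int(tstart // dt)
    index_stop = int(tstop // dt) + 1
    with tables.open_file(filename, mode="r") as h5:
        # read_where() picks up index_start / index_stop from the local
        # namespace and uses the column index (if present) for the search.
        rows = h5.root.data.read_where(
            "(index >= index_start) & (index < index_stop)")
    return rows
```

With the column indexed, `read_where()` resolves the condition via a binary search on the index rather than reading every row, which is where the speedup for short queries would come from.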