Currently, the QuakeLib library reads HDF5 event/simulation files entirely into memory, regardless of size. Some of our simulations exceed 10 GB, so producing even a simple PyVQ plot can require roughly 20 GB of RAM and takes a very long time.
We need an improved model that uses, say, the h5py Python module to read the event file incrementally. One approach: read roughly 1000 events at a time, compute whatever we need, keep only the plot-able results, and delete the events from memory ("del" in Python) before reading the next batch.
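The batched approach described above might look something like the sketch below. The file name, dataset name (`event_magnitude`), and the specific reduction (a running maximum) are all hypothetical placeholders, since the actual QuakeLib HDF5 layout and the quantities PyVQ needs will differ; the point is only that each slice is read on demand and dropped before the next one.

```python
import h5py
import numpy as np

# Hypothetical names for illustration only; the real QuakeLib
# event-file layout may differ.
FNAME = "demo_events.h5"
CHUNK = 1000  # events read per batch

# Build a small demo file so the sketch is self-contained.
rng = np.random.default_rng(0)
with h5py.File(FNAME, "w") as f:
    f.create_dataset("event_magnitude", data=rng.uniform(4.0, 8.0, size=5000))

def chunked_max_magnitude(fname, chunk=CHUNK):
    """Stream the dataset in fixed-size slices instead of loading it whole."""
    with h5py.File(fname, "r") as f:
        mags = f["event_magnitude"]  # h5py Dataset handle: no data read yet
        best = -np.inf
        for start in range(0, mags.shape[0], chunk):
            batch = mags[start:start + chunk]  # only this slice is read into RAM
            best = max(best, batch.max())      # keep only the plot-able reduction
            del batch                          # free the slice before the next read
        return best

print(chunked_max_magnitude(FNAME))
```

Indexing an `h5py.Dataset` with a slice reads just that region from disk, so peak memory stays near one chunk's worth of events rather than the whole file.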