I saw that the pangeo-hubs disk, sized to 200 GB, is only using 5.8 GB, and that prometheus-server peaks at ~19 GB of memory when starting up. This makes me think that the prometheus helm chart's default disk size of 8 GB is sufficient and we shouldn't need to adjust it.
For mybinder.org-deploy, which has lots of pods and a node with 2000 GB of SSD allocated, we have a 30.5G /data folder and a 30Gi memory request. Overall, I think we have significantly over-allocated storage space.
Change the value so new clusters get it, and copy the current sizes into existing clusters' support chart config, since a PVC's disk space request can't be reduced.
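As a sketch of what the per-cluster override could look like: the prometheus community helm chart exposes the PVC size under `server.persistentVolume.size` (its upstream default is 8Gi). The exact nesting under the support chart and the 200Gi value here are assumptions, to be matched against the cluster's current PVC:

```yaml
# Hypothetical support chart values override for an existing cluster,
# pinning the already-provisioned size so helm doesn't try to shrink it.
prometheus:
  server:
    persistentVolume:
      size: 200Gi  # must match (or exceed) the existing PVC's request
```

New clusters would instead inherit the reduced default from basehub.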
I also manually increased the size of the disk (with kubectl -n support edit pvc), but that wasn't the problem. However, we need to persist the change regardless, so here it is.
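For reference, the manual resize above can also be done non-interactively with `kubectl patch`: PVC storage requests can be increased in place (never decreased) by raising `spec.resources.requests.storage`, provided the storage class allows volume expansion. The PVC name and size below are assumptions; check `kubectl -n support get pvc` for the real name. The command is echoed rather than executed so it can be reviewed first:

```shell
# Hypothetical PVC name -- verify with: kubectl -n support get pvc
PVC_NAME="support-prometheus-server"
NEW_SIZE="200Gi"

# Print the resize command for review; drop the `echo` to actually run it.
echo kubectl -n support patch pvc "$PVC_NAME" \
  --patch "{\"spec\":{\"resources\":{\"requests\":{\"storage\":\"$NEW_SIZE\"}}}}"
```

Note this only persists until the next `helm upgrade` unless the same size is also recorded in the chart values.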
2df2992 added the 100GB size configuration defined in basehub.
consideRatio changed the title from "Tune prometheus-server disk size default" to "basehub: tune prometheus-server's configured disk size (currently 100G)" on Feb 17, 2023.
consideRatio changed the title from "basehub: tune prometheus-server's configured disk size (currently 100G)" to "basehub: reduce prometheus-server's configured disk size (currently 100G)" on Feb 17, 2023.
As I go through the manual process of moving data from pd-standard spinning disks to pd-balanced (#3112), I'll also try to size the disks appropriately wherever possible.