Andrey Mekin edited this page Apr 11, 2016 · 4 revisions

The Storage Monitoring is designed to help administrators of the Omaha server track storage space and manage it. Currently, it supports S3 and local file systems; we recommend using the latter only in a development environment.

Monitoring page:

This page contains a graphical representation of space usage. The information is collected every 60 seconds and stored using Django's cache framework, so keep this in mind when choosing a cache backend.
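Because the usage data lives in Django's cache, the cache backend is configured in settings.py. A minimal sketch, assuming a single Memcached instance (the backend and location here are illustrative, not the project's required configuration):

```python
# settings.py (sketch) -- pick any real cache backend; the dummy cache
# stores nothing and would make the monitoring page permanently empty.
CACHES = {
    "default": {
        "BACKEND": "django.core.cache.backends.memcached.MemcachedCache",
        "LOCATION": "127.0.0.1:11211",  # assumed Memcached host:port
    }
}
```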

The data is represented in two different ways:

  • Used Space / Limit displays the ratio of used space to the maximum limit for each type of object. Limits can be set on the “Preferences” page.
  • Percentage chart displays each object type’s share of the total occupied space.

Screenshot

Storage limits:

The Storage Monitoring provides the ability to set logical limits for storage. All available settings can be found on the “Preferences” page. The Omaha server runs tasks that check these limits on a schedule defined by the CELERYBEAT_SCHEDULE setting in settings.py.

List of tasks:

  • Deletion of crashes and feedbacks older than X days. Executed every 24 hours by default.
  • Deletion of crashes and feedbacks until the storage size drops to X Gb (performed sequentially for each type of object). Executed every 60 minutes by default.
  • Deletion of crash duplicates. Removes all crashes with the same signature except the X most recent. Executed every 24 hours by default.
  • Calculation of space usage. Saves the results in the cache and checks size limits, producing Sentry notifications when a limit is exceeded.
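The tasks above map onto CELERYBEAT_SCHEDULE entries in settings.py. A sketch of what such a schedule could look like; the task module paths and entry names here are assumptions, so check your own settings.py for the actual values:

```python
from datetime import timedelta

# settings.py (sketch) -- the task dotted paths below are hypothetical
# placeholders, not the Omaha server's real task names.
CELERYBEAT_SCHEDULE = {
    'delete-old-crashes-and-feedbacks': {
        'task': 'omaha_server.tasks.delete_older_than',        # hypothetical
        'schedule': timedelta(hours=24),                       # every 24 hours
    },
    'delete-until-size-limit': {
        'task': 'omaha_server.tasks.delete_size_exceeded',     # hypothetical
        'schedule': timedelta(minutes=60),                     # every 60 minutes
    },
    'delete-duplicate-crashes': {
        'task': 'omaha_server.tasks.delete_duplicate_crashes', # hypothetical
        'schedule': timedelta(hours=24),                       # every 24 hours
    },
    'monitor-storage-usage': {
        'task': 'omaha_server.tasks.monitor_storage_usage',    # hypothetical
        'schedule': timedelta(seconds=60),                     # every 60 seconds
    },
}
```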

Note that all these limits are logical: they can be exceeded for some time until a scheduled task does its job, so they won’t protect you from disk overflow. But since S3 has no formal size limits, this feature can help you save money.

Manual Cleanup:

Manual Cleanup gives administrators the ability to configure and launch a cleanup process that purges old or unneeded files by particular conditions. The user can specify the following deletion conditions:

  • Maximum age (days) removes objects older than X days,
  • Purge used space (Gb) reduces the total file size to X Gb. It sorts data by age and removes files starting with the oldest.

The Crashes tab has an additional option:

  • Maximum amount of duplicates calculates and removes excess duplicates of crashes.
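The duplicate condition keeps only the X most recent crashes per signature. A plain-Python sketch of that selection logic (the real task operates on ORM querysets; the function name and tuple shape here are illustrative):

```python
from collections import defaultdict

def find_excess_duplicates(crashes, keep):
    """Given (signature, timestamp) pairs, return the crashes to delete,
    keeping only the `keep` most recent per signature."""
    by_signature = defaultdict(list)
    for crash in crashes:
        by_signature[crash[0]].append(crash)

    to_delete = []
    for group in by_signature.values():
        group.sort(key=lambda c: c[1], reverse=True)  # newest first
        to_delete.extend(group[keep:])                # everything past the cutoff
    return to_delete
```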

All cleanup tasks are executed asynchronously, and a special Sentry notification indicates the beginning of execution. Conditions are checked and processed in the following order:

  • Maximum amount of duplicates,
  • Maximum age (days),
  • Purge used space (Gb).

If a condition isn’t set, the corresponding step is skipped.
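The "Purge used space" step from the order above boils down to removing the oldest objects until the total size fits under the limit. A minimal sketch of that step, using a hypothetical in-memory record in place of the real stored files:

```python
from dataclasses import dataclass

@dataclass
class StoredObject:  # hypothetical stand-in for a crash/feedback record
    created: int     # creation timestamp (larger = newer)
    size_gb: float

def purge_to_limit(objects, limit_gb):
    """Delete oldest objects until the total size is at most limit_gb.
    Returns the removed objects; `objects` is mutated in place."""
    objects.sort(key=lambda o: o.created)  # oldest first
    total = sum(o.size_gb for o in objects)
    removed = []
    while objects and total > limit_gb:
        victim = objects.pop(0)            # drop the oldest
        total -= victim.size_gb
        removed.append(victim)
    return removed
```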

Sentry notifications:

All Sentry notifications associated with the Storage Monitoring have the [Limitation] prefix, which allows filtering and managing such notifications. Each notification has a unique signature (new events produce separate e-mails) and contains a link to the Splunk server.

Cleanup logs on Splunk:

Splunk entries contain information about each removed element (in particular its creation date, id, signature, etc.) and general information about the cleanup task (e.g., the reason for deletion, the number of removed files, etc.).