Replies: 4 comments 3 replies
-
Hi. There are a few possible reasons for this behaviour. One thing to keep in mind is that backfill uploads blocks to long-term storage. Blocks in long-term storage are only queried after the compactor finds them and includes them in the "bucket index", and after that queriers and store-gateways need to reload the bucket index. This can take up to 30 minutes in the default configuration. (Config options:
Another possibility is that queriers don't actually query blocks from long-term storage if the query falls within the most recent 12h (e.g. a query like "now-3h ... now"). In that case queriers only ask ingesters for data, but a block uploaded via backfill is not stored in ingesters. (Config options:
To flush the results cache you can simply restart the memcached server, or execute
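For example, assuming the results cache is a standard memcached instance (the service name and port below are hypothetical; adjust them to your deployment), the cache can be invalidated with memcached's built-in `flush_all` command:

```shell
# flush_all is memcached's protocol command to invalidate every cached entry.
# The hostname below is a hypothetical Kubernetes service name.
printf 'flush_all\r\nquit\r\n' | nc mimir-results-cache.mimir.svc.cluster.local 11211
```

The server replies `OK` when the flush succeeds.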
-
Thanks for the insights, and for pointing out the configuration parameters related to this topic. Since I started working on this yesterday, I now see the data 👍
-
@asaf400: Can you please share your Mimir config? Also, how did you integrate Mimir with Grafana? I am stuck: remote write isn't working as expected and I cannot see the data in Grafana.
-
@asaf400 how long did it take for the data you backfilled via Mimir to appear? I completed a backfill about 3 hours ago and refreshed the results-cache via
I see the recent data that Prometheus is remote-writing, but I'm unable to see any historical data.
-
I am not sure where to post this:
it's both a docs issue and a help & support community request,
so sorry if this is spam. Here goes:
Hello, after setting up Kube-Prometheus-Stack + Mimir (single tenant) on a local dev k8s cluster*,
I can see that Prometheus pushes new data into Mimir successfully, but I would also like to 'import' / backfill
the old Prometheus data that was present locally before remote_write was enabled in the config.
Since I wanted to upload a very small time range, a metrics period of less than 2 hours (the Prometheus chunk commit interval),
such that Prometheus hadn't even created a complete TSDB 'chunk' for it,
I used the Prometheus snapshot API call to manually create a TSDB snapshot.
I then copied the snapshot data directories, which look like
and used the backfill command to upload:
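For reference, a sketch of the two steps I ran (the hostnames, tenant id, and paths below are illustrative, not my exact values):

```shell
# 1. Create a TSDB snapshot (requires Prometheus to be started with
#    --web.enable-admin-api); the response body contains the snapshot name.
curl -XPOST http://prometheus:9090/api/v1/admin/tsdb/snapshot

# 2. Upload the snapshot's block directories to Mimir with mimirtool backfill.
#    --id is the tenant id; with multitenancy disabled Mimir uses "anonymous".
mimirtool backfill \
  --address=http://mimir:8080 \
  --id=anonymous \
  ./snapshots/<snapshot-name>/<block-id>
```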
But the data doesn't show up in Grafana when queried.
I have 2 Explore tabs, one pointed at the Prometheus instance
and the other pointed at the Mimir instance (read endpoint),
with the same simple query:
container_memory_usage_bytes{pod="application"}
but the Mimir tab only ever shows the data since remote_write was enabled,
and nothing from before, while Prometheus has the complete range of data.
Screenshot of Prometheus results:
Screenshot of Mimir results (after the TSDB snapshot block was uploaded, only the fresh data since remote_write was enabled is available):
While reading the docs, I got the notion that I might be required to flush the results-cache,
but I do not know how to perform such an action.
Under these docs articles:
https://grafana.com/docs/mimir/latest/operators-guide/configure/configure-tsdb-block-upload/
https://grafana.com/docs/mimir/latest/operators-guide/reference-http-api/
https://grafana.com/docs/mimir/latest/operators-guide/tools/mimirtool/#backfill
Most importantly this quote:
but unfortunately the docs do not state exactly how to achieve this;
they do, however, state that it's part of the query mechanism..?
and in the API docs there is this API: GET,POST /ingester/flush
https://grafana.com/docs/mimir/latest/operators-guide/reference-http-api/#flush-chunks--blocks
Is that related?
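In case it helps, my understanding is that calling that endpoint would just be an HTTP request against an ingester (the hostname below is hypothetical; 8080 is Mimir's default HTTP port):

```shell
# Trigger a flush of the ingester's in-memory TSDB data to long-term storage.
# Hypothetical pod name; point this at an actual ingester instance.
curl -X POST http://mimir-ingester-0:8080/ingester/flush
```

Though from the docs this appears to flush ingester blocks to object storage, not clear the query results cache, so I'm unsure it's what I need.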
* local cluster used in order to facilitate understanding of the migration from pure Prometheus --> Mimir in our production clusters
Edit: multitenant=false