Failed to query trace in s3 storage, and index.json.gz has not been updated for a long time. #3369
Comments
It's hard to say. Let's start by reviewing your compactor logs. The compactor is the component responsible for updating the tenant index, and its logs may contain clues about why your index is so out of date. cc @zalegrala
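One quick way to confirm how stale the index actually is, is to check the object's timestamp directly in the backend. A minimal sketch, assuming the AWS CLI, a placeholder bucket name `tempo-traces`, and the `single-tenant` tenant used by the tempo-cli command later in this issue:

```sh
# Hypothetical bucket name; the tenant index lives at <tenant>/index.json.gz.
# The compactor should refresh this object on each poll cycle, so a very old
# timestamp here matches the symptom described in this issue.
aws s3 ls s3://tempo-traces/single-tenant/index.json.gz
```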
It looks like #3224, which contains an important fix for the PR linked above, hasn't been released yet.
Hi Joe, I'm sorry that we did not save the complete logs. Here is a portion of the compactor logs.
There were no other error logs, and no logs like 'listing blocks complete'.
Checking a little closer, it looks like the polling change is also unreleased. Are you overriding the image in your Helm values? v2.3.1...main The fix above, as you mention, will help the compactor, but not the
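For context on overriding the image, a hypothetical sketch of pinning a released tag through Helm rather than tracking main; the chart name and the `tempo.image.tag` key are assumptions based on the tempo-distributed chart and may differ for the chart actually in use:

```sh
# Assumes the Grafana Helm repo is added under the alias "grafana" and that the
# tempo-distributed chart is in use (its image tag is set via tempo.image.tag).
helm upgrade tempo grafana/tempo-distributed --reuse-values \
  --set tempo.image.tag=2.3.1
```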
Yup - this was a bummer. I used an image from Docker Hub that was a little bit after the 2.3.0 release, and it had this issue. Since I was new to Tempo, I thought I was screwing up the configuration of the S3 backend. I finally pieced together that the index.json.gz file was missing and that the compactor was responsible for creating/updating that file. I deployed 2.3.0-rc0 from Docker Hub and things work great!
We keep release branches, so not all commits to main get released right away with the immediate next release. If you want to run only released images, stick to the tagged versions but drop the leading v. Was this the same issue as originally reported? Was the image you used from main? Please correct me if I misunderstood.
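For example, pulling a released image from Docker Hub (the Docker tag drops the leading v that the corresponding git tag carries):

```sh
# Released images are tagged without the "v" prefix used by git tags (v2.3.1).
docker pull grafana/tempo:2.3.1
```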
This issue has been automatically marked as stale because it has not had any activity in the past 60 days. |
To Reproduce
Steps to reproduce the behavior:
Expected behavior
Environment:
Additional Context
It looks like the v2.3.0 image includes the commit with the polling improvements (#2652).
By executing "./tempo-cli list blocks single-tenant -c tempo.yaml" and printing each blockId as it was obtained,
we found that the listing seemed to be stuck in an endless loop, and there were duplicate blockIds in the log (see the sketch below).
After rolling back the image to v2.3.0-rc, everything went back to normal.
Is it possible there is a bug in the listBlocks method, or is something wrong with our environment or configuration?
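A rough sketch of one way to surface the duplicates from the CLI output, assuming each block ID appears as a standard UUID somewhere in the listing:

```sh
# Extract anything that looks like a block UUID from the listing and report
# any ID that appears more than once; a non-empty result matches the
# endless-loop behavior described above.
./tempo-cli list blocks single-tenant -c tempo.yaml \
  | grep -oE '[0-9a-f]{8}(-[0-9a-f]{4}){3}-[0-9a-f]{12}' \
  | sort | uniq -d
```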