compress_chunk() blocks other queries on the table for a long time #2732
Comments
Ok, so this issue is actually much worse than originally described: the blocking also affects normal read queries on the table. It looks like #2669 might be related, though that one is about decompress rather than compress.
I'm also having this issue when using the backfill script on a large data set. The compress/decompress is locking read queries on the hypertable that contains the compressed chunks. I'm currently running Timescale 2.4.2 on Postgres 13. Is there a way to prevent this lock on read queries somehow?
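One hedged mitigation sometimes suggested (it is not a fix for the lock held while a chunk is actually being compressed) is to run the backfill's decompress/compress calls with a short `lock_timeout`, so that if they cannot acquire their locks quickly they fail and can be retried later, instead of sitting in the lock queue with readers piling up behind them. A minimal sketch; the chunk name and timeout value below are illustrative, not from this thread:

```sql
-- Illustrative sketch: fail fast instead of queuing for locks.
SET lock_timeout = '2s';
SELECT decompress_chunk('_timescaledb_internal._hyper_1_10_chunk');
-- ... run the backfill INSERTs for that chunk here ...
SELECT compress_chunk('_timescaledb_internal._hyper_1_10_chunk');
RESET lock_timeout;
```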
I'm having this issue when my policy-defined compression job runs. It blocks all writes to the table until the job is done, even though the chunk being compressed is an old one that is not being written to. This makes compression unusable for high-throughput workloads...
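This doesn't remove the locking, but one workaround is to at least control when it happens: look up the compression policy's background job and trigger it manually during a low-traffic window rather than relying on its default schedule. A sketch, assuming TimescaleDB 2.x; the job id 1000 is illustrative:

```sql
-- Find the compression policy job attached to the hypertable.
SELECT job_id, hypertable_name, schedule_interval, next_start
FROM timescaledb_information.jobs
WHERE proc_name = 'policy_compression';

-- Trigger it manually during a quiet window (job id is illustrative).
CALL run_job(1000);
```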
I am experiencing the same thing. How is compression deployed in the real world? Is concurrently compressing a chunk and being able to read from the table a paid feature?
At this point, I might recommend that Timescale change their official recommendation, and the default (7 days), for the size of chunks. A few reasons for this.
So it seems like the recommended/default chunk size was established back when Timescale was new and has not kept up with the state of things. And honestly, due to all of the above, I'm considering shrinking our chunks down to 15 minutes.
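For reference, a sketch of how that interval change would be applied (the hypertable name `conditions` is illustrative); note that it only affects chunks created after the call, and existing chunks keep their original size:

```sql
-- Only affects chunks created from now on; existing chunks are unchanged.
-- 'conditions' is an illustrative hypertable name.
SELECT set_chunk_time_interval('conditions', INTERVAL '15 minutes');
```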
We are experiencing the same thing: we have some ETL pipelines that insert into hypertables and also need to update/delete some data. If a compression job is running, the update/delete query is blocked until compression is complete. It should be noted that the rows being updated/deleted do not reside in the chunk being compressed; they are in the uncompressed "tip" of the table.
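A sketch of how to check what the compressing backend actually has locked, which helps confirm whether the blocked UPDATE/DELETE is waiting on the hypertable itself rather than on the chunk being compressed. This is plain PostgreSQL, nothing TimescaleDB-specific; the pid is illustrative:

```sql
-- List the locks held by the backend running the compression
-- (pid 12345 is illustrative; take it from pg_stat_activity).
SELECT locktype,
       relation::regclass AS relation,
       mode,
       granted
FROM pg_locks
WHERE pid = 12345
ORDER BY granted DESC;
```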
Are there any updates on this front? We'd like to backfill some data at our company, but the locking troubles described above are a huge blocker, as they prevent inserts at the head of the table.
I am also experiencing significant disruption due to this locking behavior. I can tolerate the delay on new writes because I can ensure I'm not compressing the latest chunk, where all the writes are going. However, the read lock is causing deadlocks and connection exhaustion in Ignition SCADA, lately even forcing me to cycle the DB connection, as the deadlock is never released and the connection goes into permanent fault. That said, compression is almost essential for getting good query performance on my huge dataset, so I can't easily do without it. I wouldn't consider this just a performance enhancement, but rather an essential requirement for real-time systems. Appreciate your consideration.
Relevant system information:
- PostgreSQL version (output of postgres --version): postgres (PostgreSQL) 12.4 (Debian 12.4-1.pgdg90+1)
- TimescaleDB version (output of \dx in psql): timescaledb | 2.0.0-rc4

Describe the bug
When compressing a chunk with compress_chunk(), after a certain point in the process, any queries with chunks_detailed_size() will block (possibly other metadata queries as well, not sure).

To Reproduce
Steps to reproduce the behavior:
1. select compress_chunk('mychunk');
2. While the compression is still running, in another session: select * from chunks_detailed_size('mytable');
Expected behavior
Successful response in a short amount of time.
Actual behavior
On the data node:
...eventually timing out due to statement_timeout (in my case, 5 minutes).

Additional context
While I doubt it would be possible to not block at all, I think the blocking time should be reduced to a few seconds at most.
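For anyone reproducing this, a sketch of how to confirm who is holding things up, using plain PostgreSQL (nothing TimescaleDB-specific); in this scenario the blocker should turn out to be the compress_chunk() session:

```sql
-- Map each blocked backend to the session(s) blocking it.
SELECT blocked.pid              AS blocked_pid,
       left(blocked.query, 60)  AS blocked_query,
       blocking.pid             AS blocking_pid,
       left(blocking.query, 60) AS blocking_query
FROM pg_stat_activity AS blocked
JOIN LATERAL unnest(pg_blocking_pids(blocked.pid)) AS b(pid) ON true
JOIN pg_stat_activity AS blocking ON blocking.pid = b.pid;
```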