zstandard compression for parameters and results #5995
base: master
Conversation
… embedded liveaction doc in executions
st2common/tests/unit/migrations/test_v35_migrate_db_dict_field_values.py
@amanda11 @cognifloyd would you please take a look at this and give me some feedback? I am interested in whether the direction the code is going in is acceptable. We are now seeing numerous instances of Mongo documents being too large. I ran a check, and one of the larger objects we have would compress down to 0.5 MB from 10 MB using zstandard.
Minor nit.
I don't know why Circle CI is failing. I'm guessing something we rely on is no longer supported or was upgraded.
remove comment Co-authored-by: Jacob Floyd <[email protected]>
@cognifloyd The Circle CI is fixed: RabbitMQ needs priv=true and Redis needed to be pinned. Can you merge the PR?
I merged the packaging fixes.
I think zstandard compression is a good change, in case numbers across various types of datasets confirm it's indeed worth it / a good trade-off of CPU cycles used for compressing / decompressing those field values. We decided not to enable compression by default as part of #4846 to keep things simple and reduce the scope of those changes. I'm fine with enabling the compression by default now, in case the numbers still back this up. As far as this change goes:
This one I'm not too sure about yet, and I need to dig in and think about it some more. I think it would be good if we split this PR into two: one for the field compression changes and one for the field value / foreign key change. In the end, both changes are somewhat related, but they have vastly different implications. One was already benchmarked in the past and is backward / forward compatible; the other is not, could have large implications, and also requires a potentially expensive and breaking migration (which is not backward compatible).
Thanks for your contribution.
I think we are on the right track, but I added some comments on splitting this PR into two and potentially adding more micro benchmarks, etc.
Thanks @Kami for the review. I would be OK just leaving this as a breaking change. I don't have a lot of cycles to split it up. Thanks.
FWIW, we have forked over to this branch, along with removing the deep copies in Orquesta. We are seeing huge speed improvements. Also, no more "Mongo document too large" errors.
Why?
To mitigate (as much as possible) "document too large" Mongo exceptions; 16 MB is the limit for MongoDB document size.
Done.
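To illustrate the limit mentioned above, a rough stdlib-only sketch of the guard this mitigates. The 16 MB figure is MongoDB's documented BSON document size limit; serialized JSON length is used here as a crude stand-in for BSON size:

```python
import json

# MongoDB's maximum BSON document size.
MONGO_MAX_DOC_BYTES = 16 * 1024 * 1024

def exceeds_mongo_limit(doc: dict) -> bool:
    # JSON size only approximates BSON size, but it is close enough to
    # show why large uncompressed results hit the 16 MB ceiling.
    return len(json.dumps(doc).encode("utf-8")) > MONGO_MAX_DOC_BYTES

assert not exceeds_mongo_limit({"result": "ok"})
```

Compressing large parameter / result fields before they are stored shrinks the serialized document, keeping it under this ceiling.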
The ExecutionDB model for the liveaction parameter, as it is now a string as opposed to an embedded document.