chore(postgres): Optimize the database. #1842
Comments
edited: being considered in #2112
> Apply "integration" performance tests.

This is done in the https://github.com/waku-org/test-waku-query repo.
I'm reluctant to apply the @Menduist enhancement. We might still have a bottleneck in
Tests have been performed on the database itself with ~12 million rows. On the other hand, the bottleneck is within the
Thanks for adding this point @jm-clius!
We are concluding the Postgres optimization work for now.
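As a rough illustration of the kind of test discussed above, here is a minimal Python sketch (psycopg2) that seeds a large `messages` table with synthetic rows and times a representative store query. The schema, column names, and connection string are assumptions for illustration, not nwaku's actual ones.

```python
# Hypothetical benchmark sketch: seed a large "messages" table and time a
# representative store query. Schema and DSN are assumptions.
import os
import time
import psycopg2
from psycopg2.extras import execute_values

conn = psycopg2.connect("dbname=waku user=postgres password=postgres host=localhost")

with conn, conn.cursor() as cur:
    cur.execute("""
        CREATE TABLE IF NOT EXISTS messages (
            id           BYTEA  PRIMARY KEY,
            pubsub_topic TEXT   NOT NULL,
            payload      BYTEA  NOT NULL,
            stored_at    BIGINT NOT NULL
        );
    """)

    # Seed synthetic rows in batches; raise the loop bound towards ~12M rows.
    for batch in range(100):
        rows = [
            (os.urandom(32), "/waku/2/default-waku/proto", os.urandom(64),
             1_700_000_000 + batch * 10_000 + i)
            for i in range(10_000)
        ]
        execute_values(
            cur,
            "INSERT INTO messages (id, pubsub_topic, payload, stored_at) VALUES %s",
            rows,
        )

    # Time a typical "most recent messages on a topic" query.
    start = time.perf_counter()
    cur.execute("""
        SELECT payload, stored_at
        FROM messages
        WHERE pubsub_topic = %s
        ORDER BY stored_at DESC
        LIMIT 100;
    """, ("/waku/2/default-waku/proto",))
    cur.fetchall()
    print(f"query took {(time.perf_counter() - start) * 1000:.1f} ms")

conn.close()
```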
Background
When `nwaku` has the "store/archive" protocol mounted, it can store and retrieve historical messages. All this information is kept in a single table, `messages`. We need to optimize this.

Details
We need to get the maximum performance possible with regard to insert/select operations.
We need a rapid response when duplicate messages happen. For that, we may need to adapt how the message id is generated so that we achieve high selectivity.
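As a loose illustration of the "high selectivity" idea, a unique index (or primary key) on the message id lets Postgres resolve a duplicate insert with a single index probe. The sketch below assumes a simplified, hypothetical schema rather than nwaku's actual one.

```python
# Minimal sketch of duplicate handling via a unique message id.
# Schema and column names are hypothetical, not nwaku's actual ones.
import psycopg2

conn = psycopg2.connect("dbname=waku user=postgres password=postgres host=localhost")

with conn, conn.cursor() as cur:
    # A primary key on the message id gives the planner a highly selective
    # access path, so a duplicate is detected with one index probe.
    cur.execute("""
        CREATE TABLE IF NOT EXISTS messages (
            id           BYTEA  PRIMARY KEY,  -- e.g. a hash of the message
            pubsub_topic TEXT   NOT NULL,
            payload      BYTEA  NOT NULL,
            stored_at    BIGINT NOT NULL
        );
    """)

    # A duplicate id is rejected by the index itself; no table scan is needed.
    cur.execute("""
        INSERT INTO messages (id, pubsub_topic, payload, stored_at)
        VALUES (%s, %s, %s, %s)
        ON CONFLICT (id) DO NOTHING;
    """, (b"\x01\x02", "/waku/2/default-waku/proto", b"hello", 1_700_000_000))

conn.close()
```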
Tasks
- [ ] Rename the `messages` table to `MESSAGE` (that renaming caused issues in the existing `shards.test` fleet).
- [ ] Apply @Menduist's enhancement suggestions for more appropriate asynchronous handling: feat(common): added postgress async pool wrapper #1631 (comment). An illustrative sketch of the pool idea follows below.

ℹ️ We understand that query performance is acceptable when the "Waku Archive Query Duration" panel in Grafana shows <50ms.
ℹ️ For that, we will use the following repo: https://github.com/waku-org/test-waku-query (cc @richard-ramos)
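For illustration only, the following Python/asyncpg sketch shows the general shape of the async-pool idea from the task above, plus a crude check against the 50 ms target. nwaku itself is written in Nim and the real wrapper is the one referenced in #1631; the schema, DSN, and query here are assumptions.

```python
# Illustrative-only sketch of an async connection pool for store queries.
# Not nwaku's implementation; schema and DSN are assumptions.
import asyncio
import time
import asyncpg

QUERY = """
    SELECT payload, stored_at
    FROM messages
    WHERE pubsub_topic = $1
    ORDER BY stored_at DESC
    LIMIT 100;
"""

async def main() -> None:
    # A pool lets concurrent store queries share a bounded set of connections
    # instead of serializing on a single one.
    pool = await asyncpg.create_pool(
        dsn="postgresql://postgres:postgres@localhost/waku",
        min_size=1,
        max_size=8,
    )

    async def timed_query(topic: str) -> float:
        start = time.perf_counter()
        async with pool.acquire() as conn:
            await conn.fetch(QUERY, topic)
        return (time.perf_counter() - start) * 1000

    # Fire several queries concurrently and compare them against the 50 ms target.
    durations = await asyncio.gather(
        *(timed_query("/waku/2/default-waku/proto") for _ in range(10))
    )
    print([f"{d:.1f} ms" for d in durations], "target: < 50 ms")

    await pool.close()

asyncio.run(main())
```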
Related issue
#1604