Long MongoDB queries - performance #22193
Comments
Hi Emil and thanks for reporting this. There are a number of performance-related issues that you and Anton have reported recently. I am wondering if we should collect them up and add them to our list of things to be looked at?
Hi John,
I have the same issue too.
I suspect that this suboptimal query was introduced by this feature: 2b5831b
EDIT: And I think it would be really great if all search queries and statistics generation were dedicated to the secondary MongoDB node, which is idle all the time right now.
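(For reference, MongoDB itself expresses this kind of routing through read preferences. A minimal, purely illustrative connection-string sketch is below; the hostnames and replica set name are hypothetical placeholders, and whether Rocket.Chat would actually honor this for search and statistics queries is exactly the open question here.)

```js
// Generic MongoDB read-preference example, NOT a documented Rocket.Chat feature.
// Hostnames and replica set name are hypothetical placeholders.
const uri =
  "mongodb://mongo1.example.com:27017,mongo2.example.com:27017/rocketchat" +
  "?replicaSet=rs0&readPreference=secondaryPreferred";
```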
One query that is causing high CPU on our MongoDB is the one used by the "Search messages" feature inside a chat room. Sample query:
docsExamined: 1658624. I believe that an index using the room id (rid) would solve this problem.
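A minimal sketch of what such an index could look like in the mongo shell; the exact fields Rocket.Chat's message search filters on are an assumption here, so treat the key pattern as illustrative rather than the shipped index:

```js
// Illustrative only: leading with rid lets an in-room search scan just that room's
// messages instead of the whole rocketchat_message collection (1.6M+ docs examined above).
db.rocketchat_message.createIndex({ rid: 1, ts: -1 });
```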
Hey, @ankar84. Secondary, thankfully.
Are there other collections that would benefit from added indexes?
Another example of an expensive query:
Hey guys, the indexes that fix the slow queries will be released in 3.18.0, but they are already available in 3.18.0-rc.0 if you want to try it out. You can also apply the indexes manually if you don't want to try a new version (see https://github.com/RocketChat/Rocket.Chat/pull/22879/files#diff-7329ec5df7668119aeae5fc2b79061127fd3059bd0ff3b2e9eaf9c36236dbf2cR33-R34). If you try it out, please let me know if they help. Thanks.
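If anyone wants to apply an index by hand while waiting for the release, a rough mongo shell sketch follows; the key pattern here is inferred from the slow thread-listing query in this issue, not copied from the PR, so check the linked diff for the definitions that actually ship:

```js
// Illustrative only -- verify against the PR diff before running on a production database.
// Leading with rid lets the planner narrow the scan to one room before sorting by tlm.
db.rocketchat_message.createIndex(
  { rid: 1, tcount: 1, tlm: -1 },
  { background: true } // avoids blocking writes on MongoDB versions before 4.2
);
```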
Description:
Hi, after upgrading to 3.14.0 I see a lot of notifications in the MongoDB logs about long-running queries, as in the example below:
```
2021-05-28T10:46:50.657+0200 I COMMAND [conn322090] command rocketchat.rocketchat_message command: find { find: "rocketchat_message", filter: { _hidden: { $ne: true }, rid: "pwcbshayJiYEzQhJt", tcount: { $exists: true } }, sort: { tlm: -1 }, projection: { joinCode: 0, members: 0, importIds: 0, e2e: 0 }, limit: 25, returnKey: false, showRecordId: false, lsid: { id: UUID("13b58f4a-bb53-4300-8715-dc11ddc145dc") }, $clusterTime: { clusterTime: Timestamp(1622191610, 53), signature: { hash: BinData(0, B4E712374B9B6A2DF772871112E9BFFB109EB878), keyId: 6945063607709204546 } }, $db: "rocketchat" } planSummary: IXSCAN { tcount: 1, tlm: 1 } keysExamined:43586 docsExamined:43586 hasSortStage:1 cursorExhausted:1 numYields:340 nreturned:25 reslen:14672 locks:{ Global: { acquireCount: { r: 341 } }, Database: { acquireCount: { r: 341 } }, Collection: { acquireCount: { r: 341 } } } storage:{} protocol:op_msg 356ms
```
Before the upgrade, on version 3.9.7, there were no entries about this query in the logs. I think it may be the cause of the heaviest load on the DB.
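To see why this query is slow, it can be replayed from the log entry with explain(); this is a generic mongo shell diagnostic, not anything Rocket.Chat-specific:

```js
// Replay the thread-listing query from the log above and dump execution stats.
// keysExamined/docsExamined (43586) far above nReturned (25) shows that the
// { tcount: 1, tlm: 1 } index cannot narrow the scan to the requested room (rid).
db.rocketchat_message.find(
  { _hidden: { $ne: true }, rid: "pwcbshayJiYEzQhJt", tcount: { $exists: true } },
  { joinCode: 0, members: 0, importIds: 0, e2e: 0 }
).sort({ tlm: -1 }).limit(25).explain("executionStats");
```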
Also, the Meteor facts graph looks strange. At the times of these jumps in mongo-livedata time-spent-in-QUERYING-phase I see Rocket.Chat slowing down (greyed-out messages, for example).
EDIT: From time to time you can observe spikes in Meteor facts.
Let me know if you need more graphs or data for analysis.
Steps to reproduce:
Watch the Meteor facts graph in Metrics.
Expected behavior:
No long-running operations in the DB logs.
Actual behavior:
A lot of long-running queries.
Server Setup Information:
Client Setup Information:
Additional context
Relevant logs: