fix: Login token creation not respecting number of token limits #32216

Merged · 9 commits · Apr 20, 2024

Conversation

sampaiodiego
Member

The maximum login tokens limit per user has a huge impact on both Rocket.Chat and MongoDB performance, but since this endpoint doesn't trigger a real "login", the limit wasn't being applied there. Since the endpoint does create new login tokens, it should be treated like a login, which is why this can be considered a fix.
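The idea behind the limit is to cap the stored tokens at creation time (in MongoDB this is commonly done with `$push` plus a negative `$slice`, which keeps only the newest N array elements). A minimal in-memory sketch of that behavior — the names `trimLoginTokens` and `limit` are illustrative, not Rocket.Chat's actual API:

```javascript
// Sketch of capping stored login tokens: append the new token, then keep
// only the newest `limit` entries (analogous to `$push` with `$slice: -limit`).
// Hypothetical helper, for illustration only.
function trimLoginTokens(tokens, newToken, limit) {
  const all = [...tokens, newToken];
  // Drop the oldest tokens once the limit is exceeded.
  return all.slice(Math.max(0, all.length - limit));
}

// Example: a user already at the limit loses the oldest token.
const result = trimLoginTokens(
  [{ hashedToken: 'a' }, { hashedToken: 'b' }],
  { hashedToken: 'c' },
  2,
);
console.log(result.map((t) => t.hashedToken)); // [ 'b', 'c' ]
```

Doing the trim in the same write that inserts the token keeps the array bounded without a separate cleanup pass.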

I'm also deprecating the mentioned endpoint, since it was a feature already gated behind an environment variable for several reasons. Rocket.Chat has now evolved enough that this approach is no longer required.

I've also tested the performance impact of a user with too many login tokens; the results are below:
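The benchmarks below were driven with k6 against the `users.createToken` endpoint. The exact `create-token.js` script isn't included in the PR; a hypothetical reconstruction (the endpoint path is real, everything else is an assumption) could look like this — note it runs under the k6 runtime, not Node:

```js
// Hypothetical reconstruction of create-token.js (runs under k6, not Node).
import http from 'k6/http';
import { check } from 'k6';

const BASE_URL = __ENV.BASE_URL || 'http://localhost:3000';

export default function () {
  // users.createToken is the (now deprecated) endpoint under test.
  const res = http.post(
    `${BASE_URL}/api/v1/users.createToken`,
    JSON.stringify({ username: __ENV.USERNAME }),
    {
      headers: {
        'Content-Type': 'application/json',
        'X-Auth-Token': __ENV.AUTH_TOKEN,
        'X-User-Id': __ENV.USER_ID,
      },
    },
  );
  check(res, { 'status is 200': (r) => r.status === 200 });
}
```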

Performance impact

User starting with 50 tokens in DB

This is prior to the fix:

k6 run --duration 30s -e "USER_ID=..." -e "AUTH_TOKEN=..." -e "USERNAME=..." ./create-token.js

          /\      |‾‾| /‾‾/   /‾‾/   
     /\  /  \     |  |/  /   /  /    
    /  \/    \    |     (   /   ‾‾\  
   /          \   |  |\  \ |  (‾)  | 
  / __________ \  |__| \__\ \_____/ .io

  execution: local
     script: ./create-token.js
     output: -

  scenarios: (100.00%) 1 scenario, 1 max VUs, 1m0s max duration (incl. graceful stop):
           * default: 1 looping VUs for 30s (gracefulStop: 30s)


     data_received..................: 2.4 MB 78 kB/s
     data_sent......................: 664 kB 22 kB/s
     http_req_blocked...............: avg=10.49µs  min=1µs    med=5µs    max=10.76ms  p(90)=7µs     p(95)=8µs    
     http_req_connecting............: avg=380ns    min=0s     med=0s     max=930µs    p(90)=0s      p(95)=0s     
     http_req_duration..............: avg=12.02ms  min=3.55ms med=8.5ms  max=279.74ms p(90)=22.74ms p(95)=38.48ms
       { expected_response:true }...: avg=12.02ms  min=3.55ms med=8.5ms  max=279.74ms p(90)=22.74ms p(95)=38.48ms
     http_req_failed................: 0.00%  ✓ 0         ✗ 2441
     http_req_receiving.............: avg=161.97µs min=35µs   med=112µs  max=6.54ms   p(90)=203µs   p(95)=290µs  
     http_req_sending...............: avg=37.55µs  min=6µs    med=29µs   max=7.31ms   p(90)=46µs    p(95)=54µs   
     http_req_tls_handshaking.......: avg=0s       min=0s     med=0s     max=0s       p(90)=0s      p(95)=0s     
     http_req_waiting...............: avg=11.82ms  min=3.5ms  med=8.29ms max=279.48ms p(90)=22.26ms p(95)=38.25ms
     http_reqs......................: 2441   81.265407/s
     iteration_duration.............: avg=12.27ms  min=3.62ms med=8.73ms max=279.91ms p(90)=23.07ms p(95)=38.6ms 
     iterations.....................: 2441   81.265407/s
     vus............................: 1      min=1       max=1 
     vus_max........................: 1      min=1       max=1 


running (0m30.0s), 0/1 VUs, 2441 complete and 0 interrupted iterations
default ✓ [======================================] 1 VUs  30s

User starting with 10k tokens in DB

This is also prior to the fix, which is why the user was allowed to accumulate that many login tokens:

k6 run --duration 30s -e "USER_ID=..." -e "AUTH_TOKEN=..." -e "USERNAME=..." ./create-token.js

  execution: local
     script: ./create-token.js
     output: -

  scenarios: (100.00%) 1 scenario, 1 max VUs, 1m0s max duration (incl. graceful stop):
           * default: 1 looping VUs for 30s (gracefulStop: 30s)


     data_received..................: 513 kB 17 kB/s
     data_sent......................: 145 kB 4.8 kB/s
     http_req_blocked...............: avg=11.07µs  min=2µs     med=6µs     max=2.15ms   p(90)=9µs     p(95)=9µs     
     http_req_connecting............: avg=1.09µs   min=0s      med=0s      max=583µs    p(90)=0s      p(95)=0s      
     http_req_duration..............: avg=56.17ms  min=29.29ms med=37.26ms max=336.22ms p(90)=99.24ms p(95)=112.77ms
       { expected_response:true }...: avg=56.17ms  min=29.29ms med=37.26ms max=336.22ms p(90)=99.24ms p(95)=112.77ms
     http_req_failed................: 0.00%  ✓ 0         ✗ 532
     http_req_receiving.............: avg=196.16µs min=50µs    med=150µs   max=5.34ms   p(90)=254µs   p(95)=402.59µs
     http_req_sending...............: avg=45.17µs  min=13µs    med=37.5µs  max=1.1ms    p(90)=57µs    p(95)=60µs    
     http_req_tls_handshaking.......: avg=0s       min=0s      med=0s      max=0s       p(90)=0s      p(95)=0s      
     http_req_waiting...............: avg=55.92ms  min=29.22ms med=37.08ms max=335.94ms p(90)=99.11ms p(95)=111.97ms
     http_reqs......................: 532    17.694357/s
     iteration_duration.............: avg=56.48ms  min=29.4ms  med=37.51ms max=336.77ms p(90)=99.61ms p(95)=113ms   
     iterations.....................: 532    17.694357/s
     vus............................: 1      min=1       max=1
     vus_max........................: 1      min=1       max=1


running (0m30.1s), 0/1 VUs, 532 complete and 0 interrupted iterations
default ✓ [======================================] 1 VUs  30s

As you can see, the extra load was so heavy that only 532 new login tokens could be created in the same period.

Ensuring 50 tokens (after the change)

k6 run --duration 30s -e "USER_ID=..." -e "AUTH_TOKEN=..." -e "USERNAME=..." ./create-token.js

  execution: local
     script: ./create-token.js
     output: -

  scenarios: (100.00%) 1 scenario, 1 max VUs, 1m0s max duration (incl. graceful stop):
           * default: 1 looping VUs for 30s (gracefulStop: 30s)


     data_received..................: 4.2 MB 139 kB/s
     data_sent......................: 1.2 MB 39 kB/s
     http_req_blocked...............: avg=5.76µs   min=1µs    med=4µs    max=2.12ms   p(90)=6µs     p(95)=7µs    
     http_req_connecting............: avg=148ns    min=0s     med=0s     max=644µs    p(90)=0s      p(95)=0s     
     http_req_duration..............: avg=6.68ms   min=4.01ms med=5.21ms max=267.42ms p(90)=10.65ms p(95)=13.13ms
       { expected_response:true }...: avg=6.68ms   min=4.01ms med=5.21ms max=267.42ms p(90)=10.65ms p(95)=13.13ms
     http_req_failed................: 0.00%  ✓ 0          ✗ 4334
     http_req_receiving.............: avg=138.36µs min=18µs   med=94.5µs max=8.42ms   p(90)=183µs   p(95)=249µs  
     http_req_sending...............: avg=34.45µs  min=4µs    med=26µs   max=4.13ms   p(90)=42µs    p(95)=48µs   
     http_req_tls_handshaking.......: avg=0s       min=0s     med=0s     max=0s       p(90)=0s      p(95)=0s     
     http_req_waiting...............: avg=6.51ms   min=3.9ms  med=5.06ms max=267.33ms p(90)=10.33ms p(95)=12.9ms 
     http_reqs......................: 4334   144.440576/s
     iteration_duration.............: avg=6.89ms   min=4.11ms med=5.41ms max=267.55ms p(90)=10.93ms p(95)=13.41ms
     iterations.....................: 4334   144.440576/s
     vus............................: 1      min=1        max=1 
     vus_max........................: 1      min=1        max=1 


running (0m30.0s), 0/1 VUs, 4334 complete and 0 interrupted iterations
default ✓ [======================================] 1 VUs  30s

After the change, even with the additional operations needed to ensure the limits are respected, the endpoint was able to generate almost double the number of login tokens compared to before the fix, since the number of stored tokens no longer grows unbounded.

@sampaiodiego sampaiodiego requested a review from a team as a code owner April 15, 2024 22:04
Contributor

dionisio-bot bot commented Apr 15, 2024

Looks like this PR is ready to merge! 🎉
If you have any trouble, please check the PR guidelines


changeset-bot bot commented Apr 15, 2024

🦋 Changeset detected

Latest commit: 2a4075d

The changes in this PR will be included in the next version bump.

This PR includes changesets to release 32 packages
Name Type
@rocket.chat/meteor Patch
@rocket.chat/core-typings Patch
@rocket.chat/rest-typings Patch
@rocket.chat/apps Patch
@rocket.chat/core-services Patch
@rocket.chat/cron Patch
@rocket.chat/gazzodown Patch
@rocket.chat/livechat Patch
@rocket.chat/model-typings Patch
@rocket.chat/ui-contexts Patch
@rocket.chat/account-service Patch
@rocket.chat/authorization-service Patch
@rocket.chat/ddp-streamer Patch
@rocket.chat/omnichannel-transcript Patch
@rocket.chat/presence-service Patch
@rocket.chat/queue-worker Patch
@rocket.chat/stream-hub-service Patch
@rocket.chat/api-client Patch
@rocket.chat/license Patch
@rocket.chat/omnichannel-services Patch
@rocket.chat/pdf-worker Patch
@rocket.chat/presence Patch
rocketchat-services Patch
@rocket.chat/ddp-client Patch
@rocket.chat/fuselage-ui-kit Patch
@rocket.chat/models Patch
@rocket.chat/ui-avatar Patch
@rocket.chat/ui-client Patch
@rocket.chat/ui-video-conf Patch
@rocket.chat/uikit-playground Patch
@rocket.chat/web-ui-registration Patch
@rocket.chat/instance-status Patch



codecov bot commented Apr 15, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 55.43%. Comparing base (e96460d) to head (2a4075d).

Additional details and impacted files


@@             Coverage Diff             @@
##           develop   #32216      +/-   ##
===========================================
+ Coverage    55.37%   55.43%   +0.06%     
===========================================
  Files         2323     2329       +6     
  Lines        51295    51349      +54     
  Branches     10492    10503      +11     
===========================================
+ Hits         28405    28466      +61     
+ Misses       20387    20375      -12     
- Partials      2503     2508       +5     
Flag Coverage Δ
e2e 54.88% <ø> (+0.10%) ⬆️
unit 74.92% <100.00%> (+0.02%) ⬆️

Flags with carried forward coverage won't be shown.

KevLehman previously approved these changes Apr 15, 2024
@sampaiodiego sampaiodiego added this to the 6.8 milestone Apr 19, 2024
@ggazzo ggazzo added the stat: QA assured Means it has been tested and approved by a company insider label Apr 19, 2024
@dionisio-bot dionisio-bot bot added the stat: ready to merge PR tested and approved waiting for merge label Apr 19, 2024
@kodiakhq kodiakhq bot merged commit 17bc631 into develop Apr 20, 2024
43 checks passed
@kodiakhq kodiakhq bot deleted the fix-create-token-limits branch April 20, 2024 15:15