
Cache access information #443

Closed
htdvisser opened this issue Apr 2, 2019 · 3 comments
Labels: c/identity server (This is related to the Identity Server), performance (Something is slow or takes too much CPU/Memory/...), scalability (This could become a problem at scale)

@htdvisser (Contributor) commented:

Summary:

Results of (*IdentityServer).authInfo should be cached.

Why do we need this?

For performance reasons.

What is already there? What do you see now?

When ttn-lw-stack is started with all components attached, calls to the NS/AS/JS all use the modified IS rights hook that talks to the DB. As a result, a full SetEndDevice will do the entire rights check 4 times.

The IS rights hook calls (*IdentityServer).getRights, which is quite a heavy call. It calls authInfo, which looks up the access token or API key and verifies it (with PBKDF2), and entityRights, which derives the membership tree. entityRights already does some caching of membership trees so that we don't have to hit the database every time.

What is missing? What do you want to see?

We should avoid doing PBKDF2 hashing on every request. Ideally we should also avoid hitting the DB with API key (or Access Token and Client) GETs all the time.

How do you propose to implement this?

  • In authInfo we can add a check against Redis before calling auth.Password(...).Validate(...). That way we have already confirmed that the API key still exists before doing the expensive hash verification.
  • We can cache API key GETs for a short time.
  • We can cache Access Token and Client GETs for a short time.

Can you do this yourself and submit a Pull Request?

Yes, I can.

@htdvisser added the performance and scalability labels on Apr 2, 2019
@htdvisser added this to the Next Up milestone on Apr 2, 2019
@htdvisser self-assigned this on Apr 2, 2019
@johanstokking added the c/identity server label on May 28, 2019
@htdvisser modified the milestones: Next Up → July 2019 on Jun 25, 2019
@htdvisser modified the milestones: July 2019 → August 2019 on Jul 30, 2019
@htdvisser modified the milestones: August 2019 → September 2019 on Aug 22, 2019
@johanstokking modified the milestones: September 2019 → Next Up on Sep 11, 2019
@htdvisser modified the milestones: Next Up → Backlog on Nov 12, 2019
@htdvisser modified the milestones: Backlog → 2021 Q3 on Mar 31, 2021
@htdvisser (Author) commented:

More details in related issues: https://github.com/TheThingsIndustries/lorawan-stack/issues/1393 and #3304

@htdvisser modified the milestones: 2021 Q3 → v3.15.2 on Sep 23, 2021
@NicolasMrad modified the milestones: v3.15.2 → v3.16.0 on Oct 12, 2021

htdvisser commented Oct 12, 2021

Moving this to v3.16.0 so that we can first measure how much performance we can actually gain from this after the changes in #4732.

@htdvisser (Author) commented:

After doing some profiling of production deployments, it doesn't look like this will make much difference.

Access Token + Client / API key + User lookups in the database cost < 5 ms, so this isn't really an issue. PBKDF2 hashing was already made less time-consuming for Access Tokens and API keys by lowering the number of iterations in #3038. So I don't think there's much to gain here.
