Bug: Hitting `ThrottlingException` on `GetWorkGroup` with threads turned up #595
Comments
@dacreify @juliansteger-sc could you try the latest version from the main branch and verify that the fix solves your issue?
@nicor88 I installed from main. I noticed that the cache key includes the client object. Given the module-level lock here, I'm guessing there's only one instance of the client? Just confirming that I understand how the caching is working.
@dacreify `lru_cache` allows you to cache the result when the same inputs are passed multiple times.
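To illustrate the point above: `functools.lru_cache` memoizes a function by its arguments, so repeated lookups of the same work group hit the cache instead of the API. This is a minimal sketch, not the actual dbt-athena code; `fetch_work_group` is a hypothetical stand-in for the real Athena `GetWorkGroup` call.

```python
from functools import lru_cache

# Counts how many times the underlying "API" is actually invoked.
call_count = 0

def fetch_work_group(work_group: str) -> dict:
    """Hypothetical stand-in for athena_client.get_work_group()."""
    global call_count
    call_count += 1
    return {"WorkGroup": {"Name": work_group, "State": "ENABLED"}}

@lru_cache()
def get_work_group_cached(work_group: str) -> dict:
    # Repeated calls with the same work group name return the
    # cached result instead of re-invoking the API.
    return fetch_work_group(work_group)

# Simulate one lookup per model run, e.g. 100 models in a dbt project.
for _ in range(100):
    get_work_group_cached("primary")

print(call_count)  # -> 1: the underlying call ran only once
```

Note that `lru_cache` keys on *all* arguments, so if the client object were also a parameter, each distinct client instance would get its own cache entry — which is why the single, lock-guarded client instance matters here.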
Same for us, but the fix looks reasonable. Thanks for investigating & fixing!
Let's close this issue for now, given that either AWS or the caching behaviour mitigated the issue.
Discussed in #591
Originally posted by dacreify March 6, 2024
Recently we started hitting `ThrottlingException` on `GetWorkGroup` calls at this spot in the code: https://github.com/dbt-athena/dbt-athena/blob/a1b8db5de90b20557bcd5e0c51a30177bcddaa5f/dbt/adapters/athena/impl.py#L231

From CloudTrail it looks like `dbt-athena` winds up making a `GetWorkGroup` call for every model run. There's no documented quota for this call, but obviously we're hitting one. Actual `StartQueryExecution` can go 20/second or burst to 80/second, so `GetWorkGroup` definitely seems to be below that in any case.

Anyone else hit this? Can we cache the results of `GetWorkGroup` to avoid it?