feat: generate and use swagger typescript client [DET-3249 DET-3324 DET-3355] #691
Conversation
for later: it seems
The use case of the swagger API looks good for the logout example. We can keep evaluating other use cases, such as a GET with params, POST, and streaming (logs), as they get implemented.
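To make the logout use case concrete, here is a minimal sketch of calling the generated client. The `AuthenticationApi` class and `determinedLogout` method names are assumptions modeled on typical swagger-codegen typescript-fetch output, not confirmed names from the generated SDK.

```typescript
// Minimal sketch of the logout use case against the generated client.
// AuthenticationApi and determinedLogout are assumed names based on typical
// swagger-codegen typescript-fetch output; the actual generated names may differ.
import * as DetSwagger from '@determined-ai/api-ts-sdk';

// The generated client is typically instantiated once, optionally with a base path.
const authApi = new DetSwagger.AuthenticationApi(); // hypothetical generated class

export const logout = async (): Promise<void> => {
  // Generated methods wrap fetch and return typed promises, so callers get
  // compile-time checking instead of hand-written request/response typing.
  await authApi.determinedLogout(); // hypothetical generated method
};
```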
Great work! I think this could use a little more detail on how you've tested this change, and perhaps talk a bit about how we expect the new package to be versioned/maintained/published.
webui/react/Makefile
Outdated
# WARN this module also depends on the swagger generated api client
# which is not built or updated here
question: do we want to get rid of this or improve it in future work, or is it just a caution for other people reading this code? If the former, can we make a ticket and link it; if the latter, can we be a bit clearer about where one should look for more information?
hmm I don't see a good way of getting rid of this without more changes to our build system, or without introducing separate make targets and using different ones in CI vs isolated WebUI development vs local cluster builds. I can add more information.
Added wording to point the user to the target module. The info here could change if we make changes to the CI; I designed it this way to break up CI dependencies. If we don't want to do that, the process would be simpler but slower.
webui/api-ts-sdk/package.json
Outdated
@@ -0,0 +1,20 @@
{
  "name": "@determined-ai/api-ts-sdk",
  "version": "1.0.0",
blocking: Can we make sure we've considered what we want our numbering scheme to look like here? For example, should it map directly to our API version? (If so, maybe we start at 0.0.0 for the moment, until we're ready to declare API v1 "done"?)
Sidenote: the version probably wouldn't really matter until we start publishing the package on NPM. We could probably defer it until we come up with fine-grained versioning for our APIs.
I updated the version.
Any thoughts on the package name for the API TS SDK/binding?
LGTM, thanks!
- Docker (>= 19.03)
- Protoc (>= 3.0)
- Java (>= 7)
question: Do we need to announce in engineering once this lands that they need to update? Is this just if you want to work on the front-end?
hmm, needed by anyone who needs to build the npm package, publish it, or otherwise use it, so for the time being mostly anyone touching the frontend or wanting to run frontend tests.
Please announce in #engineering once this lands. Thanks!
@@ -1,14 +1,21 @@
/* eslint-disable @typescript-eslint/camelcase */
import * as DetSwagger from '@determined-ai/api-ts-sdk';
non-blocking: Feels weird to call out the technology as part of the variable name, but I don't have a better idea, so this is probably fine.
I agree, and I'm open to suggestions. DetApi makes the most sense to me, but we already have something like that. IMO once we are mostly on the swagger-generated client we'd switch it around 🤔
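For reference, a sketch of the aliasing options under discussion; the package name is taken from the diff above, and the `DetApi` alias is a hypothetical alternative, not existing code.

```typescript
// Current approach from the diff: alias the generated bindings after the technology.
import * as DetSwagger from '@determined-ai/api-ts-sdk';

// Hypothetical alternative discussed above: alias after the product API instead.
// This would collide with the existing DetApi-like module until the migration
// to the generated client is mostly complete.
// import * as DetApi from '@determined-ai/api-ts-sdk';
```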
Yup, please don't block on this.
The expconf resources.max_slots attribute is documented as managed by Slurm, but there was a partial (non-functional) implementation in DispatchRM as well. If we ever submitted max_slots worth of work the remainder would remain QUEUED forever. I did attempt to fix the existing support, but the obvious fixes were not sufficient. Since this is already documented as managed by Slurm, just remove the support and let the workload managers manage it.
Description
depends on #680
Test Plan
Commentary (optional)