
feat: generate and use swagger typescript client [DET-3249 DET-3324 DET-3355] #691

Merged
6 commits merged into determined-ai:master on Jun 25, 2020

Conversation

@hamidzr (Member) commented Jun 11, 2020

Description

depends on #680

Test Plan

  • make sure we can build the npm module: a Java step and a Node step.
  • make sure the WebUI builds and that we can use a generated endpoint in React (logout); see the sketch after this list.
    • click logout to test it, and check that the auth cookie is invalidated.
  • make sure CI builds in a reasonable time.
  • check that types and API comments come through in the generated TS code.
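To make the logout step concrete, here is a minimal sketch of how the generated client might be called. AuthenticationApi and logout() are assumed names for what the swagger generator emits; only the package name comes from this PR.

// Hypothetical usage of the generated client for the logout test step;
// AuthenticationApi/logout are assumed names, not confirmed API surface.
import * as DetSwagger from '@determined-ai/api-ts-sdk';

const authApi = new DetSwagger.AuthenticationApi();

const testLogout = async (): Promise<void> => {
  await authApi.logout();
  // After logout the auth cookie should be rejected: a follow-up
  // authenticated request is expected to fail (e.g. with a 401).
};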


@hamidzr hamidzr force-pushed the 3249-swagger-ts-client branch 2 times, most recently from 82cd024 to c5985bf on June 11, 2020 03:20
@hamidzr hamidzr marked this pull request as draft June 12, 2020 21:23
@hamidzr hamidzr force-pushed the 3249-swagger-ts-client branch 3 times, most recently from 4799c98 to 0ac4a59 on June 13, 2020 00:33
@hamidzr (Member, Author) commented Jun 13, 2020

For later: it seems proto get-deps does not get all the dependencies for that module: https://app.circleci.com/pipelines/github/determined-ai/determined/2476/workflows/b5b68796-3f99-4d58-801a-e6d915804251/jobs/50858

@hamidzr hamidzr marked this pull request as ready for review June 13, 2020 01:36
@hamidzr hamidzr force-pushed the 3249-swagger-ts-client branch 3 times, most recently from e91da6e to 2644c18 on June 13, 2020 01:40
@hamidzr hamidzr changed the title feat: generate and use swagger typescript client [DET-3249] feat: generate and use swagger typescript client [DET-3249 DET-3224 DET-3355] Jun 13, 2020
@hamidzr hamidzr changed the title feat: generate and use swagger typescript client [DET-3249 DET-3224 DET-3355] feat: generate and use swagger typescript client [DET-3249 DET-3324 DET-3355] Jun 13, 2020
@hkang1 (Contributor) left a comment

The swagger API use case looks good for the logout example. We can keep evaluating other use cases, such as GET with params, POST, and streaming (logs), as they get implemented; see the sketch below for the GET case.
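As a hypothetical sketch of the GET-with-params case: swagger generators typically emit one typed method per endpoint, turning path and query parameters into function arguments. ExperimentsApi and getExperiment below are illustrative names, not confirmed parts of the generated surface.

// Assumed shape of a generated GET-with-params call; the class and
// method names are illustrative only.
import * as DetSwagger from '@determined-ai/api-ts-sdk';

const experimentsApi = new DetSwagger.ExperimentsApi();

const fetchExperiment = async (id: number) => {
  // The path parameter becomes a typed argument, and the client
  // returns a typed response object generated from the swagger spec.
  return experimentsApi.getExperiment(id);
};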

@hkang1 hkang1 assigned ybt195 and unassigned hkang1 and ybt195 Jun 16, 2020
@justin-determined-ai (Contributor) left a comment

Great work! I think this could use a little more detail on how you've tested this change, and perhaps talk a bit about how we expect the new package to be versioned/maintained/published.

Comment on lines 6 to 7
# WARN this module also depends on the swagger generated api client
# which is not built or updated here
Contributor
question: do we want to get rid of this or improve it in future work, or is it just a caution for other people reading this code? If the former, can we make a ticket and link it? If the latter, can we be a bit clearer about where one should look for more information?

@hamidzr (Member, Author) Jun 18, 2020

Hmm, I don't see a good way of getting rid of this without more changes to our build system, or without introducing separate make targets and using different ones in CI vs. isolated WebUI development vs. local cluster builds. I can add more information.

@hamidzr (Member, Author)

Added wording to point the user to the target module. The info here could change if we make changes to the CI; I designed it this way to break up CI dependencies. If we don't want that, the process would be simpler but slower.

@@ -0,0 +1,20 @@
{
  "name": "@determined-ai/api-ts-sdk",
  "version": "1.0.0",
Contributor

blocking: Can we make sure we've considered what we want our numbering scheme to look like here? For example, should it map directly to our API version? (If so, maybe we start at 0.0.0 for the moment, until we're ready to declare API v1 "done"?)

@hamidzr (Member, Author) Jun 18, 2020

Sidenote: the version probably won't really matter until we start publishing the package on npm. We could defer it until we come up with fine-grained versioning for our APIs.

@hamidzr (Member, Author)

I updated the version.

@hamidzr (Member, Author) commented Jun 18, 2020

Any thoughts on the package name for the API TS SDK/binding?

@justin-determined-ai (Contributor) left a comment

LGTM, thanks!

- Docker (>= 19.03)
- Protoc (>= 3.0)
- Java (>= 7)
Contributor

question: Do we need to announce in engineering once this lands that they need to update? Is this just if you want to work on the front-end?

@hamidzr (Member, Author)

Hmm, it's needed by anyone who needs to build the npm package, publish it, or otherwise use it, so for the time being that's mostly anyone touching the frontend or wanting to run frontend tests.

Contributor

Please announce in #engineering once this lands. Thanks!

@@ -1,14 +1,21 @@
/* eslint-disable @typescript-eslint/camelcase */
import * as DetSwagger from '@determined-ai/api-ts-sdk';
Contributor

non-blocking: It feels weird to call out the technology as part of the variable name, but I don't have a better idea, so this is probably fine.

@hamidzr (Member, Author)

I agree; I'm open to suggestions. DetApi makes the most sense to me, but we already have something like that. IMO once we're mostly on the swagger-generated client we'd switch it around 🤔 One possible alias is sketched below.
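Purely for illustration, one way such a rename could eventually look is an import-site alias, so call sites read as DetApi without touching the generated code; this is not part of this PR, and AuthenticationApi is an assumed name.

// Hypothetical import-site alias; not part of this PR.
import * as DetApi from '@determined-ai/api-ts-sdk';

const authApi = new DetApi.AuthenticationApi(); // assumed generated class name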

Contributor

Yup, please don't block on this.

@hamidzr hamidzr merged commit 06d0850 into determined-ai:master Jun 25, 2020
@hamidzr hamidzr deleted the 3249-swagger-ts-client branch June 25, 2020 22:46
tayritenour pushed a commit to tayritenour/determined that referenced this pull request Apr 25, 2023
…-ai#691)

The expconf resources.max_slots attribute is documented as managed by Slurm, but there was a partial (non-functional) implementation in DispatchRM as well. If we ever submitted max_slots worth of work the remainder would remain QUEUED forever. I did attempt to fix the existing support, but the obvious fixes were not sufficient. Since this is already documented as managed by Slurm, just remove the support and let the workload managers manage it.
eecsliu pushed a commit to eecsliu/determined that referenced this pull request Jun 23, 2023
stoksc pushed a commit that referenced this pull request Jun 26, 2023
eecsliu pushed a commit that referenced this pull request Jun 28, 2023
eecsliu pushed a commit that referenced this pull request Jun 28, 2023
stoksc pushed a commit that referenced this pull request Jul 20, 2023
eecsliu pushed a commit that referenced this pull request Jul 24, 2023
stoksc pushed a commit that referenced this pull request Oct 17, 2023
azhou-determined pushed a commit that referenced this pull request Dec 7, 2023
wes-turner pushed a commit that referenced this pull request Feb 2, 2024
@dannysauer dannysauer added this to the 0.12.11 milestone Feb 6, 2024
rb-determined-ai pushed a commit that referenced this pull request Feb 29, 2024
amandavialva01 pushed a commit that referenced this pull request Mar 18, 2024
eecsliu pushed a commit that referenced this pull request Apr 18, 2024
eecsliu pushed a commit to determined-ai/determined-release-testing that referenced this pull request Apr 22, 2024

Each of these pushes carries the same commit message as the Apr 25, 2023 entry above.