Add the ability to create CloudEndpoint resources using kfctl. #351
Conversation
* This is GCP-specific code that allows CloudEndpoints to be created using the CloudEndpoint controller. A Cloud Endpoint is a KRM-style resource, so we can just have `kfctl apply -f {path}` invoke the appropriate logic.
* For GCP this addresses GoogleCloudPlatform/kubeflow-distribution#36; specifically, when deploying private GKE the CloudEndpoints controller won't be able to contact the servicemanagement API. This provides a workaround by running it locally.
* This pattern seems extensible; i.e. other platforms could link in code to handle CRs specific to their platforms. This could basically be an alternative to plugins.
* I added a context flag to control the kubecontext that apply applies to. Unfortunately, it doesn't look like there is an easy way to use that in the context of applying KFDef. It looks like the current logic assumes the cluster will be added to the KFDef metadata and then looks up that cluster in .kubeconfig.
  * Modifying that logic to support the context flag seemed riskier than simply adding a comment to the flag.
* Added some warnings that KFUpgrade is deprecated since per kubeflow#304 we want to follow the off-the-shelf workflow.
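The dispatch described above — `kfctl apply -f {path}` routing a manifest to handler logic based on its KRM `kind` — can be sketched roughly as follows. The function names (`kindOf`, `apply`) and the line-scanning "parser" are illustrative, not the actual kfctl symbols; the real code uses proper YAML decoding.

```go
package main

import (
	"fmt"
	"strings"
)

// kindOf extracts the value of the top-level "kind:" field from a
// KRM-style YAML manifest. (Sketch only: real code would use a YAML
// decoder rather than line scanning.)
func kindOf(manifest string) string {
	for _, line := range strings.Split(manifest, "\n") {
		if strings.HasPrefix(line, "kind:") {
			return strings.TrimSpace(strings.TrimPrefix(line, "kind:"))
		}
	}
	return ""
}

// apply routes a manifest to the handler for its Kind, mirroring the
// switch on ep.Kind in cmd/kfctl/cmd/apply.go.
func apply(manifest string) string {
	switch kindOf(manifest) {
	case "KfDef":
		return "deploying Kubeflow from KfDef"
	case "CloudEndpoint":
		return "creating Cloud Endpoint via local controller logic"
	default:
		return "unrecognized kind"
	}
}

func main() {
	manifest := "kind: CloudEndpoint\nmetadata:\n  name: my-endpoint"
	fmt.Println(apply(manifest))
}
```

Because dispatch is driven by the resource's own metadata, adding support for another platform's CR is just another case in the switch — no new flags or subcommands.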
/assign @adrian555
Reviewed 4 of 4 files at r1.
Reviewable status: all files reviewed, 1 unresolved discussion (waiting on @jlewi and @Tomcli)
cmd/kfctl/cmd/apply.go, line 89 at r1 (raw file):
```go
	}
	return nil
case ep.Kind:
```
`kubeContext` is not used by other config, but is it mandatory for ep? If so, would it help to check that a valid `kubeContext` is part of the command line?
@jlewi review is done above. I do have one general question. From the description above I am not clear whether there will be one or two commands to deploy Kubeflow. If, as you said, this can be an alternative to the plugins, then I would suspect it will take two commands (or two runs of `kfctl apply`). If I get it right, my concern would be whether this model is sustainable in the long term. I probably need to read more on the CloudEndpoint controller, but I'm not quite sure about the reason to have this. Hope this does not add more confusion. :)
@adrian555 Thanks for the thoughtful response. So the first problem is as follows: we as GCP want/need to ship additional CLI functionality to support our customers. The question is whether we should bake this into the existing kfctl or ship a new GCP-specific binary. Our preference would be to ship it as part of kfctl and not require users to install a separate CLI. I have tried to do it in a way that is extensible to other platforms and Kubernetes-native. In particular, the syntax `kfctl apply -f {path}` means we aren't adding a new flag or subcommand. So in this regard I think it's not a one-off hack. There is a risk of binary bloat and possibly conflicting version dependencies, but arguably that's not a new problem since we already allow platform-specific logic to be baked into kfctl.
On GCP there will be multiple commands to deploy Kubeflow. Initially we are using a Makefile to glue these commands together. My hope is that we will find a more cloud-native solution to glue them together. That could be a KF-specific solution (e.g. some iteration on the current approach), but I'm hoping a more generic, cloud-native solution will emerge. skaffold, for example, has a very pipeline-centric view, but it currently wouldn't work for our use cases.
This is an area where multiple opinions are possible, and it may not be a place where one size fits all. My personal opinion is that treating it as a composable build pipeline makes it easier for folks to customize it to meet their particular needs. Fundamentally, our process is two steps: build the manifests, then apply them.

Increasingly we see the need for folks to introduce additional steps in the pipeline. So with this PR, the process gains an extra step: applying the CloudEndpoint resource.
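The Makefile glue mentioned above could look roughly like this sketch. Target names, the config filename, and `cloud_endpoint.yaml` are hypothetical, not the actual GCP distribution Makefile; the `kfctl build`/`kfctl apply` invocations follow the documented kfctl v1 CLI shape.

```makefile
# Hypothetical glue: each pipeline step is a target, so users can
# insert their own steps between build and apply.
CONFIG=kfctl_gcp.yaml

.PHONY: all build apply-endpoint apply

build:
	kfctl build -V -f $(CONFIG)

# New step enabled by this PR: apply the CloudEndpoint KRM resource
# locally instead of relying on the in-cluster controller.
apply-endpoint:
	kfctl apply -f cloud_endpoint.yaml

apply:
	kfctl apply -V -f $(CONFIG)

all: build apply-endpoint apply
```

Keeping each step a separate target is what makes the pipeline composable: a user who doesn't need the private-GKE workaround simply drops the `apply-endpoint` step.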
Reviewable status: all files reviewed, 1 unresolved discussion (waiting on @adrian555 and @Tomcli)
cmd/kfctl/cmd/apply.go, line 89 at r1 (raw file):
Previously, adrian555 (Adrian Zhuang) wrote…

`kubeContext` is not used by other config, but is it mandatory for ep? If so, would it help to check that a valid `kubeContext` is part of the command line?
A context is not required. If none is supplied the current context is used. Validation happens within ep.Process; i.e. ep.Process will throw an error if the context doesn't exist.
So the semantics are the same as other K8s CLI tools.
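Those semantics — an empty context flag falling back to the kubeconfig's current context, and an unknown context rejected at processing time — can be sketched as below. The `kubeconfig` struct and `resolveContext` helper are illustrative stand-ins; kfctl reads the real `~/.kube/config` and the error is raised inside ep.Process.

```go
package main

import "fmt"

// kubeconfig models just the two pieces of ~/.kube/config that matter
// here: the current context and the set of defined context names.
type kubeconfig struct {
	CurrentContext string
	Contexts       map[string]bool
}

// resolveContext mimics the flag semantics described above: an empty
// --context falls back to the current context, and a context that is
// not defined in the kubeconfig is an error.
func resolveContext(cfg kubeconfig, flag string) (string, error) {
	name := flag
	if name == "" {
		name = cfg.CurrentContext
	}
	if !cfg.Contexts[name] {
		return "", fmt.Errorf("context %q not found in kubeconfig", name)
	}
	return name, nil
}

func main() {
	cfg := kubeconfig{
		CurrentContext: "gke-dev",
		Contexts:       map[string]bool{"gke-dev": true, "gke-prod": true},
	}
	ctx, _ := resolveContext(cfg, "") // no --context flag supplied
	fmt.Println(ctx)                  // falls back to the current context
}
```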
Thanks @jlewi for the detailed info! I totally agree on the pipelining aspect, and we already see there are multiple subcommands/functions. One thing I may point out is the example above. So my problem is more about whether ep.Process should be part of the `apply` command.
Both of those options diverge from kubectl semantics, which is what we've modeled kfctl on. The Kubernetes approach to bulk operations is to specify a single directory or file with multiple resources. We don't need to add new subcommands or flags precisely because the KRM (i.e. its metadata) allows the binary to figure out how to handle each resource appropriately. So if we really wanted to, we could support having a single file with both resources separated by `---`.
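The bulk-apply convention described above — one file, multiple documents separated by `---`, each dispatched on its own kind — could be sketched as follows. `splitDocs` is a hypothetical helper; a real implementation would use a streaming YAML decoder.

```go
package main

import (
	"fmt"
	"strings"
)

// splitDocs splits a multi-document YAML stream on the standard "---"
// document separator, discarding empty documents. (Sketch only: real
// code would use a YAML decoder, which also handles separators inside
// block scalars correctly.)
func splitDocs(stream string) []string {
	var docs []string
	for _, d := range strings.Split(stream, "\n---\n") {
		if s := strings.TrimSpace(d); s != "" {
			docs = append(docs, s)
		}
	}
	return docs
}

func main() {
	stream := "kind: KfDef\n---\nkind: CloudEndpoint"
	for _, doc := range splitDocs(stream) {
		// Each document would then be dispatched on its own kind,
		// exactly as a single-resource file is today.
		fmt.Println(doc)
	}
}
```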
/lgtm Thanks @jlewi.
/approve
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: adrian555, jlewi. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.