Follow the same standards as provider kubernetes for environment variables #225
Comments
Would also be great if we could get
Would you accept a PR that implements this?
I would accept a PR that makes kubeconfig_incluster settable via an env var, for users that want to keep the HCL agnostic and control via the environment which auth is being used. A PR that isn't fully backwards compatible for existing users is not an option. I am also not keen on adding additional attributes to the provider if the ones already there can get the job done. You could already achieve what you want simply by always supplying a path to a kubeconfig and having that kubeconfig set the desired auth.
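A minimal sketch of the suggestion above, assuming this is the kustomization provider (the provider name and the variable are assumptions for illustration; `kubeconfig_path` is the attribute named later in this thread):

```hcl
# Always supply a kubeconfig path; the kubeconfig file itself
# (not the provider block) decides which auth mechanism is used.
provider "kustomization" {
  kubeconfig_path = var.kubeconfig_path # assumed variable, e.g. "~/.kube/config"
}
```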
The primary use case I am suggesting supporting is for CI environments where a kubeconfig is not present because it is generated with an exec call. Backwards compatibility is assumed, always.
Exec call could mean many things. Can you show an example?
From the hashicorp/kubernetes provider docs:
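The snippet quoted from those docs did not survive extraction. The hashicorp/kubernetes provider's exec-based credential configuration looks roughly like this (a reconstruction for EKS, not necessarily the exact excerpt that was posted):

```hcl
provider "kubernetes" {
  host                   = var.cluster_endpoint
  cluster_ca_certificate = base64decode(var.cluster_ca_cert)

  # Credentials are produced at plan/apply time by an external
  # command, so no kubeconfig file needs to exist in CI.
  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    args        = ["eks", "get-token", "--cluster-name", var.cluster_name]
  }
}
```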
Essentially everything this provider supports with respect to connecting to clusters, I would be glad to add support for in this provider, in whatever way would be backwards compatible with existing functionality. The kubernetes provider is used widely by the entire Terraform ecosystem and as such reflects the needs of the community. I'd like this provider to support those needs (as mine are among them).
What you're trying to do @tkellen is possible by defining the kubeconfig as an HCL map, and then passing it into kubeconfig_raw using Terraform's yamlencode function. There should be examples of how to do this in the issues; use the search. This issue is about something else. I am not planning to add individual kubeconfig attributes to the provider spec because I have zero interest in playing catch-up supporting all of them.
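The yamlencode approach described above could look like the following sketch (provider name and EKS data sources assumed for illustration; the cluster/user/context names are arbitrary):

```hcl
provider "kustomization" {
  # Build the kubeconfig as an HCL object and serialize it to YAML.
  kubeconfig_raw = yamlencode({
    apiVersion = "v1"
    kind       = "Config"
    clusters = [{
      name = "eks"
      cluster = {
        server                       = data.aws_eks_cluster.this_env.endpoint
        "certificate-authority-data" = data.aws_eks_cluster.this_env.certificate_authority[0].data
      }
    }]
    users = [{
      name = "eks"
      user = { token = data.aws_eks_cluster_auth.this_env.token }
    }]
    contexts = [{
      name    = "eks"
      context = { cluster = "eks", user = "eks" }
    }]
    "current-context" = "eks"
  })
}
```

Note that keys containing hyphens (such as `current-context`) must be quoted in HCL object constructors.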
Roger that. I'll fork the provider. The ergonomics of building a kubeconfig are not something I am interested in maintaining. Here is another format your provider does not support:

```hcl
data "aws_eks_cluster" "this_env" {
  name = local.name
}

data "aws_eks_cluster_auth" "this_env" {
  name = local.name
}

provider "helm" {
  kubernetes {
    host                   = data.aws_eks_cluster.this_env.endpoint
    cluster_ca_certificate = base64decode(data.aws_eks_cluster.this_env.certificate_authority[0].data)
    token                  = data.aws_eks_cluster_auth.this_env.token
  }
}
```
To be clear, I appreciate the work you and the other contributors have done here, but it seems very strange that you don't support configuring your provider the way the kubernetes, helm and kubectl providers do.
If I extract the functionality from the kubernetes provider into a library and make it possible for this provider to utilize that functionality, would you consider adding this support? That would allow you to kick requests from users "upstream" to me. I would also seek to get the helm, kubectl and kubernetes providers to share the functionality, though obviously I can't guarantee it would be adopted this way.
@tkellen I doubt that maintaining a fork is less work than simply doing something like this and getting the host, CA and token path from the env vars inside the pod instead of via the AWS provider data sources as in the example. If you search the issues, you will find plenty of examples and plenty of explanation of why I am not interested in duplicating, in the provider schema, things that can be set in a kubeconfig. Since this issue is about something else entirely, though, I kindly ask you to stop derailing the conversation now.
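The in-cluster setup referred to above might be sketched as follows, using the `kubeconfig_incluster` attribute named elsewhere in this thread (provider name assumed):

```hcl
# When Terraform runs inside a pod, authenticate with the
# service account credentials Kubernetes mounts into the pod
# (token and CA under /var/run/secrets/kubernetes.io/serviceaccount).
provider "kustomization" {
  kubeconfig_incluster = true
}
```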
🙄 roger that, will fork. |
Currently only one configuration can be set, which is quite limiting when running this provider in CI and in development.
Would suggest 2 things:
only one of `kubeconfig_incluster`, `kubeconfig_path`, `kubeconfig_raw` can be specified, but
`kubeconfig_incluster`, `kubeconfig_path` were specified.