[Feature] secrets in artifact stanza #3854
You can probably add a template stanza like the sketch below. This in turn will create 2 env interpolation variables that you can then use elsewhere in the task.
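A minimal sketch of that approach, assuming a Vault KV v2 secret at a placeholder path and placeholder variable names:

```hcl
template {
  # Render two Vault secrets as environment variables for this task.
  data        = <<EOF
ARTIFACT_USER={{ with secret "secret/data/artifacts" }}{{ .Data.data.user }}{{ end }}
ARTIFACT_TOKEN={{ with secret "secret/data/artifacts" }}{{ .Data.data.token }}{{ end }}
EOF
  destination = "secrets/artifact.env"
  env         = true
}
```

The two variables can then be referenced as ${ARTIFACT_USER} and ${ARTIFACT_TOKEN} in places that are interpolated after templates run, such as the driver config; as the later comments note, the artifact block is not one of those places.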
I am using this method to set the Splunk token for logging. Here is my working example in JSON format, just for reference.
@latchmihay I tried this to retrieve S3 credentials from Vault but it did not work for me combined with the artifact element...
++
Beyond AWS credentials, you can't secure git SSH keys or HTTPS credentials either. There appears to be no variable interpolation in the artifact stanza (at least not that I can get working). Trying to put variables in the environment and then pulling in an env var does not work in either source or options.
It could also be nice to have a way to use it for Docker Hub downloads.
We'd also love this to be possible as we pull artifacts from GitLab using its API and right now we have to put the API key in the clear in the jobspec. We'd much rather store it in Vault...
@dadgar It seems this request does not get much attention. What is the recommended approach for handling secrets in the artifact stanza?
What I've ended up doing is using a local proxy HTTP server that mirrors the contents of the S3 bucket on an in-cluster HTTP address. I use Vault to inject S3 credentials into the proxy server, and then I can pull artifacts without credentials locally, just from the HTTP server. It's not wildly secure, as any in-cluster user can pull the artifacts, but it works.
Recently also faced this problem. I cannot pass data with authorization parameters to the artifact stanza. Is there really no progress on this issue in two years? :(
@dadgar Any update regarding this issue? Is there any plan to implement this feature?
Just wanted to share some brainstorming we've been doing internally. No timeline yet, but this is a topic of active discussion. Here are 3 ideas we're kicking around:

1. Retry failed artifacts

This is the hackiest approach, but also the easiest to implement: when an artifact fails we could continue with the task setup ("prestart hooks" in internal jargon) and retry failed artifacts after the template hook has run. The big pro here is that ... The con is probably obvious: ew. "Expected failures" is never a user experience we want at HashiCorp. Not to mention the ... We may be able to special case ...

2. New ...
Thanks so much @schmichael for addressing this. I really like solution 2; I think that's quite close to what I was hoping for in the OP.
One minor point: I'd prefer the following syntax, and use full templating, as you suggest.
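A rough guess at the shape being suggested: a named artifact block whose body accepts full template syntax (hypothetical, not valid in Nomad today; the Vault path and URL are placeholders):

```hcl
# Hypothetical: a labeled artifact block whose fields accept full template syntax.
artifact "release" {
  source      = "https://artifacts.example.com/{{ with secret \"secret/data/ci\" }}{{ .Data.data.release_path }}{{ end }}"
  destination = "local/app"
}
```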
Here is a workaround until this is implemented
Very interested in this as well.
We're very interested in this. It's a bit of a blocker for our adoption of Nomad. We make heavy use of Vault for managing credentials. @ryanmickler's suggestion to use the name in the stanza definition and provide a path in the body is our preference as well.
We're in a similar boat, but would actually prefer the ordering to simply be swapped. I can confirm that a simple two-line change swapping the processing order allows the use of environment variables from templates in the artifact block:

--- a/client/allocrunner/taskrunner/task_runner_hooks.go
+++ b/client/allocrunner/taskrunner/task_runner_hooks.go
@@ -65,7 +65,6 @@ func (tr *TaskRunner) initHooks() {
 		newLogMonHook(tr, hookLogger),
 		newDispatchHook(alloc, hookLogger),
 		newVolumeHook(tr, hookLogger),
-		newArtifactHook(tr, hookLogger),
 		newStatsHook(tr, tr.clientConfig.StatsCollectionInterval, hookLogger),
 		newDeviceHook(tr.devicemanager, hookLogger),
 	}
@@ -105,6 +104,8 @@ func (tr *TaskRunner) initHooks() {
 		}))
 	}

+	tr.runnerHooks = append(tr.runnerHooks, newArtifactHook(tr, hookLogger))
+
 	// Always add the service hook. A task with no services on initial registration
 	// may be updated to include services, which must be handled with this hook.
 	tr.runnerHooks = append(tr.runnerHooks, newServiceHook(serviceHookConfig{
The only thing that comes to mind would be breaking template source files that come from an artifact stanza: https://www.nomadproject.io/docs/job-specification/template#source. A flag in the template block to indicate whether the template should be rendered before or after the artifact block would fix that, though.
I wanted to add that this feature would be very useful. I would really like to be able to define an artifact stanza like the sketch below, and then use a template stanza to generate that GitHub token using the plugin defined at https://github.com/martinbaillie/vault-plugin-secrets-github
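A sketch of the kind of artifact block being described; the repository URL is a placeholder, and the ${GITHUB_TOKEN} interpolation from a template-set environment variable is exactly what does not work today:

```hcl
artifact {
  source      = "https://github.example.com/my-org/my-repo/archive/refs/tags/v1.0.0.tar.gz"
  destination = "local/src"

  headers {
    # GITHUB_TOKEN would come from a template block; this interpolation is the ask.
    Authorization = "token ${GITHUB_TOKEN}"
  }
}
```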
The mentioned flag does already exist.
That flag doesn't change anything about the order of template rendering and artifact fetching though, so even with it set the problem remains.

FYI, I've written a basic implementation of @schmichael's second proposal at #11473. It's still incomplete but maybe it helps get the ball rolling :)
@lukas-w Thanks for pointing out the ordering of template and artifact rendering. I am lacking some background, but can you explain the design decision in a bit more detail? Apart from that, thanks for your proposal about the artifact endpoint authentication.
That's probably a question better posed to the HashiCorp devs, but I suppose it's so that artifacts can be used as template sources, as @nvx pointed out in #3854 (comment).
I recently had to work around this with a wrapper script on the main task which just uses the aws CLI to pull in the artifacts from S3. I am setting the credentials as environment variables rendered by a template block.
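A sketch of that kind of wrapper, assuming a Docker task whose image ships the aws CLI, a Vault-backed AWS secrets engine, and placeholder bucket/paths; the template renders a small script that fetches the artifact before starting the real process:

```hcl
task "app" {
  driver = "docker"

  vault {
    policies = ["s3-artifact-read"] # placeholder policy name
  }

  config {
    image   = "example/app:latest"
    command = "/secrets/fetch-and-run.sh"
  }

  template {
    destination = "secrets/fetch-and-run.sh"
    perms       = "755"
    data        = <<EOF
#!/bin/sh
set -e
# Read short-lived S3 credentials from Vault and pull the artifact ourselves
# (assumes the aws CLI is present in the image).
{{ with secret "aws/creds/s3-artifact-read" }}
export AWS_ACCESS_KEY_ID={{ .Data.access_key }}
export AWS_SECRET_ACCESS_KEY={{ .Data.secret_key }}
{{ end }}
aws s3 cp s3://my-bucket/app.tar.gz /local/app.tar.gz
tar -xzf /local/app.tar.gz -C /local
exec /local/app
EOF
  }
}
```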
I updated the title to better indicate that whatever solution we create for this should be compatible with Nomad Variables, which were introduced in Nomad v1.4.0, so @schmichael's proposals will need a few adjustments and updates.

Summarizing @SamMousa's comments in #15384, there are some scenarios where secrets are needed for setting up the task runtime environment, but these secrets should not be made available to the task itself, so any workaround that makes use of the task's environment variables is not suitable. Examples include Docker's registry auth credentials.
@lgfa29 I came across this issue for the Nomad Variables use case, so thanks for updating the issue to take Nomad Variables into consideration. As a user, what I'd ideally like to express with Nomad Variables is something like the sketch below, with the variable scoped to the job.
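A hypothetical sketch; the job name, bucket, and variable keys are placeholders, and this interpolation is not supported today:

```hcl
# Job "example" with a Nomad Variable stored at nomad/jobs/example
artifact {
  source = "s3::https://s3-us-west-2.amazonaws.com/my-bucket/app.tar.gz"

  options {
    # hypothetical: nomadVar lookups evaluated with the job's own ACL
    aws_access_key_id     = "{{ with nomadVar \"nomad/jobs/example\" }}{{ .access_key }}{{ end }}"
    aws_secret_access_key = "{{ with nomadVar \"nomad/jobs/example\" }}{{ .secret_key }}{{ end }}"
  }
}
```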
The above would be evaluated with the job's ACLs, not those of the submitter or server. I'm aware that the Nomad 1.4 constraint on Nomad Variables is that they must be used in a template block. Ultimately, placing env vars in a template block just to feed them into the artifact stanza also exposes them to the task, which is what we'd like to avoid.
There is another possible workaround a la the client configuration file: you could use the
Hello @nrdxp,
1000 million % agree. It does not make sense that I cannot use a Vault value anywhere in the Nomad job spec…we have all these variables and can set custom variables, but we cannot even do something like the sketch below…keep it simple and you could use this anywhere…even task names could be set from Vault in the task stanza…not that I want to do that, and yes, there would be some setup and a declaration in a variable stanza…or just make a special flag that can be set up.

Ideal solution:
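Something like this hypothetical syntax, where the vault(...) function and the secret path are made up purely for illustration:

```hcl
config {
  image = "registry.example.com/app:1.0"

  auth {
    # hypothetical: direct Vault lookups anywhere in the jobspec
    username = vault("secret/data/registry", "username")
    password = vault("secret/data/registry", "password")
  }
}
```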
That is all we want, really, but there are two other options, one of which I'm surprised no one mentioned (I guess because it doesn't use Vault).

Option 1: create a wrapper script. I use a deploy script that I run my jobs with, so all I need to do is handle the secrets there.

Option 2: use Consul KV to store values that a template can then use in other places. This is exactly what some other people suggested, and it is completely understandable that they thought it worked the same way or would work. I have been searching high and low for a good solution to the auth section that doesn't print credentials in the UI, and I found this example and adapted it in my jobs right away: https://github.com/angrycub/nomad_example_jobs/blob/main/docker/auth_from_template/auth.nomad

This is exactly what we want, except we want to use Vault instead. This completely works and is great. You can set your Consul KV to only allow people with the correct permissions to get to those values and have them auth with Vault before they get there, so it is close. It will also restart the task if the value changes, so that will redeploy everything with the new values. I am using this option for now until we can get this to work with Vault the same way. That was the option that isn't listed here (or I scrolled past it), and I think it also explains why we need/want this feature. I only happened to find someone using this recently after searching forever; always check examples if they are offered :) Sorry for the overly informative comment.

Option 3: just put your password in your jobspec and share it everywhere, put it in a public repo…why do we have so many secrets? That is my breakdown of this; please give us the Vault solution because it would be better and make more sense.
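For reference, the pattern being described looks roughly like this in my sketch (KV paths and image are placeholders): a template exports registry credentials from Consul KV as environment variables, and the Docker driver's auth block interpolates them, presumably because driver config is interpolated after templates have rendered.

```hcl
task "app" {
  driver = "docker"

  template {
    # Pull registry credentials out of Consul KV and export them as env vars.
    data        = <<EOF
REGISTRY_USER={{ key "service/app/registry_user" }}
REGISTRY_PASS={{ key "service/app/registry_pass" }}
EOF
    destination = "secrets/registry.env"
    env         = true
  }

  config {
    image = "registry.example.com/app:1.0"

    auth {
      username = "${REGISTRY_USER}"
      password = "${REGISTRY_PASS}"
    }
  }
}
```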
Is this still under development? Would be very useful for our deployment as well.
@burnsjeremy your
@ajaincdsc We still intend to implement a solution for secret access in the artifact block.
Thanks @schmichael. I had meant to come back and say that, because while swapping mine out after finding those examples, I noticed the reason it wasn't working was that I didn't have the policy set correctly; I had to adjust my read policies and it worked. I did find some frustrating places where it didn't work, so being able to set a global policy for Vault and read the Vault data into a global variable would be a better way to handle that. I run into this a lot and I'm sure others do also.
Yeah, absolutely. It's an unfortunately tricky problem because HCL has always been parsed into JSON by the CLI long before Nomad agents have a chance to look at it. Templates work because HCL treats them as an opaque string value and leaves it for Nomad to process later. That means only processes downstream of templates can access env vars set in templates, which is an internal implementation detail and therefore just awful from a UX perspective. The options I outline above are still more or less what we're considering, but 2 & 3 are just a lot of work (both design and code).
Can you please share the way you did this?
I've been trying to find a way to securely pull artifacts. Today I'm doing something like this:
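For illustration, something along these lines, passing the credentials as go-getter query parameters (bucket, region, and keys are placeholders):

```hcl
artifact {
  # Credentials embedded directly in the source URL.
  source      = "s3::https://s3-us-west-2.amazonaws.com/my-bucket/releases/app.tar.gz?aws_access_key_id=AKIAXXXXXXXX&aws_secret_access_key=XXXXXXXX"
  destination = "local/app"
}
```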
However, this requires me to hard-code my access keys. What I'd love to be able to do is something like:
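A hypothetical sketch: Vault template syntax directly inside the artifact block (the Vault path is a placeholder, and this is not supported today):

```hcl
artifact {
  source      = "s3::https://s3-us-west-2.amazonaws.com/my-bucket/releases/app.tar.gz"
  destination = "local/app"

  options {
    # hypothetical: each secret call would be a separate Vault read
    aws_access_key_id     = "{{ with secret \"aws/creds/deploy\" }}{{ .Data.access_key }}{{ end }}"
    aws_secret_access_key = "{{ with secret \"aws/creds/deploy\" }}{{ .Data.secret_key }}{{ end }}"
  }
}
```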
Of course, this would never work for a few reasons, mainly because I'd have to do two 'reads' of the credentials, which would generate new keys each time.
Perhaps I could propose something of the form:
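Perhaps a single Vault reference on the artifact block itself, so the client performs one read and injects both keys (hypothetical syntax; block and path names are made up):

```hcl
artifact {
  source      = "s3::https://s3-us-west-2.amazonaws.com/my-bucket/releases/app.tar.gz"
  destination = "local/app"

  # hypothetical: one Vault read, performed by the client before fetching
  vault_credentials {
    path = "aws/creds/deploy"
  }
}
```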
Although there is probably an issue here: this go-getter pull happens on the host rather than in the container, so we'd need the host to have the keys injected.
Is there any way to achieve this in Nomad without having to pull the artifact from inside the container itself?