
[Feature] secrets in artifact stanza #3854

Open · ryanmickler opened this issue Feb 8, 2018 · 36 comments

@ryanmickler (Contributor) commented Feb 8, 2018

I've been trying to find a way to securely pull artifacts:

artifact {
  source      = "..."
  destination = "..."
  options {
    aws_access_key_id     = "XYXYX"
    aws_access_key_secret = "XYXYXYX"
  }
}

however, this requires me to hard-code my access keys. What I'd love to be able to do is something like:

vault {
  policies = ["s3_artifact_bucket"]
}

artifact {
  source      = "..."
  destination = "..."
  options {
    aws_access_key_id     = "<pulled from vault>"
    aws_access_key_secret = "<pulled from vault>"
  }
}

Of course, this would never work for a few reasons, mainly because I'd have to do two 'reads' of the credentials, which would generate new keys.

Perhaps I could propose something of the form:

vault {
  policies = ["s3_artifact_bucket"]
}

artifact {
  source      = "..."
  destination = "..."
  options {
    env = <<BLOCK
{{ with secret "aws/creds/s3_artifacts" }}
aws_access_key_id = "{{.Data.access_key}}"
aws_access_key_secret = "{{.Data.secret_key}}"{{end}}
BLOCK
  }
}

Although there's probably an issue here: this go-getter pull happens on the host rather than in the container, so we'd need the keys injected on the host.

Is there any way to achieve this in Nomad without having to pull the artifact from inside the container itself?

@latchmihay commented Feb 28, 2018

You can probably add a template stanza like this:

template {
  data = <<BLOCK
{{ with secret "aws/creds/s3_artifacts" }}
aws_access_key_id="{{.Data.access_key}}"
aws_access_key_secret="{{.Data.secret_key}}"{{end}}
BLOCK
  destination = "secrets/aws.env"
  env         = true
}

This in turn will create two environment variables that you can then use via interpolation like this:

artifact {
  source      = "..."
  destination = "..."
  options {
    aws_access_key_id     = "${aws_access_key_id}"
    aws_access_key_secret = "${aws_access_key_secret}"
  }
}
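Putting the two stanzas together, a full task would look roughly like the following (my sketch combining the snippets above; the S3 URL is made up, and whether the variables are actually available when the artifact is fetched depends on hook ordering, as discussed further down this thread):

task "app" {
  driver = "docker"

  # Render Vault-issued S3 credentials into environment variables.
  template {
    data = <<BLOCK
{{ with secret "aws/creds/s3_artifacts" }}
aws_access_key_id="{{.Data.access_key}}"
aws_access_key_secret="{{.Data.secret_key}}"{{end}}
BLOCK
    destination = "secrets/aws.env"
    env         = true
  }

  # Reference them via interpolation in the artifact options.
  artifact {
    source      = "s3::https://s3.amazonaws.com/my-bucket/app.tar.gz"
    destination = "local/"
    options {
      aws_access_key_id     = "${aws_access_key_id}"
      aws_access_key_secret = "${aws_access_key_secret}"
    }
  }

  config {
    image = "..."
  }
}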

I am using this method to set the Splunk token for logging. Here is my working example in JSON format, just for reference.

"logging": [
                {
                  "config": [
                    {
                      ....
                      "splunk-token": "${SPLUNK_TOKEN}",
                      ...
                      ...
                    }
                  ],
                  "type": "splunk"
                }
              ],

            "Templates": [
              {
                "ChangeMode": "restart",
                "ChangeSignal": "",
                "DestPath": "secrets/SPLUNK_TOKEN.env",
                "EmbeddedTmpl": "SPLUNK_TOKEN={{with secret \"/secret/jobs/splunk\"}}{{.Data.token}}{{end}}\n",
                "Envvars": true,
                "LeftDelim": "{{",
                "Perms": "0644",
                "RightDelim": "}}",
                "SourcePath": "",
                "Splay": 5000000000,
                "VaultGrace": 15000000000
              }
            ],
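Translated back to HCL, that Templates entry corresponds to roughly this stanza (my translation, not from the original job; Splay and VaultGrace are nanosecond values, i.e. 5s and 15s):

template {
  data        = "SPLUNK_TOKEN={{with secret \"/secret/jobs/splunk\"}}{{.Data.token}}{{end}}\n"
  destination = "secrets/SPLUNK_TOKEN.env"
  env         = true
  change_mode = "restart"
  perms       = "0644"
  splay       = "5s"
  vault_grace = "15s"
}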

@nicolai86

@latchmihay I tried this to retrieve S3 credentials from Vault, but it did not work for me combined with the artifact element...

@dmartin-isp

++
It's painful to have an artifact section that supports S3 and not have the ability to pull the AWS credentials from Vault.

@jhitt25 commented Feb 22, 2019

Beyond AWS credentials, you can't secure git SSH keys or HTTPS credentials either. There appears to be no variable interpolation in the artifact stanza (at least none that I can get working). Putting variables in the environment and then pulling in an env var does not work in either source or options.

@ybovard commented Feb 27, 2019

It could also be nice to have a way to use it for Docker Hub downloads:

...
task "xxx" {
  driver = "docker"
  config {
    image = "my/private:image"
    ssl   = true
    auth {
      server_address = "dockerhub.io"
      username = "{{with secret \"/secret/docker/hub\"}}{{.Data.user}}{{end}}"
      password = "{{with secret \"/secret/docker/hub\"}}{{.Data.password}}{{end}}"
    }
    port_map = {
      http = 8080
    }
  }
}

@radcool commented Aug 5, 2019

We'd also love this to be possible, as we pull artifacts from GitLab using its API and right now we have to put the API key in the clear in the jobspec. We'd much rather store it in Vault...

@sam701 commented May 10, 2020

@dadgar It seems this request does not get much attention. What is the recommended approach for handling secrets in the artifact stanza for now? Is this stanza better avoided? Small artifacts can easily be inlined in templates, but what about large ones? Is it preferred to pack everything into a Docker image? I'd appreciate your thoughts on this topic.

@ryanmickler (Contributor Author)

What I've ended up doing is using a local proxy HTTP server that mirrors the contents of the S3 bucket on an in-cluster HTTP address. I use Vault to inject S3 credentials into the proxy server, and then I can pull artifacts locally from the HTTP server without credentials. It's not wildly secure, as any in-cluster user can pull the artifacts, but it works.
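The consuming jobs then need no credentials at all; their artifact blocks reduce to something like this (address and path made up for illustration):

artifact {
  source      = "http://s3-mirror.service.consul:8080/artifacts/app.tar.gz"
  destination = "local/"
}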

@pauldon2

I recently also faced this problem. I cannot pass authorization parameters to the artifact stanza. Has there really been no progress on this issue in two years? :(

@xDmitriev commented Aug 22, 2020

@dadgar Any update regarding this issue? Is there any plan to implement this feature?

@schmichael (Member) commented Aug 27, 2020

Just wanted to share some brainstorming we've been doing internally. No timeline yet, but this is a topic of active discussion. Here are 3 ideas we're kicking around:

1. Retry failed artifacts

This is the hackiest approach, but also the easiest to implement: when an artifact fails, we could continue with the task setup ("prestart hooks" in internal jargon) and retry failed artifacts after the template stanza has been processed and has populated the environment with the necessary secrets.

The big pro here is that artifacts could use secrets from the environment and it would Just Work. Nothing new to learn.

The con is probably obvious: ew. "Expected failures" is never a user experience we want at HashiCorp. Not to mention the template stanza is already a source of significant debugging pain, since it cannot be statically validated; instead you have to run your job and monitor its progress to see if the template executed as desired. Now, as you're monitoring template execution, you first see an artifact failure, which, even if expected, doesn't make for a pleasant operational experience.

We may be able to special-case artifact failures and make this approach a lot more palatable. We could also add a new flag to artifacts like artifact.post_template = true to force the intended ordering.
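With that hypothetical flag, a jobspec would read something like this (a sketch only; post_template does not exist, and the referenced env vars would come from a template with env = true):

artifact {
  source        = "s3::https://s3.amazonaws.com/my-bucket/app.tar.gz"
  destination   = "local/"
  post_template = true # hypothetical: fetch only after templates have rendered

  options {
    aws_access_key_id     = "${aws_access_key_id}"
    aws_access_key_secret = "${aws_access_key_secret}"
  }
}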

2. New credentials stanza

Does anybody else find using template for all secrets a bit painful? It's an unnecessary layer of abstraction when you just want to use secrets elsewhere in your jobspec (e.g. artifact or task.config).

We could add a new credential (or secret ... naming is hard) stanza to allow using Vault secrets without having to create a template.

By default these secrets would not be exposed to the task via environment variables. I think this would be a nice security benefit to keep from having to expose S3 and Docker Hub credentials to services when they're only needed internally by the Nomad client. They'd only ever be in memory, and ephemerally during task prestart at that.

Here's an example that covers the artifact and task.config use cases above:

task "xxx" {
    vault {
        policies = ["s3_artifact_bucket"]

	credentials "aws/creds/s3_artifacts" {
            # A bit unfortunate to mix hardcoded fields like `name` with dynamic fields like `access_key`.
            # Probably want to iterate on this design.
            name = "s3"

            # Do we need full templating here or just implicitly reference
            # fields on .Data?
            access_key = "access_key" 
            secret_key = "secret_key"
        }
	credentials "secret/docker/hub" {
	    name = "docker"
            user = "user" 
            password = "password"
        }
    }
    artifact {
      source      = "..."
        destination = "..."
        options { 
          aws_access_key_id = "${credentials.s3.access_key}"
          aws_access_key_secret = "${credentials.s3.secret_key}"
      }
    }
    driver = "docker"
    config {
        image = "my/private:image"
        ssl = true
        auth {
            server_address = "dockerhub.io"
            username = "${credentials.docker.user}"
            password = "${credentials.docker.password}"
        }
        port_map = {
            http = 8080
        }
    }
}

We may want to bubble credential up a level in case we ever migrate secrets to a plugin/provider model instead of just Vault.

3. HCL2 vault_secret function

Similar to above but we could use a vault_secret function instead of a new stanza:

dockerRegCredential = vault_secret("foo/bar")

config {
  username = dockerRegCredential.user_name
  password = dockerRegCredential.password
}

The pro is an optimally concise syntax. Secrets can be fetched right where they're used.

The con is that it depends on HCL2 and might be slightly harder to debug than a purely declarative approach like the new stanza. This approach has the most unknowns since Nomad is still evaluating how best to fully integrate HCL2 (it's used internally a bit already).

Feedback welcome!


@ryanmickler (Contributor Author)

Thanks so much @schmichael for addressing this. I really like solution 2; I think that's quite close to what I was hoping for in the OP.

@ryanmickler (Contributor Author) commented Sep 3, 2020

One minor point: I'd prefer the following syntax:

	credentials "docker" {
	    path = "secret/docker/hub"
        }

and use full templating, as you suggest.
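Spelled out, that hypothetical variant might look like:

credentials "docker" {
  # label names the credential; the Vault path moves into the body
  path     = "secret/docker/hub"
  user     = "{{ .Data.user }}"
  password = "{{ .Data.password }}"
}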

@spuder (Contributor) commented Sep 30, 2020

Here is a workaround until this is implemented:

  • Store the git user password in Vault
  • Nomad fetches the git password from Vault and injects it as an environment variable
  • A template creates a script to clone the git repo
  • A 'prestart' lifecycle hook ensures the repo is cloned before the other services start
    task "git-clone" {
      template {
        data = <<EOH
DEPLOY_PASSWORD="{{with secret "secret/data/git_password"}}{{.Data.data.git_password}}{{end}}"
        EOH
        destination = "secret/deploypass.sh"
        env = true
      }
      template {
        data = <<EOH
#!/bin/bash
git --version
CODE_REPO=/alloc
if [ -z ${DEPLOY_PASSWORD+x} ]
then 
  echo "DEPLOY_PASSWORD is not set, script most likely not running under nomad"
else
  if [ ! -d ${CODE_REPO}/.git ]
  then
    git clone --depth 5 --branch ${BRANCH} "https://foobar:${DEPLOY_PASSWORD}@gitlab.example.com/foo/foobar.git"
  else
    cd $CODE_REPO
    git checkout ${BRANCH}
    git pull
  fi
fi
        EOH
        destination = "alloc/git-clone/clone.sh"
        perms = "755"
      }
      driver = "exec"
      config {
        command = "/alloc/git-clone/clone.sh"
      }
      env {
        "PATH" = "/bin:/sbin:/usr/bin:/usr/local/bin"
      }
      lifecycle {
        hook = "prestart"
        sidecar = false

      }
    }

@Oloremo (Contributor) commented Feb 7, 2021

Very interested in this as well

@sprsquish

We're very interested in this. It's a bit of a blocker for our adoption of Nomad. We make heavy use of Vault for managing credentials. @ryanmickler's suggestion to use the name in the stanza definition and provide a path in the body is our preference as well.

@lukas-w (Contributor) commented Jul 14, 2021

We're in a similar boat, but we would actually prefer the artifact.post_template proposal to process artifacts after the template hook. It appears to be the simplest and at the same time most flexible solution, as it would also give access to other template features such as key or service (imagine wanting to download artifacts from a service registered with Consul). Maybe this option could even be the default?

I can confirm that a simple two-line change swapping the processing order allows the use of environment variables from templates in artifact. I didn't test anything else though, I'm sure this breaks a couple of things.

--- a/client/allocrunner/taskrunner/task_runner_hooks.go
+++ b/client/allocrunner/taskrunner/task_runner_hooks.go
@@ -65,7 +65,6 @@ func (tr *TaskRunner) initHooks() {
                newLogMonHook(tr, hookLogger),
                newDispatchHook(alloc, hookLogger),
                newVolumeHook(tr, hookLogger),
-               newArtifactHook(tr, hookLogger),
                newStatsHook(tr, tr.clientConfig.StatsCollectionInterval, hookLogger),
                newDeviceHook(tr.devicemanager, hookLogger),
        }
@@ -105,6 +104,8 @@ func (tr *TaskRunner) initHooks() {
                }))
        }
 
+       tr.runnerHooks = append(tr.runnerHooks, newArtifactHook(tr, hookLogger))
+
        // Always add the service hook. A task with no services on initial registration
        // may be updated to include services, which must be handled with this hook.
        tr.runnerHooks = append(tr.runnerHooks, newServiceHook(serviceHookConfig{

@nvx (Contributor) commented Jul 27, 2021

> I can confirm that a simple two-line change swapping the processing order allows the use of environment variables from templates in artifact. I didn't test anything else though, I'm sure this breaks a couple of things.

The only thing that comes to mind would be breaking source coming from an artifact stanza - https://www.nomadproject.io/docs/job-specification/template#source

A flag in the template block to indicate if the template should be rendered before or after the artifact block would fix that though.
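For illustration, such a flag (hypothetical name render_stage) might look like:

template {
  data         = "..."
  destination  = "secrets/aws.env"
  env          = true
  # hypothetical: render before artifacts are fetched so the resulting env
  # vars are visible to artifact blocks; "post_artifact" (today's behavior)
  # would stay the default so templates can still use artifact-provided sources
  render_stage = "pre_artifact"
}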

@mibeyene

I wanted to add that this feature would be very useful. Would really like to be able to define an artifact stanza like so:

artifact {
  destination = "local/testing"
  source = "git::https://oauth2:${GITHUB_TOKEN}@github.com/<org_name>/<repo_name>"
}

and then use a template stanza to generate that GitHub token using the plugin defined at https://github.com/martinbaillie/vault-plugin-secrets-github
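With that plugin mounted at, say, github/, the template half might look roughly like this (the mount path and field name are assumptions, not verified against the plugin):

template {
  data        = <<EOH
GITHUB_TOKEN={{ with secret "github/token" }}{{ .Data.token }}{{ end }}
EOH
  destination = "secrets/github.env"
  env         = true
}

As discussed above, though, the resulting variable still isn't visible to the artifact block today, because artifacts are fetched before templates render.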

@gthieleb commented Nov 8, 2021

> A flag in the template block to indicate if the template should be rendered before or after the artifact block would fix that though.

The mentioned flag already exists:

env = true

@lukas-w (Contributor) commented Nov 8, 2021

That flag doesn't change anything about the order of template rendering and artifact fetching, though, so even with env = true, environment variables from the template won't be available in the artifact stanza, because the artifact block is processed first.

FYI, I've written a basic implementation of @schmichael's second proposal in #11473. It's still incomplete, but maybe it helps get the ball rolling :)

@gthieleb commented Nov 9, 2021

@lukas-w Thanks for pointing out the ordering of template and artifact rendering. I'm lacking some background, but can you explain the design decision in a bit more detail?
I'm referring to the env = true flag because it was the first thing that came to mind when distinguishing these categories. From my point of view, a template flagged with env falls into the category of environment processing, which IMO should happen before all other task initialization steps.

Apart from that, thanks for your proposal about artifact endpoint authentication.
Currently we are backed by AWS IAM instance roles and use S3.
Management wants us to use Sonatype Nexus as an artifact store, where authentication will be a topic.

@lukas-w (Contributor) commented Nov 9, 2021

> I'm lacking some background, but can you explain the design decision in a bit more detail?

That's probably a question better posed to the HashiCorp devs, but I suppose it's so that artifacts can be used as template sources, as @nvx pointed out in #3854 (comment).

@nrdxp
Copy link

nrdxp commented Aug 4, 2022

I recently had to work around this with a wrapper script on the main task which just uses the AWS CLI to pull the artifacts from S3. I set the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY variables in a template by pulling directly from Vault, and that is enough to get it working. If templates were processed before artifacts, maybe only conditionally if they target the secrets directory, that would seem to be enough to solve this, as the AWS credential resolver is already aware of these variables and will pick them up in any application that uses the official SDKs.

@lgfa29 lgfa29 changed the title [Feature] vault secrets in artifact stanza [Feature] secrets in artifact stanza Dec 5, 2022
@lgfa29 lgfa29 added theme/variables Variables feature theme/vault labels Dec 5, 2022
@lgfa29 (Contributor) commented Dec 5, 2022

I updated the title to better indicate that whatever solution we create for this should be compatible with Nomad Variables, which were introduced in Nomad v1.4.0, so @schmichael's proposals will need a bit of adjustment and updating.

Summarizing @SamMousa's comments in #15384: there are some scenarios where secrets are needed for setting up the task runtime environment, but these secrets should not be made available to the task itself, so any workaround that goes through the env is ruled out.

Examples include Docker's auth block and artifacts that require authentication for download. So the solution needs to handle jobspec fields that are not under Nomad's control (such as task driver configuration).

@acaloiaro (Contributor) commented Dec 8, 2022

@lgfa29 I came across this issue for the Nomad Variables use case, so thanks for updating the issue to take Nomad Variables into consideration.


As a user, what I'd ideally like to express is the following using Nomad Variables.

Given a job named job_name and a Nomad Variable scoped to nomad/jobs/job_name with keys AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY, I'd like to express the env vars for a task with an artifact stanza as follows:

env {
  {{with nomadVar "nomad/jobs/job_name"}}
  AWS_ACCESS_KEY_ID={{.AWS_ACCESS_KEY_ID}}
  AWS_SECRET_ACCESS_KEY={{.AWS_SECRET_ACCESS_KEY}}
  {{end}}
}

The above would be evaluated with the job's ACLs, not those of the submitter or the server.

I'm aware that the Nomad 1.4 constraint on Nomad Variables is that they must be used in a template stanza, and that supporting the above expression may not be the simplest thing to implement, but I wanted to share my user feedback.

Ultimately, placing env vars in a template stanza with env = true is not terribly difficult; it's simply not intuitive from the user's perspective, and I think the desired expression above more closely matches how users think about the problem.
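For comparison, the working (if less intuitive) Nomad 1.4 form keeps the nomadVar lookup inside a template stanza:

template {
  data        = <<EOH
{{ with nomadVar "nomad/jobs/job_name" }}
AWS_ACCESS_KEY_ID={{ .AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY={{ .AWS_SECRET_ACCESS_KEY }}
{{ end }}
EOH
  destination = "secrets/aws.env"
  env         = true
}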

@nrdxp commented Feb 14, 2023

There is another possible workaround via the client configuration file:
https://developer.hashicorp.com/nomad/docs/configuration/client#artifact-parameters

You could use set_environment_variables to pass secrets to the artifact stanza directly from the Nomad client's environment. This isn't quite as dynamic as I'm sure some would like, but it's something.
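A sketch of the client-side half (the variables must be present in the Nomad client process's own environment):

client {
  artifact {
    # comma-separated list of host environment variables that the
    # artifact fetcher is allowed to inherit
    set_environment_variables = "AWS_ACCESS_KEY_ID,AWS_SECRET_ACCESS_KEY"
  }
}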

@frakev commented Feb 15, 2023

Hello @nrdxp,
Would it be possible to have an example of the Nomad configuration file (with the artifact parameter) and a job using an S3 artifact? It's not working for me.
Thank you!

@burnsjeremy

1000 million % agree. It does not make sense that I cannot use a Vault value anywhere in the Nomad job spec. We have all these variables and can set custom variables, but we cannot even do something like the example below. Keep it simple and you could use this anywhere; even task names could be set from Vault in the task stanza (not that I want to do that). Yes, there would be some setup and a declaration in a variable stanza, or just a special flag that could be set up.

Ideal solution:

auth {
  username = "${var.vault_read.ops.data.docker.data.user}"
  password = "${var.vault_read.ops.data.docker.data.password}"
}

That is all we want, really. But there are two other options, and I'm surprised no one mentioned the second one (I guess because it doesn't use Vault).

Option 1:

Create a wrapper script. I use a deploy script to run my jobs, so all I need to do is ./deploy dev for the proper job; in that wrapper shell I use the Vault CLI to pull my Docker auth and pass it into the nomad run command as a var. In my job spec I declare the variable and use the value it gets passed. Unfortunately, that also shows the value in the job definition in the UI, and there's no restart if the value is changed. We just had to swap our entire Docker setup from a vendor to our internal team, so we had to update every job individually with the new values, and it was such a pain; then we had to redeploy them all to ensure that every container existed in our new internal registry. About 110 jobs (a lot dispatched from a param job) and 45 different Docker containers, so you can imagine how tedious that can get. That makes the case for the proposed solution rather than one like this option.

Option 2:

Use Consul KV to store values that can be used from templates in other places. This is exactly what some other people suggested, and it's completely understandable that they thought the same would work with Vault. I have been searching high and low for a good solution for the auth section that doesn't print the credentials in the UI. I found this example and adapted it in my jobs right away: https://github.com/angrycub/nomad_example_jobs/blob/main/docker/auth_from_template/auth.nomad

template {
  destination = "local/secret.env"
  env         = true
  change_mode = "restart"
  data        = <<EOH
DOCKER_USER={{ key "kv/docker/config/user" }}
DOCKER_PASS={{ key "kv/docker/config/pass" }}
EOH
}

driver = "docker"
config {
  image = "registry.service.consul:5000/redis:latest"
  auth {
    username = "${DOCKER_USER}"
    password = "${DOCKER_PASS}"
  }
}

This is exactly what we want, except we want to use Vault instead. This completely works and is great. You can set your Consul KV to only allow people with the correct permissions to reach those keys, and have them authenticate with Vault before they get there, so it is close. It will also restart the task if the value changes, which redeploys everything with the new values. So I am using this option for now, until we can get it to use Vault like this.
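A Vault-flavored version of the same pattern is a straight swap of the key lookups for secret lookups (a sketch, assuming a KV v2 secret at secret/data/docker/config with user and pass fields):

template {
  destination = "local/secret.env"
  env         = true
  change_mode = "restart"
  data        = <<EOH
DOCKER_USER={{ with secret "secret/data/docker/config" }}{{ .Data.data.user }}{{ end }}
DOCKER_PASS={{ with secret "secret/data/docker/config" }}{{ .Data.data.pass }}{{ end }}
EOH
}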

That was the option that isn't listed here (or I scrolled past it), and I think it also explains why we need/want this feature. I only happened to find someone using this recently, after searching forever. Always check examples if they are offered :)

Sorry for the overly informative comment.

Option 3:

Just put your password in your jobspec and share it everywhere, put it in a public repo… why do we have so many secrets?

That is my breakdown of this. Please give us the Vault solution, because it would be better and make more sense.

@ajaincdsc

Is this still under development? It would be very useful for our deployment as well.

@schmichael (Member)

@burnsjeremy Your Option 2 works with Vault and Nomad Variables as well.

@ajaincdsc We still intend to implement a solution for secret access in the artifact block but development has not begun.

@burnsjeremy

Thanks @schmichael. I had meant to come back and say that: while swapping mine out after finding those examples, I noticed it wasn't working because I didn't have the policy set correctly, so I adjusted my read policies and it worked. I did find some frustrating places where it didn't work, so being able to set a global policy for Vault and read the Vault data to set a global variable would be a better way to handle that. I run into this a lot and I'm sure others do also.

@schmichael (Member)

> I did find some frustrating places where it didn't work, so being able to set a global policy for Vault and read the Vault data to set a global variable would be a better way to handle that. I run into this a lot and I'm sure others do also.

Yeah, absolutely. It's an unfortunately tricky problem, because HCL has always been parsed into JSON by the CLI long before Nomad agents get a chance to look at it. Templates work because HCL treats them as opaque string values and leaves them for Nomad to process later. That means only processes downstream of templates can access env vars set in templates, which is an internal implementation detail and therefore just awful from a UX perspective.

The options I outline above are still more or less what we're considering, but 2 & 3 are just a lot of work (both design and code).

@Laboltus

> being able to set a global policy for Vault and read the Vault data to set a global variable

Can you please share how you did this?

Projects
Status: Needs Roadmapping