
Options for Figaro on Elastic BeanStalk #273

Open · tomgallagher opened this issue Feb 20, 2019 · 2 comments

@tomgallagher

Hello

Thanks for Figaro.

I'm thinking about moving one of my apps from Heroku to Elastic Beanstalk on AWS.

Figaro seems to be Heroku-focused. As far as I can see, I have a few options to make Figaro work on Elastic Beanstalk.

  1. Copy the environment variables into my Elastic Beanstalk environment one-by-one.

I'd like to avoid doing this if possible because I have a lot of them and it is bound to be error-prone.

  2. Just remove application.yml from the .gitignore file.

I haven't tested this yet but this seems like the route of least resistance. It obviously undercuts the whole purpose of using Figaro in the first place though.

  3. Generate a remote configuration file

You mention this in the docs but I can't see any examples, and I'm not sure what it means. Could you elaborate a bit on how Figaro could retrieve a file from an S3 bucket, for example?
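To make it concrete, this is roughly what I'm imagining (an untested sketch; the bucket name, object key, and region variable are placeholders): download config/application.yml from S3 early in boot, before Figaro reads it.

require "aws-sdk-s3"

config_path = File.expand_path("config/application.yml", Dir.pwd)

# only fetch the file if it isn't already on the instance
unless File.exist?(config_path)
  s3 = Aws::S3::Client.new(region: ENV.fetch("AWS_REGION"))
  s3.get_object(
    bucket: ENV.fetch("CONFIG_BUCKET"), # placeholder bucket name
    key: "application.yml",             # placeholder object key
    response_target: config_path
  )
end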

Thanks

Tom

@dgarwood

@tomgallagher I certainly wouldn't do option 2. I just dealt with a project where all the keys were in credentials.yml.enc (yes, including dev and test) and a dev's laptop was stolen. Even encrypted, the keys were still committed to the repo and had to be changed.

I'm currently working on an S3 approach to getting config files onto the system, and I'll let you know how that goes.

@dgarwood

@tomgallagher (and anyone who finds this later) a follow-up from my previous comment.

We used the aws-sdk-s3 gem and keep copies of our files in an S3 bucket. Then, during deploy, our Capistrano task pulls the file down from the bucket into the shared folder. The main thing to leverage is that SSHKit's upload! accepts a file stream, which is what the S3 object returns. This is one method that works, and not the only way to accomplish it.

# taken from a Capistrano task (requires the aws-sdk-s3 gem)
require "aws-sdk-s3"

s3 = Aws::S3::Resource.new(
       region: fetch(:aws_region),
       credentials: Aws::Credentials.new(
         fetch(:aws_access_key),
         fetch(:aws_secret_key)
       )
     )

# obj.get.body is an IO-like stream, which upload! accepts directly
obj = s3.bucket(fetch(:aws_s3_bucket)).object(fetch(:s3_file_path))
upload! obj.get.body, fetch(:server_file_path)
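For anyone wiring this up, a rough sketch of how such a task might hook into the deploy flow (the task name, hook, and paths are just an example, not something Figaro provides out of the box):

# lib/capistrano/tasks/figaro.rake (example location)
namespace :figaro do
  desc "Pull config/application.yml from S3 into the shared folder"
  task :pull_config do
    on roles(:app) do
      # the snippet above goes here, with :server_file_path set to
      # shared_path.join("config/application.yml")
    end
  end
end

# run before Capistrano checks linked files so the symlink target already exists
before "deploy:check:linked_files", "figaro:pull_config"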
