
Merge pillar dict data when using includes #7302

Closed
micahhausler opened this issue Sep 17, 2013 · 8 comments
Labels: Duplicate (duplicate of another issue or PR - will be closed), Feature (new functionality including changes to functionality and code refactors, etc.)

Comments

@micahhausler

Pillars should be able to merge included pillars with the same dict name into a unified highstate pillar.

For example:

/srv/pillar/top.sls

base:
  '*':
    - postgres.wale

/srv/pillar/postgres/init.sls

postgres:
  version: 9.3
  data_directory: '/var/lib/postgresql/9.3/main'

/srv/pillar/postgres/wale.sls

include:
  - postgres

postgres:
  wale:
    s3_bucket: 's3://bucket-name/'
    aws_access_key_id: KEY_ID_HERE
    aws_secret_access_key: SECRET_KEY_HERE

Should generate pillar data that would look like this:

postgres:
  version: 9.3
  data_directory: '/var/lib/postgresql/9.3/main'
  wale:
    s3_bucket: 's3://bucket-name/'
    aws_access_key_id: KEY_ID_HERE
    aws_secret_access_key: SECRET_KEY_HERE
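The behavior requested here amounts to a recursive dictionary merge of the included pillar data. A minimal Python sketch of that merge (not Salt's actual implementation, just an illustration of the requested semantics) might look like:

```python
import copy

def merge_pillar(base, overlay):
    """Recursively merge overlay into a copy of base.

    Nested dicts are merged key by key; any other value in
    overlay replaces the corresponding value in base.
    """
    result = copy.deepcopy(base)
    for key, value in overlay.items():
        if isinstance(value, dict) and isinstance(result.get(key), dict):
            result[key] = merge_pillar(result[key], value)
        else:
            result[key] = value
    return result

# The two pillar files from the example above, as Python dicts:
init_sls = {
    "postgres": {
        "version": 9.3,
        "data_directory": "/var/lib/postgresql/9.3/main",
    }
}
wale_sls = {
    "postgres": {
        "wale": {
            "s3_bucket": "s3://bucket-name/",
            "aws_access_key_id": "KEY_ID_HERE",
            "aws_secret_access_key": "SECRET_KEY_HERE",
        }
    }
}

merged = merge_pillar(init_sls, wale_sls)
```

With this merge, `merged["postgres"]` contains `version`, `data_directory`, and the nested `wale` dict, matching the expected output above.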
@basepi
Contributor

basepi commented Sep 17, 2013

This would be useful. Thanks for the request.

@ryan-lane
Contributor

+1 I have a pretty significant need for this. I have a deployment system that configures itself via pillars, where each repository has separate configuration. Some targets need multiple repositories, but not all targets should be able to read all of the repo pillars. So, I have:

In deployment/mediawiki_slot0.sls:

repo_config:
  mediawiki/slot0:
    grain: mediawiki
    checkout_submodules: True

In deployment/mediawiki_slot1.sls:

repo_config:
  mediawiki/slot1:
    grain: mediawiki
    checkout_submodules: True

In parsoid_Parsoid.sls:

repo_config:
  parsoid/Parsoid:
    grain: parsoid
    checkout_module_calls: parsoid.restart_parsoid

In parsoid_config.sls:

repo_config:
  parsoid/config:
    grain: parsoid
    checkout_module_calls: parsoid.restart_parsoid

In the above situation, I'd want to include mediawiki_slot0 and mediawiki_slot1 on one set of systems and parsoid_Parsoid and parsoid_config on another. So, I'd want my top to look like:

'mw*':
  - deployment/mediawiki_slot0
  - deployment/mediawiki_slot1
'parsoid*':
  - deployment/parsoid_Parsoid
  - deployment/parsoid_config

I'd want the generated pillar data on mw* to look like:

repo_config:
    ----------
    mediawiki/slot0:
        ----------
        checkout_submodules:
            True
        grain:
            mediawiki
    mediawiki/slot1:
        ----------
        checkout_submodules:
            True
        grain:
            mediawiki

And the generated pillar data on parsoid* to look like:

repo_config:
    ----------
    parsoid/Parsoid:
        ----------
        checkout_module_calls:
            - parsoid.restart_parsoid
        grain:
            parsoid
    parsoid/config:
        ----------
        checkout_module_calls:
            - parsoid.restart_parsoid
        grain:
            parsoid

This way, only specifically targeted systems can read the pillar data for their repos. The alternative is to name the pillar with the repo name, but that involves a lot of not-so-nice logic in my modules and states.

@ryan-lane
Contributor

Maybe it would be good to be able to specify whether the pillar will be merged or will overwrite, with a default of overwriting, so as to preserve backwards compatibility.
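The backwards-compatible option suggested here could be sketched as a combine function that takes a strategy flag, defaulting to the current overwrite behavior (hypothetical names, not Salt's API):

```python
def combine_pillar(existing, incoming, strategy="overwrite"):
    """Combine two pillar dicts.

    strategy="overwrite" (the default, matching current behavior)
    replaces top-level keys wholesale; strategy="merge" merges
    nested dicts recursively.
    """
    if strategy == "overwrite":
        merged = dict(existing)
        merged.update(incoming)
        return merged
    result = dict(existing)
    for key, value in incoming.items():
        if isinstance(value, dict) and isinstance(result.get(key), dict):
            result[key] = combine_pillar(result[key], value, strategy="merge")
        else:
            result[key] = value
    return result

a = {"postgres": {"version": 9.3}}
b = {"postgres": {"wale": {"s3_bucket": "s3://bucket-name/"}}}

# With the default, b's "postgres" key replaces a's entirely;
# with strategy="merge", both sub-keys survive.
overwritten = combine_pillar(a, b)
merged = combine_pillar(a, b, strategy="merge")
```

Defaulting to overwrite keeps existing top files working unchanged, while opting in to merge gives the behavior requested in this issue.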

@ndbroadbent

+1, it would allow much more flexible configuration.

@basepi
Contributor

basepi commented Nov 1, 2013

Now that I look at this one again, what differentiates it from #3991? Just curious if I'm missing something.

@ryan-lane
Contributor

Yeah, looks the same.

@micahhausler
Author

Yeah, I believe it is the same; you can close it as a duplicate.

@basepi
Contributor

basepi commented Nov 1, 2013

Alright, I'm going to close this one. I also wanted to say that this is a totally legitimate use case and we haven't forgotten about you guys! We definitely want to get this feature in.

@basepi basepi closed this as completed Nov 1, 2013