
[BUG] External Auth with Syndic not Working as Expected #62618

Closed
doesitblend opened this issue Sep 2, 2022 · 7 comments

@doesitblend (Collaborator)

Description

There appears to be an issue when attempting to target minions from a Master of Masters (MoM) with external authentication configured. When I use an external authentication configuration like this example, everything works great.

external_auth:
  pam:
    saltdev:
      - .*

However, as soon as I try to limit the target and use salt -a pam to authenticate and run the command, I get an authentication failure.

external_auth:
  pam:
    saltdev:
      - 'G@os:Ubuntu':
        - '.*'

Here is the error that I see from the CLI:

[root@mom salt]# salt -l trace  -a pam eed40f20a2bb test.ping
/usr/local/lib/python3.6/site-packages/OpenSSL/crypto.py:8: CryptographyDeprecationWarning: Python 3.6 is no longer supported by the Python core team. Therefore, support for it is deprecated in cryptography and will be removed in a future release.
  from cryptography import utils, x509
[DEBUG   ] Reading configuration from /etc/salt/master
[DEBUG   ] Including configuration from '/etc/salt/master.d/ext_auth.conf'
[DEBUG   ] Reading configuration from /etc/salt/master.d/ext_auth.conf
[DEBUG   ] Including configuration from '/etc/salt/master.d/rest_api.conf'
[DEBUG   ] Reading configuration from /etc/salt/master.d/rest_api.conf
[DEBUG   ] Missing configuration file: /root/.saltrc
[DEBUG   ] Using importlib_metadata to load entry points
[TRACE   ] The required configuration section, 'fluent_handler', was not found the in the configuration. Not loading the fluent logging handlers module.
[TRACE   ] None of the required configuration sections, 'logstash_udp_handler' and 'logstash_zmq_handler', were found in the configuration. Not loading the Logstash logging handlers module.
[DEBUG   ] Override  __grains__: <module 'salt.loaded.int.log_handlers.sentry_mod' from '/usr/lib/python3.6/site-packages/salt/log/handlers/sentry_mod.py'>
[DEBUG   ] Configuration file path: /etc/salt/master
[WARNING ] Insecure logging configuration detected! Sensitive data may be logged.
[DEBUG   ] Reading configuration from /etc/salt/master
[DEBUG   ] Including configuration from '/etc/salt/master.d/ext_auth.conf'
[DEBUG   ] Reading configuration from /etc/salt/master.d/ext_auth.conf
[DEBUG   ] Including configuration from '/etc/salt/master.d/rest_api.conf'
[DEBUG   ] Reading configuration from /etc/salt/master.d/rest_api.conf
[DEBUG   ] Missing configuration file: /root/.saltrc
[DEBUG   ] MasterEvent PUB socket URI: /var/run/salt/master/master_event_pub.ipc
[DEBUG   ] MasterEvent PULL socket URI: /var/run/salt/master/master_event_pull.ipc
[DEBUG   ] LazyLoaded pam.auth
username: saltdev
password:
[DEBUG   ] Connecting the Minion to the Master URI (for the return server): tcp://127.0.0.1:4506
[DEBUG   ] Trying to connect to: tcp://127.0.0.1:4506
[TRACE   ] IPCClient: Connecting to socket: /var/run/salt/master/master_event_pub.ipc
[DEBUG   ] Closing AsyncZeroMQReqChannel instance
[DEBUG   ] Closing IPCMessageSubscriber instance
[DEBUG   ] Using importlib_metadata to load entry points
[DEBUG   ] LazyLoaded nested.output
[TRACE   ] data = Authorization error occurred.
Authorization error occurred.

I am attaching logs for further investigation of this issue, which appears to be a bug in the Syndic. When I run the same command directly from the master that the minion is connected to, authentication works as expected.

Setup
MoM -> Syndic master -> minion

These are all fresh VMs deployed via Salt Cloud, running Salt 3004.2.
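
For reference, the syndic chain is wired up roughly like this (a minimal sketch; mom.example.com is just a placeholder for the MoM's address):

# Master of Masters: /etc/salt/master
order_masters: True

# Syndic master: /etc/salt/master (the salt-syndic service also runs on this box)
syndic_master: mom.example.com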

Steps to Reproduce the behavior

Add a user saltdev to your Master of Masters and configure the password for this user.
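
For example, on the MoM (assuming a typical Linux setup; any password works as long as PAM can authenticate it):

useradd saltdev
passwd saltdev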

external_auth:
  pam:
    saltdev:
      - .*

With the above configuration, running a command with salt -a pam for authentication should work. Then update the configuration to be like the following and restart your Master of Masters:

external_auth:
  pam:
    saltdev:
      - 'G@os:Ubuntu':
        - '.*'
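
Restarting the MoM so the new external_auth configuration is picked up (assuming a systemd-managed install):

systemctl restart salt-master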

Expected behavior
I would expect the user to be able to target the intended minion; however, with this configuration I am unable to run any commands against any minion.

Versions Report

Master of Masters and Salt master version info (salt --versions-report):
Salt Version:
          Salt: 3004.2

Dependency Versions:
          cffi: 1.15.1
      cherrypy: unknown
      dateutil: Not Installed
     docker-py: Not Installed
         gitdb: Not Installed
     gitpython: Not Installed
        Jinja2: 2.11.1
       libgit2: Not Installed
      M2Crypto: 0.35.2
          Mako: Not Installed
       msgpack: 0.6.2
  msgpack-pure: Not Installed
  mysql-python: Not Installed
     pycparser: 2.21
      pycrypto: Not Installed
  pycryptodome: Not Installed
        pygit2: Not Installed
        Python: 3.6.8 (default, Nov 16 2020, 16:55:22)
  python-gnupg: Not Installed
        PyYAML: 3.13
         PyZMQ: 17.0.0
         smmap: Not Installed
       timelib: Not Installed
       Tornado: 4.5.3
           ZMQ: 4.1.4

System Versions:
          dist: centos 7 Core
        locale: UTF-8
       machine: x86_64
       release: 3.10.0-1160.62.1.el7.x86_64
        system: Linux
       version: CentOS Linux 7 Core

-------------------------------

Minion version info (salt --versions-report):
Salt Version:
          Salt: 3004.2
 
Dependency Versions:
          cffi: Not Installed
      cherrypy: Not Installed
      dateutil: Not Installed
     docker-py: Not Installed
         gitdb: Not Installed
     gitpython: Not Installed
        Jinja2: 3.1.2
       libgit2: Not Installed
      M2Crypto: Not Installed
          Mako: Not Installed
       msgpack: 1.0.4
  msgpack-pure: Not Installed
  mysql-python: Not Installed
     pycparser: Not Installed
      pycrypto: Not Installed
  pycryptodome: 3.15.0
        pygit2: Not Installed
        Python: 3.8.10 (default, Jun 22 2022, 20:18:18)
  python-gnupg: Not Installed
        PyYAML: 6.0
         PyZMQ: 21.0.2
         smmap: Not Installed
       timelib: Not Installed
       Tornado: 4.5.3
           ZMQ: 4.3.3
 
System Versions:
          dist: ubuntu 20.04 Focal Fossa
        locale: utf-8
       machine: x86_64
       release: 3.10.0-1160.76.1.el7.x86_64
        system: Linux
       version: Ubuntu 20.04 Focal Fossa

Additional context
Please see the attached logs for more information.

logs.tar.gz

doesitblend added the Bug, needs-triage, and VMware labels Sep 2, 2022
whytewolf removed their assignment Sep 12, 2022
@dwoz (Contributor) commented Sep 14, 2022

@waynew Isn't this something you looked at recently?

Ch3LL added this to the Phosphorus v3005.0 milestone Sep 14, 2022
@waynew (Contributor) commented Sep 16, 2022

Just dropping an update here in addition to discussions I've had behind the scenes. This title is probably a bit of a red herring. eauth actually is working exactly as it should. The larger problem is that syndic targeting is maybe kind of broken?

CVE-2022-22941 was recently fixed in Salt; prior to that fix, Salt would see an empty list of valid minions and publish the event anyway. So, given the example, saltdev would be able to target not only any minion with the os:Ubuntu grain, but also any other minion 😬

That was fixed.

However, the MoM currently doesn't retain any information about the syndics' minions, and AFAICT there isn't any mechanism in place that allows the MoM to publish a request asking the syndics for matching minions. Given Salt's powerful targeting capabilities, that's probably what should happen.

In other words:

  • When a salt call is made to an order_masters: True Salt master
  • An event should be published, asking all the syndics to provide matching targets
  • These minion IDs should be aggregated into a list on the MoM, including the MoM's local minions
  • This list should be compared via the existing mechanisms to ensure that all the minions are correct targets

That should solve this issue; a rough sketch of that flow follows.
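
Here is that sketch, in plain Python with made-up names; it illustrates the idea, not Salt's actual internals:

from typing import Dict, Iterable, Set

class SyndicProxy:
    """Stand-in for a syndic that can answer 'which of your minions match?'."""

    def __init__(self, minions: Dict[str, dict]):
        self.minions = minions  # minion_id -> grains

    def matching_minions(self, grain: str, value: str) -> Set[str]:
        return {mid for mid, g in self.minions.items() if g.get(grain) == value}

def resolve_targets(grain: str, value: str,
                    local_minions: Dict[str, dict],
                    syndics: Iterable[SyndicProxy]) -> Set[str]:
    """Aggregate matching minion IDs from the MoM's own minions and every syndic."""
    matched = {mid for mid, g in local_minions.items() if g.get(grain) == value}
    for syndic in syndics:
        matched |= syndic.matching_minions(grain, value)
    return matched

# The aggregated set then feeds the existing checks: publish only if it is
# non-empty and every matched minion is a permitted target for this eauth user.
if __name__ == '__main__':
    syndics = [SyndicProxy({'eed40f20a2bb': {'os': 'Ubuntu'}})]
    print(resolve_targets('os', 'Ubuntu', {'local-minion': {'os': 'CentOS'}}, syndics))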

@waynew (Contributor) commented Sep 22, 2022

I have something that... might work. Horribly. But it might work.

Try this: create a Python script:

import salt.cache.localfs as localfs

# This must match wherever salt's cache lives; /var/cache/salt/master is the default.
CACHEDIR = '/var/cache/salt/master'

# The master keeps each minion's grains under the 'data' key of the
# 'minions/<minion_id>' bank (localfs.store takes bank, key, data, cachedir),
# so mirror that layout here.
grains = {'os': 'Ubuntu'}
localfs.store('minions/yourminion', 'data', {'grains': grains}, CACHEDIR)

Then run it with your salt's Python:

$(salt-call --local grains.get pythonexecutable --out=text | awk '{ print $2 }') path/to/that/script.py

Now, based on my very limited testing, you should be able to run your command, though you'll have to use a compound matcher:

salt -a pam -C 'G@os:Ubuntu' test.ping

In my local testing it's not actually returning for some reason (the CLI never comes back), but if I run async and then salt-run jobs.lookup_jid on the syndic, not the Master of Masters, I do get a return 🤷
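
Concretely, that looks something like this (the job ID below is just a placeholder):

# on the Master of Masters: kick the job off asynchronously and note the job ID it prints
salt --async -a pam -C 'G@os:Ubuntu' test.ping

# on the syndic: look up the return for that job ID
salt-run jobs.lookup_jid 20220922123456789012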

So... it's a horrible workaround but it might be good enough 🤞

@doesitblend (Collaborator, Author)

@waynew Thanks for looking. I will try this. Does this confirm that this is a "will not fix" type issue then?

@waynew (Contributor) commented Sep 23, 2022

Well, we do want to provide a fix, but it won't make it into the upcoming bugfix release, due to the scope of the required change 😥 We do plan to fix it for the 3006 release.

@cmacnevin

When is the 3006 release planned? This is a major issue for us, still.

@waynew (Contributor) commented Jan 13, 2023

Fixed by #63382

@cmacnevin we're nearly to RC1 for 3006. I don't know if we have a firm date for when we're going to release RC1, but 3006 is planned to be our first LTS release, so definitely please keep an eye out for that and test it as soon as possible!

waynew closed this as completed Jan 13, 2023
dwoz mentioned this issue Jan 15, 2023
dwoz removed the needs-triage label Jan 15, 2023