
Salt SSH integration


The following page describes how salt-ssh is integrated with SUSE Manager to support the ssh-push and ssh-push-tunnel connection methods for Salt minions.

ssh-push overview

Similar to the traditional stack, Salt minions can optionally be managed over an ssh connection instead of ZeroMQ. This functionality is based on Salt SSH. With Salt SSH one can execute salt commands and states over ssh without installing a salt-minion.

The ssh connection is made on demand, when the server needs to execute an action on a minion. This is different from the always-connected mode used with ZeroMQ minions.

In SUSE Manager there are two ssh-push methods. In both cases the server initiates an ssh connection to the minion in order to execute the Salt call using salt-ssh. The difference between the two methods is how zypper/yum connects to the server repositories:

  • plain ssh-push: zypper works as usual. The http(s) connection to the server is done directly.
  • ssh-push-tunnel: the http(s) connection is done through a ssh tunnel created by the server. The http(s) connection initiated by zypper is redirected through the tunnel by means of /etc/hosts aliasing (see below). This method is intended to be used when there's a firewall that blocks http(s) connections from the minion to the server.

salt-ssh integration

As with any other Salt call, SUSE Manager invokes salt-ssh via the salt-api.

salt-ssh relies on a Roster to get details such as the hostname, port, and ssh parameters of an ssh minion. SUSE Manager keeps these details in the database and makes them available to Salt by generating a temporary Roster file for each salt-ssh call. The location of the temporary Roster file is supplied to salt-ssh using the --roster-file= option.
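For illustration, a minimal roster of this kind and a corresponding call could look like the following (the hostname and file name are made up; the real file is generated with a temporary name and removed after the call):

minion1.example.com:
  host: minion1.example.com
  user: root
  port: 22

salt-ssh --roster-file=/tmp/generated.roster minion1.example.com test.ping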

Authentication

salt-ssh supports both password and key authentication. SUSE Manager uses both methods:

  • for bootstrapping, password authentication is used. In this case the key of the server is not yet authorized on the minion, so a password is the only option. The password is used only transiently in the temporary roster file used for bootstrapping; it is never stored (see the sketch after this list).
  • all other calls use key authentication. During bootstrap the ssh key of the server is authorized on the minion (added to the minion's ~/.ssh/authorized_keys), so subsequent calls do not need a password anymore.
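As a sketch, a bootstrap roster entry then contains a passwd field instead of referencing the server key (values are placeholders):

minion1.example.com:
  host: minion1.example.com
  user: root
  passwd: <password entered during bootstrap>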

User account for salt-ssh calls

The user for salt-ssh calls made by SUSE Manager is taken from the ssh_push_sudo_user setting. The default value is root.

If the value of ssh_push_sudo_user is not root, then the --sudo option of salt-ssh is used.
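For example, to use a non-root user (assuming the setting is made in /etc/rhn/rhn.conf, where the other server settings live):

# /etc/rhn/rhn.conf
ssh_push_sudo_user = mgrsshpush

With a setting like this salt-ssh is invoked with --sudo, so the configured user must be allowed to run commands via sudo on the minion.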

ssh-push-tunnel http(s) redirection

For the ssh-push-tunnel method the traffic originating from zypper/yum has to be redirected through an ssh tunnel in order to bypass any firewall blocking a direct connection from the minion to the server.

This is achieved by using port 1233 in the repo URL:

https://suma-server:1233/repourl...

and by aliasing the suma-server hostname to localhost in /etc/hosts:

127.0.0.1       localhost    suma-server
....

The server creates a reverse ssh tunnel that connects localhost:1233 on the minion to suma-server:443 (ssh ... -R 1233:suma-server:443).

The result is that zypper/yum will actually connect to localhost:1233 which is then forwarded to suma-server:443 via the ssh tunnel.

This implies that zypper can contact the server only while the tunnel is open, which happens only when the server executes an action on the minion. Manual zypper operations that require server connectivity are not possible in this case.

SUSE Manager salt-ssh call sequence

  1. prepare the Salt Roster for the call
    1. create remote port forwarding option IF the contact method is ssh-push-tunnel
    2. compute the ProxyCommand IF the minion is connected through a proxy
    3. create the Roster content (see the roster sketch after this list):
      • hostname
      • user
      • port
      • remote_port_forwards: the remote port forwarding ssh option
      • ssh_options: other ssh options:
        • ProxyCommand if the minion connects through a SUMA proxy
      • timeout: default 180s
      • minion_opts:
        • master: set to the minion id if contact method is ssh-push-tunnel
  2. create a temporary Roster file
  3. execute a synchronous salt-ssh call via the API
  4. remove the temporary Roster file
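Put together, a generated temporary roster could look roughly like the following for an ssh-push-tunnel minion behind one proxy (all values, including the ProxyCommand, are illustrative):

minion1.example.com:
  host: minion1.example.com
  user: root
  port: 22
  remote_port_forwards: 1233:suma-server:443
  ssh_options:
    - ProxyCommand='/usr/bin/ssh -i /srv/susemanager/salt/salt_ssh/mgr_ssh_id -o User=mgrsshtunnel -W minion1.example.com:22 proxy1'
  timeout: 180
  minion_opts:
    master: minion1.example.com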


Bootstrap process sequence

Bootstrapping minions uses salt-ssh under the hood. This happens for both regular and ssh minions.

The bootstrap sequence is somewhat different from a regular salt-ssh call:

  1. for a regular minion generate and pre-authorize the Salt key of the minion
  2. if it is an ssh minion and a proxy was selected, retrieve the ssh public key of the proxy using the mgrutil.chain_ssh_cmd runner. The runner copies the public key of the proxy to the server using ssh. If needed it can chain multiple ssh commands to reach the proxy across multiple hops.
  3. generate pillar data for bootstrap. This contains (see the pillar sketch after this list):
    • mgr_server: the hostname of the SUSE Manager server
    • minion_id: the hostname of the minion to bootstrap
    • contact_method
    • mgr_sudo_user: the user for salt-ssh
    • activation_key if selected
    • minion_pub: the public minion key that was pre-authorized
    • minion_pem: the private minion key that was pre-authorized
    • proxy_pub_key: the public ssh key that was retrieved from the proxy if the target is an ssh minion and a proxy was selected
  4. if contact method is ssh-push-tunnel fill the remote port forwarding option
  5. if the minion connects through a SUMA proxy compute the ProxyCommand option. This depends on the path used to connect to the proxy, e.g. server -> proxy1 -> proxy2 -> minion
  6. generate the roster for bootstrap into a temporary file. This contains:
    • hostname
    • user
    • password
    • port
    • remote_port_forwards: the remote port forwarding ssh option
    • ssh_options: other ssh options:
      • ProxyCommand if the minion connects through a SUMA proxy
    • timeout: default 180s
  7. execute salt-ssh --roster-file=<temporary_bootstrap_roster> minion state.apply certs,<bootstrap_state> via the Salt API. Here <bootstrap_state> is either bootstrap for regular minions or ssh_bootstrap for ssh minions
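As referenced in step 3, the generated bootstrap pillar could look roughly like the following YAML (all values are placeholders, keys truncated):

mgr_server: suma-server.example.com
minion_id: minion1.example.com
contact_method: ssh-push-tunnel
mgr_sudo_user: root
activation_key: 1-default
minion_pub: |
  -----BEGIN PUBLIC KEY-----
  (pre-authorized public key)
minion_pem: |
  -----BEGIN RSA PRIVATE KEY-----
  (pre-authorized private key)
proxy_pub_key: ssh-rsa AAAA... mgrsshtunnel@proxy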

[Diagram: bootstrap process]


Proxy support

In order to make salt-ssh work with SUSE Manager proxies, the ssh connection is chained from one server/proxy to the next. This is also known as a multi-hop or multi-gateway ssh connection.

[Diagram: ssh multi-hop connection]

ProxyCommand

In order to redirect the ssh connection through the proxies, the ssh ProxyCommand option is used. This option invokes an arbitrary command that is expected to connect to the ssh port on the target host. The standard input and output of the command are used by the invoking ssh process to talk to the remote ssh daemon.

The ProxyCommand basically replaces the TCP/IP connection. It doesn't do any authorization, encryption, etc. Its role is simply to create a byte stream to the remote ssh daemon's port.

E.g. connecting to a server behind a gateway:

[Diagram: ssh ProxyCommand via a gateway]

In this example netcat (nc) is used to pipe port 22 of the target host into the ssh std i/o.
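In ssh client terms this corresponds to something like the following (gateway and target are placeholders; %h and %p expand to the target host and port):

# classic netcat-based ProxyCommand
ssh -o ProxyCommand='ssh gateway nc %h %p' target

# equivalent using ssh's built-in forwarding instead of netcat
ssh -o ProxyCommand='ssh -W %h:%p gateway' target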

salt-ssh call sequence via proxy

  1. SUSE Manager initiates the ssh connection as described above.
  2. Additionally, the ProxyCommand uses ssh to create a connection from the server to the minion through the proxies.

Example of a ProxyCommand using two proxies and the plain ssh-push method:

# 1
/usr/bin/ssh -i /srv/susemanager/salt/salt_ssh/mgr_ssh_id -o StrictHostKeyChecking=no -o User=mgrsshtunnel  proxy1 
# 2
/usr/bin/ssh -i /var/lib/spacewalk/mgrsshtunnel/.ssh/id_susemanager_ssh_push -o StrictHostKeyChecking=no -o User=mgrsshtunnel -W minion:22  proxy2
  1. connect from the server to the first proxy
  2. connect from the first proxy to the second and forward standard input/output on the client to minion:22 using the -W option.

[Diagram: ssh-push plain sequence]

Example of a ProxyCommand using two proxies and the ssh-push-tunnel method:

# 1
/usr/bin/ssh -i /srv/susemanager/salt/salt_ssh/mgr_ssh_id -o User=mgrsshtunnel  proxy1
# 2
/usr/bin/ssh -i /home/mgrsshtunnel/.ssh/id_susemanager_ssh_push -o User=mgrsshtunnel  proxy2 
# 3
/usr/bin/ssh -i /home/mgrsshtunnel/.ssh/id_susemanager_ssh_push -o User=root -R 1233:proxy2:443 minion
# 4
/usr/bin/ssh -i /root/.ssh/mgr_own_id -W minion:22 -o User=root minion
  1. connect from the server to the first proxy
  2. connect from the first proxy to the second
  3. connect from the second proxy to the minion and open a reverse tunnel (-R 1233:proxy2:443) from the minion to the https port on the second proxy
  4. connect from the minion to itself and forward the std i/o of the server to the ssh port of the minion (-W minion:22). This is equivalent to ssh ... proxy2 netcat minion 22 and is needed because ssh doesn't allow having both the reverse tunnel (-R 1233:proxy2:443) and the standard i/o forwarding (-W minion:22) in the same command.

[Diagram: ssh-push-tunnel sequence]


Users and ssh keys management

In order to connect to a proxy the parent server/proxy uses a specific user called mgrsshtunnel.

The sshd configuration (/etc/ssh/sshd_config) of the proxy forces the execution of /usr/sbin/mgr-proxy-ssh-force-cmd when mgrsshtunnel connects.

/usr/sbin/mgr-proxy-ssh-force-cmd is a simple shell script that allows only the execution of scp, ssh or cat commands.
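A minimal sketch of what such a forced-command wrapper can look like (illustrative, not the actual content of /usr/sbin/mgr-proxy-ssh-force-cmd):

#!/bin/bash
# Only allow the commands needed for key exchange and connection chaining.
case "$SSH_ORIGINAL_COMMAND" in
    ssh\ *|scp\ *|cat\ *)
        # Run the original command requested by the connecting side.
        exec $SSH_ORIGINAL_COMMAND
        ;;
    *)
        echo "ERROR: command not allowed: $SSH_ORIGINAL_COMMAND" >&2
        exit 1
        ;;
esac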

The connection to the proxy or minion is authorized using ssh keys in the following way:

  • The server connects to the minion and to the first proxy using the key in /srv/susemanager/salt/salt_ssh/mgr_ssh_id
  • Each proxy has its own key pair in /home/mgrsshtunnel/.ssh/id_susemanager_ssh_push
  • Each proxy authorizes the key of the parent proxy or server
  • The minion authorizes its own key

[Diagram: ssh key authorization]


Repository access via proxy

For both ssh-push and ssh-push-tunnel the minion connects to the proxy to retrieve packages and repo data.

The difference is how the connection works:

  • In the case of ssh-push, zypper or yum connects directly to the proxy using http(s). This assumes there's no firewall between the minion and the proxy that would block http connections initiated by the minion. [Diagram: ssh-push repo access]

  • In the case of ssh-push-tunnel, the http connection to the proxy is redirected through a reverse ssh tunnel. [Diagram: ssh-push-tunnel repo access]

Proxy setup

When the spacewalk-proxy package is installed on the proxy, the user mgrsshtunnel is created if it doesn't already exist.

During the initial configuration with configure-proxy.sh the following happens:

  1. generate an ssh key pair or import an existing one
  2. retrieve the ssh key of the parent server/proxy in order to authorize it on the proxy
  3. configure the sshd of the proxy to restrict the user mgrsshtunnel (see the sketch below)

This configuration is done by the mgr-proxy-ssh-push-init script. This is called from configure-proxy.sh and the user doesn't have to invoke it manually.
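As mentioned in step 3 above, the resulting sshd restriction could look roughly like this (illustrative; the actual configuration written by mgr-proxy-ssh-push-init may differ):

# /etc/ssh/sshd_config on the proxy
Match User mgrsshtunnel
    ForceCommand /usr/sbin/mgr-proxy-ssh-force-cmd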

Retrieving the parent key is done by calling an HTTP endpoint on the parent server or proxy (see the shell sketch below).

  1. First https://$PARENT/pub/id_susemanager_ssh_push.pub is tried. If the parent is a proxy this returns the public ssh key of that proxy.
  2. If a 404 is received, it's assumed the parent is a server, not a proxy, and https://$PARENT/rhn/manager/download/saltssh/pubkey is tried.
    1. If /srv/susemanager/salt/salt_ssh/mgr_ssh_id.pub already exists on the server it is returned.
    2. If the public key doesn't exist (because salt-ssh has not been invoked yet), the key is generated by calling the mgrutil.ssh_keygen runner.

Note: salt-ssh generates a key pair in /srv/susemanager/salt/salt_ssh/mgr_ssh_id the first time it is invoked. The sequence above is needed in case a proxy is configured before salt-ssh has been invoked for the first time.
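Sketched as shell, the retrieval logic is roughly the following (curl -f makes a 404 fail, so the fallback URL is tried):

curl -f https://$PARENT/pub/id_susemanager_ssh_push.pub \
  || curl -f https://$PARENT/rhn/manager/download/saltssh/pubkey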


Tips for debugging

  • ssh is executed by the salt-api process under the user salt
  • The salt-ssh calls made from SUSE Manager are logged to /var/log/salt/api
  • Set log_level:trace in /etc/salt/master to see the actual ssh command being executed or use -ltrace if executing from the cli
  • To make a call from the command line of the server prepare a roster file /tmp/roster:
      <minionid>:
          host: <minionhost>
          user: root
          port: 22
          timeout: 180
          ...
    and then run:
    sudo -u salt salt-ssh --roster-file=/tmp/roster --priv=/srv/susemanager/salt/salt_ssh/mgr_ssh_id <minionid> <cmd> 
  • In order for the proxies to work properly, SUSE Manager must know the fully qualified domain names of the proxies
  • The ProxyCommand option is executed by ssh using $SHELL -c ... under the hood. Check the value of the $SHELL environment variable in case of trouble with this option.
  • To run Salt manually on the minion in the same way that salt-ssh does, execute salt-call on the minion using the thin-dir tarball. To find out the exact command that is executed, make sure log_level is set to trace and look in /var/log/salt/api for a line similar to the one below:
    SALT_ARGV: ['/usr/bin/python2.7', '/tmp/.root_e1883a_salt/salt-call',
     '--retcode-passthrough', '--local', '--metadata', '--out', 'json', '-l',
      'quiet', '-c', '/tmp/.root_e1883a_salt', '--', 'cmd.run', 'hostname']
    
  • Set LogLevel DEBUG3 in /etc/ssh/ssh_config to see what the ssh client does under the hood
  • Set LogLevel DEBUG3 in /etc/ssh/sshd_config for the server
  • To check if the ssh port is open:
    nc <host> 22
    
  • To dump TCP packets: tcpdump
  • To use strace on the salt-api process and its forks (e.g. ssh) use:
    strace -p <salt-api-pid> -f
    

Client side log file for salt-ssh

The configuration needed to enable client-side logging for salt-ssh was added to /etc/salt/master.d/custom.conf. The log_level can be changed by editing this file.

On the client side, salt-ssh will use /var/log/salt-ssh.log for logging.
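A minimal sketch of such a file, assuming it simply sets the Salt log level:

# /etc/salt/master.d/custom.conf
log_level: debug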

Good to Know

Disable SSH strict host key checking during bootstrap process

The checkbox does what it says: it merely disables strict host key checking, meaning ssh does not expect the host key to already be in known_hosts before a call is made.

However, if a wrong key is already present in the known_hosts file, ssh still raises a warning.

noHostKeys Option

This option would point known_hosts to /dev/null: all existing keys are ignored and no new keys are added. However, this could lead to errors in future ssh calls when the option is no longer set.

For this reason the Uyuni bootstrap process does not offer this option.
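In terms of plain ssh client options, the two behaviors compare roughly like this:

# what the bootstrap checkbox enables: accept unknown host keys,
# but still complain about a mismatching key already in known_hosts
ssh -o StrictHostKeyChecking=no root@minion

# the rejected noHostKeys behavior: ignore known_hosts entirely,
# so no keys are checked and none are recorded
ssh -o UserKnownHostsFile=/dev/null root@minion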

Cleanup known_hosts

In Uyuni the salt master runs as the user salt. This user's home directory is /var/lib/salt/, and the known_hosts file in use is in the .ssh directory of that home directory.

When "delete System" is called in web UI or via spacecmd (API) the ssh key is removed from the known_hosts file. It is part of the cleanup process. The system must have a "hostname" and it needs to match the name in the known_hosts file. Renaming the client in the DNS can lead to problems.
