This is a success report rather than an issue. Please let me know if you would like me to submit a pull request to update the docs.
In my case I made it work with podman, using pods.
I'm using Ansible to deploy containers, but the syntax is pretty similar to docker compose.
```yaml
# ansible podman task
- name: "Create sslh-co pod"
  containers.podman.podman_pod:
    name: sslh-co  # read as "sslh and company"
    state: started
    ports:
      - "80:80"
      - "443:444"
      # ... (more ports if needed)
    network:
      - '{{ containers.config.network }}'
      # other services on this network can reach containers in the pod,
      # e.g. prometheus can read caddy metrics at sslh-co:2020; caddy
      # itself can also connect to other services to act as a reverse proxy

- name: "Create the sslh container"
  containers.podman.podman_container:
    name: sslh
    image: "yrutschle/sslh:latest"
    pod: sslh-co
    capabilities:
      - NET_RAW
      - NET_BIND_SERVICE
      - NET_ADMIN
    sysctl:
      net.ipv4.conf.default.route_localnet: 1
      net.ipv4.conf.all.route_localnet: 1
    expose:
      - 444
    volume:
      # ... (make sure to mount config as you like)
    # --transparent is needed to trigger configure_iptables in the init script
    command: --transparent -F/etc/sslh/sslh.cfg
    state: started

- name: "Create the caddy container"
  containers.podman.podman_container:
    name: caddy
    image: "lucaslorentz/caddy-docker-proxy:alpine"  # regular caddy or nginx image will also work
    pod: sslh-co
    expose:
      - 80
      - 443
      - 2020  # metrics, since caddy-docker-proxy uses :2019 internally
    volume:
      # ... (mount your configs and other stuff here)
      - "/var/run/podman/podman.sock:/var/run/docker.sock"
    state: started
  notify: podman restart sslh

- name: "Create the SSH proxy to host container"
  containers.podman.podman_container:
    name: ssh-proxy
    image: "alpine/socat:latest"
    pod: sslh-co
    expose:
      - 222
    command: TCP-LISTEN:222,fork TCP:host.containers.internal:22
    state: started
```
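For anyone not using Ansible, an untested plain-podman equivalent might look roughly like this (the network name `mynet` and the `sslh.cfg` mount path are placeholders, not from the original report):

```sh
podman pod create --name sslh-co --network mynet -p 80:80 -p 443:444

podman run -d --name sslh --pod sslh-co \
  --cap-add NET_RAW --cap-add NET_BIND_SERVICE --cap-add NET_ADMIN \
  --sysctl net.ipv4.conf.default.route_localnet=1 \
  --sysctl net.ipv4.conf.all.route_localnet=1 \
  -v ./sslh.cfg:/etc/sslh/sslh.cfg:ro \
  yrutschle/sslh:latest --transparent -F/etc/sslh/sslh.cfg

podman run -d --name caddy --pod sslh-co \
  -v /var/run/podman/podman.sock:/var/run/docker.sock \
  lucaslorentz/caddy-docker-proxy:alpine

podman run -d --name ssh-proxy --pod sslh-co \
  alpine/socat:latest TCP-LISTEN:222,fork TCP:host.containers.internal:22
```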
I omitted the caddy configs here as they're not important. Some unrelated container options were also dropped.
In the example above, `sslh`, `caddy` and `ssh-proxy` are 3 containers in the same pod, all listening on `localhost`. SSLH has to listen on 444 because caddy already listens on 443, and reconfiguring caddy's port is more complex because of Let's Encrypt (caddy itself "thinks" it is bound to your host interface).
The port mapping scheme (all containers share the same `localhost`; a minimal `sslh.cfg` sketch follows the list):

host 443 → pod 444 (sslh) → pod 443 (caddy)

- podman maps the host's port 443 to the pod's port 444
- `sslh` listens on 444 on `localhost` and reroutes TLS to `caddy` on `localhost:443`
- `caddy` listens on 443 on `localhost` (and reverse-proxies to other apps on the private network)
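The report doesn't include the sslh config itself; a minimal `sslh.cfg` implementing this scheme might look like the following (an assumption, adjust to your setup):

```
foreground: true;
transparent: true;

# listen inside the pod on 444 (the host's 443 is mapped here by podman)
listen: ( { host: "0.0.0.0"; port: "444"; } );

protocols: (
  { name: "ssh"; host: "localhost"; port: "222"; },  # -> ssh-proxy (socat)
  { name: "tls"; host: "localhost"; port: "443"; }   # -> caddy
);
```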
Reverse-proxied services get the correct client IP in the `X-Forwarded-For` header.
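As a quick end-to-end check (hostnames are placeholders, not from the original report):

```sh
# From a remote machine; example.com stands in for the host running the pod.
ssh -p 443 user@example.com    # sslh classifies the stream as SSH -> socat -> host sshd
curl -sI https://example.com/  # sslh classifies it as TLS -> caddy -> backend
```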
Takeaways:
- `net.ipv4.conf.default.route_localnet` is set only in the container, not on the host, which is nice.
- the same goes for the iptables and route rules applied by the init script in the `sslh` container (see the sketch after this list)
- `sslh` proxying to other services on the private network will not work (because transparent mode is enabled), even if the other service does not need the real client IP
- with transparent mode, every service `sslh` connects to should be attached to this pod, so that it listens on the pod's `localhost`; consequently:
  - container ports must not clash
  - ports should be published at the pod level, not on individual containers
  - the containers themselves should not be configured to join custom networks
- because of the above, proxying SSH to the host will also not work out of the box: `sslh` must connect to `localhost`, not to an arbitrary IP. While adding yet another proxy container to the same pod sounds like overkill, I don't see any other solution, so `socat` is used as an additional proxy.
- `reverse_proxy` by caddy to other containers on the custom network the pod is attached to works (i.e. caddy can connect to other IPs on the custom network). `socat` can likewise connect to the host and/or other containers on the custom network.
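For reference, the transparent-mode plumbing the image's init script sets up inside the container is along these lines (a simplified sketch based on sslh's transparent-proxy documentation, not copied from the image, so details may differ):

```sh
# Route marked packets back through the loopback interface so sslh can
# spoof the client's source address on connections to local backends.
ip rule add fwmark 0x1 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100

# A mangle chain that marks traffic which should take that path.
iptables -t mangle -N SSLH
iptables -t mangle -A SSLH --jump MARK --set-mark 0x1
iptables -t mangle -A SSLH --jump ACCEPT
# ...plus rules steering the backends' reply traffic into the SSLH chain.
```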