
Failed to pull image when using system proxy #14087

Closed

y0zong opened this issue May 3, 2022 · 9 comments
Labels
kind/bug: Categorizes issue or PR as related to a bug.
locked - please file new issue/PR: Assist humans wanting to comment on an old issue or PR with locked comments.
machine
macos: MacOS (OSX) related
remote: Problem is in podman-remote

Comments


y0zong commented May 3, 2022

Description

In China we have to use a proxy to reach Docker Hub and pull images.

Proxy setting (this works fine with Docker):
http_proxy=socks5://127.0.0.1:7890

Access test: docker.io resolves and the connection succeeds through the proxy:

curl -vv registry-1.docker.io
* Uses proxy env variable http_proxy == 'socks5://127.0.0.1:7890'
*   Trying 127.0.0.1:7890...
* SOCKS5 connect to IPv4 34.237.244.67:80 (locally resolved)
* SOCKS5 request granted.
* Connected to 127.0.0.1 (127.0.0.1) port 7890 (#0)
> GET / HTTP/1.1
> Host: registry-1.docker.io
> User-Agent: curl/7.79.1
> Accept: */*
> 
* Mark bundle as not supporting multiuse
< HTTP/1.1 301 Moved Permanently
< content-length: 0
< location: https://registry-1.docker.io/
< 
* Connection #0 to host 127.0.0.1 left intact

podman login also succeeds:

podman login docker.io -u y0zong
Password:

But the connection is refused when pulling an image:

podman-compose -f docker-compose.yml -f docker-compose.without-nginx.yml up -d
['podman', '--version', '']
using podman version: 4.0.2
 ** merged:
 {
  "_dirname": "/Users/orlowang/Projects/tools/matttermost",
  "version": "2.4",
  "services": {
    "postgres": {
      "container_name": "postgres_mattermost",
      "image": "postgres:13-alpine",
      "restart": "unless-stopped",
      "security_opt": [
        "no-new-privileges:true"
      ],
      "pids_limit": 100,
      "read_only": true,
      "tmpfs": [
        "/tmp",
        "/var/run/postgresql"
      ],
      "volumes": [
        "./volumes/db/var/lib/postgresql/data:/var/lib/postgresql/data"
      ],
      "environment": {
        "TZ": null,
        "POSTGRES_USER": null,
        "POSTGRES_PASSWORD": null,
        "POSTGRES_DB": null
      }
    },
    "mattermost": {
      "depends_on": [
        "postgres"
      ],
      "container_name": "mattermost",
      "image": "mattermost/mattermost-enterprise-edition:6.3",
      "restart": "unless-stopped",
      "security_opt": [
        "no-new-privileges:true"
      ],
      "pids_limit": 200,
      "read_only": "false",
      "tmpfs": [
        "/tmp"
      ],
      "volumes": [
        "./volumes/app/mattermost/config:/mattermost/config:rw",
        "./volumes/app/mattermost/data:/mattermost/data:rw",
        "./volumes/app/mattermost/logs:/mattermost/logs:rw",
        "./volumes/app/mattermost/plugins:/mattermost/plugins:rw",
        "./volumes/app/mattermost/client/plugins:/mattermost/client/plugins:rw",
        "./volumes/app/mattermost/bleve-indexes:/mattermost/bleve-indexes:rw"
      ],
      "environment": {
        "TZ": null,
        "MM_SQLSETTINGS_DRIVERNAME": null,
        "MM_SQLSETTINGS_DATASOURCE": null,
        "MM_BLEVESETTINGS_INDEXDIR": null,
        "MM_SERVICESETTINGS_SITEURL": null
      },
      "ports": [
        "8065:8065"
      ]
    }
  }
}
** excluding:  set()
['podman', 'network', 'exists', 'matttermost_default']
podman run --name=postgres_mattermost -d --security-opt no-new-privileges:true --read-only --label io.podman.compose.config-hash=123 --label io.podman.compose.project=matttermost --label io.podman.compose.version=0.0.1 --label com.docker.compose.project=matttermost --label com.docker.compose.project.working_dir=/Users/orlowang/Projects/tools/matttermost --label com.docker.compose.project.config_files=docker-compose.yml,docker-compose.without-nginx.yml --label com.docker.compose.container-number=1 --label com.docker.compose.service=postgres -e TZ -e POSTGRES_USER -e POSTGRES_PASSWORD -e POSTGRES_DB --tmpfs /tmp --tmpfs /var/run/postgresql -v /Users/orlowang/Projects/tools/matttermost/volumes/db/var/lib/postgresql/data:/var/lib/postgresql/data --net matttermost_default --network-alias postgres --restart unless-stopped postgres:13-alpine
Resolving "postgres" using unqualified-search registries (/etc/containers/registries.conf.d/999-podman-machine.conf)
Trying to pull docker.io/library/postgres:13-alpine...
Error: initializing source docker://postgres:13-alpine: pinging container registry registry-1.docker.io: Get "https://registry-1.docker.io/v2/": proxyconnect tcp: dial tcp 127.0.0.1:7890: connect: connection refused
exit code: 125
podman start postgres_mattermost
Error: no container with name or ID "postgres_mattermost" found: no such container
exit code: 125
['podman', 'network', 'exists', 'matttermost_default']
podman run --name=mattermost -d --security-opt no-new-privileges:true --read-only --label io.podman.compose.config-hash=123 --label io.podman.compose.project=matttermost --label io.podman.compose.version=0.0.1 --label com.docker.compose.project=matttermost --label com.docker.compose.project.working_dir=/Users/orlowang/Projects/tools/matttermost --label com.docker.compose.project.config_files=docker-compose.yml,docker-compose.without-nginx.yml --label com.docker.compose.container-number=1 --label com.docker.compose.service=mattermost -e TZ -e MM_SQLSETTINGS_DRIVERNAME -e MM_SQLSETTINGS_DATASOURCE -e MM_BLEVESETTINGS_INDEXDIR -e MM_SERVICESETTINGS_SITEURL --tmpfs /tmp -v /Users/orlowang/Projects/tools/matttermost/volumes/app/mattermost/config:/mattermost/config:rw -v /Users/orlowang/Projects/tools/matttermost/volumes/app/mattermost/data:/mattermost/data:rw -v /Users/orlowang/Projects/tools/matttermost/volumes/app/mattermost/logs:/mattermost/logs:rw -v /Users/orlowang/Projects/tools/matttermost/volumes/app/mattermost/plugins:/mattermost/plugins:rw -v /Users/orlowang/Projects/tools/matttermost/volumes/app/mattermost/client/plugins:/mattermost/client/plugins:rw -v /Users/orlowang/Projects/tools/matttermost/volumes/app/mattermost/bleve-indexes:/mattermost/bleve-indexes:rw --net matttermost_default --network-alias mattermost -p 8065:8065 --restart unless-stopped mattermost/mattermost-enterprise-edition:6.3
Resolving "mattermost/mattermost-enterprise-edition" using unqualified-search registries (/etc/containers/registries.conf.d/999-podman-machine.conf)
Trying to pull docker.io/mattermost/mattermost-enterprise-edition:6.3...
Error: initializing source docker://mattermost/mattermost-enterprise-edition:6.3: pinging container registry registry-1.docker.io: Get "https://registry-1.docker.io/v2/": proxyconnect tcp: dial tcp 127.0.0.1:7890: connect: connection refused
exit code: 125
podman start mattermost
Error: no container with name or ID "mattermost" found: no such container
exit code: 125

Key line in the error:
pinging container registry registry-1.docker.io: Get "https://registry-1.docker.io/v2/": proxyconnect tcp: dial tcp 127.0.0.1:7890: connect: connection refused

I don't know whether podman uses a ping result to decide the connection status; ping can fail while the connection is still alive when a SOCKS5 proxy is in use (that's why I tested with curl -vv instead of ping).

Please help.

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind bug (maybe)

Steps to reproduce the issue:

  1. Set the system proxy (http_proxy)

  2. Start the project with podman-compose (a minimal reproduction sketch follows)
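
A minimal reproduction sketch (assumes a local SOCKS5 proxy listening on 127.0.0.1:7890, as in this report):

export http_proxy=socks5://127.0.0.1:7890
podman machine init && podman machine start   # the machine picks up the proxy value
podman pull alpine
# fails with: proxyconnect tcp: dial tcp 127.0.0.1:7890: connect: connection refused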

Additional information you deem important (e.g. issue happens only occasionally):

Output of podman version:

podman version
Client:       Podman Engine
Version:      4.0.2
API Version:  4.0.2
Go Version:   go1.17.8

Built:      Wed Mar  2 22:04:36 2022
OS/Arch:    darwin/amd64

Server:       Podman Engine
Version:      4.0.3
API Version:  4.0.3
Go Version:   go1.18

Built:      Sat Apr  2 02:21:54 2022
OS/Arch:    linux/amd64

Output of podman info --debug:

podman info --debug
host:
  arch: amd64
  buildahVersion: 1.24.3
  cgroupControllers:
  - cpu
  - io
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-2.1.0-2.fc36.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.0, commit: '
  cpus: 1
  distribution:
    distribution: fedora
    variant: coreos
    version: "36"
  eventLogger: journald
  hostname: localhost.localdomain
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 1000000
    uidmap:
    - container_id: 0
      host_id: 502
      size: 1
    - container_id: 1
      host_id: 100000
      size: 1000000
  kernel: 5.17.3-300.fc36.x86_64
  linkmode: dynamic
  logDriver: journald
  memFree: 1363165184
  memTotal: 2066817024
  networkBackend: netavark
  ociRuntime:
    name: crun
    package: crun-1.4.4-1.fc36.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 1.4.4
      commit: 6521fcc5806f20f6187eb933f9f45130c86da230
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL
  os: linux
  remoteSocket:
    exists: true
    path: /run/user/502/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: true
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.2.0-0.2.beta.0.fc36.x86_64
    version: |-
      slirp4netns version 1.2.0-beta.0
      commit: 477db14a24ff1a3de3a705e51ca2c4c1fe3dda64
      libslirp: 4.6.1
      SLIRP_CONFIG_VERSION_MAX: 3
      libseccomp: 2.5.3
  swapFree: 0
  swapTotal: 0
  uptime: 3h 10m 6.18s (Approximately 0.12 days)
plugins:
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  volume:
  - local
registries:
  search:
  - docker.io
store:
  configFile: /var/home/core/.config/containers/storage.conf
  containerStore:
    number: 0
    paused: 0
    running: 0
    stopped: 0
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /var/home/core/.local/share/containers/storage
  graphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 0
  runRoot: /run/user/502/containers
  volumePath: /var/home/core/.local/share/containers/storage/volumes
version:
  APIVersion: 4.0.3
  Built: 1648837314
  BuiltTime: Sat Apr  2 02:21:54 2022
  GitCommit: ""
  GoVersion: go1.18
  OsArch: linux/amd64
  Version: 4.0.3

Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide? (https://github.com/containers/podman/blob/main/troubleshooting.md)

Yes

Additional environment details (AWS, VirtualBox, physical, etc.):

macOS Monterey

openshift-ci bot (Contributor) commented May 3, 2022

@y0zong: The label(s) kind/(maybe) cannot be applied, because the repository doesn't have them.


openshift-ci bot added the kind/bug label May 3, 2022
github-actions bot added the macos and remote labels May 3, 2022
vrothberg (Member) commented:

Thanks for reaching out, @y0zong!

set system proxy (http_proxy)

Where do you set the proxy? Do you set it inside the podman machine?

y0zong (Author) commented May 3, 2022

Where do you set the proxy? Do you set it inside the podman machine?

No, on the host (macOS in my case). Maybe there's no need to set the proxy inside the podman machine when the host already uses one?

Luap99 (Member) commented May 3, 2022

The proxy is set correctly: proxyconnect tcp: dial tcp 127.0.0.1:7890: connect: connection refused

The problem is that your proxy is listening on 127.0.0.1. Obviously 127.0.0.1 inside the VM is a different address, so the VM cannot connect to the proxy on your actual host.

If you set http_proxy=socks5://host.containers.internal:7890 before you run podman machine init, it should work.
Maybe podman should just s/127.0.0.1/host.containers.internal/ and s/localhost/host.containers.internal/ automatically when it copies the proxy value from the host.
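
For example, a minimal sketch of the one-shot form (host.containers.internal is the hostname the VM uses to reach the host; port 7890 matches the proxy above):

# Set the variable for this command only, so the machine captures a host-reachable proxy
http_proxy=socks5://host.containers.internal:7890 podman machine init
podman machine start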

Luap99 added the machine label May 3, 2022
y0zong (Author) commented May 3, 2022

If you set http_proxy=socks5://host.containers.internal:7890 before you run podman machine init it should work.

Thanks for pointing this out, @Luap99. But should I add 127.0.0.1 host.containers.internal to the hosts file on the host OS? Setting http_proxy=socks5://host.containers.internal:7890 globally would break my proxy, and the proxy still has to work before podman machine init so that podman can pull the Fedora image.

I think it would be better if podman automatically mapped the host proxy value into the machine so the machine can read it correctly.

Luap99 (Member) commented May 3, 2022

If you run http_proxy=socks5://host.containers.internal:7890 podman machine init, it only sets the proxy variable for that single command, not for your system.

y0zong (Author) commented May 3, 2022

I just tested it and I think you are right, but I don't know why http_proxy=socks5://host.containers.internal:7890 podman machine init still hit the same error, while it worked as expected once I added 127.0.0.1 host.containers.internal to my hosts file.

However, it works now.

Add the entry below to the hosts file:

127.0.0.1 host.containers.internal

and change the proxy to:

http_proxy=socks5://host.containers.internal:7890

then everything works fine.
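
To verify the value actually landed inside the VM, something like this should work (a sketch; podman machine ssh can run a single command):

# On the macOS host: check which proxy the VM sees
podman machine ssh 'printenv http_proxy'
# expected: socks5://host.containers.internal:7890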

Many thanks @Luap99 for your help. I'm closing this issue since the problem is solved, but I still hope that one day podman can do this itself, with no need to change the proxy setting.

y0zong closed this as completed May 3, 2022
towry commented Oct 26, 2022

Update:

I figured out this issue: podman seems to copy the current shell's environment into the machine when I run podman machine init or podman machine start. I opened a new shell, made sure the http_proxy env var was not set, then stopped and started the podman machine, and everything works now 💯.
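
Something like the following should achieve the same without opening a new shell (a sketch; env -u unsets a variable for a single command):

podman machine stop
# start the machine from an environment with the proxy variables unset
env -u http_proxy -u https_proxy podman machine start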

Original post:

I don't have the $http_proxy var set in my host zsh terminal, but I still got this issue.

$> podman pull alpine

Resolved "alpine" as an alias (/etc/containers/registries.conf.d/000-shortnames.conf)
Trying to pull docker.io/library/alpine:latest...
Error: initializing source docker://alpine:latest: pinging container registry registry-1.docker.io: Get "https://registry-1.docker.io/v2/": proxyconnect tcp: dial tcp 127.0.0.1:1081: connect: connection refused

After I ssh into the machine:

> podman machine ssh

Connecting to vm podman-machine-default. To close connection, use `~.` or `exit`
Fedora CoreOS 36.20221014.2.0
Tracker: https://github.com/coreos/fedora-coreos-tracker
Discuss: https://discussion.fedoraproject.org/tag/coreos

[core@localhost ~]$ echo $http_proxy
http://127.0.0.1:1081
[core@localhost ~]$

How did this happen? Why does the machine have http_proxy set by default?

yckbilly1929 commented, quoting towry's update above:

I faced the same issue, and figured out that the proxy was being set in /etc/systemd/system.conf.d/default-env.conf with the following default value, which seems to override the value I set manually in /etc/systemd/system.conf.d/10-default-env.conf as suggested here:

[Manager]
#Got from QEMU FW_CFG
DefaultEnvironment=http_proxy="http://127.0.0.1:1081" https_proxy="http://127.0.0.1:1081"
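
Since systemd reads drop-ins in lexical order and a later assignment of the same variable should win, an override presumably needs a name that sorts after default-env.conf. A sketch inside the VM (the file name zz-proxy-override.conf is hypothetical):

# /etc/systemd/system.conf.d/zz-proxy-override.conf
[Manager]
DefaultEnvironment=http_proxy="socks5://host.containers.internal:7890" https_proxy="socks5://host.containers.internal:7890"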

BlackHole1 added a commit to BlackHole1/podman that referenced this issue Jun 20, 2023
When the `machine start` command is executed, Podman automatically retrieves the current host's `*_PROXY` environment variables and assigns them directly to the QEMU virtual machine. However, most `*_PROXY` values point at `127.0.0.1` or `localhost` (e.g. `127.0.0.1:8888`), which makes network-related operations inside the virtual machine fail because the proxy address is unreachable there.

Fixes: containers#14087
Signed-off-by: Black-Hole1 <[email protected]>
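
The rewrite described in the commit could look roughly like this on the host side (a sketch of the idea, not the actual patch):

# Substitute loopback addresses before handing the proxy value to the VM
http_proxy_for_vm=$(printf '%s' "$http_proxy" \
  | sed -e 's/127\.0\.0\.1/host.containers.internal/g' \
        -e 's/localhost/host.containers.internal/g')
echo "$http_proxy_for_vm"   # e.g. socks5://host.containers.internal:7890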
github-actions bot added the locked - please file new issue/PR label Sep 3, 2023
github-actions bot locked as resolved and limited conversation to collaborators Sep 3, 2023