dns: DnsResolverImpl keeps using a "broken" c-ares channel #4543

Closed
ramaraochavali opened this issue Sep 27, 2018 · 30 comments · Fixed by #9899

@ramaraochavali
Contributor

We have a STRICT_DNS cluster defined in the bootstrap config. In one of our test Pods, the membership count of this cluster became zero. This is understandable, because DNS resolution might have returned zero hosts. However, it stayed that way for quite a long time, and only after killing the container was Envoy able to resolve the DNS name successfully again.

I captured debug logs while Envoy was unable to resolve this. I see the following line:

"source/common/network/dns_impl.cc:118] DNS request timed out 4 times",,

And I see these lines repeatedly
"source/common/network/dns_impl.cc:147] Setting DNS resolution timer for 0 milliseconds",,
"source/common/network/dns_impl.cc:147] Setting DNS resolution timer for 0 milliseconds",,
"source/common/network/dns_impl.cc:147] Setting DNS resolution timer for 0 milliseconds",,
"source/common/network/dns_impl.cc:147] Setting DNS resolution timer for 0 milliseconds",,
"source/common/network/dns_impl.cc:147] Setting DNS resolution timer for 0 milliseconds",,
"source/common/network/dns_impl.cc:147] Setting DNS resolution timer for 0 milliseconds",,
"source/common/network/dns_impl.cc:147] Setting DNS resolution timer for 0 milliseconds",,
"source/common/network/dns_impl.cc:147] Setting DNS resolution timer for 22 milliseconds"

So at this point I am not sure whether this is an Envoy issue or a container DNS issue, since a container restart resolved it.
Has anyone seen similar issues with DNS? And a second question: is the DNS resolution timer behaviour correct here, given that it keeps setting a 0 millisecond timer?

@mattklein123 mattklein123 added the "question" label Sep 27, 2018
@jasonmartens

I have just experienced a very similar situation to this. Our proxy was deployed in an environment experiencing a lot of DNS failures, and at some point all DNS lookups just stopped working. We fixed the DNS issue, but the envoy instances never recovered and we had to kill them and restart. The new instances worked just fine.

We also had logs similar to the above:

[2018-10-06 00:22:39.202][15][debug][upstream] source/common/network/dns_impl.cc:147] Setting DNS resolution timer for 0 milliseconds
[2018-10-06 00:22:39.202][15][debug][upstream] source/common/network/dns_impl.cc:147] Setting DNS resolution timer for 0 milliseconds
[2018-10-06 00:22:39.202][15][debug][upstream] source/common/network/dns_impl.cc:147] Setting DNS resolution timer for 4949 milliseconds
[2018-10-06 00:22:41.160][15][debug][main] source/server/server.cc:119] flushing stats
[2018-10-06 00:22:44.152][15][debug][upstream] source/common/network/dns_impl.cc:118] DNS request timed out 4 times
[2018-10-06 00:22:44.152][15][debug][upstream] source/common/network/dns_impl.cc:147] Setting DNS resolution timer for 3866 milliseconds

It seems like after some number of DNS failures, the async resolver gets into a bad state and is permanently unable to resolve anything.

We are also using STRICT_DNS with active health checks. Using Envoy 1.7.0 from the published docker image.

@jasonmartens

Looking a little more, the sequence of events in our situation is:

  1. DNS starts failing; we don't notice because Envoy keeps the hosts in the cluster.
  2. Active health checks fail on those clusters, and gradually all hosts are removed from the clusters.
  3. Some time later (maybe an hour?) the DNS issues are fixed, but even though manual lookups work on the Envoy host, the "DNS request timed out 4 times" log messages continue and no hosts are added back to the clusters. This state persisted until the Envoy hosts were restarted.

I also have full debug logs from one instance while it was in this state if it's helpful.

@dio
Member

dio commented Oct 6, 2018

@jasonmartens, sorry, are you testing on master now?

@mattklein123
Member

It sounds like there might be a bug here in how we are interacting with c-ares but I'm not sure. I would definitely try on current master and see if we can come up with a repro.

@ramaraochavali
Contributor Author

ramaraochavali commented Oct 9, 2018

@dio The problem I described above happened with master, but it may have been a build that was a few weeks old.

@dio
Member

dio commented Oct 9, 2018

@ramaraochavali got it, I'll take a look at it.

@jasonmartens

I was not testing on master, using 1.7.0 from the Envoy docker image repo.

@mattklein123
Member

@dio, @htuch made the DNS resolver use c-ares a long time ago, and the code really hasn't changed since then. The timeout handling is complicated in that library, so I would probably start with some auditing of all the timeout code. I suspect there might be some case in which we aren't handling timeouts properly. IIRC c-ares has default timeouts in place, but I would check that also. @htuch might also have some ideas.
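
For reference, here is a minimal sketch (not Envoy's actual code) of how the c-ares per-try timeout and retry count can be set explicitly through ares_init_options when auditing the defaults; the function name and values are illustrative only:

#include <ares.h>
#include <cstdio>

// Minimal sketch: initialize a c-ares channel with an explicit per-try timeout
// and retry count instead of relying on the library defaults. The values below
// are illustrative only and are not what Envoy configures.
int init_channel_with_explicit_timeouts(ares_channel* channel) {
  ares_options options{};
  options.timeout = 1000;  // per-try timeout in milliseconds (ARES_OPT_TIMEOUTMS)
  options.tries = 2;       // number of tries before a query is failed (ARES_OPT_TRIES)
  const int rc = ares_init_options(channel, &options, ARES_OPT_TIMEOUTMS | ARES_OPT_TRIES);
  if (rc != ARES_SUCCESS) {
    std::fprintf(stderr, "ares_init_options failed: %s\n", ares_strerror(rc));
  }
  return rc;
}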

@htuch
Member

htuch commented Oct 11, 2018

Possibly the timeout handling in DnsResolverImpl::PendingResolution::onAresHostCallback is not correct. It looks like it isn't posting the failure back to the dispatcher when there is a failure and timeouts is non-zero.
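
To make the concern concrete, here is a hypothetical, heavily simplified sketch of the pattern being described (invented names and types, not the actual Envoy implementation): the c-ares host callback should complete the pending resolution through the dispatcher on the failure path too, even when the failure was a timeout, so the owner can schedule the next refresh instead of hanging.

#include <functional>
#include <list>
#include <string>

// Hypothetical stand-ins; the real types live in Envoy's dns_impl.h.
struct DispatcherSketch {
  void post(std::function<void()> cb) { cb(); }  // stand-in for posting to the event loop
};

struct PendingResolutionSketch {
  DispatcherSketch& dispatcher;
  std::function<void(std::list<std::string>)> callback;

  // Simplified shape of an ares host callback handler.
  void onAresHostCallback(int status, int timeouts, std::list<std::string> addresses) {
    constexpr int kAresSuccess = 0;  // stand-in for ARES_SUCCESS
    if (status == kAresSuccess) {
      dispatcher.post([this, addresses]() { callback(addresses); });
      return;
    }
    // Failure path: even when timeouts > 0, an (empty) result must still be
    // delivered so the caller can schedule the next resolution attempt.
    (void)timeouts;
    dispatcher.post([this]() { callback({}); });
  }
};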

@dio
Member

dio commented Oct 11, 2018

@mattklein123 @htuch got it. Let's see what I can do to help.

@mattklein123 mattklein123 added the "bug" label and removed the "question" label Oct 11, 2018
@mattklein123 mattklein123 added this to the 1.9.0 milestone Oct 11, 2018
@ramaraochavali
Contributor Author

@dio just a ping. Did you find anything on this?

@stale

stale bot commented Dec 8, 2018

This issue has been automatically marked as stale because it has not had activity in the last 30 days. It will be closed in the next 7 days unless it is tagged "help wanted" or other activity occurs. Thank you for your contributions.

@stale stale bot added the "stale" label Dec 8, 2018
@ramaraochavali
Contributor Author

@dio were you able to spend time on this? Anything you found?

@stale stale bot removed the "stale" label Dec 8, 2018
@mattklein123 mattklein123 added the "help wanted" label Dec 14, 2018
@mattklein123 mattklein123 modified the milestones: 1.9.0, 1.10.0 Dec 14, 2018
@mattklein123 mattklein123 modified the milestones: 1.10.0, 1.11.0 Mar 11, 2019
@gatesking

I am hitting the same problem. Any progress on this?

@dio
Member

dio commented Mar 26, 2019

@ramaraochavali @gatesking sorry, I haven't got anything yet. I will update you when I do. OTOH, if you want to help, that would be great!

@silencehe09

I had the same problem.
If a cluster with type "strict_dns" cannot be resolved successfully by DNS, it takes a long time ("DNS request timed out 4 times") for Envoy to reach the "all clusters initialized" state and become ready to accept connections. Are there any settings for a strict DNS resolution timeout? Or is there any mechanism that can be used to speed up Envoy's startup for readiness?

Envoy static_resources config (service1 can be resolved by DNS, while service2 cannot):

static_resources:
  listeners:
  - address:
      socket_address:
        address: 0.0.0.0
        port_value: 10000
    filter_chains:
    - filters:        
      - name: envoy.http_connection_manager
        typed_config:                       
          "@type": type.googleapis.com/envoy.config.filter.network.http_connection_manager.v2.HttpConnectionManager
          codec_type: auto
          stat_prefix: ingress_http
          route_config:
            name: local_route
            virtual_hosts:
            - name: backend
              domains:
              - "*"
              routes:
              - match:
                  prefix: "/service/1/"
                route:
                  cluster: service1
              - match:
                  prefix: "/service/2"
                route:
                  cluster: service2
          http_filters:
          - name: envoy.router
            typed_config: {}    
  clusters:
  - name: service1
    connect_timeout: 2s
    type: strict_dns
    lb_policy: round_robin
    load_assignment:
      cluster_name: service1
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: foo-service
                port_value: 80
  - name: service2
    connect_timeout: 0.25s
    type: strict_dns
    lb_policy: round_robin
    load_assignment:
      cluster_name: service2
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: bar-service
                port_value: 80
admin:
  access_log_path: "/dev/stdout"
  address:
    socket_address:
      address: 0.0.0.0
      port_value: 8001

Some logs (it takes almost 75s to start up; note that the service2 resolution timers of 5s + 10s + 20s + 40s in the log add up to that 75s):

[2019-04-22 19:07:52.810][98941][debug][config] [source/extensions/filters/network/http_connection_manager/config.cc:312]     config: {}
[2019-04-22 19:07:52.810][98941][debug][config] [source/server/listener_manager_impl.cc:627] add active listener: name=101a3091-b02e-4ccc-811b-6d7907231331, hash=5487848201015756333, address=0.0.0.0:10000
[2019-04-22 19:07:52.810][98941][info][config] [source/server/configuration_impl.cc:85] loading tracing configuration
[2019-04-22 19:07:52.810][98941][info][config] [source/server/configuration_impl.cc:105] loading stats sink configuration
[2019-04-22 19:07:52.810][98941][info][main] [source/server/server.cc:481] starting main dispatch loop
[2019-04-22 19:07:52.810][98941][debug][main] [source/common/event/dispatcher_impl.cc:169] running server.dispatcher on thread 98941
[2019-04-22 19:07:52.810][98945][debug][grpc] [source/common/grpc/google_async_client_impl.cc:41] completionThread running
[2019-04-22 19:07:53.070][98941][debug][upstream] [source/common/network/dns_impl.cc:158] Setting DNS resolution timer for 5000 milliseconds
[2019-04-22 19:07:53.208][98941][debug][upstream] [source/common/network/dns_impl.cc:158] Setting DNS resolution timer for 5000 milliseconds
[2019-04-22 19:07:57.813][98941][debug][main] [source/server/server.cc:147] flushing stats
[2019-04-22 19:07:58.210][98941][debug][upstream] [source/common/network/dns_impl.cc:158] Setting DNS resolution timer for 10000 milliseconds
[2019-04-22 19:08:08.212][98941][debug][upstream] [source/common/network/dns_impl.cc:158] Setting DNS resolution timer for 20000 milliseconds
[2019-04-22 19:08:28.213][98941][debug][upstream] [source/common/network/dns_impl.cc:158] Setting DNS resolution timer for 40000 milliseconds
[2019-04-22 19:09:08.213][98941][debug][upstream] [source/common/network/dns_impl.cc:118] DNS request timed out 4 times
[2019-04-22 19:09:08.213][98941][debug][upstream] [source/common/upstream/upstream_impl.cc:747] initializing secondary cluster service2 completed
[2019-04-22 19:09:08.213][98941][debug][init] [source/common/init/manager_impl.cc:45] init manager Cluster service2 contains no targets
[2019-04-22 19:09:08.213][98941][debug][init] [source/common/init/watcher_impl.cc:14] init manager Cluster service2 initialized, notifying ClusterImplBase
[2019-04-22 19:09:08.213][98941][debug][upstream] [source/common/upstream/cluster_manager_impl.cc:92] cm init: init complete: cluster=service2 primary=0 secondary=0
[2019-04-22 19:09:08.213][98941][info][upstream] [source/common/upstream/cluster_manager_impl.cc:137] cm init: all clusters initialized
[2019-04-22 19:09:08.213][98941][info][main] [source/server/server.cc:465] all clusters initialized. initializing init manager
[2019-04-22 19:09:08.213][98941][debug][init] [source/common/init/manager_impl.cc:45] init manager Server contains no targets
[2019-04-22 19:09:08.213][98941][debug][init] [source/common/init/watcher_impl.cc:14] init manager Server initialized, notifying RunHelper
[2019-04-22 19:09:08.213][98941][info][config] [source/server/listener_manager_impl.cc:1005] all dependencies initialized. starting workers
[2019-04-22 19:09:08.213][98955][debug][main] [source/server/worker_impl.cc:98] worker entering dispatch loop
[2019-04-22 19:09:08.213][98955][debug][main] [source/common/event/dispatcher_impl.cc:169] running worker_0.dispatcher on thread 98955
[2019-04-22 19:09:08.213][98955][debug][upstream] [source/common/upstream/cluster_manager_impl.cc:819] adding TLS initial cluster service1
[2019-04-22 19:09:08.213][98955][debug][upstream] [source/common/upstream/cluster_manager_impl.cc:819] adding TLS initial cluster service2
[2019-04-22 19:09:08.213][98955][debug][upstream] [source/common/upstream/cluster_manager_impl.cc:980] membership update for TLS cluster service1 added 1 removed 0
[2019-04-22 19:09:08.213][98956][debug][grpc] [source/common/grpc/google_async_client_impl.cc:41] completionThread running
[2019-04-22 19:09:13.215][98941][debug][main] [source/server/server.cc:147] flushing stats
[2019-04-22 19:09:13.216][98941][debug][upstream] [source/common/network/dns_impl.cc:158] Setting DNS resolution timer for 5000 milliseconds
[2019-04-22 19:09:14.028][98941][debug][upstream] [source/common/network/dns_impl.cc:158] Setting DNS resolution timer for 5000 milliseconds
[2019-04-22 19:09:14.151][98941][debug][upstream] [source/common/network/dns_impl.cc:158] Setting DNS resolution timer for 5000 milliseconds
[2019-04-22 19:09:18.217][98941][debug][main] [source/server/server.cc:147] flushing stats
[2019-04-22 19:09:19.152][98941][debug][upstream] [source/common/network/dns_impl.cc:158] Setting DNS resolution timer for 10000 milliseconds
[2019-04-22 19:09:23.219][98941][debug][main] [source/server/server.cc:147] flushing stats
[2019-04-22 19:09:28.221][98941][debug][main] [source/server/server.cc:147] flushing stats
[2019-04-22 19:09:29.154][98941][debug][upstream] [source/common/network/dns_impl.cc:158] Setting DNS resolution timer for 20000 milliseconds
[2019-04-22 19:09:33.222][98941][debug][main] [source/server/server.cc:147] flushing stats
[2019-04-22 19:09:38.223][98941][debug][main] [source/server/server.cc:147] flushing stats
[2019-04-22 19:09:43.225][98941][debug][main] [source/server/server.cc:147] flushing stats
[2019-04-22 19:09:48.225][98941][debug][main] [source/server/server.cc:147] flushing stats
[2019-04-22 19:09:49.155][98941][debug][upstream] [source/common/network/dns_impl.cc:158] Setting DNS resolution timer for 40000 milliseconds
[2019-04-22 19:09:53.214][98941][info][main] [source/server/drain_manager_impl.cc:63] shutting down parent after drain
[2019-04-22 19:09:53.226][98941][debug][main] [source/server/server.cc:147] flushing stats
[2019-04-22 19:09:58.227][98941][debug][main] [source/server/server.cc:147] flushing stats
[2019-04-22 19:10:03.228][98941][debug][main] [source/server/server.cc:147] flushing stats
[2019-04-22 19:10:08.228][98941][debug][main] [source/server/server.cc:147] flushing stats
[2019-04-22 19:10:13.316][98941][debug][main] [source/server/server.cc:147] flushing stats
[2019-04-22 19:10:18.317][98941][debug][main] [source/server/server.cc:147] flushing stats
[2019-04-22 19:10:23.318][98941][debug][main] [source/server/server.cc:147] flushing stats
[2019-04-22 19:10:28.318][98941][debug][main] [source/server/server.cc:147] flushing stats
[2019-04-22 19:10:29.157][98941][debug][upstream] [source/common/network/dns_impl.cc:118] DNS request timed out 4 times
[2019-04-22 19:10:33.319][98941][debug][main] [source/server/server.cc:147] flushing stats
[2019-04-22 19:10:34.163][98941][debug][upstream] [source/common/network/dns_impl.cc:158] Setting DNS resolution timer for 4998 milliseconds
[2019-04-22 19:10:38.276][98941][debug][upstream] [source/common/network/dns_impl.cc:158] Setting DNS resolution timer for 5000 milliseconds
[2019-04-22 19:10:38.322][98941][debug][main] [source/server/server.cc:147] flushing stats
[2019-04-22 19:10:38.385][98941][debug][upstream] [source/common/network/dns_impl.cc:158] Setting DNS resolution timer for 5000 milliseconds
[2019-04-22 19:10:43.324][98941][debug][main] [source/server/server.cc:147] flushing stats
[2019-04-22 19:10:43.388][98941][debug][upstream] [source/common/network/dns_impl.cc:158] Setting DNS resolution timer for 10000 milliseconds
[2019-04-22 19:10:48.325][98941][debug][main] [source/server/server.cc:147] flushing stats
[2019-04-22 19:10:53.326][98941][debug][main] [source/server/server.cc:147] flushing stats
[2019-04-22 19:10:53.391][98941][debug][upstream] [source/common/network/dns_impl.cc:158] Setting DNS resolution timer for 20000 milliseconds
[2019-04-22 19:10:58.329][98941][debug][main] [source/server/server.cc:147] flushing stats
[2019-04-22 19:11:03.330][98941][debug][main] [source/server/server.cc:147] flushing stats
[2019-04-22 19:11:08.332][98941][debug][main] [source/server/server.cc:147] flushing stats
[2019-04-22 19:11:13.332][98941][debug][main] [source/server/server.cc:147] flushing stats
[2019-04-22 19:11:13.393][98941][debug][upstream] [source/common/network/dns_impl.cc:158] Setting DNS resolution timer for 40000 milliseconds
[2019-04-22 19:11:18.334][98941][debug][main] [source/server/server.cc:147] flushing stats

Envoy admin endpoint /server_info:

{
 "version": "4304b18819546f585e1e51c52fa5df0f01831633/1.11.0-dev/Clean/RELEASE/BoringSSL",
 "state": "PRE_INITIALIZING",
 "command_line_options": {
  "base_id": "0",
  "concurrency": 1,
  "config_path": "/root/hpf/envoy/front-envoy-jwt.yaml",
  "config_yaml": "",
  "allow_unknown_fields": false,
  "admin_address_path": "",
  "local_address_ip_version": "v4",
  "log_level": "debug",
  "component_log_level": "",
  "log_format": "[%Y-%m-%d %T.%e][%t][%l][%n] %v",
  "log_path": "",
  "hot_restart_version": false,
  "service_cluster": "front-proxy",
  "service_node": "",
  "service_zone": "",
  "mode": "Serve",
  "max_stats": "16384",
  "max_obj_name_len": "60",
  "disable_hot_restart": false,
  "enable_mutex_tracing": false,
  "restart_epoch": 0,
  "cpuset_threads": false,
  "file_flush_interval": "10s",
  "drain_time": "30s",
  "parent_shutdown_time": "45s"
 },
 "uptime_current_epoch": "64s",
 "uptime_all_epochs": "64s"
}

@mattklein123 mattklein123 assigned mattklein123 and unassigned dio May 10, 2019
@mattklein123
Member

Is this still a problem for anyone watching this issue? I investigated and couldn't find anything obviously wrong. It's possible this has been fixed somewhere along the way.

@mattklein123 mattklein123 removed the "help wanted" label Jul 3, 2019
@mattklein123 mattklein123 removed this from the 1.11.0 milestone Jul 3, 2019
@ramaraochavali
Contributor Author

It is possible that it has been resolved along the way. We can close this and revisit if someone complains about it.

@avereha

avereha commented Jul 12, 2019

I just found this issue on one envoy 1.10.0 instance.

From what we noticed in the past:

  • it happens if DNS is down/not accessible for a while.
  • it happens on a small percentage of instances (<1%), even if DNS is down for all of them.
  • the issue continues even after DNS is accessible again.

Comparing two instances with identical configuration, here is what I noticed:

"Good" instance has only:
Setting DNS resolution timer for 5000 millisecond

"Bad" instance has:
Setting DNS resolution timer for 0 milliseconds: 20-30 times
DNS request timed out 4 times: 2-3 times.
Around this "timed out" message I see 4-5 messages like this:
Setting DNS resolution timer for 4988 milliseconds
Setting DNS resolution timer for 5010 milliseconds
Setting DNS resolution timer for 4985 milliseconds
Setting DNS resolution timer for 10005 milliseconds
Setting DNS resolution timer for 2 milliseconds

@mattklein123 mattklein123 reopened this Jul 12, 2019
@mattklein123 mattklein123 added the "help wanted" label Jul 12, 2019
@mattklein123
Member

I suspect there is some race condition here potentially within c-ares, but I'm not sure. Reopening and marking help wanted.

@junr03
Member

junr03 commented Jan 30, 2020

Envoy Mobile has the same issue in iOS.

Steps to repro:
From an Envoy Mobile clone

  1. Build the iOS library: bazel build --config=ios //:ios_dist
  2. Turn laptop's wifi off.
  3. Run the iOS example app: bazel run //examples/swift/hello_world:app --config=ios
  4. Envoy will start. DNS resolution will happen but the response will be empty.
  5. Turn wifi back on.
  6. Even after 5+ minutes DNS resolution still returns an empty response.

Config used:
This is reproduced with clusters using both STRICT and LOGICAL DNS, as well as with the dynamic forward proxy. The DNS refresh rate was configured to be 5s.

I am going to look at this issue, since the setup above reproduces it 100% of the time.

@mattklein123 mattklein123 added this to the 1.14.0 milestone Jan 30, 2020
@junr03
Member

junr03 commented Jan 31, 2020

Did some late night digging yesterday and arrived at an explanation:

When c-ares initializes a channel (trimming irrelevant details):

  1. In ares_init_options, it populates the list of DNS servers it will use to resolve queries from several different sources.
  2. One of those functions is init_by_resolv_conf, which has platform-specific code.
  3. For iOS it falls into #elif defined(CARES_USE_LIBRESOLV), which uses res_getservers to get the addresses of DNS servers.
  4. In the absence of connectivity, res_getservers returns AF_UNSPEC for the server address family.
  5. That means the channel's only server is then populated by init_by_defaults, which uses INADDR_LOOPBACK:NAMESERVER_PORT as the server's address.
    There is obviously no guarantee that a DNS server is running on loopback, and on the phone it definitely is not. In addition, once a channel has been initialized it never re-resolves its server set, so even when connectivity is regained, the channel still only has the one default server (a condensed sketch of this fallback is shown below).
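
A condensed paraphrase of that fallback (this is not the real c-ares code, just the shape of the problem with invented names):

#include <string>

// Condensed paraphrase of the channel-init fallback described above; not the
// actual c-ares source.
std::string pick_server_at_channel_init(bool resolver_addresses_available) {
  if (resolver_addresses_available) {
    // init_by_resolv_conf (via res_getservers on iOS) found a real resolver.
    return "system-resolver-address";  // illustrative placeholder
  }
  // res_getservers reported AF_UNSPEC, so init_by_defaults kicks in and the
  // channel's only server becomes INADDR_LOOPBACK:NAMESERVER_PORT, i.e.
  // 127.0.0.1:53. The channel keeps this server set for its whole lifetime
  // and never re-reads it, even after connectivity comes back.
  return "127.0.0.1:53";
}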

Solution:

  1. Patch c-ares to "reinitialize" a channel based on certain conditions. After I understood the problem I dug through c-ares to see if this functionality was already available. It is not. However, there was a PR gethostbyname: reload resolv.conf values if file changed c-ares/c-ares#272 that attempted to do this, albeit for only one platform, and on only one public function. That work could be finished in order to solve this issue. Opened an issue to track: channel: re-resolved servers under certain circumstances c-ares/c-ares#301
  2. In Envoy's DnsResolverImpl, detect when it is likely that DNS resolution is failing due to a "busted" channel and recreate it: dns: destroy/reinitialize c-ares channel on ARES_ECONNREFUSED #9899. A rough sketch of this pattern is shown below.
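
A rough sketch of the direction in item 2 (hedged: this is not the actual #9899 change, and the class and member names are invented; it only shows the general pattern of flagging a "dirty" channel and rebuilding it before the next query):

#include <ares.h>

// Rough sketch only; invented names, not the code from #9899.
class CaresResolverSketch {
public:
  // Called from the c-ares completion callback with the ares status code.
  void onResolutionComplete(int ares_status) {
    if (ares_status == ARES_ECONNREFUSED) {
      dirty_channel_ = true;  // the channel's server set is probably stale or bogus
    }
  }

  // Called before issuing the next query.
  void maybeReinitializeChannel() {
    if (!dirty_channel_) {
      return;
    }
    ares_destroy(channel_);  // drop the broken channel...
    ares_init(&channel_);    // ...and rebuild it so fresh resolver state is picked up
    dirty_channel_ = false;
  }

private:
  ares_channel channel_{nullptr};
  bool dirty_channel_{false};
};

In practice the rebuilt channel would also need the same options applied as the original one; the sketch only shows the trigger and the rebuild step.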

@junr03
Member

junr03 commented Jan 31, 2020

By the way, it is worth noting that this would affect any cluster that uses DnsResolverImpl, so I am going to update the title to reflect that.

@junr03 junr03 removed the "help wanted" label Jan 31, 2020
@junr03 junr03 changed the title Issue with Strict Dns Cluster dns: DnsResolverImpl keeps using a "broken" c-ares channel Jan 31, 2020
junr03 added a commit that referenced this issue Feb 6, 2020
Description: this PR adds logic to the DnsResolverImpl to destroy and re-initialize its c-ares channel under certain circumstances. A better option would require work in c-ares c-ares/c-ares#301.
Risk Level: medium (changes in low-level DNS resolution).
Testing: unit tests

Fixes #4543

Signed-off-by: Jose Nino <[email protected]>
@Alan-buaa

I am using 1.11.0 and still have this issue.

@junr03
Member

junr03 commented Mar 24, 2021

Yes, 1.11.0 was released before this commit went in. I believe 1.14.0 is the first version where this is fixed.

@Alan-buaa

Yes, 1.11.0 was released before this commit went in. I believe 1.14.0 is the first version where this is fixed.

I upgraded Ambassador; it now uses Envoy 1.15.1 and still has this problem.

@Alan-buaa

I have resolved this issue; it is not an Envoy issue. It was a DNS resolution performance problem in the k8s cluster.

@gaopeiliang

gaopeiliang commented Aug 4, 2022

#9899

Envoy checks the c-ares ARES_ECONNREFUSED status and re-initializes the channel to cover changes to the DNS servers in /etc/resolv.conf...

But another question:

Sometimes the DNS server goes down for a while; after the DNS server recovers, can't Envoy recover automatically?

            if ((status != ARES_SUCCESS) || (sendreq->data_storage == NULL))
              {
                /* We encountered an error (probably a timeout, suggesting the
                 * DNS server we're talking to is probably unreachable,
                 * wedged, or severely overloaded) or we couldn't copy the
                 * request, so mark the connection as broken. When we get to
                 * process_broken_connections() we'll close the connection and
                 * try to re-send requests to another server.
                 */
                server->is_broken = 1;
                /* Just to be paranoid, zero out this sendreq... */
                sendreq->data = NULL;
                sendreq->len = 0;
              }

c-ares will close the connection when a request does not succeed, and reopen a new connection on the next request; so when the DNS server recovers, resolution should complete again as well...
