
perf: High CPU Usage for Double-L3/L4 Proxy #1424

Closed · tehranian opened this issue Aug 9, 2017 · 2 comments

@tehranian (Contributor)
(See thread at: https://groups.google.com/d/msg/envoy-users/cOoLLnI0GS0/PReYVVETCQAJ)

I was doing some performance profiling of Envoy. One test case stood out: Envoy-to-Envoy L3/L4 proxy.

The setup of the tests:

  • Using iperf3 between two hosts (iperf3 client and server) in the same AWS availability zone.
  • Single Envoy: iperf-client -> Local Envoy -----> iperf-server yields ~line speed with ~10% CPU usage on the Local Envoy instance (good).
  • Double Envoy: iperf-client -> Local Envoy -----> Remote Envoy -> iperf-server yields ~line speed with ~10% CPU usage on the egress (Local) Envoy instance (good), but 40-50% CPU usage on the ingress (Remote) Envoy.
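For reference, the test above maps to standard iperf3 invocations. This is a sketch of the presumed setup (the hostname is a placeholder; the ports follow the configuration in this issue):

```shell
# On the iperf server host (behind the Remote Envoy):
iperf3 -s -p 5201

# On the client host, pointing at the Remote Envoy's public listener.
# REMOTE_ENVOY_HOST is a placeholder for the ingress instance's address:
iperf3 -c REMOTE_ENVOY_HOST -p 15201

# Single-Envoy case: point the client at the Local Envoy's listener
# instead, with that Envoy forwarding directly to the iperf server.
```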

I'm able to reproduce with a configuration for the Remote Envoy as simple as this:

```json
{
    "listeners": [{
        "name": "iperf-listener",
        "address": "tcp://0.0.0.0:15201",
        "filters": [{
            "type": "read",
            "name": "tcp_proxy",
            "config": {
                "stat_prefix": "iperf3-listener",
                "route_config": {"routes": [{"cluster": "iperf"}]}
            }
        }]
    }],
    "admin": "...",
    "cluster_manager": {
        "clusters": [{
            "name": "iperf",
            "type": "static",
            "connect_timeout_ms": 250,
            "lb_type": "round_robin",
            "hosts": [{"url": "tcp://127.0.0.1:5201"}]
        }]
    },
    "statsd_udp_ip_address": "127.0.0.1:8126"
}
```
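The config above uses the long-deprecated v1 JSON API. A roughly equivalent sketch in the current v3 API might look like the following (my translation, not part of the original report):

```yaml
static_resources:
  listeners:
  - name: iperf-listener
    address:
      socket_address: { address: 0.0.0.0, port_value: 15201 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.tcp_proxy
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.tcp_proxy.v3.TcpProxy
          stat_prefix: iperf3-listener
          cluster: iperf
  clusters:
  - name: iperf
    type: STATIC
    connect_timeout: 0.25s
    lb_policy: ROUND_ROBIN
    load_assignment:
      cluster_name: iperf
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: 127.0.0.1, port_value: 5201 }
```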

The intent of that configuration is to route traffic from the public :15201 to the local :5201, on which the iperf-server process is listening.

Per a suggestion in the thread, CPU usage on the ingress Envoy dropped to ~30% after commenting out the following line: https://github.com/lyft/envoy/blob/cc53479c2ee01af03d549ee9c5f60bb1660f1672/source/common/filter/tcp_proxy.cc#L227
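As a toy model (not Envoy code) of why this class of problem shows up at line rate: a proxy's read/write round trips scale inversely with its buffer size, so any fixed cost paid per chunk (stats flushes, filter callbacks, syscalls) gets multiplied by a large factor. A minimal sketch, with assumed buffer sizes purely for illustration:

```python
def chunks_to_copy(total_bytes: int, buffer_size: int) -> int:
    """Number of read/write round trips a simple proxy loop makes."""
    return -(-total_bytes // buffer_size)  # ceiling division

# Copying 1 GiB with a 4 KiB buffer vs. a 64 KiB buffer:
gib = 1 << 30
small = chunks_to_copy(gib, 4 * 1024)    # 262144 round trips
large = chunks_to_copy(gib, 64 * 1024)   # 16384 round trips
print(small, large, small // large)      # 16x more per-chunk overhead
```

Any per-chunk work on the hot path (such as the stats-related line referenced above) therefore has an outsized effect on CPU usage when proxying bulk traffic.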

@mattklein123 (Member)

Related to #232

@mattklein123 (Member)

Going to close this given how old it is and how many changes have been made. We can revisit if there is new interest and new data.

jpsim pushed a commit that referenced this issue Nov 28, 2022
Adding a cancel stream test to mirror Swift's cancel stream test: https://github.com/envoyproxy/envoy-mobile/blob/f61c743f0a6c63b5bc8a27cc1415a324bea8c41e/test/swift/integration/CancelStreamTest.swift

Found that the custom YAML configuration also has a bug: it does not register platform and native filters when a custom YAML is used. Luckily this is the first usage of the custom YAML. The JNI structure differs from the Objective-C structure, but I was able to mimic it as closely as possible by simply requiring an `EnvoyConfiguration`.

The API changes are here: https://github.com/envoyproxy/envoy-mobile/pull/1424/files#diff-c4d0f50e81c31efc078d029708e09a001ea3e466c6b38141c027fad21c6b70abR30-R34

Signed-off-by: Alan Chiu <[email protected]>


Description: kotlin: add cancel stream test and fix custom yaml path
Risk Level: low
Testing: integration test
Docs Changes: n/a
Release Notes: n/a

Signed-off-by: JP Simard <[email protected]>
jpsim pushed a commit that referenced this issue Nov 29, 2022