feat: add configurable replicas and antiaffinity to gateway deployment for HA #553

Open · wants to merge 11 commits into main

Conversation

@NohaIhab (Contributor) commented Sep 10, 2024

Closes #496 (Implement Istio HA Configuration)

Testing

  1. Using multipass, launch 2 VMs. The first VM will act as Node 1 and will contain the juju controller; the second VM will act as Node 2.
    1.1 Launch the first VM:
multipass launch -c 4 -m 8Gb -d 60Gb -n juju-node-1

1.2 Launch the second VM:

multipass launch -c 2 -m 4Gb -d 30Gb -n other-vm
  2. In each VM, install microk8s and enable add-ons:
sudo snap install microk8s --channel=1.29-strict/stable
sudo microk8s enable dns rbac hostpath-storage ingress metallb:10.64.140.43-10.64.140.49
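
If the add-ons are still coming up, you can block until MicroK8s reports ready before moving on:

sudo microk8s status --wait-ready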
  3. Create a multi-node cluster by joining Node 2 to Node 1.
    3.1 In the first VM, run:
sudo microk8s add-node

You will get output similar to:

From the node you wish to join to this cluster, run the following:
microk8s join 10.59.190.3:25000/ac6e9652018d8d00ac34b3bdb76a9e2c/e237b86cb6d8

Use the '--worker' flag to join a node as a worker not running the control plane, eg:
microk8s join 10.59.190.3:25000/ac6e9652018d8d00ac34b3bdb76a9e2c/e237b86cb6d8 --worker

If the node you are adding is not reachable through the default interface you can use one of the following:
microk8s join 10.59.190.3:25000/ac6e9652018d8d00ac34b3bdb76a9e2c/e237b86cb6d8

3.2 Copy the join command from the first VM, and run it with sudo in the second VM. For example:

sudo microk8s join 10.59.190.3:25000/ac6e9652018d8d00ac34b3bdb76a9e2c/e237b86cb6d8
WARNING: Hostpath storage is enabled and is not suitable for multi node clusters.

Contacting cluster at 10.59.190.3
Waiting for this node to finish joining the cluster. .. .. .. ..  
Successfully joined the cluster.

Note: the hostpath storage warning does not affect our test, since this setup does not simulate a production deployment.

3.3 Make sure you can see both nodes with microk8s. You might need to wait a few minutes after the join command finishes to see both nodes.

microk8s kubectl get nodes

Example output:

microk8s kubectl get nodes
NAME       STATUS   ROLES    AGE   VERSION
juju-vm    Ready    <none>   12m   v1.29.8
other-vm   Ready    <none>   78s   v1.29.8

  4. Install Juju and bootstrap a controller on MicroK8s in the first VM:
sudo snap install juju --channel=3.4/stable

mkdir -p /home/ubuntu/.local/share
sudo usermod -a -G snap_microk8s ubuntu
newgrp snap_microk8s
juju bootstrap microk8s
  5. From the first VM, add a model and deploy the istio charms from this PR, in addition to the kubeflow-volumes charm to test that the web app is reachable through the ingress:
juju add-model test-istio  # model name matches the test-istio namespace used in the kubectl commands below
juju deploy istio-pilot --channel=latest/edge/pr-553 --trust --series focal
juju deploy istio-gateway istio-ingressgateway --channel=latest/edge/pr-553 --trust --series focal --config kind=ingress
juju relate istio-pilot istio-ingressgateway

juju deploy kubeflow-volumes --trust --channel=latest/edge
juju relate istio-pilot kubeflow-volumes

Wait until all charms are active:

juju status --watch 2s
  6. Check that the kubeflow-volumes web app is reachable before scaling Istio:
curl -v http://10.64.140.43/volumes | grep Frontend

Expected output:

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0*   Trying 10.64.140.43:80...
* Connected to 10.64.140.43 (10.64.140.43) port 80
> GET /volumes HTTP/1.1
> Host: 10.64.140.43
> User-Agent: curl/8.5.0
> Accept: */*
> 
< HTTP/1.1 200 OK
< server: istio-envoy
< date: Mon, 23 Sep 2024 10:58:30 GMT
< cache-control: no-cache, no-store, must-revalidate, max-age=0
< content-type: text/html; charset=utf-8
< content-length: 7138
< set-cookie: XSRF-TOKEN=h7gToLWJiSnHHyEiHYjUZZu5Z4kH_CGGpF6PnDI2Cj8; Path=/volumes; SameSite=Strict
< x-envoy-upstream-service-time: 5
< 
{ [7138 bytes data]
100  7138  100  7138    0     0   568k      0 --:--:-- --:--:-- --:--:--  580k
    <title>Frontend</title>
* Connection #0 to host 10.64.140.43 left intact
  7. Change the config to scale the istio-ingressgateway deployment:
juju config istio-ingressgateway replicas=2
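
To double-check the new value landed on the Deployment, you can read back the replica count (the deployment name here is inferred from the pod names in the next step):

microk8s kubectl get deploy -n test-istio istio-ingressgateway-workload -o jsonpath='{.spec.replicas}{"\n"}'

Expected output: 2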
  8. Make sure that there are now 2 istio-ingressgateway Pods running, each on a separate Node (the anti-affinity sketch after the output shows what drives this):
microk8s kubectl get po -n test-istio -o wide | grep workload
istio-ingressgateway-workload-86d4dd6dff-84g6l   1/1     Running   0          38m   10.1.58.136    juju-inst    <none>           <none>
istio-ingressgateway-workload-86d4dd6dff-j9fhv   1/1     Running   0          47m   10.1.179.133   other-inst   <none>           <none>
  9. Make requests to the kubeflow-volumes web app multiple times, while monitoring the logs of both istio-ingressgateway-workload Pods. You should see the requests being distributed across both Pods.
    9.1 Run this command multiple times, for example 10 times (or use the loop sketched below):
curl http://10.64.140.43/volumes
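
A minimal loop for the repeated requests (the request count is arbitrary; -w prints just the status code of each request):

for i in $(seq 1 10); do curl -s -o /dev/null -w "%{http_code}\n" http://10.64.140.43/volumes; done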

9.2 Check the logs of each Pod. Example logs:

Pod 1 logs
microk8s kubectl logs -n test-istio istio-ingressgateway-workload-86d4dd6dff-j9fhv
2024-09-23T10:51:02.665237Z	info	FLAG: --concurrency="0"
2024-09-23T10:51:02.665365Z	info	FLAG: --domain="test-istio.svc.cluster.local"
2024-09-23T10:51:02.665410Z	info	FLAG: --help="false"
2024-09-23T10:51:02.665432Z	info	FLAG: --log_as_json="false"
2024-09-23T10:51:02.665460Z	info	FLAG: --log_caller=""
2024-09-23T10:51:02.665490Z	info	FLAG: --log_output_level="default:info"
2024-09-23T10:51:02.665517Z	info	FLAG: --log_rotate=""
2024-09-23T10:51:02.665530Z	info	FLAG: --log_rotate_max_age="30"
2024-09-23T10:51:02.665544Z	info	FLAG: --log_rotate_max_backups="1000"
2024-09-23T10:51:02.665554Z	info	FLAG: --log_rotate_max_size="104857600"
2024-09-23T10:51:02.665562Z	info	FLAG: --log_stacktrace_level="default:none"
2024-09-23T10:51:02.665578Z	info	FLAG: --log_target="[stdout]"
2024-09-23T10:51:02.665604Z	info	FLAG: --meshConfig="./etc/istio/config/mesh"
2024-09-23T10:51:02.665616Z	info	FLAG: --outlierLogPath=""
2024-09-23T10:51:02.665635Z	info	FLAG: --profiling="true"
2024-09-23T10:51:02.665645Z	info	FLAG: --proxyComponentLogLevel="misc:error"
2024-09-23T10:51:02.665696Z	info	FLAG: --proxyLogLevel="warning"
2024-09-23T10:51:02.665715Z	info	FLAG: --serviceCluster="istio-proxy"
2024-09-23T10:51:02.665730Z	info	FLAG: --stsPort="0"
2024-09-23T10:51:02.665764Z	info	FLAG: --templateFile=""
2024-09-23T10:51:02.665791Z	info	FLAG: --tokenManagerPlugin="GoogleTokenExchange"
2024-09-23T10:51:02.665822Z	info	FLAG: --vklog="0"
2024-09-23T10:51:02.665841Z	info	Version 1.22.0-aaf597fbfae607adf4bb4e77538a7ea98995328a-Clean
2024-09-23T10:51:02.665879Z	info	Set max file descriptors (ulimit -n) to: 65536
2024-09-23T10:51:02.666025Z	info	Proxy role	ips=[10.1.179.133] type=router id=istio-ingressgateway-workload-86d4dd6dff-j9fhv.test-istio domain=test-istio.svc.cluster.local
2024-09-23T10:51:02.666325Z	info	Apply mesh config from file accessLogFile: /dev/stdout
defaultConfig:
  discoveryAddress: istiod.test-istio.svc:15012
defaultProviders:
  metrics:
  - prometheus
enablePrometheusMerge: true
rootNamespace: test-istio
trustDomain: cluster.local
2024-09-23T10:51:02.667751Z	warn	concurrency is set to 0, which will use a thread per CPU on the host. However, CPU limit is set lower. This is not recommended and may lead to performance issues. CPU count: 2, CPU Limit: 0.
2024-09-23T10:51:02.667958Z	info	Effective config: binaryPath: /usr/local/bin/envoy
configPath: ./etc/istio/proxy
controlPlaneAuthPolicy: MUTUAL_TLS
discoveryAddress: istiod.test-istio.svc:15012
drainDuration: 45s
proxyAdminPort: 15000
serviceCluster: istio-proxy
statNameLength: 189
statusPort: 15020
terminationDrainDuration: 5s

2024-09-23T10:51:02.667970Z	info	JWT policy is third-party-jwt
2024-09-23T10:51:02.667972Z	info	using credential fetcher of JWT type in cluster.local trust domain
2024-09-23T10:51:02.874303Z	info	Opening status port 15020
2024-09-23T10:51:02.875532Z	info	Workload SDS socket not found. Starting Istio SDS Server
2024-09-23T10:51:02.875618Z	info	CA Endpoint istiod.test-istio.svc:15012, provider Citadel
2024-09-23T10:51:02.876766Z	info	Using CA istiod.test-istio.svc:15012 cert with certs: var/run/secrets/istio/root-cert.pem
2024-09-23T10:51:02.912585Z	info	ads	All caches have been synced up in 247.584113ms, marking server ready
2024-09-23T10:51:02.912935Z	info	xdsproxy	Initializing with upstream address "istiod.test-istio.svc:15012" and cluster "Kubernetes"
2024-09-23T10:51:02.914297Z	info	Pilot SAN: [istiod.test-istio.svc]
2024-09-23T10:51:02.916255Z	info	sds	Starting SDS grpc server
2024-09-23T10:51:02.916527Z	info	starting Http service at 127.0.0.1:15004
2024-09-23T10:51:02.916954Z	info	Starting proxy agent
2024-09-23T10:51:02.917003Z	info	Envoy command: [-c etc/istio/proxy/envoy-rev.json --drain-time-s 45 --drain-strategy immediate --local-address-ip-version v4 --file-flush-interval-msec 1000 --disable-hot-restart --allow-unknown-static-fields -l warning --component-log-level misc:error]
2024-09-23T10:51:02.970784Z	warning	envoy main external/envoy/source/server/server.cc:835	Usage of the deprecated runtime key overload.global_downstream_max_connections, consider switching to `envoy.resource_monitors.downstream_connections` instead.This runtime key will be removed in future.	thread=14
2024-09-23T10:51:02.971272Z	warning	envoy main external/envoy/source/server/server.cc:928	There is no configured limit to the number of allowed active downstream connections. Configure a limit in `envoy.resource_monitors.downstream_connections` resource monitor.	thread=14
2024-09-23T10:51:02.977855Z	info	xdsproxy	connected to delta upstream XDS server: istiod.test-istio.svc:15012	id=1
2024-09-23T10:51:02.999076Z	info	ads	ADS: new connection for node:istio-ingressgateway-workload-86d4dd6dff-j9fhv.test-istio-1
2024-09-23T10:51:03.000077Z	info	ads	ADS: new connection for node:istio-ingressgateway-workload-86d4dd6dff-j9fhv.test-istio-2
2024-09-23T10:51:03.026413Z	info	cache	generated new workload certificate	latency=113.471451ms ttl=23h59m59.97359172s
2024-09-23T10:51:03.027022Z	info	cache	Root cert has changed, start rotating root cert
2024-09-23T10:51:03.027047Z	info	ads	XDS: Incremental Pushing ConnectedEndpoints:2 Version:
2024-09-23T10:51:03.027109Z	info	cache	returned workload trust anchor from cache	ttl=23h59m59.972892295s
2024-09-23T10:51:03.027226Z	info	cache	returned workload certificate from cache	ttl=23h59m59.972775864s
2024-09-23T10:51:03.027377Z	info	cache	returned workload trust anchor from cache	ttl=23h59m59.972624149s
2024-09-23T10:51:03.027696Z	info	ads	SDS: PUSH request for node:istio-ingressgateway-workload-86d4dd6dff-j9fhv.test-istio resources:1 size:4.0kB resource:default
2024-09-23T10:51:03.027922Z	info	ads	SDS: PUSH request for node:istio-ingressgateway-workload-86d4dd6dff-j9fhv.test-istio resources:1 size:1.1kB resource:ROOTCA
2024-09-23T10:51:03.028038Z	info	cache	returned workload trust anchor from cache	ttl=23h59m59.971963473s
2024-09-23T10:51:03.368150Z	info	Readiness succeeded in 708.328754ms
2024-09-23T10:51:03.369089Z	info	Envoy proxy is ready
[2024-09-23T10:58:14.830Z] "GET /volumes HTTP/1.1" 200 - via_upstream - "-" 0 7138 4 4 "10.1.58.128" "curl/8.5.0" "20ba0a05-072d-47b1-853a-a66927af0082" "10.64.140.43" "10.1.179.135:5000" outbound|5000||kubeflow-volumes.test-istio.svc.cluster.local 10.1.179.133:56976 10.1.179.133:8080 10.1.58.128:27352 - -
[2024-09-23T10:58:30.864Z] "GET /volumes HTTP/1.1" 200 - via_upstream - "-" 0 7138 5 5 "10.1.58.128" "curl/8.5.0" "79d1a97e-c96a-4538-99a4-5355b5c430a7" "10.64.140.43" "10.1.179.135:5000" outbound|5000||kubeflow-volumes.test-istio.svc.cluster.local 10.1.179.133:48494 10.1.179.133:8080 10.1.58.128:1463 - -
[2024-09-23T11:00:37.190Z] "GET /volumes HTTP/1.1" 200 - via_upstream - "-" 0 7138 19 18 "10.1.58.128" "curl/8.5.0" "7c42561e-a36f-4269-8335-2afb96d83975" "10.64.140.43" "10.1.179.135:5000" outbound|5000||kubeflow-volumes.test-istio.svc.cluster.local 10.1.179.133:52836 10.1.179.133:8080 10.1.58.128:3114 - -
[2024-09-23T11:02:02.924Z] "GET /volumes HTTP/1.1" 200 - via_upstream - "-" 0 7138 3 2 "10.1.58.128" "curl/8.5.0" "9233f389-7aa9-4b35-a463-1e4c8abda659" "10.64.140.43" "10.1.179.135:5000" outbound|5000||kubeflow-volumes.test-istio.svc.cluster.local 10.1.179.133:39242 10.1.179.133:8080 10.1.58.128:8502 - -
Pod 2 logs
microk8s kubectl logs -n test-istio istio-ingressgateway-workload-86d4dd6dff-84g6l
2024-09-23T10:59:15.972871Z	info	FLAG: --concurrency="0"
2024-09-23T10:59:15.972886Z	info	FLAG: --domain="test-istio.svc.cluster.local"
2024-09-23T10:59:15.972889Z	info	FLAG: --help="false"
2024-09-23T10:59:15.972891Z	info	FLAG: --log_as_json="false"
2024-09-23T10:59:15.972892Z	info	FLAG: --log_caller=""
2024-09-23T10:59:15.972894Z	info	FLAG: --log_output_level="default:info"
2024-09-23T10:59:15.972897Z	info	FLAG: --log_rotate=""
2024-09-23T10:59:15.972899Z	info	FLAG: --log_rotate_max_age="30"
2024-09-23T10:59:15.972900Z	info	FLAG: --log_rotate_max_backups="1000"
2024-09-23T10:59:15.972902Z	info	FLAG: --log_rotate_max_size="104857600"
2024-09-23T10:59:15.972903Z	info	FLAG: --log_stacktrace_level="default:none"
2024-09-23T10:59:15.972908Z	info	FLAG: --log_target="[stdout]"
2024-09-23T10:59:15.972909Z	info	FLAG: --meshConfig="./etc/istio/config/mesh"
2024-09-23T10:59:15.972911Z	info	FLAG: --outlierLogPath=""
2024-09-23T10:59:15.972912Z	info	FLAG: --profiling="true"
2024-09-23T10:59:15.972914Z	info	FLAG: --proxyComponentLogLevel="misc:error"
2024-09-23T10:59:15.972915Z	info	FLAG: --proxyLogLevel="warning"
2024-09-23T10:59:15.972916Z	info	FLAG: --serviceCluster="istio-proxy"
2024-09-23T10:59:15.972918Z	info	FLAG: --stsPort="0"
2024-09-23T10:59:15.972919Z	info	FLAG: --templateFile=""
2024-09-23T10:59:15.972920Z	info	FLAG: --tokenManagerPlugin="GoogleTokenExchange"
2024-09-23T10:59:15.972922Z	info	FLAG: --vklog="0"
2024-09-23T10:59:15.972924Z	info	Version 1.22.0-aaf597fbfae607adf4bb4e77538a7ea98995328a-Clean
2024-09-23T10:59:15.972928Z	info	Set max file descriptors (ulimit -n) to: 65536
2024-09-23T10:59:15.973045Z	info	Proxy role	ips=[10.1.58.136] type=router id=istio-ingressgateway-workload-86d4dd6dff-84g6l.test-istio domain=test-istio.svc.cluster.local
2024-09-23T10:59:15.973122Z	info	Apply mesh config from file accessLogFile: /dev/stdout
defaultConfig:
  discoveryAddress: istiod.test-istio.svc:15012
defaultProviders:
  metrics:
  - prometheus
enablePrometheusMerge: true
rootNamespace: test-istio
trustDomain: cluster.local
2024-09-23T10:59:15.974482Z	warn	concurrency is set to 0, which will use a thread per CPU on the host. However, CPU limit is set lower. This is not recommended and may lead to performance issues. CPU count: 2, CPU Limit: 0.
2024-09-23T10:59:15.974633Z	info	Effective config: binaryPath: /usr/local/bin/envoy
configPath: ./etc/istio/proxy
controlPlaneAuthPolicy: MUTUAL_TLS
discoveryAddress: istiod.test-istio.svc:15012
drainDuration: 45s
proxyAdminPort: 15000
serviceCluster: istio-proxy
statNameLength: 189
statusPort: 15020
terminationDrainDuration: 5s

2024-09-23T10:59:15.974643Z	info	JWT policy is third-party-jwt
2024-09-23T10:59:15.974646Z	info	using credential fetcher of JWT type in cluster.local trust domain
2024-09-23T10:59:16.178584Z	info	Workload SDS socket not found. Starting Istio SDS Server
2024-09-23T10:59:16.178654Z	info	CA Endpoint istiod.test-istio.svc:15012, provider Citadel
2024-09-23T10:59:16.178709Z	info	Using CA istiod.test-istio.svc:15012 cert with certs: var/run/secrets/istio/root-cert.pem
2024-09-23T10:59:16.179201Z	info	Opening status port 15020
2024-09-23T10:59:16.224062Z	info	ads	All caches have been synced up in 251.421593ms, marking server ready
2024-09-23T10:59:16.224361Z	info	xdsproxy	Initializing with upstream address "istiod.test-istio.svc:15012" and cluster "Kubernetes"
2024-09-23T10:59:16.225619Z	info	Pilot SAN: [istiod.test-istio.svc]
2024-09-23T10:59:16.226596Z	info	Starting proxy agent
2024-09-23T10:59:16.226683Z	info	Envoy command: [-c etc/istio/proxy/envoy-rev.json --drain-time-s 45 --drain-strategy immediate --local-address-ip-version v4 --file-flush-interval-msec 1000 --disable-hot-restart --allow-unknown-static-fields -l warning --component-log-level misc:error]
2024-09-23T10:59:16.229143Z	info	sds	Starting SDS grpc server
2024-09-23T10:59:16.229345Z	info	starting Http service at 127.0.0.1:15004
2024-09-23T10:59:16.291330Z	info	cache	generated new workload certificate	latency=67.072428ms ttl=23h59m59.708672443s
2024-09-23T10:59:16.291488Z	info	cache	Root cert has changed, start rotating root cert
2024-09-23T10:59:16.291556Z	info	ads	XDS: Incremental Pushing ConnectedEndpoints:0 Version:
2024-09-23T10:59:16.291615Z	info	cache	returned workload trust anchor from cache	ttl=23h59m59.708386074s
2024-09-23T10:59:16.313438Z	warning	envoy main external/envoy/source/server/server.cc:835	Usage of the deprecated runtime key overload.global_downstream_max_connections, consider switching to `envoy.resource_monitors.downstream_connections` instead.This runtime key will be removed in future.	thread=13
2024-09-23T10:59:16.313754Z	warning	envoy main external/envoy/source/server/server.cc:928	There is no configured limit to the number of allowed active downstream connections. Configure a limit in `envoy.resource_monitors.downstream_connections` resource monitor.	thread=13
2024-09-23T10:59:16.319465Z	info	xdsproxy	connected to delta upstream XDS server: istiod.test-istio.svc:15012	id=1
2024-09-23T10:59:16.340425Z	info	ads	ADS: new connection for node:istio-ingressgateway-workload-86d4dd6dff-84g6l.test-istio-1
2024-09-23T10:59:16.340624Z	info	cache	returned workload certificate from cache	ttl=23h59m59.659378572s
2024-09-23T10:59:16.340846Z	info	ads	ADS: new connection for node:istio-ingressgateway-workload-86d4dd6dff-84g6l.test-istio-2
2024-09-23T10:59:16.340932Z	info	cache	returned workload trust anchor from cache	ttl=23h59m59.659069248s
2024-09-23T10:59:16.341316Z	info	ads	SDS: PUSH request for node:istio-ingressgateway-workload-86d4dd6dff-84g6l.test-istio resources:1 size:4.0kB resource:default
2024-09-23T10:59:16.341378Z	info	ads	SDS: PUSH request for node:istio-ingressgateway-workload-86d4dd6dff-84g6l.test-istio resources:1 size:1.1kB resource:ROOTCA
2024-09-23T10:59:16.526564Z	info	Readiness succeeded in 559.412791ms
2024-09-23T10:59:16.528359Z	info	Envoy proxy is ready
[2024-09-23T11:01:01.351Z] "GET /volumes HTTP/1.1" 200 - via_upstream - "-" 0 7138 2 2 "10.59.190.3" "curl/8.5.0" "040bb50c-e3f7-4ed9-b980-6c09e0b7c7cc" "10.64.140.43" "10.1.179.135:5000" outbound|5000||kubeflow-volumes.test-istio.svc.cluster.local 10.1.58.136:59054 10.1.58.136:8080 10.59.190.3:23786 - -
[2024-09-23T11:02:36.622Z] "GET /volumes HTTP/1.1" 200 - via_upstream - "-" 0 7138 5 4 "10.59.190.3" "curl/8.5.0" "58d508b0-b7fa-47e6-a91a-01f1c07d8b09" "10.64.140.43" "10.1.179.135:5000" outbound|5000||kubeflow-volumes.test-istio.svc.cluster.local 10.1.58.136:39048 10.1.58.136:8080 10.59.190.3:35705 - -
[2024-09-23T11:02:37.790Z] "GET /volumes HTTP/1.1" 200 - via_upstream - "-" 0 7138 5 4 "10.59.190.3" "curl/8.5.0" "5271b1bf-ad4f-4b19-be22-24f6c711bc27" "10.64.140.43" "10.1.179.135:5000" outbound|5000||kubeflow-volumes.test-istio.svc.cluster.local 10.1.58.136:39064 10.1.58.136:8080 10.59.190.3:52750 - -
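
To compare the distribution at a glance, you can count the access-log lines per Pod (the grep pattern follows from the access-log format above):

for pod in istio-ingressgateway-workload-86d4dd6dff-84g6l istio-ingressgateway-workload-86d4dd6dff-j9fhv; do
  echo -n "$pod: "
  microk8s kubectl logs -n test-istio "$pod" | grep -c 'GET /volumes'
done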

@DnPlas (Contributor) left a comment

Thanks @NohaIhab, just a tiny comment on the tests; other than that LGTM and in accordance with the spec. I still need to do the testing; once I complete it, I'll come back with the results.

Review comment on tests/test_bundle.py (outdated, resolved)

@DnPlas (Contributor) left a comment

Thanks @NohaIhab, just a tiny comment.

I tested this PR by deploying the charms from the branch and configuring the replicas to 2; I saw the two ingress Pods being scheduled and the ingress traffic being split between the two.

Review comment on tests/test_bundle.py (outdated, resolved)
@DnPlas (Contributor) left a comment

Thanks @NohaIhab!
