
grafana and req logging configuration to work behind istio ingress gateway #650

Merged · 9 commits · Jul 8, 2019
2 changes: 1 addition & 1 deletion examples/centralised-logging/README.md
@@ -53,7 +53,7 @@ Check that it now recognises the seldon CRD by running `kubectl get sdep`.
Now a model:

```
helm install --name seldon-single-model ../../helm-charts/seldon-single-model/ --set engine.env.LOG_MESSAGES_EXTERNALLY="true"
helm install --name seldon-single-model ../../helm-charts/seldon-single-model/ --set engine.env.LOG_MESSAGES_EXTERNALLY="false"
```
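The `LOG_MESSAGES_EXTERNALLY` flag controls whether the engine ships request/response payloads to an external logger. A minimal sketch of the gate this env var implies (an assumption about the engine's behaviour for illustration, not its actual code):

```shell
#!/bin/sh
# Hedged sketch: assume the engine forwards payloads only when the variable
# is exactly the string "true"; unset or any other value disables forwarding.
should_log_externally() {
  [ "${LOG_MESSAGES_EXTERNALLY:-false}" = "true" ]
}

LOG_MESSAGES_EXTERNALLY="false"
if should_log_externally; then echo "forwarding"; else echo "local only"; fi
# -> local only
```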

And the loadtester:
51 changes: 51 additions & 0 deletions examples/centralised-logging/full-setup-existing-kubeflow.sh
@@ -0,0 +1,51 @@
# Assumes existing cluster with kubeflow's istio gateway
# Will put services behind kubeflow istio gateway

./kubeflow/knative-setup-existing-istio.sh

sleep 5

kubectl -n kube-system create sa tiller
kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller
helm init --service-account tiller

kubectl rollout status -n kube-system deployment/tiller-deploy

helm install --name seldon-core ../../helm-charts/seldon-core-operator/ --namespace seldon-system --set istio.gateway="kubeflow-gateway.kubeflow.svc.cluster.local" --set istio.enabled="true"

kubectl rollout status -n seldon-system statefulset/seldon-operator-controller-manager

sleep 5

helm install --name seldon-single-model ../../helm-charts/seldon-single-model/ --set engine.env.LOG_MESSAGES_EXTERNALLY="true" --set model.annotations."seldon\.io/istio-gateway"="kubeflow-gateway.kubeflow.svc.cluster.local"

kubectl label nodes $(kubectl get nodes -o jsonpath='{.items[0].metadata.name}') role=locust --overwrite
helm install --name seldon-core-loadtesting ../../helm-charts/seldon-core-loadtesting/ --set locust.host=http://seldon-single-model-seldon-single-model:8000 --set oauth.enabled=false --set oauth.key=oauth-key --set oauth.secret=oauth-secret --set locust.hatchRate=1 --set locust.clients=1 --set loadtest.sendFeedback=0 --set locust.minWait=0 --set locust.maxWait=0 --set replicaCount=1

helm install --name seldon-core-analytics ../../helm-charts/seldon-core-analytics/ -f ./kubeflow/seldon-analytics-kubeflow.yaml

helm install --name elasticsearch elasticsearch --version 7.1.1 --namespace=logs --set service.type=ClusterIP --set antiAffinity="soft" --repo https://helm.elastic.co
kubectl rollout status statefulset/elasticsearch-master -n logs

helm install fluentd-elasticsearch --name fluentd --namespace=logs -f fluentd-values.yaml --repo https://kiwigrid.github.io
helm install kibana --version 7.1.1 --name=kibana --namespace=logs --set service.type=ClusterIP -f ./kubeflow/kibana-values.yaml --repo https://helm.elastic.co

kubectl apply -f ./kubeflow/virtualservice-kibana.yaml
kubectl apply -f ./kubeflow/virtualservice-elasticsearch.yaml

kubectl rollout status deployment/kibana-kibana -n logs

kubectl apply -f ./request-logging/seldon-request-logger.yaml
kubectl label namespace default knative-eventing-injection=enabled
sleep 3
kubectl -n default get broker default
kubectl apply -f ./request-logging/trigger.yaml

ISTIO_INGRESS=$(kubectl get svc -n istio-system istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

echo 'kubeflow dashboard at:'
echo "$ISTIO_INGRESS"
echo 'grafana running at:'
echo "$ISTIO_INGRESS/grafana/"
echo 'kibana running at:'
echo "$ISTIO_INGRESS/kibana/"
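The jsonpath above assumes the load balancer publishes an `ip`; on some clouds (e.g. an AWS ELB) only a `hostname` is set and the lookup comes back empty. A hedged sketch of a fallback helper (the function name and sample values are illustrative):

```shell
#!/bin/sh
# Hypothetical helper: prefer the load balancer ip, fall back to hostname.
# In a real script the two arguments would come from the jsonpath queries
# {.status.loadBalancer.ingress[0].ip} and {.status.loadBalancer.ingress[0].hostname}.
ingress_address() {
  ip="$1"; host="$2"
  if [ -n "$ip" ]; then echo "$ip"; else echo "$host"; fi
}

ingress_address "" "abc123.elb.amazonaws.com"
# -> abc123.elb.amazonaws.com
```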
19 changes: 19 additions & 0 deletions examples/centralised-logging/kubeflow/README.md
@@ -0,0 +1,19 @@
# Setup on existing kubeflow

## Installation

The request logging setup includes knative, which in turn installs istio. If you have an existing kubeflow installation, you can use its istio instead.

For kubeflow cluster setup and installation (we recommend installing with istio into an existing cluster) see:

https://www.kubeflow.org/docs/started/getting-started-k8s/

To set up seldon and supporting services on top of kubeflow, using its istio, run `./full-setup-existing-kubeflow.sh` from the `centralised-logging` directory.

## Accessing services

The final output of the `full-setup-existing-kubeflow.sh` script includes URLs for accessing services such as kibana and grafana.

The path to seldon services can be found by inspecting the prefix section of `kubectl get vs -n default seldon-single-model-seldon-single-model-http -o yaml`.

You can curl a service directly within the cluster - there is an example in the [request logging README](../request-logging/README.md).
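For in-cluster curls, the URL is the gateway host plus the prefix taken from the VirtualService. A sketch of composing it (the prefix, host, and API path below are hypothetical placeholders; read the real prefix from the `kubectl get vs` output above):

```shell
#!/bin/sh
# Illustrative only: PREFIX and HOST are assumptions, not values from a
# real cluster. Substitute the prefix from your own VirtualService.
PREFIX="/seldon/default/seldon-single-model"
HOST="istio-ingressgateway.istio-system"
URL="http://${HOST}${PREFIX}/api/v0.1/predictions"
echo "$URL"
# -> http://istio-ingressgateway.istio-system/seldon/default/seldon-single-model/api/v0.1/predictions
```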
3 changes: 3 additions & 0 deletions examples/centralised-logging/kubeflow/kibana-values.yaml
@@ -0,0 +1,3 @@
extraEnvs:
- name: SERVER_BASEPATH
value: "/kibana"
@@ -0,0 +1,18 @@
# This assumes installation to a cloud cluster with istio already installed, e.g. with kubeflow

kubectl apply --selector knative.dev/crd-install=true \
--filename https://github.com/knative/serving/releases/download/v0.6.0/serving.yaml \
--filename https://github.com/knative/eventing/releases/download/v0.6.0/release.yaml \
--filename https://raw.githubusercontent.com/knative/serving/v0.6.0/third_party/config/build/clusterrole.yaml

kubectl apply --filename https://github.com/knative/serving/releases/download/v0.6.0/serving.yaml --selector networking.knative.dev/certificate-provider!=cert-manager \
--filename https://github.com/knative/build/releases/download/v0.6.0/build.yaml \
--filename https://github.com/knative/eventing/releases/download/v0.6.0/release.yaml \
--filename https://raw.githubusercontent.com/knative/serving/v0.6.0/third_party/config/build/clusterrole.yaml

kubectl label namespace default istio-injection=enabled

kubectl apply -f https://github.com/knative/eventing/releases/download/v0.6.0/eventing.yaml
kubectl apply -f https://github.com/knative/eventing/releases/download/v0.6.0/in-memory-channel.yaml
#kafka if you have a kafka cluster setup already
#kubectl apply -f https://github.com/knative/eventing/releases/download/v0.6.0/kafka.yaml
@@ -0,0 +1,13 @@
grafana_prom_service_type: ClusterIP
grafana_prom_admin_password: admin
grafana_anonymous_auth: true
grafana:
virtualservice:
enabled: true
#trailing slash is important and should be included when accessing
prefix: "/grafana/"
gateways:
- kubeflow-gateway.kubeflow.svc.cluster.local
extraEnv:
#replace with KF gateway URI
GF_SERVER_ROOT_URL: "%(protocol)s://%(domain)s/grafana"
@@ -0,0 +1,21 @@
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: elasticsearch
namespace: logs
spec:
gateways:
- kubeflow-gateway.kubeflow.svc.cluster.local
hosts:
- '*'
http:
- match:
- uri:
prefix: /elasticsearch/
rewrite:
uri: /
route:
- destination:
host: elasticsearch-master
port:
number: 9200
21 changes: 21 additions & 0 deletions examples/centralised-logging/kubeflow/virtualservice-kibana.yaml
@@ -0,0 +1,21 @@
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: kibana
namespace: logs
spec:
gateways:
- kubeflow-gateway.kubeflow.svc.cluster.local
hosts:
- '*'
http:
- match:
- uri:
prefix: /kibana/
rewrite:
uri: /
route:
- destination:
host: kibana-kibana
port:
number: 5601
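Both VirtualServices use the same match-then-rewrite pattern: istio matches on the public prefix, strips it, and hands the backend a root-relative path. A sketch of that transformation in plain shell (mimicking istio's rewrite for illustration only, not istio code):

```shell
#!/bin/sh
# Mimics the `match` + `rewrite` pair: strip the matched prefix and keep a
# leading slash; non-matching paths pass through unchanged.
rewrite_path() {
  prefix="$1"; path="$2"
  case "$path" in
    "$prefix"*) echo "/${path#"$prefix"}" ;;
    *) echo "$path" ;;
  esac
}

rewrite_path /kibana/ /kibana/app/discover
# -> /app/discover
```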
11 changes: 8 additions & 3 deletions examples/centralised-logging/request-logging/README.md
@@ -18,9 +18,9 @@ The seldon-request-logger implementation is replaceable and the type of the mess

## Setup

Create minikube cluster with knative recommendations for resource - https://knative.dev/v0.3-docs/install/knative-with-minikube/
Create minikube cluster with knative recommendations for resource - https://knative.dev/docs/install/knative-with-minikube/

Run knative-setup.sh
Run knative-setup-minikube.sh for minikube. Otherwise follow the [knative installation](https://knative.dev/docs/install/) for your cloud provider.

Run `kubectl apply -f seldon-request-logger.yaml`

Expand All @@ -42,7 +42,12 @@ kubectl apply -f ./trigger.yaml

## Running and Seeing logs

Follow the EFK minikube setup from [centralised logging guide](../README.md).
Follow the EFK minikube setup from the [centralised logging guide](../README.md), but in the model deployment step install with:
```
helm install --name seldon-single-model ../../helm-charts/seldon-single-model/ --set engine.env.LOG_MESSAGES_EXTERNALLY="true"
```

(If you've already installed the model, you can first remove it with `helm delete seldon-single-model --purge`, or do a `helm upgrade` instead of an install.)

This time when you install the loadtester, requests should be filtered through to the seldon-request-logger and from there to elasticsearch.

@@ -1,5 +1,5 @@
kubectl apply --filename https://github.com/knative/serving/releases/download/v0.3.0/istio-crds.yaml &&
curl -L https://github.com/knative/serving/releases/download/v0.3.0/istio.yaml \
kubectl apply --filename https://raw.githubusercontent.com/knative/serving/v0.5.2/third_party/istio-1.0.7/istio-crds.yaml &&
curl -L https://raw.githubusercontent.com/knative/serving/v0.5.2/third_party/istio-1.0.7/istio.yaml \
| sed 's/LoadBalancer/NodePort/' \
| kubectl apply --filename -
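The `sed 's/LoadBalancer/NodePort/'` step rewrites every Service type in the manifest stream so services can get an address on minikube, which has no cloud load balancer. A minimal illustration of the substitution on a made-up manifest snippet:

```shell
#!/bin/sh
# Same substitution the pipeline applies before `kubectl apply`; the sample
# YAML here is invented purely to show the rewrite.
printf 'kind: Service\nspec:\n  type: LoadBalancer\n' \
  | sed 's/LoadBalancer/NodePort/'
# -> kind: Service
#    spec:
#      type: NodePort
```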

@@ -15,7 +15,7 @@ kubectl rollout status -n istio-system deployment/istio-sidecar-injector
kubectl rollout status -n istio-system deployment/istio-galley
kubectl rollout status -n istio-system deployment/istio-pilot

curl -L https://github.com/knative/serving/releases/download/v0.3.0/serving.yaml \
curl -L https://github.com/knative/serving/releases/download/v0.6.0/serving.yaml \
| sed 's/LoadBalancer/NodePort/' \
| kubectl apply --filename -

@@ -29,6 +29,12 @@
}
}
},
{{- range $key, $value := .Values.grafana.extraEnv }}
{
"name": "{{ $key }}",
"value": "{{ $value }}"
},
{{- end}}
{{ if .Values.grafana_anonymous_auth }}
{
"name": "GF_AUTH_ANONYMOUS_ENABLED",
@@ -0,0 +1,24 @@
{{- if .Values.grafana.virtualservice.enabled }}
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: grafana-prom
spec:
hosts:
- "*"
{{- with .Values.grafana.virtualservice.gateways }}
gateways:
{{ toYaml . | indent 2 }}
{{- end }}
http:
- match:
- uri:
prefix: {{ .Values.grafana.virtualservice.prefix }}
rewrite:
uri: /
route:
- destination:
port:
number: 80
host: grafana-prom
{{- end }}
9 changes: 9 additions & 0 deletions helm-charts/seldon-core-analytics/values.yaml
@@ -4,6 +4,15 @@ alertmanager:
grafana_prom_service_type: NodePort
grafana_prom_admin_password: admin
grafana_anonymous_auth: false
grafana:
virtualservice:
enabled: false
prefix: "/grafana/"
gateways:
- kubeflow-gateway.kubeflow.svc.cluster.local
#if using a prefix with the virtualservice (off by default) then the setting below gives grafana its prefix
# extraEnv:
# GF_SERVER_ROOT_URL: "%(protocol)s://%(domain)s/grafana"
persistence:
enabled: false
rbac: