new request logging #1369

Merged: 29 commits, Feb 12, 2020
Commits
488b7d7
initial changes towards new request logging
ryandawsonuk Jan 24, 2020
86ac096
can link a req to resp by id but needs more work
ryandawsonuk Jan 27, 2020
67469f3
small steps towards making updates idempotent
ryandawsonuk Jan 28, 2020
2f13b91
handle concurrency
ryandawsonuk Jan 30, 2020
becdce5
cleanup elements processing
ryandawsonuk Jan 30, 2020
5dbe02f
include extra headers
ryandawsonuk Jan 30, 2020
d92f6e2
sdepname and namespace in logged doc entry
ryandawsonuk Jan 30, 2020
cd5b9f6
env vars and more metadata
ryandawsonuk Jan 31, 2020
14ba838
notes on stuff to do future
ryandawsonuk Jan 31, 2020
55706af
note about shadows
ryandawsonuk Jan 31, 2020
a8e5560
update tests
ryandawsonuk Feb 3, 2020
d15c27a
trying to get running in cluster - knative broker giving unknown enco…
ryandawsonuk Feb 4, 2020
f6bf624
missed the values file
ryandawsonuk Feb 4, 2020
be70834
busybox curl not working - need to debug in logger
ryandawsonuk Feb 4, 2020
ea1696b
dummy reqs go through now
ryandawsonuk Feb 5, 2020
e8e2b5e
running now
ryandawsonuk Feb 5, 2020
181c3f0
use constants
ryandawsonuk Feb 6, 2020
8a4a8e8
change use of indexes
ryandawsonuk Feb 6, 2020
02b733a
note on access elastic api
ryandawsonuk Feb 6, 2020
07c5d4f
clean up README a bit
ryandawsonuk Feb 6, 2020
71c44e7
kind example now working all the way through
ryandawsonuk Feb 7, 2020
5642407
match latest kubeflow knative
ryandawsonuk Feb 10, 2020
81c5da7
small install tweaks
ryandawsonuk Feb 10, 2020
34c8536
handle checks for existing
ryandawsonuk Feb 11, 2020
eac0cce
add cluster local gateway only if missing
ryandawsonuk Feb 11, 2020
f7f2cec
new analytics chart sets ports differently
ryandawsonuk Feb 11, 2020
f1de416
exp backoff and longer retry cycle
ryandawsonuk Feb 12, 2020
0d60a3e
fix longstanding slip in docs after comparing to prom
ryandawsonuk Feb 12, 2020
558f5ba
different check for whether eventing setup
ryandawsonuk Feb 12, 2020
3 changes: 3 additions & 0 deletions .gitignore
@@ -117,6 +117,9 @@ wrappers/python/fbs/
examples/models/onnx_resnet50/cpu_codegen/Function_0_codegen.cpp
examples/models/onnx_resnet50/resnet.onnx

#logging example
examples/centralised-logging/request-logging/istio-1.1.6

#openapi
engine/src/main/resources/static/seldon.json
api-frontend/src/main/resources/static/seldon.json
4 changes: 2 additions & 2 deletions doc/source/analytics/analytics.md
@@ -6,8 +6,8 @@ The metrics are:

**Prediction Requests**

* ```seldon_api_executor_server_requests_duration_seconds_(bucket,count,sum) ``` : Requests to the service orchestrator from an ingress, e.g. API gateway or Ambassador
* ```seldon_api_executor_client_requests_duration_seconds_(bucket,count,sum) ``` : Requests from the service orchestrator to a component, e.g., a model
* ```seldon_api_executor_server_requests_seconds_(bucket,count,sum) ``` : Requests to the service orchestrator from an ingress, e.g. API gateway or Ambassador
* ```seldon_api_executor_client_requests_seconds_(bucket,count,sum) ``` : Requests from the service orchestrator to a component, e.g., a model

Each metric has the following key value pairs for further filtering which will be taken from the SeldonDeployment custom resource that is running:

50 changes: 35 additions & 15 deletions examples/centralised-logging/README.md
@@ -4,13 +4,29 @@

Here we will set up EFK (elasticsearch, fluentd/fluentbit, kibana) as a stack to gather logs from SeldonDeployments and make them searchable.

This demo is aimed at minikube.
This demo is aimed at KIND or minikube but can also work with a cloud provider. It uses Helm v3.

Alternatives are available; if you are running in the cloud, consider a managed service from your provider.

If you just want to bootstrap a full logging and request tracking setup for minikube, run `./full-setup.sh`. That includes the [request logging setup](./request-logging/README.md).

## Setup
## Setup Elastic - KIND

Start cluster

```
kind create cluster --config kind_config.yaml --image kindest/node:v1.15.6
```

Install elastic with KIND config:

```
kubectl create namespace logs
kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml
helm install elasticsearch elasticsearch --version 7.5.2 --namespace=logs -f elastic-kind.yaml --repo https://helm.elastic.co
```

## Setup Elastic - Minikube

Start Minikube with flags as shown:

@@ -22,11 +38,10 @@ Install elasticsearch with minikube configuration:

```
kubectl create namespace logs
helm install elasticsearch elasticsearch --version 7.5.2 --namespace=logs -f elastic-minikube.yaml --repo https://helm.elastic.co
```

```
helm install elasticsearch elasticsearch --version 7.1.1 --namespace=logs -f elastic-minikube.yaml --repo https://helm.elastic.co
```
## Fluentd and Kibana

Then fluentd as a collection agent (chosen in preference to fluentbit - see notes at end):

@@ -37,7 +52,7 @@ helm install fluentd fluentd-elasticsearch --namespace=logs -f fluentd-values.ya
And kibana UI:

```
helm install kibana kibana --version 7.1.1 --namespace=logs --set service.type=NodePort --repo https://helm.elastic.co
helm install kibana kibana --version 7.5.2 --namespace=logs --set service.type=NodePort --repo https://helm.elastic.co
```

## Generating Logging
@@ -57,15 +72,16 @@ Check that it now recognises the seldon CRD by running `kubectl get sdep`.
Now a model:

```
helm install seldon-single-model ../../helm-charts/seldon-single-model/ --set engine.env.LOG_MESSAGES_EXTERNALLY="false"
helm install seldon-single-model ../../helm-charts/seldon-single-model/
```

And the loadtester:
And the loadtester (first line is only needed for KIND):

```
kubectl label nodes kind-worker role=locust --overwrite
kubectl label nodes $(kubectl get nodes -o jsonpath='{.items[0].metadata.name}') role=locust --overwrite

helm install seldon-core-loadtesting ../../helm-charts/seldon-core-loadtesting/ --set locust.host=http://seldon-single-model-seldon-single-model:8000 --set oauth.enabled=false --set oauth.key=oauth-key --set oauth.secret=oauth-secret --set locust.hatchRate=1 --set locust.clients=1 --set loadtest.sendFeedback=0 --set locust.minWait=0 --set locust.maxWait=0 --set replicaCount=1
helm install seldon-core-loadtesting ../../helm-charts/seldon-core-loadtesting/ --set locust.host=http://seldon-single-model-seldon-single-model-seldon-single-model:8000 --set oauth.enabled=false --set oauth.key=oauth-key --set oauth.secret=oauth-secret --set locust.hatchRate=1 --set locust.clients=1 --set loadtest.sendFeedback=0 --set locust.minWait=1000 --set locust.maxWait=1000 --set replicaCount=1
```

## Inspecting Logging and Search for Requests
@@ -75,11 +91,17 @@ To find kibana URL
```
echo $(minikube ip)":"$(kubectl get svc kibana-kibana -n logs -o=jsonpath='{.spec.ports[?(@.port==5601)].nodePort}')
```
Or if not on minikube then port-forward to `localhost:5601`:
```
kubectl port-forward svc/kibana-kibana -n logs 5601:5601
```

If you want to check the Elasticsearch API with Postman, also run `kubectl port-forward svc/elasticsearch-master -n logs 9200:9200`
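
As a sketch of what that port-forward enables — the endpoint paths below are standard Elasticsearch APIs, but the index names you will see depend on your fluentd configuration:

```
# Hypothetical quick checks against the port-forwarded Elasticsearch API.
ES="http://localhost:9200"
# List the indices fluentd has created (|| true keeps this safe to paste
# even if the port-forward is not running yet):
curl -s "$ES/_cat/indices?v" || true
# Pull back a single logged document to inspect its fields:
curl -s "$ES/*/_search?size=1&pretty" || true
```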

When Kibana appears for the first time there will be a brief animation while it initializes.
On the Welcome page click Explore on my own.
From the top-left or from the `Visualize and Explore Data` panel select the `Discover` item.
In the form field Index pattern enter logstash-*
In the form field Index pattern enter *
It should read "Success!". Click the `> Next step` button on the right.
In the next form, select `timestamp` from the dropdown labeled `Time Filter field name`.
From the bottom-right of the form select `Create index pattern`.
@@ -88,13 +110,11 @@ From the top-left or the home screen's `Visualize and Explore Data` panel, selec
The log list will appear.
Refine the list a bit by selecting `log` near the bottom of the left-hand `Selected fields` list.
When you hover over or click on the word `log`, click the `Add` button to the right of the label.
You can create a filter using the `Add Filter` button under `Search`. The field can be `kubernetes.labels.seldon-app` and the value can be an 'is' match on `seldon-single-model-seldon-single-model`.

The custom fields in the request bodies may not currently be in the index. If you hover over one in a request you may see `No cached mapping for this field`.
You can create a filter using the `Add Filter` button under `Search`. The field can be `kubernetes.labels.seldon-app` and the value can be an 'is' match on `seldon-single-model-seldon-single-model-seldon-single-model`.

To add mappings, go to `Management` at the bottom-left and then `Index Patterns`. Hit `Refresh` on the index created earlier. The number of fields should increase and `request.data.names` should be present.
To add mappings, go to `Management` at the bottom-left and then `Index Patterns`. Hit `Refresh` on the index created earlier. The number of fields should increase.

Now we can go back and add a further filter for `data.names` with the operator `exists`. We can add further filters if we want, such as the presence of a feature name or the presence of a feature value.
Now we can go back and add further filters if we want.
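
The same 'is' filter can also be expressed as a raw Elasticsearch query body, useful with the port-forward described earlier. A minimal sketch, assuming the label value generated by the chart above:

```
# Build the Kibana 'is' filter as a raw Elasticsearch match_phrase query.
APP_LABEL="seldon-single-model-seldon-single-model-seldon-single-model"
QUERY="{\"query\":{\"match_phrase\":{\"kubernetes.labels.seldon-app\":\"$APP_LABEL\"}}}"
echo "$QUERY"
# POST it against all indices, e.g.:
#   curl -s -H 'Content-Type: application/json' "http://localhost:9200/*/_search?pretty" -d "$QUERY"
```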

![picture](./kibana-custom-search.png)

45 changes: 45 additions & 0 deletions examples/centralised-logging/elastic-kind.yaml
@@ -0,0 +1,45 @@
---
# Permit co-located instances for solitary minikube virtual machines.
antiAffinity: "soft"

# Shrink default JVM heap.
esJavaOpts: "-Xmx256m -Xms256m"

podAnnotations:
  fluentbit.io/exclude: "true"

replicas: 1

# Allocate smaller chunks of memory per pod.
resources:
  requests:
    cpu: "200m"
    memory: "512M"
  limits:
    cpu: "1500m"
    memory: "1024M"

# Request smaller persistent volumes.
volumeClaimTemplate:
  accessModes: [ "ReadWriteOnce" ]
  storageClassName: "local-path"
  resources:
    requests:
      storage: 400M
extraInitContainers: |
  - name: create
    image: busybox:1.28
    command: ['mkdir', '-p', '/usr/share/elasticsearch/data/nodes/']
    securityContext:
      runAsUser: 0
    volumeMounts:
    - mountPath: /usr/share/elasticsearch/data
      name: elasticsearch-master
  - name: file-permissions
    image: busybox:1.28
    command: ['chown', '-R', '1000:1000', '/usr/share/elasticsearch/']
    securityContext:
      runAsUser: 0
    volumeMounts:
    - mountPath: /usr/share/elasticsearch/data
      name: elasticsearch-master
37 changes: 30 additions & 7 deletions examples/centralised-logging/full-setup-existing-kubeflow.sh
@@ -9,18 +9,35 @@ set -o xtrace

# Assumes existing cluster with kubeflow's istio gateway
# Will put services behind kubeflow istio gateway
brokercrd=$(kubectl get crd inmemorychannels.messaging.knative.dev -o jsonpath='{.metadata.name}') || true
# First check what parts of knative are present
autoscaler=$(kubectl get deployment -n knative-serving autoscaler -o jsonpath='{.metadata.name}') || true
if [[ $autoscaler == 'autoscaler' ]] ; then
echo "knative serving already installed"
else
./request-logging/install_knative.sh
fi

imc=$(kubectl get deployment -n knative-eventing imc-controller -o jsonpath='{.metadata.name}') || true

if [[ $brokercrd == 'inmemorychannels.messaging.knative.dev' ]] ; then
echo "knative already installed"
if [[ $imc == 'imc-controller' ]] ; then
echo "knative eventing already installed"
else
./kubeflow/knative-setup-existing-istio.sh
kubectl apply --selector knative.dev/crd-install=true --filename https://github.com/knative/eventing/releases/download/v0.11.0/eventing.yaml
sleep 5
kubectl apply --filename https://github.com/knative/eventing/releases/download/v0.11.0/eventing.yaml
kubectl apply --filename https://github.com/knative/eventing/releases/download/v0.11.0/in-memory-channel.yaml
fi

#istio for knative needs to have cluster-local-gateway
#script installs any missing istio components (but leaves existing ones)
cd request-logging
./install_istio.sh
cd ..

sleep 5

kubectl create namespace seldon-system || echo "namespace seldon-system exists"
helm upgrade --install seldon-core ../../helm-charts/seldon-core-operator/ --namespace seldon-system --set istio.gateway="kubeflow-gateway.kubeflow.svc.cluster.local" --set istio.enabled="true" --set engine.logMessagesExternally="true" --set certManager.enabled="true"
helm upgrade --install seldon-core ../../helm-charts/seldon-core-operator/ --namespace seldon-system --set istio.gateway="kubeflow-gateway.kubeflow.svc.cluster.local" --set istio.enabled="true" --set certManager.enabled="true"

kubectl rollout status -n seldon-system deployment/seldon-controller-manager

@@ -29,22 +46,28 @@ sleep 5
helm upgrade --install seldon-core-analytics ../../helm-charts/seldon-core-analytics/ --namespace default -f ./kubeflow/seldon-analytics-kubeflow.yaml

kubectl create namespace logs || echo "namespace logs exists"
helm upgrade --install elasticsearch elasticsearch --version 7.5.0 --namespace=logs --set service.type=ClusterIP --set antiAffinity="soft" --repo https://helm.elastic.co
helm upgrade --install elasticsearch elasticsearch --version 7.5.2 --namespace=logs --set service.type=ClusterIP --set antiAffinity="soft" --repo https://helm.elastic.co
kubectl rollout status statefulset/elasticsearch-master -n logs

helm upgrade --install fluentd fluentd-elasticsearch --namespace=logs -f fluentd-values.yaml --repo https://kiwigrid.github.io
helm upgrade --install kibana kibana --version 7.5.0 --namespace=logs --set service.type=ClusterIP -f ./kubeflow/kibana-values.yaml --repo https://helm.elastic.co
helm upgrade --install kibana kibana --version 7.5.2 --namespace=logs --set service.type=ClusterIP -f ./kubeflow/kibana-values.yaml --repo https://helm.elastic.co

kubectl apply -f ./kubeflow/virtualservice-kibana.yaml
kubectl apply -f ./kubeflow/virtualservice-elasticsearch.yaml

kubectl rollout status deployment/kibana-kibana -n logs

#have to delete logger if existing as otherwise get 'expected exactly one, got both' err if existing resource is v1alpha1
kubectl delete -f ./request-logging/seldon-request-logger.yaml || true
kubectl apply -f ./request-logging/seldon-request-logger.yaml
# remove and recreate broker if already have one to activate eventing
kubectl delete broker -n default default || true
kubectl label namespace default knative-eventing-injection- --overwrite=true
kubectl label namespace default knative-eventing-injection=enabled --overwrite=true
#sleep 3
sleep 6
kubectl -n default get broker default

kubectl apply -f ./request-logging/trigger.yaml

ISTIO_INGRESS=$(kubectl get svc -n istio-system istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
4 changes: 2 additions & 2 deletions examples/centralised-logging/full-setup.sh
@@ -19,12 +19,12 @@ helm install --name seldon-core-loadtesting ../../helm-charts/seldon-core-loadte


kubectl create namespace logs || echo "namespace logs exists"
helm install --name elasticsearch elasticsearch --version 7.5.0 --namespace=logs -f elastic-minikube.yaml --repo https://helm.elastic.co
helm install --name elasticsearch elasticsearch --version 7.5.2 --namespace=logs -f elastic-minikube.yaml --repo https://helm.elastic.co
kubectl rollout status statefulset/elasticsearch-master -n logs
kubectl patch svc elasticsearch-master -n logs -p '{"spec": {"type": "LoadBalancer"}}'

helm install fluentd-elasticsearch --name fluentd --namespace=logs -f fluentd-values.yaml --repo https://kiwigrid.github.io
helm install kibana --version 7.5.0 --name=kibana --namespace=logs --set service.type=NodePort --repo https://helm.elastic.co
helm install kibana --version 7.5.2 --name=kibana --namespace=logs --set service.type=NodePort --repo https://helm.elastic.co

kubectl rollout status deployment/kibana-kibana -n logs

32 changes: 32 additions & 0 deletions examples/centralised-logging/kind_config.yaml
@@ -0,0 +1,32 @@
kind: Cluster
apiVersion: kind.sigs.k8s.io/v1alpha3
nodes:
- role: control-plane
- role: worker
  extraPortMappings:
  - containerPort: 30080
    hostPort: 8003
  - containerPort: 31380
    hostPort: 8004
kubeadmConfigPatches:
- |
  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  metadata:
    name: config
  kubeReserved:
    cpu: "300m"
    memory: "300Mi"
    ephemeral-storage: "1Gi"
  kubeReservedCgroup: "/kube-reserved"
  systemReserved:
    cpu: "300m"
    memory: "300Mi"
    ephemeral-storage: "1Gi"
  evictionHard:
    memory.available: "200Mi"
    nodefs.available: "10%"
  featureGates:
    DynamicKubeletConfig: true
    RotateKubeletServerCertificate: true

@@ -1,17 +1,3 @@
#this assumes installing to cloud and istio already installed e.g. with kubeflow

kubectl apply --selector knative.dev/crd-install=true \
--filename https://github.com/knative/serving/releases/download/v0.8.0/serving.yaml \
--filename https://github.com/knative/eventing/releases/download/v0.8.0/release.yaml \
--filename https://github.com/knative/serving/releases/download/v0.8.0/monitoring.yaml

kubectl apply --filename https://github.com/knative/serving/releases/download/v0.8.0/serving.yaml \
--filename https://github.com/knative/eventing/releases/download/v0.8.0/release.yaml \
--filename https://github.com/knative/serving/releases/download/v0.8.0/monitoring.yaml

kubectl label namespace default istio-injection=enabled

kubectl apply -f https://github.com/knative/eventing/releases/download/v0.8.0/eventing.yaml

#kafka if you have a kafka cluster setup already
#kubectl apply -f https://github.com/knative/eventing/releases/download/v0.8.0/kafka.yaml
./../request-logging/install_knative.sh
@@ -1,15 +1,6 @@
grafana_prom_service_type: ClusterIP
grafana_prom_admin_password: admin
grafana_anonymous_auth: true
grafana:
  virtualservice:
    enabled: true
    #trailing dash important and should be used when accessing
    prefix: "/grafana/"
    gateways:
    - kubeflow-gateway.kubeflow.svc.cluster.local
  extraEnv:
    #replace with KF gateway URI
    GF_SERVER_ROOT_URL: "%(protocol)s://%(domain)s/grafana"
nodeExporter:
  port: 9200
prometheus:
  nodeExporter:
    hostNetwork: false
    service:
      hostPort: 9200
      servicePort: 9200