Reconstruct OpenAPI Specifications from real-time workload traffic seamlessly.
- Not all applications have an OpenAPI specification available
- How can we get this for legacy or external applications?
- Detect whether microservices still use deprecated APIs (a.k.a. Zombie APIs)
- Detect whether microservices use undocumented APIs (a.k.a. Shadow APIs)
- Generate OpenAPI specifications without code instrumentation or modifying existing workloads (seamless documentation)
- Capture all API traffic in an existing environment using a service mesh framework (e.g. Istio)
- Construct an OpenAPI specification by observing API traffic or upload a reference OpenAPI spec
- Review, modify and approve automatically generated OpenAPI specs
- Alert on any differences between the approved API specification and the API calls observed at runtime; detects shadow & zombie APIs
- UI dashboard to audit and monitor the findings
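The shadow/zombie detection described above can be sketched in a few lines. This is a simplified illustration, not APIClarity's actual algorithm; the spec fragment and the observed calls below are invented for the example:

```python
# Compare API calls observed at runtime against an approved OpenAPI spec:
# calls missing from the spec are shadow APIs; calls hitting operations the
# spec marks as deprecated are zombie APIs.

approved_spec = {
    "paths": {
        "/catalogue": {"get": {}},
        "/orders": {"get": {}, "post": {}},
        "/carts": {"get": {"deprecated": True}},
    }
}

observed_calls = [
    ("get", "/catalogue"),
    ("post", "/orders"),
    ("get", "/carts"),     # deprecated in the spec -> zombie API
    ("delete", "/admin"),  # absent from the spec   -> shadow API
]

def classify(spec, calls):
    shadow, zombie = [], []
    for method, path in calls:
        op = spec["paths"].get(path, {}).get(method)
        if op is None:
            shadow.append((method, path))
        elif op.get("deprecated"):
            zombie.append((method, path))
    return shadow, zombie

shadow, zombie = classify(approved_spec, observed_calls)
print("shadow:", shadow)  # -> shadow: [('delete', '/admin')]
print("zombie:", zombie)  # -> zombie: [('get', '/carts')]
```

The same idea scales from this toy dict to a full reconstructed spec: once traffic is captured, spec-vs-runtime diffing is an ordinary set comparison over (method, path) operations.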
Build and push the APIClarity image:

```shell
docker build -t <your repo>/apiclarity .
docker push <your repo>/apiclarity
# Modify the image name of the APIClarity deployment in ./deployment/apiclarity.yaml
```

Build the UI and backend:

```shell
make ui
make backend
```
- Make sure that Istio is installed and running in your cluster. See the official Istio installation instructions for more information.
- Deploy APIClarity in K8s (it will be deployed in a new namespace, `apiclarity`):

  ```shell
  kubectl apply -f deployment/apiclarity.yaml
  ```
- Verify that APIClarity is running:

  ```shell
  $ kubectl get pods -n apiclarity
  NAME                          READY   STATUS    RESTARTS   AGE
  apiclarity-5df5fd6d98-h8v7t   1/1     Running   2          15m
  mysql-6ffc46b7f-bggrv         1/1     Running   0          15m
  ```
- Initialize and pull the `wasm-filters` submodule:

  ```shell
  git submodule init wasm-filters
  git submodule update wasm-filters
  cd wasm-filters
  ```
- Deploy the Envoy Wasm filter for capturing the traffic:
  Run the Wasm deployment script for the selected namespaces to allow traffic tracing. Tracing is accomplished by patching the Istio sidecars within the pods to load the APIClarity Wasm filter, so make sure Istio sidecar injection is enabled in every namespace you intend to trace before deploying workloads to it.
  The script will automatically:
  - Deploy the Wasm filter binary as a config map
  - Deploy the Istio Envoy filter to use the Wasm binary
  - Patch all deployment annotations within the selected namespaces to mount the Wasm binary

  ```shell
  ./deploy.sh <namespace1> <namespace2> ...
  ```
Note: To build the Wasm filter from source instead of using the pre-built binary, please follow the instructions in the wasm-filters repository.
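The patch step above can be sketched as follows. This is a hypothetical illustration, not the actual deploy.sh logic: the annotation keys are Istio's documented `sidecar.istio.io/userVolume` / `sidecar.istio.io/userVolumeMount`, while the config-map name and mount path here are made-up placeholders.

```python
import json

def wasm_mount_patch(configmap="apiclarity-wasm-filter",        # placeholder name
                     mount_path="/var/local/lib/wasm-filters"):  # placeholder path
    # Istio reads these annotations and mounts the extra volume into the
    # sidecar, making a Wasm binary stored in a config map visible to Envoy.
    volume = [{"name": "wasm-filter", "configMap": {"name": configmap}}]
    mount = [{"name": "wasm-filter", "mountPath": mount_path}]
    return {"spec": {"template": {"metadata": {"annotations": {
        "sidecar.istio.io/userVolume": json.dumps(volume),
        "sidecar.istio.io/userVolumeMount": json.dumps(mount),
    }}}}}

# A patch like this could be applied with:
#   kubectl patch deployment <name> -n <namespace> --patch '<json above>'
patch = wasm_mount_patch()
```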
- Port forward to the APIClarity UI:

  ```shell
  kubectl port-forward -n apiclarity svc/apiclarity 9999:8080
  ```
- Open the APIClarity UI in your browser: http://localhost:9999/
- Generate some traffic in the applications in the traced namespaces and check the APIClarity UI :)
A good demo application to try APIClarity with is the Sock Shop Demo.
To deploy the Sock Shop Demo follow these steps:
- Create the `sock-shop` namespace and enable Istio injection:

  ```shell
  kubectl create namespace sock-shop
  kubectl label namespaces sock-shop istio-injection=enabled
  ```
- Deploy the Sock Shop Demo to your cluster:

  ```shell
  kubectl apply -f https://raw.githubusercontent.com/microservices-demo/microservices-demo/master/deploy/kubernetes/complete-demo.yaml
  ```
- From the APIClarity git repository, deploy the Wasm filter in the `sock-shop` namespace:

  ```shell
  cd apiclarity/wasm-filters
  ./deploy.sh sock-shop
  ```
- Find the NodePort to access the Sock Shop Demo app:

  ```shell
  $ kubectl describe svc front-end -n sock-shop
  [...]
  NodePort:  <unset>  30001/TCP
  [...]
  ```
Use this port together with your node IP to access the demo webshop and run some transactions to generate data to review on the APIClarity dashboard.
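Generating those transactions can also be scripted. The helper below is a hypothetical sketch: the endpoint paths and the default NodePort are assumptions, so substitute the port reported by `kubectl describe svc front-end -n sock-shop` and your node's IP.

```python
from urllib.request import urlopen

def shop_urls(node_ip, port=30001):
    # A few read-only front-end endpoints to exercise the demo's API.
    paths = ["/", "/catalogue", "/tags"]
    return [f"http://{node_ip}:{port}{p}" for p in paths]

def generate_traffic(node_ip, port=30001, rounds=10):
    # Fetch each endpoint repeatedly so APIClarity sees enough traffic
    # to reconstruct the spec.
    for _ in range(rounds):
        for url in shop_urls(node_ip, port):
            try:
                urlopen(url, timeout=5).read()
            except OSError as err:
                print(url, "->", err)

# generate_traffic("<node-ip>")  # run with your node's IP
```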
- Build the UI and backend locally as described above:

  ```shell
  make ui && make backend
  ```
- Copy the built site:

  ```shell
  cp -r ./ui/build ./site
  ```
- Run the backend and frontend locally using demo data:

  ```shell
  FAKE_TRACES=true FAKE_TRACES_PATH=./backend/pkg/test/trace_files \
  ENABLE_DB_INFO_LOGS=true ./backend/bin/backend run
  ```
- Open the APIClarity UI in your browser: http://localhost:8080/
Pull requests and bug reports are welcome.
For larger changes, please open a GitHub issue first to discuss your proposed changes and their possible implications.