feat: Add Signoz as a Datastore (#2935)
* Adding Signoz example with OTLP datastore

* Update README

* Adding support for Signoz data store

* Adding initial docs for Signoz

* Adding Signoz on Examples
danielbdias authored Jul 13, 2023
1 parent 06ba21a commit c124db7
Showing 35 changed files with 2,488 additions and 2 deletions.
1 change: 1 addition & 0 deletions .github/workflows/pull-request.yaml
@@ -220,6 +220,7 @@ jobs:
          - tracetest-tempo
          - tracetest-no-tracing
          - tracetest-provisioning-env
          - tracetest-signoz
    steps:
      - name: Checkout
        uses: actions/checkout@v3
1 change: 1 addition & 0 deletions api/dataStores.yaml
@@ -194,6 +194,7 @@ components:
          awsxray,
          honeycomb,
          azureappinsights,
          signoz
        ]
    SupportedClients:
      type: string
2 changes: 2 additions & 0 deletions cli/openapi/model_supported_data_stores.go

Some generated files are not rendered by default.

99 changes: 99 additions & 0 deletions docs/docs/configuration/connecting-to-data-stores/signoz.md
@@ -0,0 +1,99 @@
# Signoz

If you want to use [Signoz](https://signoz.io/) as the trace data store, configure the OpenTelemetry Collector to receive traces from your system and send them to both Tracetest and Signoz. You don't have to change your existing pipelines to do so.

:::tip
Examples of configuring Tracetest with Signoz can be found in the [`examples` folder of the Tracetest GitHub repo](https://github.com/kubeshop/tracetest/tree/main/examples).
:::

## Configuring OpenTelemetry Collector to Send Traces to both Signoz and Tracetest

In your OpenTelemetry Collector config file:

- Set the `exporter` to `otlp/tracetest`
- Set the `endpoint` to your Tracetest instance on port `4317`

:::tip
If you are running Tracetest with Docker and Tracetest's service name is `tracetest`, then the endpoint might look like this: `http://tracetest:4317`.
:::

Additionally, add another config:

- Set the `exporter` to `otlp/signoz`
- Set the `endpoint` to your Signoz instance on port `4317`

```yaml
# collector.config.yaml

# If you already have receivers declared, you can ignore
# this one and keep using yours instead.
receivers:
  otlp:
    protocols:
      grpc:
      http:

processors:
  batch:
    timeout: 100ms

exporters:
  logging:
    logLevel: debug
  # OTLP for Tracetest
  otlp/tracetest:
    endpoint: tracetest:4317 # Send traces to Tracetest. Read more in docs here: https://docs.tracetest.io/configuration/connecting-to-data-stores/opentelemetry-collector
    tls:
      insecure: true
  # OTLP for Signoz
  otlp/signoz:
    endpoint: address-to-your-signoz-server:4317 # Send traces to Signoz. Read more in docs here: https://signoz.io/docs/tutorial/opentelemetry-binary-usage-in-virtual-machine/#opentelemetry-collector-configuration
    tls:
      insecure: true

service:
  pipelines:
    traces/tracetest: # Pipeline to send data to Tracetest
      receivers: [otlp]
      processors: [batch]
      exporters: [logging, otlp/tracetest]
    traces/signoz: # Pipeline to send data to Signoz
      receivers: [otlp]
      processors: [batch]
      exporters: [logging, otlp/signoz]
```

## Configure Tracetest to Use Signoz as a Trace Data Store

Configure your Tracetest instance to expose an `otlp` endpoint so that it knows it will receive traces from the OpenTelemetry Collector. This exposes Tracetest's trace receiver on port `4317`.
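
In the Docker-based examples this is typically done with a provisioning file that declares Signoz as the default data store. Below is a minimal sketch, assuming the file is named `tracetest-provision.yaml` (the name used by the example later in this commit); the `PollingProfile` values are illustrative assumptions.

```yaml
# tracetest-provision.yaml -- sketch; adjust values to your setup
---
type: PollingProfile
spec:
  name: Default
  strategy: periodic
  default: true
  periodic:
    retryDelay: 5s   # how often Tracetest polls the backend for spans (assumed value)
    timeout: 10m     # stop polling for a trace after this long (assumed value)

---
type: DataStore
spec:
  name: Signoz
  type: signoz
  default: true
```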

## Connect Tracetest to Signoz with the Web UI

In the Web UI, (1) open Settings, and on the (2) Configure Data Store tab, (3) select Signoz.

<!-- TODO: create this image using the same standard as the other stores -->
![Signoz](../img/Signoz-settings.png)

## Connect Tracetest to Signoz with the CLI

Or, if you prefer using the CLI, you can use this config file.

```yaml
type: DataStore
spec:
  name: Signoz pipeline
  type: signoz
  default: true
```

Then run this command in the terminal, specifying the file above.

```bash
tracetest apply datastore -f my/data-store/file/location.yaml
```

<!--
TODO: create a tutorial for signoz
:::tip
To learn more, [read the recipe on running a sample app with Signoz and Tracetest](../../examples-tutorials/recipes/running-tracetest-with-signoz.md).
:::
-->
4 changes: 4 additions & 0 deletions examples/tracetest-signoz/.gitignore
@@ -0,0 +1,4 @@
signoz/data/alertmanager/*
signoz/data/clickhouse/*
signoz/data/signoz/*
signoz/data/zookeeper-1/*
10 changes: 10 additions & 0 deletions examples/tracetest-signoz/README.md
@@ -0,0 +1,10 @@
# Tracetest + Signoz

The objective of this example is to show how you can configure your Tracetest instance to connect to Signoz and use it as its tracing backend.

## Steps

1. [Install the tracetest CLI](https://docs.tracetest.io/installing/)
2. Run `tracetest configure --endpoint http://localhost:11633` in a terminal
3. Run the project with Docker Compose: `docker-compose up` (Linux) or `docker compose up` (Mac)
4. Test that everything works by running `tracetest test run -d tracetest/tests/list-tests.yaml`. This triggers a test that sends and retrieves spans from the Signoz instance running on your machine (see the sketch after this list).
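
For reference, a Tracetest test definition has roughly the following shape. This is a minimal, hypothetical sketch, not the actual contents of `tracetest/tests/list-tests.yaml`; the id, trigger URL, and assertion are illustrative assumptions.

```yaml
# Hypothetical sketch of a Tracetest test definition; the real
# tracetest/tests/list-tests.yaml in this example may differ.
type: Test
spec:
  id: list-tests                 # illustrative id
  name: List Tests
  trigger:
    type: http                   # trigger the test with an HTTP call
    httpRequest:
      url: http://tracetest:11633/api/tests   # assumed target endpoint
      method: GET
  specs:
    - selector: span[tracetest.span.type="http"]   # match HTTP spans in the trace
      assertions:
        - attr:tracetest.span.duration < 500ms     # example assertion
```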
174 changes: 174 additions & 0 deletions examples/tracetest-signoz/docker-compose.yml
@@ -0,0 +1,174 @@
version: '3'
services:
  tracetest:
    image: kubeshop/tracetest:${TAG:-latest}
    platform: linux/amd64
    volumes:
      - type: bind
        source: ./tracetest/tracetest-config.yaml
        target: /app/tracetest.yaml
      - type: bind
        source: ./tracetest/tracetest-provision.yaml
        target: /app/provision.yaml
    command: --provisioning-file /app/provision.yaml
    ports:
      - 11633:11633
    extra_hosts:
      - "host.docker.internal:host-gateway"
    depends_on:
      postgres:
        condition: service_healthy
      otel-collector:
        condition: service_started
    healthcheck:
      test: [ "CMD", "wget", "--spider", "localhost:11633" ]
      interval: 1s
      timeout: 3s
      retries: 60
    environment:
      TRACETEST_DEV: ${TRACETEST_DEV}

  postgres:
    image: postgres:14
    environment:
      POSTGRES_PASSWORD: postgres
      POSTGRES_USER: postgres
    healthcheck:
      test: pg_isready -U "$$POSTGRES_USER" -d "$$POSTGRES_DB"
      interval: 1s
      timeout: 5s
      retries: 60

  otel-collector:
    image: otel/opentelemetry-collector:0.54.0
    command:
      - "--config"
      - "/otel-local-config.yaml"
    volumes:
      - ./tracetest/collector.config.yaml:/otel-local-config.yaml
    ports:
      - 4317:4317
    depends_on:
      signoz-otel-collector:
        condition: service_started
      signoz-otel-collector-metrics:
        condition: service_started

  #################################################################
  # Signoz setup
  #################################################################
  zookeeper-1:
    image: bitnami/zookeeper:3.7.1
    container_name: zookeeper-1
    hostname: zookeeper-1
    user: root
    volumes:
      - ./signoz/data/zookeeper-1:/bitnami/zookeeper
    environment:
      - ZOO_SERVER_ID=1
      - ALLOW_ANONYMOUS_LOGIN=yes
      - ZOO_AUTOPURGE_INTERVAL=1

  clickhouse:
    restart: on-failure
    image: clickhouse/clickhouse-server:22.8.8-alpine
    tty: true
    depends_on:
      - zookeeper-1
    logging:
      options:
        max-size: 50m
        max-file: "3"
    healthcheck:
      test: ["CMD", "wget", "--spider", "-q", "localhost:8123/ping"]
      interval: 30s
      timeout: 5s
      retries: 3
    ulimits:
      nproc: 65535
      nofile:
        soft: 262144
        hard: 262144
    container_name: clickhouse
    hostname: clickhouse
    volumes:
      - ./signoz/clickhouse-config.xml:/etc/clickhouse-server/config.xml
      - ./signoz/clickhouse-users.xml:/etc/clickhouse-server/users.xml
      - ./signoz/custom-function.xml:/etc/clickhouse-server/custom-function.xml
      - ./signoz/clickhouse-cluster.xml:/etc/clickhouse-server/config.d/cluster.xml
      - ./signoz/data/clickhouse/:/var/lib/clickhouse/
      - ./signoz/user_scripts:/var/lib/clickhouse/user_scripts/

  alertmanager:
    image: signoz/alertmanager:${ALERTMANAGER_TAG:-0.23.1}
    volumes:
      - ./signoz/data/alertmanager:/data
    depends_on:
      query-service:
        condition: service_healthy
    restart: on-failure
    command:
      - --queryService.url=http://query-service:8085
      - --storage.path=/data

  query-service:
    image: signoz/query-service:${DOCKER_TAG:-0.22.0}
    command: ["-config=/root/config/prometheus.yml"]
    volumes:
      - ./signoz/prometheus.yml:/root/config/prometheus.yml
      - ./signoz/data/signoz/:/var/lib/signoz/
    environment:
      - ClickHouseUrl=tcp://clickhouse:9000/?database=signoz_traces
      - ALERTMANAGER_API_PREFIX=http://alertmanager:9093/api/
      - SIGNOZ_LOCAL_DB_PATH=/var/lib/signoz/signoz.db
      - DASHBOARDS_PATH=/root/config/dashboards
      - STORAGE=clickhouse
      - GODEBUG=netdns=go
      - TELEMETRY_ENABLED=true
      - DEPLOYMENT_TYPE=docker-standalone-amd
    restart: on-failure
    healthcheck:
      test: ["CMD", "wget", "--spider", "-q", "localhost:8080/api/v1/health"]
      interval: 30s
      timeout: 5s
      retries: 3
    depends_on:
      clickhouse:
        condition: service_healthy

  frontend:
    image: signoz/frontend:${DOCKER_TAG:-0.22.0}
    restart: on-failure
    depends_on:
      - alertmanager
      - query-service
    ports:
      - 3301:3301
    volumes:
      - ./signoz/common/nginx-config.conf:/etc/nginx/conf.d/default.conf

  signoz-otel-collector:
    image: signoz/signoz-otel-collector:${OTELCOL_TAG:-0.79.2}
    command: ["--config=/etc/otel-collector-config.yaml", "--feature-gates=-pkg.translator.prometheus.NormalizeName"]
    user: root # required for reading docker container logs
    volumes:
      - ./signoz/otel-collector-config.yaml:/etc/otel-collector-config.yaml
      - /var/lib/docker/containers:/var/lib/docker/containers:ro
    environment:
      - OTEL_RESOURCE_ATTRIBUTES=host.name=signoz-host,os.type=linux
      - DOCKER_MULTI_NODE_CLUSTER=false
      - LOW_CARDINAL_EXCEPTION_GROUPING=false
    restart: on-failure
    depends_on:
      clickhouse:
        condition: service_healthy

  signoz-otel-collector-metrics:
    image: signoz/signoz-otel-collector:${OTELCOL_TAG:-0.79.2}
    command: ["--config=/etc/otel-collector-metrics-config.yaml", "--feature-gates=-pkg.translator.prometheus.NormalizeName"]
    volumes:
      - ./signoz/otel-collector-metrics-config.yaml:/etc/otel-collector-metrics-config.yaml
    restart: on-failure
    depends_on:
      clickhouse:
        condition: service_healthy
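
As a quick usage sketch for the Compose file above (assuming you run it from the `examples/tracetest-signoz` folder; the ports come from the service definitions):

```bash
# Start the full stack: Tracetest, Postgres, both collectors, and Signoz
docker compose up   # or `docker-compose up` on older installs

# Once healthy:
#   Tracetest UI/API -> http://localhost:11633
#   Signoz UI        -> http://localhost:3301
```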
35 changes: 35 additions & 0 deletions examples/tracetest-signoz/signoz/alertmanager.yml
@@ -0,0 +1,35 @@
global:
  resolve_timeout: 1m
  slack_api_url: 'https://hooks.slack.com/services/xxx'

route:
  receiver: 'slack-notifications'

receivers:
  - name: 'slack-notifications'
    slack_configs:
      - channel: '#alerts'
        send_resolved: true
        icon_url: https://avatars3.githubusercontent.com/u/3380462
        title: |-
          [{{ .Status | toUpper }}{{ if eq .Status "firing" }}:{{ .Alerts.Firing | len }}{{ end }}] {{ .CommonLabels.alertname }} for {{ .CommonLabels.job }}
          {{- if gt (len .CommonLabels) (len .GroupLabels) -}}
            {{" "}}(
            {{- with .CommonLabels.Remove .GroupLabels.Names }}
              {{- range $index, $label := .SortedPairs -}}
                {{ if $index }}, {{ end }}
                {{- $label.Name }}="{{ $label.Value -}}"
              {{- end }}
            {{- end -}}
            )
          {{- end }}
        text: >-
          {{ range .Alerts -}}
          *Alert:* {{ .Annotations.title }}{{ if .Labels.severity }} - `{{ .Labels.severity }}`{{ end }}
          *Description:* {{ .Annotations.description }}
          *Details:*
          {{ range .Labels.SortedPairs }} • *{{ .Name }}:* `{{ .Value }}`
          {{ end }}
          {{ end }}
11 changes: 11 additions & 0 deletions examples/tracetest-signoz/signoz/alerts.yml
@@ -0,0 +1,11 @@
groups:
  - name: ExampleCPULoadGroup
    rules:
      - alert: HighCpuLoad
        expr: system_cpu_load_average_1m > 0.1
        for: 0m
        labels:
          severity: warning
        annotations:
          summary: High CPU load
          description: "CPU load is > 0.1\n  VALUE = {{ $value }}\n  LABELS = {{ $labels }}"