4.9 docs #253

Merged 6 commits on Aug 16, 2024

Changes from all commits
86 changes: 86 additions & 0 deletions .github/workflows/publish-version-4.8.yaml
@@ -0,0 +1,86 @@
name: Publish version 4.8

env:
  doc_versionnumber: "4.8"

on:
  push:
    branches:
      - release-4.8
  workflow_dispatch:

jobs:
  build:
    name: Build
    runs-on: ubuntu-latest

    permissions:
      contents: write
      pages: write
      id-token: write

    concurrency:
      group: "pages"
      cancel-in-progress: false

    environment:
      name: github-pages-test
      url: ${{ steps.deployment.outputs.page_url }}

    steps:
      - name: Checkout code
        uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # v4.1.1
        with:
          ref: release-4.8
          submodules: 'recursive'

      - name: Set up Pages
        id: pages
        uses: actions/configure-pages@1f0c5cde4bc74cd7e1254d0cb4de8d49e9068c7d # v4.0.0

      - name: Set up Hugo
        uses: peaceiris/actions-hugo@16361eb4acea8698b220b76c0d4e84e1fd22c61d # v2.6.0
        with:
          hugo-version: '0.110.0'
          extended: true

      - name: Set up Node
        uses: actions/setup-node@60edb5dd545a775178f52524783378180af0d1f8 # v4.0.2
        with:
          node-version: 18

      - name: Install dependencies
        run: |
          cd themes/docsy
          npm install

      - name: Set up PostCSS
        run: npm install --save-dev autoprefixer postcss-cli postcss

      - name: Build
        run: hugo --environment production --baseURL ${{ steps.pages.outputs.base_url }}/${{ env.doc_versionnumber }}/

      # - name: Upload artifact
      #   uses: actions/upload-pages-artifact@64bcae551a7b18bcb9a09042ddf1960979799187 # v1.0.8
      #   with:
      #     path: ./public/

      - name: Checkout code to update
        uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # v4.1.1
        with:
          ref: 'gh-pages-test'
          path: 'tmp/gh-pages'
      # - name: Display file structure
      #   run: ls -R
      - name: Copy built site to GH pages
        run: |
          rm -rf tmp/gh-pages/${{ env.doc_versionnumber }}
          mkdir -p tmp/gh-pages/${{ env.doc_versionnumber }}
          mv public/* tmp/gh-pages/${{ env.doc_versionnumber }}
      - name: Commit & Push changes
        uses: actions-js/push@master
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          message: 'Publish updated docs for ${{ env.doc_versionnumber }}, ${{ github.event.repository.pushed_at}}'
          branch: 'gh-pages-test'
          directory: 'tmp/gh-pages'
8 changes: 6 additions & 2 deletions config/_default/config.toml
@@ -169,9 +169,13 @@ twitter = "AxoflowIO"
#######################
# Add your release versions here
[[params.versions]]
version = "latest (4.8.0)"
version = "latest (4.9.0)"
githubbranch = "master"
url = ""
[[params.versions]]
version = "4.8"
githubbranch = "release-4.8"
url = "/4.8/"
[[params.versions]]
version = "4.7"
githubbranch = "release-4.7"
@@ -204,7 +208,7 @@ twitter = "AxoflowIO"
# Cascade version number to every doc page (needed to create sections for pagefind search)
# Update this parameter when creating a new version
[[cascade]]
body_attribute = 'data-pagefind-filter="section:4.8"'
body_attribute = 'data-pagefind-filter="section:4.9"'
[cascade._target]
path = '/docs/**'

41 changes: 41 additions & 0 deletions content/docs/configuration/crds/v1beta1/common_types.md
@@ -40,6 +40,9 @@ Metrics defines the service monitor endpoints
### prometheusRules (bool, optional) {#metrics-prometheusrules}


### prometheusRulesOverride ([]PrometheusRulesOverride, optional) {#metrics-prometheusrulesoverride}


### serviceMonitor (bool, optional) {#metrics-servicemonitor}


@@ -50,6 +53,44 @@ Metrics defines the service monitor endpoints



## PrometheusRulesOverride

### alert (string, optional) {#prometheusrulesoverride-alert}

Name of the alert. Must be a valid label value. Only one of `record` and `alert` must be set.


### annotations (map[string]string, optional) {#prometheusrulesoverride-annotations}

Annotations to add to each alert. Only valid for alerting rules.


### expr (*intstr.IntOrString, optional) {#prometheusrulesoverride-expr}

PromQL expression to evaluate.


### for (*v1.Duration, optional) {#prometheusrulesoverride-for}

Alerts are considered firing once they have been returned for this long.


### keep_firing_for (*v1.NonEmptyDuration, optional) {#prometheusrulesoverride-keep_firing_for}

KeepFiringFor defines how long an alert continues firing after the condition that triggered it has cleared.


### labels (map[string]string, optional) {#prometheusrulesoverride-labels}

Labels to add or overwrite.


### record (string, optional) {#prometheusrulesoverride-record}

Name of the time series to output to. Must be a valid metric name. Only one of `record` and `alert` must be set.
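
These fields mirror the fields of a Prometheus rule and can be used to override the alerting rules generated by the operator. A hedged sketch, assuming the overrides are merged into the generated rules by alert name and that `FluentdNodeDown` is one of the generated alerts, that lowers the severity of that alert:

{{< highlight yaml >}}
apiVersion: logging.banzaicloud.io/v1beta1
kind: Logging
metadata:
  name: example-logging
spec:
  controlNamespace: logging
  fluentd:
    metrics:
      prometheusRules: true
      prometheusRulesOverride:
        - alert: FluentdNodeDown   # assumption: matched against the generated rule of the same name
          labels:
            severity: warning
{{</ highlight >}}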



## BufferMetrics

BufferMetrics defines the service monitor endpoints
1 change: 1 addition & 0 deletions content/docs/configuration/crds/v1beta1/fluentbit_types.md
@@ -749,6 +749,7 @@ Configurable TTL for K8s cached namespace metadata. (15m)

Include Kubernetes namespace labels on every record

Default: On

### Regex_Parser (string, optional) {#filterkubernetes-regex_parser}

8 changes: 8 additions & 0 deletions content/docs/configuration/crds/v1beta1/logging_types.md
@@ -34,6 +34,14 @@ Namespace for cluster wide configuration resources like ClusterFlow and ClusterOutput
Default flow for unmatched logs. This Flow configuration collects all logs that didn't match any other Flow.


### enableDockerParserCompatibilityForCRI (bool, optional) {#loggingspec-enabledockerparsercompatibilityforcri}

Enables a log parser that is compatible with the Docker parser. This has the following benefits (see the sketch after this list):

- automatically parses JSON logs using the Merge_Log feature
- downstream parsers can use the `log` field instead of the `message` field, just like with the Docker runtime
- the `concat` and `parser` filters are automatically configured to use the `log` field
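
A minimal sketch, with an illustrative resource name and other fields omitted, of enabling this option on a `Logging` resource:

{{< highlight yaml >}}
apiVersion: logging.banzaicloud.io/v1beta1
kind: Logging
metadata:
  name: example-logging   # illustrative name
spec:
  controlNamespace: logging
  enableDockerParserCompatibilityForCRI: true
  fluentd: {}
  fluentbit: {}
{{</ highlight >}}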

### enableRecreateWorkloadOnImmutableFieldChange (bool, optional) {#loggingspec-enablerecreateworkloadonimmutablefieldchange}

EnableRecreateWorkloadOnImmutableFieldChange enables the operator to recreate the fluentbit daemonset and the fluentd statefulset (and possibly other resource in the future) in case there is a change in an immutable field that otherwise couldn't be managed with a simple update.
@@ -60,7 +60,6 @@ Enumerate all loggings with all the destination namespaces expanded

## LoggingRoute

LoggingRoute (experimental)
Connects a log collector with log aggregators from other logging domains and routes relevant logs based on watch namespaces

### (metav1.TypeMeta, required) {#loggingroute-}
@@ -11,6 +11,10 @@ SyslogNGOutputSpec defines the desired state of SyslogNGOutput
### elasticsearch (*output.ElasticsearchOutput, optional) {#syslogngoutputspec-elasticsearch}


### elasticsearch-datastream (*output.ElasticsearchDatastreamOutput, optional) {#syslogngoutputspec-elasticsearch-datastream}

Available in Logging operator version 4.9 and later.

### file (*output.FileOutput, optional) {#syslogngoutputspec-file}


@@ -37,6 +41,11 @@ Available in Logging operator version 4.4 and later.
### mongodb (*output.MongoDB, optional) {#syslogngoutputspec-mongodb}


### opentelemetry (*output.OpenTelemetryOutput, optional) {#syslogngoutputspec-opentelemetry}

Available in Logging operator version 4.9 and later.


### openobserve (*output.OpenobserveOutput, optional) {#syslogngoutputspec-openobserve}

Available in Logging operator version 4.5 and later.
4 changes: 4 additions & 0 deletions content/docs/configuration/plugins/outputs/forward.md
@@ -111,6 +111,10 @@ Server definitions at least one is required [Server](#fluentd-server)
The threshold for the chunk flush performance check. The parameter type is float, not time. Default: 20.0 (seconds). If a chunk flush takes longer than this threshold, Fluentd logs a warning message and increases the `fluentd_output_status_slow_flush_count` metric.


### time_as_integer (bool, optional) {#forwardoutput-time_as_integer}

Format the time of forwarded events as an epoch integer with second resolution. Useful when forwarding to old (<= 0.12) Fluentd servers.
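
A hedged sketch of a `forward` output sending to an older Fluentd server (the host and port are placeholders):

{{< highlight yaml >}}
apiVersion: logging.banzaicloud.io/v1beta1
kind: Output
metadata:
  name: forward-to-legacy-fluentd
spec:
  forward:
    servers:
      - host: fluentd-legacy.example.com   # placeholder address of an old (<= 0.12) Fluentd server
        port: 24224
    time_as_integer: true
{{</ highlight >}}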

### tls_allow_self_signed_cert (bool, optional) {#forwardoutput-tls_allow_self_signed_cert}

Allow self-signed certificates or not.
7 changes: 6 additions & 1 deletion content/docs/configuration/plugins/outputs/kafka.md
@@ -33,7 +33,7 @@ spec:
## Configuration
## Kafka

Send your logs to Kafka
Send your logs to Kafka. Set `use_rdkafka` to `true` to use the rdkafka2 client, which offers higher performance than ruby-kafka.

### ack_timeout (int, optional) {#kafka-ack_timeout}

Expand Down Expand Up @@ -240,6 +240,11 @@ Use default for unknown topics

Default: false

### use_rdkafka (bool, optional) {#kafka-use_rdkafka}

Use rdkafka2 instead of the legacy kafka2 output plugin. This plugin requires fluentd image version v1.16-4.9-full or higher.
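
A minimal sketch of a Kafka output with the rdkafka2 client enabled, loosely following the example at the top of this page (the broker address and topic are placeholders):

{{< highlight yaml >}}
apiVersion: logging.banzaicloud.io/v1beta1
kind: Output
metadata:
  name: kafka-output
spec:
  kafka:
    brokers: kafka-headless.kafka.svc.cluster.local:29092   # placeholder broker address
    default_topic: topic
    format:
      type: json
    use_rdkafka: true
{{</ highlight >}}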


### username (*secret.Secret, optional) {#kafka-username}

Username when using PLAIN/SCRAM SASL authentication
@@ -0,0 +1,49 @@
---
title: Elasticsearch datastream
weight: 200
generated_file: true
---

## Overview

Based on the [ElasticSearch datastream destination of AxoSyslog](https://axoflow.com/docs/axosyslog-core/chapter-destinations/configuring-destinations-elasticsearch-datastream/).

Available in Logging operator version 4.9 and later.

## Example

{{< highlight yaml >}}
apiVersion: logging.banzaicloud.io/v1beta1
kind: SyslogNGOutput
metadata:
  name: elasticsearch-datastream
spec:
  elasticsearch-datastream:
    url: "https://elastic-endpoint:9200/my-data-stream/_bulk"
    user: "username"
    password:
      valueFrom:
        secretKeyRef:
          name: elastic
          key: password
{{</ highlight >}}


## Configuration
## ElasticsearchDatastreamOutput

### (HTTPOutput, required) {#elasticsearchdatastreamoutput-}


### disk_buffer (*DiskBuffer, optional) {#elasticsearchdatastreamoutput-disk_buffer}

This option enables putting outgoing messages into the disk buffer of the destination to avoid message loss in case of a system failure on the destination side. For details, see the [Syslog-ng DiskBuffer options](../disk_buffer/).

Default: false

### record (string, optional) {#elasticsearchdatastreamoutput-record}

Arguments to the `$format-json()` template function. Default: `"--scope rfc5424 --exclude DATE --key ISODATE @timestamp=${ISODATE}"`



@@ -0,0 +1,68 @@
---
title: OpenTelemetry output
weight: 200
generated_file: true
---

## Overview

Sends messages over OpenTelemetry gRPC. For details on the available options of the output, see the [documentation of AxoSyslog](https://axoflow.com/docs/axosyslog-core/chapter-destinations/opentelemetry/).

Available in Logging operator version 4.9 and later.

## Example

A simple example sending logs over OpenTelemetry gRPC to a remote OpenTelemetry endpoint:

{{< highlight yaml >}}
kind: SyslogNGOutput
apiVersion: logging.banzaicloud.io/v1beta1
metadata:
  name: otlp
spec:
  opentelemetry:
    url: otel-server
    port: 4379
{{</ highlight >}}



## Configuration
## OpenTelemetryOutput

### (Batch, required) {#opentelemetryoutput-}

Batching parameters

<!-- FIXME -->


### auth (*Auth, optional) {#opentelemetryoutput-auth}

Authentication configuration, see the [documentation of the AxoSyslog syslog-ng distribution](https://axoflow.com/docs/axosyslog-core/chapter-destinations/destination-syslog-ng-otlp/#auth).


### channel_args (filter.ArrowMap, optional) {#opentelemetryoutput-channel_args}

Add gRPC channel arguments. For details, see https://axoflow.com/docs/axosyslog-core/chapter-destinations/opentelemetry/#channel-args
<!-- FIXME -->


### compression (*bool, optional) {#opentelemetryoutput-compression}

Enable or disable compression.

Default: false

### disk_buffer (*DiskBuffer, optional) {#opentelemetryoutput-disk_buffer}

This option enables putting outgoing messages into the disk buffer of the destination to avoid message loss in case of a system failure on the destination side. For details, see the [Syslog-ng DiskBuffer options](../disk_buffer/).

Default: false

### url (string, required) {#opentelemetryoutput-url}

Specifies the hostname or IP address and optionally the port number of the web service that can receive log data via HTTP. Use a colon (:) after the address to specify the port number of the server. For example: `http://127.0.0.1:8000`



16 changes: 16 additions & 0 deletions content/docs/image-versions.md
@@ -5,6 +5,22 @@ weight: 750

Logging operator uses the following image versions.

## Logging operator version 4.9

| Image repository | GitHub repository | Version |
| -------- | --- | -- |
| ghcr.io/kube-logging/node-exporter | https://github.com/kube-logging/node-exporter-image | v0.7.1 |
| ghcr.io/kube-logging/config-reloader | https://github.com/kube-logging/config-reloader | v0.0.5 |
| ghcr.io/kube-logging/fluentd-drain-watch | https://github.com/kube-logging/fluentd-drain-watch | v0.2.1 |
| k8s.gcr.io/pause | | 3.2 |
| docker.io/busybox | https://github.com/docker-library/busybox | latest |
| ghcr.io/axoflow/axosyslog | https://github.com/axoflow/axosyslog/ | 4.8.0 |
| docker.io/fluent/fluent-bit | https://github.com/fluent/fluent-bit | 3.0.4 |
| ghcr.io/kube-logging/fluentd | https://github.com/kube-logging/fluentd-images | v1.16-4.9-full |
| ghcr.io/axoflow/axosyslog-metrics-exporter | https://github.com/axoflow/axosyslog-metrics-exporter | 0.0.2 |
| ghcr.io/kube-logging/syslogng-reload | https://github.com/kube-logging/syslogng-reload-image | v1.4.0 |
| ghcr.io/kube-logging/eventrouter | https://github.com/kube-logging/eventrouter | 0.4.0 |

## Logging operator version 4.8

| Image repository | GitHub repository | Version |