Merge branch 'next' into chore/go-kong-0.18.0
mmorel-35 authored Jun 16, 2021
2 parents e4270a0 + 8049db4 commit 9c0efbc
Showing 8 changed files with 321 additions and 10 deletions.
4 changes: 4 additions & 0 deletions .github/workflows/2x-integration-test.yaml
@@ -15,6 +15,8 @@ on:
jobs:
  integration-test-dbless:
    runs-on: ubuntu-latest
    env:
      KONG_TEST_ENVIRONMENT: true
    steps:
    - name: setup golang
      uses: actions/setup-go@v2
@@ -34,6 +36,8 @@ jobs:
      working-directory: ./railgun
  integration-test-postgres:
    runs-on: ubuntu-latest
    env:
      KONG_TEST_ENVIRONMENT: true
    steps:
    - name: setup golang
      uses: actions/setup-go@v2
2 changes: 1 addition & 1 deletion go.mod
@@ -15,7 +15,7 @@ require (
	github.com/hashicorp/go-uuid v1.0.2
	github.com/kong/deck v1.7.0
	github.com/kong/go-kong v0.19.0
	github.com/kong/kubernetes-testing-framework v0.0.11
	github.com/kong/kubernetes-testing-framework v0.0.12
	github.com/lithammer/dedent v1.1.0
	github.com/magiconair/properties v1.8.4 // indirect
	github.com/mitchellh/mapstructure v1.4.1
4 changes: 2 additions & 2 deletions go.sum
@@ -473,8 +473,8 @@ github.com/kong/deck v1.7.0 h1:uo4NdcChHoM9sb2Z8YPAuFyobv8bxaQuayA00tzN5js=
github.com/kong/deck v1.7.0/go.mod h1:o2letQaSpXVnDNoXehEibOF6q7v46qtbsKOCC+1owAw=
github.com/kong/go-kong v0.19.0 h1:PCgxU9KsLD7eOt4xwGthdI7G4X61V5NVE0u9s1z+62U=
github.com/kong/go-kong v0.19.0/go.mod h1:HyNtOxzh/tzmOV//ccO5NAdmrCnq8b86YUPjmdy5aog=
github.com/kong/kubernetes-testing-framework v0.0.11 h1:zsHug7QDG0qV5yGjpadD/DVWlt1VfluciK703DjIBh0=
github.com/kong/kubernetes-testing-framework v0.0.11/go.mod h1:dktDNwvzFxH+MHnQwxLxbz6WJaeKnAGK1TsRyfs1YBY=
github.com/kong/kubernetes-testing-framework v0.0.12 h1:shUm/Nmz8yFHzCm3eKe1AWgFagw8L3jYDkX1WIJankg=
github.com/kong/kubernetes-testing-framework v0.0.12/go.mod h1:dktDNwvzFxH+MHnQwxLxbz6WJaeKnAGK1TsRyfs1YBY=
github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/konsorten/go-windows-terminal-sequences v1.0.2/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/konsorten/go-windows-terminal-sequences v1.0.3/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
134 changes: 134 additions & 0 deletions pkg/util/debug_logging.go
@@ -0,0 +1,134 @@
package util

import (
"io"
"time"

"github.com/sirupsen/logrus"
)

// -----------------------------------------------------------------------------
// Public - Reduced Redundancy Debug Logging
// -----------------------------------------------------------------------------

// MakeDebugLoggerWithReducedRedudancy creates a *logrus.Logger that "stifles" repetitive logs.
//
// The "stifling" mechanism is triggered by one of two conditions, the result of which
// is that the "stifled" log entry is dropped entirely.
//
// The conditions checked are:
//
//  1. An identical log entry was posted within the last "redundantLogEntryBackoff".
//     For example, if this is set to "time.Second * 3" and the logger has already
//     emitted an identical message within the previous 3 seconds, the new entry
//     is dropped.
//
//  2. Identical entries were logged consecutively a number of times equal to the
//     provided "redudantLogEntryAllowedConsecutively" count. For example, if this is
//     set to 3 and the last 3 entries emitted were the same message, further entries
//     with that message are dropped.
//
// The caller can set either argument to "0" to disable that check, but setting both
// to zero results in no redundancy reduction at all.
//
// NOTE: please consider this a "debug"-only logging implementation. It was originally
// created to reduce the noise coming from the controller during integration tests for
// better human readability, so keep in mind it was built for testing environments. If
// you're considering using it somewhere that produces production logs, be aware that
// the logging hooks it adds carry significant performance overhead.
func MakeDebugLoggerWithReducedRedudancy(writer io.Writer, formatter logrus.Formatter,
	redudantLogEntryAllowedConsecutively int, redundantLogEntryBackoff time.Duration,
) *logrus.Logger {
	// set up the logger with debug level logging
	log := logrus.New()
	log.Level = logrus.DebugLevel
	log.Formatter = formatter
	log.Out = writer

	// set up the reduced redundancy logging hook
	log.Hooks.Add(newReducedRedundancyLogHook(redundantLogEntryBackoff, redudantLogEntryAllowedConsecutively, formatter))
	return log
}

// -----------------------------------------------------------------------------
// Private - Reduced Redundancy Debug Logging
// -----------------------------------------------------------------------------

// reducedRedudancyLogHook is a logrus.Hook that reduces redundant log entries.
type reducedRedudancyLogHook struct {
	backoff            time.Duration
	consecutiveAllowed int
	consecutivePosted  int
	formatter          logrus.Formatter
	lastMessage        string
	timeWindow         map[string]bool
	timeWindowStart    time.Time
}

func newReducedRedundancyLogHook(
	backoff time.Duration,
	consecutive int,
	formatter logrus.Formatter,
) *reducedRedudancyLogHook {
	return &reducedRedudancyLogHook{
		backoff:            backoff,
		consecutiveAllowed: consecutive,
		formatter:          formatter,
		timeWindowStart:    time.Now(),
		timeWindow:         map[string]bool{},
	}
}

func (r *reducedRedudancyLogHook) Fire(entry *logrus.Entry) error {
	defer func() { r.lastMessage = entry.Message }()

	// to make this hook work we override the logger's formatter to the nilFormatter
	// for some entries, so we need to reset it here to restore the default.
	entry.Logger.Formatter = r.formatter

	// if the current entry has the exact same message as the last entry, check the
	// consecutive posting rules to see whether this entry should be dropped.
	if r.consecutiveAllowed > 0 && entry.Message == r.lastMessage {
		r.consecutivePosted++
		if r.consecutivePosted >= r.consecutiveAllowed {
			entry.Logger.SetFormatter(&nilFormatter{})
			return nil
		}
	} else {
		r.consecutivePosted = 0
	}

	// determine whether the previous time window is still valid; if not, start
	// a new time window and return.
	if time.Now().After(r.timeWindowStart.Add(r.backoff)) {
		r.timeWindow = map[string]bool{}
		r.timeWindowStart = time.Now()
		return nil
	}

	// if we're here then the time window is still valid, so determine whether the
	// current entry is redundant within this window. if the entry has not yet been
	// seen during this window, record it so that future checks can find it.
	if _, ok := r.timeWindow[entry.Message]; ok {
		entry.Logger.SetFormatter(&nilFormatter{})
	}
	r.timeWindow[entry.Message] = true

	return nil
}

func (r *reducedRedudancyLogHook) Levels() []logrus.Level {
	return logrus.AllLevels
}

// -----------------------------------------------------------------------------
// Private - Nil Logging Formatter
// -----------------------------------------------------------------------------

// nilFormatter renders every log entry to nothing, effectively dropping it.
type nilFormatter struct{}

func (n *nilFormatter) Format(entry *logrus.Entry) ([]byte, error) {
	return nil, nil
}
127 changes: 127 additions & 0 deletions pkg/util/debug_logging_test.go
@@ -0,0 +1,127 @@
package util

import (
"bytes"
"encoding/json"
"strings"
"testing"
"time"

"github.com/sirupsen/logrus"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)

func TestDebugLoggerStiflesConsecutiveEntries(t *testing.T) {
	// initialize the debug logger with no backoff time, but a limit of 3 consecutive redundant entries
	buf := new(bytes.Buffer)
	log := MakeDebugLoggerWithReducedRedudancy(buf, &logrus.JSONFormatter{}, 3, time.Millisecond*0)
	assert.True(t, log.IsLevelEnabled(logrus.DebugLevel))
	assert.False(t, log.IsLevelEnabled(logrus.TraceLevel))

	// spam the logger with redundant entries and validate that only 3 entries (the limit) were actually emitted
	for i := 0; i < 100; i++ {
		log.Info("test")
	}
	lines := strings.Split(strings.TrimSpace(buf.String()), "\n")
	assert.Len(t, lines, 3)

	// validate the logging data integrity
	for _, line := range lines {
		var entry map[string]string
		require.NoError(t, json.Unmarshal([]byte(line), &entry))
		assert.Equal(t, "test", entry["msg"])
	}
}

func TestDebugLoggerResetsConsecutiveEntries(t *testing.T) {
	// initialize the debug logger with no backoff time, but a limit of 5 consecutive redundant entries
	buf := new(bytes.Buffer)
	log := MakeDebugLoggerWithReducedRedudancy(buf, &logrus.JSONFormatter{}, 5, time.Millisecond*0)
	assert.True(t, log.IsLevelEnabled(logrus.DebugLevel))
	assert.False(t, log.IsLevelEnabled(logrus.TraceLevel))

	// spam the logger with redundant entries, breaking the consecutive run every 5th
	// entry, and validate that the reset means all 100 entries are actually emitted
	for i := 0; i < 100; i++ {
		if i%5 == 0 {
			log.Info("break")
			continue
		}
		log.Info("test")
	}
	lines := strings.Split(strings.TrimSpace(buf.String()), "\n")
	assert.Len(t, lines, 100)

	// validate the logging data integrity
	for i, line := range lines {
		var entry map[string]string
		require.NoError(t, json.Unmarshal([]byte(line), &entry))
		if i%5 == 0 {
			assert.Equal(t, "break", entry["msg"])
		} else {
			assert.Equal(t, "test", entry["msg"])
		}
	}
}

func TestDebugLoggerStiflesEntriesWhichAreTooFrequent(t *testing.T) {
	// initialize the debug logger with no consecutive entry limit, but a time backoff of 30m
	buf := new(bytes.Buffer)
	log := MakeDebugLoggerWithReducedRedudancy(buf, &logrus.JSONFormatter{}, 0, time.Minute*30)

	// spam the logger, but validate that only one entry gets printed within the backoff timeframe
	for i := 0; i < 100; i++ {
		log.Debug("unique")
	}
	lines := strings.Split(strings.TrimSpace(buf.String()), "\n")
	assert.Len(t, lines, 1)

	// validate the log entry
	var entry map[string]string
	require.NoError(t, json.Unmarshal([]byte(lines[0]), &entry))
	assert.Equal(t, "unique", entry["msg"])
}

func TestDebugLoggerStopsStiflingEntriesAfterBackoffExpires(t *testing.T) {
	// set up the backoff and determine start/stop times
	start := time.Now()
	backoff := time.Millisecond * 100
	stop := start.Add(backoff)

	// initialize the debug logger with no consecutive entry limit, but a time-based backoff
	buf := new(bytes.Buffer)
	log := MakeDebugLoggerWithReducedRedudancy(buf, &logrus.JSONFormatter{}, 0, backoff)

	// spam the logger and validate that the testing environment didn't take longer than 100ms to process this
	for i := 0; i < 100; i++ {
		log.Debug("unique")
	}
	assert.True(t, time.Now().Before(stop),
		"validate that the resource contention in the testing environment is not overt")

	// verify that all lines beyond the original were stifled
	lines := strings.Split(strings.TrimSpace(buf.String()), "\n")
	require.Len(t, lines, 1)

	// validate the log data integrity
	var entry map[string]interface{}
	require.NoError(t, json.Unmarshal([]byte(lines[0]), &entry))
	assert.Equal(t, "unique", entry["msg"])

	// wait until the backoff time is up and validate that one entry of the previously
	// redundant log message is emitted now that the backoff is over.
	time.Sleep(backoff)
	log.Debug("second-unique")

	// verify that a new backoff period started and only the one new line was added
	lines = strings.Split(strings.TrimSpace(buf.String()), "\n")
	require.Len(t, lines, 2)

	// validate the log data integrity
	require.NoError(t, json.Unmarshal([]byte(lines[0]), &entry))
	assert.Equal(t, "unique", entry["msg"])
	require.NoError(t, json.Unmarshal([]byte(lines[1]), &entry))
	assert.Equal(t, "second-unique", entry["msg"])
}
29 changes: 27 additions & 2 deletions railgun/keps/0003-kic-kubebuilder-re-architecture.md
@@ -141,7 +141,29 @@ This re-architecture is focused on moving the existing APIs, controllers, and li

These among other gains will bring us closer to how upstream works and will reduce the amount of maintenance we have to perform to keep up to date and add features.

On top of transplanting our APIs and adding a new Go API, we also want to transplant the existing monolithic controller onto `controller-runtime` and align ourselves with other controller implementations throughout the community.
On top of transplanting our APIs and adding a new Go API, we also want to transplant the existing monolithic controller onto `controller-runtime`, re-architect our controller code to fit better into a microservices pattern for better maintainability and scalability prospects, and ultimately align ourselves with other controller implementations throughout the community.

### Proxy Caching

In the previous `v1.x` versions of the KIC, `client-go` caching was used as the interim store for Kubernetes object updates between parsing, translating, and POSTing updates to the Kong Admin API. The functionality which supported this had limitations in configurability, functionality, profiling, K8s status updates, and logging.

Additionally, from the perspective of callers of the code responsible for this cache, there were several leaky abstractions: the caller had to have some awareness of the Kong DSL and use the library at several conversion points between K8s objects and the Kong DSL before submitting the Kong DSL updates to the Kong Admin API.

For `v2.x+` we've created a new implementation of the Proxy Cache under `railgun/internal/proxy` which runs as a discrete server (goroutine) alongside the manager routine and can be used by independent controllers to asynchronously cache updates to Kubernetes objects and do the parsing, translating, and updates to the Kong Admin API as part of a single opaque service. This new architecture enables improved operations: the proxy cache server will log itself as an independent component of the KIC, and will have extensive logging, particularly when problems from the Kong Admin API arise. It additionally makes a paradigm shift to start supporting status updates (from the cache server) on Kubernetes objects as reconciliation triggers, where we had limited statuses or events available prior.

### Architecture Overview

The previous `v1.x` architecture was monolithic in nature, and the entire stack was run as a single runtime and unit:

TODO: image

For this iteration it isn't feasible to completely rebuild everything as small, modular services, but we are focused on at least making our upfront Kubernetes controllers modular (each type has an independent reconciler). The feasibility constraints mostly have to do with time and scope, and with the timing of upcoming features from upstream Kong (e.g. RESTful API calls for DBLESS mode were not available at the time of writing).

We intend to re-architect by adding further specificity to some problem domains such that we can separate those concerns into their own libraries or servers, with abstract interfaces and types used as the API between these "microservices":

TODO: image

**NOTE**: ideally, in the future, a lot of the backend code including the proxy cache and the parser libraries will cease to exist and/or be simplified so that each controller can send Kong updates for itself individually; however, this will be best done once the REST API becomes ubiquitous.

[kb]:https://github.com/kubernetes-sigs/kubebuilder
[cr]:https://github.com/kubernetes-sigs/controller-runtime
@@ -168,7 +190,10 @@ Prior to these efforts only minimal testing for the controller and the API funct
- [Established KIC 2.0 Preview release criteria][ms15]
- KTF fully separated into its [own repo][ktf]
- integration tests [added][legacy-tests] to test `v1.x` and railgun controllers on every PR from now until release
- First alpha release objectives defined in milestone: https://github.com/Kong/kubernetes-ingress-controller/milestone/15
- first alpha release objectives defined in milestone: https://github.com/Kong/kubernetes-ingress-controller/milestone/15
- research and an experimental revision of the proxy cache functionality undergone: https://github.com/Kong/kubernetes-ingress-controller/pull/1274
- first alpha version was released: https://github.com/Kong/kubernetes-ingress-controller/releases/tag/2.0.0-alpha.1
- `v1beta1.UDPIngress` published: https://github.com/Kong/kubernetes-ingress-controller/pull/1400

[cr]:https://github.com/kubernetes-sigs/controller-runtime
[kb]:https://github.com/kubernetes-sigs/kubebuilder
14 changes: 12 additions & 2 deletions railgun/keps/0004-kong-kubernetes-testing.md
@@ -2,6 +2,10 @@
status: implementable
---

**NOTE**: this will be considered `implemented` once [Milestone 1][m1] is completed.

[m1]:https://github.com/Kong/kubernetes-testing-framework/milestone/1

# Kong Kubernetes Testing Framework (KTF)

<!-- toc -->
@@ -49,6 +53,12 @@ Historically the [Kong Kubernetes Ingress Controller (KIC)][kic] used bash scrip

### Goals

- provide _provisioning functionality_ for testing clusters (e.g. `kind`, `minikube`, `GKE`, e.t.c.)
- provide _deployment functionality_ for Kubernetes components to create complete testing environments (e.g. `helm`, `metallb`, e.t.c.)
- provide _deployment functionality_ for Kong components (e.g. deploying Proxy only, deploying Proxy with KIC, version matrix, e.t.c.)
- provide _generators_ to quickly generate default objects commonly used in testing (e.g. `Service`, `Deployment`, e.t.c.)
- provide _mocking functionality_ for the Kong Admin API

### Non-Goals

Due to incongruities with one of our most prominent upstream tools ([Kind][kind]), we're going to skip creating complete container images for testing environments in favor of writing setup logic after the fact for existing default images. While moving runtime problems to build time would be helpful, we'll potentially need to look at migrating to new tools in a future iteration.
@@ -95,8 +105,7 @@ Integration tests written in the testing framework will also serve the purpose o

- [X] Testing Framework Prototype
- [X] Testing Framework plugged into KIC
- [ ] KTF `v0.1.0` milestone completed
- [ ] KTF `v0.1.0` released
- [ ] KTF `v0.1.0` milestone completed & `v0.1.0` released

## Implementation History

@@ -106,6 +115,7 @@ Integration tests written in the testing framework will also serve the purpose o
- A minimum test [was added to KIC][kic-pr1102] using the new KTF functionality, `v0.0.1` tagged.
- The runbook concept was removed in favor of factory-style cluster provisioning
- We remove the experimental image builder in favor of adding deployment tooling to the test framework, `v0.0.2` tagged.
- Admin API mocking was added in `pkg/kong/fake_admin_api.go` and is now in use by KIC integration tests

[kep1]:/keps/0001-single-kic-multi-gateway.md
[ktf]:https://github.com/kong/kubernetes-testing-framework