This repository contains a Kubernetes Operator for use with Kubernetes or the Red Hat OpenShift Container Platform (OCP). The operator is built using the Operator-SDK from Red Hat, and can manage NGINX Plus load balancers running outside the container platform.
The Operator is designed to work with NGINX Controller to provide an Application Centric deployment model. NGINX Controller provides a declarative, RBAC-enabled API which lets you link roles and environments to NGINX instances ("Gateways").
Multiple environments and RBAC users can deploy to the same Gateways. The controller builds a best-practice NGINX configuration incorporating all services, and pushes it out to NGINX whenever the configuration changes. This allows multiple "projects" or "namespaces" to deploy configuration to the same or different NGINX instances.
There is a complete example walk-through in the examples folder.
The operator configures an external NGINX instance (via the controller) to load balance onto a Kubernetes Service. If the service is configured with the `NodePort` ServiceType, the external load balancer will use the Kubernetes/OCP node IPs with the assigned port. If the ServiceType is not `NodePort`, the configuration will try to reach the pods directly.
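For reference, here is a minimal sketch of a `NodePort` Service that such an external load balancer could target; the names, labels, and ports are illustrative and not taken from this repository:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app          # hypothetical service name
  namespace: my-app
spec:
  type: NodePort        # external LB targets <node-ip>:<assigned nodePort>
  selector:
    app: my-app         # hypothetical pod label
  ports:
    - port: 80          # cluster-internal port
      targetPort: 8080  # container port, used when the LB reaches pods directly
      # nodePort is assigned dynamically from the node port range unless set
```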
The Operator includes several Custom Resources which it watches for configuration. The CRs largely tie into the Controller's Application Centric view. After deploying the operator, it will start watching for these CRs.

There are currently five Custom Resources watched by the operator, and it also watches `Deployment.apps`. When a watched resource is modified, a reconciliation process runs using the playbook specified in the `watches.yaml` file.
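As a rough illustration, an entry in an Operator-SDK Ansible `watches.yaml` maps a watched kind to the playbook that reconciles it; the playbook path below is hypothetical, not the actual path used by this operator:

```yaml
# Illustrative watches.yaml entry: maps a watched GVK to its reconciliation playbook.
# The playbook path is a placeholder.
- version: v1alpha1
  group: lb.nginx.com
  kind: Controller
  playbook: playbooks/controller.yaml
```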
CRD | Description |
---|---|
`Controller` | Controller configuration: includes the FQDN, auth details, and the environment to use. |
`Application` | The Application as represented in NGINX Controller. |
`Gateway` | A service gateway: the NGINX instances on which components will be deployed. |
`Component` | The component is deployed into an Application on a Gateway. |
`Certificate` | A TLS certificate for use in a Gateway or Component. |
The `Controller` needs to be configured first because it holds connection information for the NGINX Controller. This includes the `user_email`, `fqdn`, `environment`, and a reference to a `Secret` resource holding the `user_password`.
All of the other resources require a `Controller` resource. The `Application` and `Certificate` resources have no other dependencies. The `Gateway` may require a `Certificate` if the service is using TLS, and the `Component` links all other resources together to form the "Application Endpoint".
The Kubernetes `Service` which makes up the workgroup (`pool` or `upstream` if you prefer) should be named in the `Component` resource under the `spec.ingressService` setting. The `Service` can be in a different namespace to the CRs if the Operator is running with ClusterScope; if that is the case, set `spec.ingressNamespace` to match.

ClusterScope is useful if you want the NGINX instance to load balance onto a shared Router or Ingress Controller. See Operator Scopes below.
The controller uses a declarative API, and the configuration is supplied in a `desiredState` object. You can supply the `desiredState` in the `Gateway`, `Component`, and `Certificate` resources, and that state will be sent to the controller, with the following caveats:
- Gateway: The `desiredState` is used as-is to create a new gateway object. The gateway will be called `namespace.name` in controller.
- Certificate: The `desiredState` can be taken from the resource directly, or merged with data from a Kubernetes `Secret`. Any fields set in the optional `Secret` will take precedence over those given in `desiredState`. The `secretCA` field contains the secret name for the CA certificates (and any intermediate CAs as well).
- Component: The `desiredState` can enable any controller functionality, but the ingress gateway will be overwritten by a link to the `Gateway` set on the resource, and the backend workload group URIs list will be populated from the `Service` named in `spec.ingress`.
If you want the Operator to update the workgroups in your `Component` when your `Deployment` is scaled, then you should add a label to the `Component` metadata called `deployment-hash` and set it to the SHA224 hash of the deployment name and namespace joined with a dot (`deploymentName.deploymentNamespace`). Eg:

```
$ echo -n "router-default.openshift-ingress" | sha224sum
dbb9f04a45371603a573de5160fa7fd32fa765194be63a7f70b066db  -
```

and include that hash in your component resource:
```yaml
apiVersion: lb.nginx.com/v1alpha1
kind: Component
metadata:
  name: my-component
  namespace: my-app
  labels:
    deployment-hash: dbb9f04a45371603a573de5160fa7fd32fa765194be63a7f70b066db
```
This operator can be used in conjunction with the NGINX Ingress Controller to provide external load balancing onto an NGINX Ingress layer running inside your container platform. This is recommended in large-scale deployments because it keeps the bulk of the Application Delivery Controller (ADC) functionality within the container platform.
NGINX Ingress Controller provides several Custom Resources which enable many more advanced features beyond those included with standard Ingress objects or OCP Routes.
While it has become common for load balancers to extend Ingress using annotations, that method is error prone and fragments the Ingress spec. Using NGINX Custom Resources instead of Ingress has the following advantages:

- NGINX Custom Resources are fully validated by the Kubernetes API
- VirtualServer, VirtualServerRoute, TransportServer, etc. are all RBAC enabled
- Routing can be based on anything within the request (header, cookie, method, etc.)
- Blue/Green traffic splitting and Canary testing of applications
- Circuit Breaker patterns
- Redirects and Error Pages
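For instance, the NGINX Ingress Controller `VirtualServer` resource can express a Blue/Green or Canary split directly; this is a sketch with hypothetical service names and weights:

```yaml
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: my-app
spec:
  host: app.example.com
  upstreams:
    - name: stable
      service: my-app-stable   # hypothetical Service names
      port: 80
    - name: canary
      service: my-app-canary
      port: 80
  routes:
    - path: /
      splits:                  # send 10% of traffic to the canary upstream
        - weight: 90
          action:
            pass: stable
        - weight: 10
          action:
            pass: canary
```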
See the Documentation for more information.
When you deploy this operator with NGINX KIC, you will need to map the `Component` to the KIC using a `Service` or `Route`. The Service can either point to an NGINX Plus Ingress Controller (to provide additional ADC features), or to any other service or route. See the Component CR Example below.
You will need the Operator-SDK and a recent version of Docker installed on your build machine.
If you are playing around on a CodeReady Containers setup, then follow these notes instead.
Build and push the operator to your repository:

```
make docker-build IMAGE_TAG_BASE=myrepo.example.com:5000/nginx/nginx-lb-operator VERSION=latest
make docker-push IMAGE_TAG_BASE=myrepo.example.com:5000/nginx/nginx-lb-operator VERSION=latest
```
You’re ready to deploy the operator container, but you can also test it locally using the SDK. See Running the Operator Locally if you want to test/debug.
The operator will watch all namespaces by default. If you only want to watch a subset, such as an ingress namespace and a few other projects, then modify the `WATCH_NAMESPACE` parameter in the deployment manifest to limit them.

For example, let's assume we have two projects under our control. Each project has its own namespace, and they create Ingress resources consumed by a shared Ingress Controller running in a third namespace. We might set the `WATCH_NAMESPACE` as follows:
```yaml
env:
  - name: WATCH_NAMESPACE
    value: "nginx-ingress,project-101,project-102"
```
Below is an example for each of the Custom Resources which configure the Application.
The `Controller` CRD takes a `user_email`, FQDN, and environment. It also needs a password stored in a Kubernetes `Secret`, such as:
```yaml
kind: Secret
apiVersion: v1
metadata:
  name: dev-controller
data:
  user_password: bm90cmVhbGx5bXlwYXNzd29yZAo=
type: Opaque
```
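If you prefer not to base64-encode the password by hand, the standard Kubernetes `stringData` field accepts the plain value and encodes it for you; a minimal sketch, equivalent to the Secret above:

```yaml
kind: Secret
apiVersion: v1
metadata:
  name: dev-controller
stringData:
  user_password: notreallymypassword  # Kubernetes stores this base64-encoded in .data
type: Opaque
```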
The Operator will use the `user_password` in the `Secret`, together with the `user_email` in the `Controller` resource, to log in and retrieve an auth token. The auth token will be cached for 30 minutes, after which time the next reconciliation will perform a new login.
A `Controller` resource using the above secret would look like this:
```yaml
apiVersion: lb.nginx.com/v1alpha1
kind: Controller
metadata:
  name: dev-controller
spec:
  user_email: "[email protected]"
  secret: "dev-controller"
  fqdn: "ctrl.nginx.lab"
  environment: "ocp-dev-1"
  validate_certs: true
```
The user account and the environment should already exist on the controller. All Applications, Gateways, Components, and Certificates will reference a Controller resource by name and be deployed into the environment specified.
The Application is a simple object, but it groups the components and helps with analytics visualisation:
```yaml
apiVersion: lb.nginx.com/v1alpha1
kind: Application
metadata:
  name: my-application
spec:
  controller: "dev-controller"
  displayName: "My Kubernetes Application"
  description: "An application deployed in Kubernetes"
```
The `Gateway` object takes a `desiredState` which is sent to the controller as-is, so you can enable any features exposed in the Controller API. Check your controller API documentation for more information.
```yaml
apiVersion: lb.nginx.com/v1alpha1
kind: Gateway
metadata:
  name: my-gateway
spec:
  controller: "dev-controller"
  displayName: "My OCP Gateway"
  description: "A gateway deployed by Kubernetes"
  desiredState:
    ingress:
      placement:
        instanceRefs:
          - ref: /infrastructure/locations/unspecified/instances/nginx1
      uris:
        'http://www.uk.nginx.lab': {}
        'http://www.foo.com': {}
```
The `Certificate` resource can be specified either by providing the details in the object directly within the `desiredState`, or by referencing a Kubernetes Secret in `secret`.
```yaml
apiVersion: lb.nginx.com/v1alpha1
kind: Certificate
metadata:
  name: my-certificate
spec:
  controller: "dev-controller"
  displayName: "My Kubernetes Certificate"
  description: "A certificate deployed in Kubernetes"
  # secret: secret-containing-the-cert
  # secretCA: secret-containing-ca-secret
  desiredState:
    type: PEM
    caCerts: []
    privateKey: |-
      -----BEGIN PRIVATE KEY-----
      MIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQDQYBXFTj1ZdJGH
      7IfomkeJfedaIueD01L6X6jj8TvS2xwTRHL4LIkZP882qHs2VfEpgbVi6a96lvWP
      TRUNCATED TRUNCATED TRUNCATED TRUNCATED TRUNCATED TRUNCATED
      6bug7eceyafsFTTEghcNloHWnYBARA3878X5RQkLVUNocrZLkBG2Dn2d3aiEpWww
      CZ+gbhraYKAflzD6wTJL29D5dLGF5k/88RTN60Gzoaxq7CkvlLwXCZjQSvjEGq5i
      whJYgXwWvqy5VXxLc5amLXk=
      -----END PRIVATE KEY-----
    publicCert: |-
      -----BEGIN CERTIFICATE-----
      MIIDpzCCAo+gAwIBAgIUb+NqxHIP0Z15aqy5FY8+bb1vq6IwDQYJKoZIhvcNAQEL
      1Xnimah+mQMOuWiJU9W9omet5Y9OemQLHmeSVFbfQXBkTNKGO+2iKtWJNO8+zzT7
      TRUNCATED TRUNCATED TRUNCATED TRUNCATED TRUNCATED TRUNCATED
      5WZTPiggaDbDAwjK2QP2N933lHxR5JDmkHHH6GHKLWXgYgxY0zx8R2+eFyvxJvGB
      yaw7SnX8i5mjkgwwGhgTMBnSdf3F9eLcMHPgceMOuTyynpe9SSE9Bck3LykgvQDW
      InWB8mhlndb/p8ZYVLx9y2LDq1h3iymbnoHM
      -----END CERTIFICATE-----
```
When referencing the cert as a Kubernetes secret, it should be an Opaque or tls type, and the certificate details should be stored in `tls.key` and `tls.crt`.

When referencing the CA cert as a Kubernetes secret, it should be an Opaque type, and the certificate details should be stored in `ca.crt`. Multiple certificates can be stored as part of the CA certificate, including intermediate CA certificates. When creating from a literal, create a PEM file with the `ca.crt` field storing all the certificates in PEM format.
```yaml
kind: Secret
apiVersion: v1
metadata:
  name: my-cert
data:
  tls.crt: >-
    LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURwekNDQW8rZ0F3SUJBZ0lVYitOcXhISVAw
  tls.key: >-
    LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0tLS0tCk1JSUV2UUlCQURBTkJna3Foa2lHOXcwQkFRRUZB
  type: UEVN
type: Opaque
```
```yaml
kind: Secret
apiVersion: v1
metadata:
  name: my-ca-cert-secret
data:
  ca.crt: >-
    LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURwekNDQW8rZ0F3SUJBZ0lVYitOcXhISVAw
  type: UEVN
type: Opaque
```
and the `Certificate` would look like this:
```yaml
apiVersion: lb.nginx.com/v1alpha1
kind: Certificate
metadata:
  name: my-certificate
spec:
  controller: "dev-controller"
  displayName: "My Kubernetes Certificate"
  description: "A certificate deployed in Kubernetes"
  secret: my-cert
  caSecret: my-ca-cert-secret
```
The `Component` object also takes a `desiredState`, but the operator expects to configure both the `ingress→gatewayRefs` using the `gateway` provided, and the `backend→workloadGroups→group` using the pods or NodePorts found in the `ingress*` settings. The workload `uris` are built using `workload.scheme` and `workload.path`.

The `ingressType` can be `Service`, `Route`, or `None`. See the relevant sections below for deploying against a service, a route, or with a manually configured workload group (i.e. `None`).
When deployed with a service, you must set the `ingressType` to `Service`, and set the `ingressName` to match the service. If the service is in a different namespace, then you can set the `ingressNamespace` to match. The Operator must be running with a `ClusterRole` if the service is in a different namespace. Eg:
```yaml
apiVersion: lb.nginx.com/v1alpha1
kind: Component
metadata:
  name: my-component
  namespace: my-app
spec:
  controller: dev-controller
  application: my-application
  ingressType: Service
  ingressName: my-nginx-ingress-controller
  ingressNamespace: nginx-ingress
```
If the Ingress service is discovered to be using `NodePort`, then the workload groups will be set to the k8s nodes with the dynamically assigned port. Otherwise the workloads will be set to the pod IPs and the `workload.targetPort`.
When deploying with an OpenShift Route, you must set the `ingressType` to `Route` and the `ingressName` to the name of the `Route` resource; again, you can set `ingressNamespace` if the route is not in the same namespace.

The Operator will look up the router for the provided `Route` and attempt to locate its pods, so the Operator will need read access to the namespace in which the Router is running. This can be set with `ingressRouterNamespace` but will default to `openshift-ingress`. It is likely that the Operator will need a `ClusterRole` account. Eg:
```yaml
apiVersion: lb.nginx.com/v1alpha1
kind: Component
metadata:
  name: my-component
  namespace: my-app
spec:
  controller: dev-controller
  application: my-application
  ingressType: Route
  ingressName: my-route
  ingressRouterNamespace: openshift-ingress
  groupName: "my-group-name"
```
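The exact RBAC the Operator needs depends on your deployment manifests, but a minimal sketch of a `ClusterRole` granting the read access described above might look like this; the name, resources, and verbs are assumptions, not taken from this repository:

```yaml
# Hypothetical ClusterRole: read access so the Operator can locate the Router's
# pods; bind it to the operator's ServiceAccount with a ClusterRoleBinding.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nginx-lb-operator-router-reader  # illustrative name
rules:
  - apiGroups: [""]
    resources: ["pods", "services", "endpoints"]
    verbs: ["get", "list", "watch"]
```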
If you are using CodeReady Containers, the `workload.crcOverride` can be set to the IP of your CRC VM.
You can also set the `ingressType` to `None`, in which case the node list will not be generated and the workload groups in the `desiredSpec` will need to be set manually. Eg:
```yaml
apiVersion: lb.nginx.com/v1alpha1
kind: Component
metadata:
  name: my-component
  namespace: my-app
spec:
  controller: dev-controller
  application: my-application
  ingressType: None
  ...
  desiredSpec:
    backend:
      workloadGroups:
        group:
          uris:
            'https://node1.cluster:443/': {}
            'https://node2.cluster:443/': {}
          loadBalancingMethod:
            type: ROUND_ROBIN
```
A complete `Component` example, deploying against a `Service`, looks like this:

```yaml
apiVersion: lb.nginx.com/v1alpha1
kind: Component
metadata:
  name: my-component
spec:
  controller: "dev-controller"
  application: "my-application"
  ingressType: Service
  ingressName: "my-nginx-ingress-controller"
  ingressNamespace: "my-nginx-ingress-namespace"
  gateway: "my-gateway"
  workload:
    scheme: "http"
    path: "/"
    targetPort: 443
    crcOverride: 192.168.130.11
  displayName: "My Component"
  groupName: "my-workgroup-name"
  description: "A component deployed by Kubernetes"
  desiredState:
    backend:
      monitoring:
        response:
          status:
            match: true
            range:
              endCode: 302
              startCode: 200
        uri: /
      workloadGroups:
        # group uris will be populated from "ingress" pods or nodeports
        group:
          loadBalancingMethod:
            type: ROUND_ROBIN
    # ingress gatewayRefs will be populated from "gateway"
    ingress:
      uris:
        /: {}
```

The above would result in a `desiredState` similar to:
"desiredState": {
"ingress": {
"gatewayRefs": [
{
"ref": "/services/environments/ocp-dev-1/gateways/<project>.my-gateway"
}
],
"uris": {
"/": {}
}
},
"backend": {
"workloadGroups": {
"my-workgroup-name": {
"loadBalancingMethod": {
"type": "ROUND_ROBIN"
},
"uris": {
"http://<k8s-node-ip>:<nodeport>/": { }
}
}
},
"monitoring": {
"uri": "/",
"response": {
"status": {
"range": {
"endCode": 302,
"startCode": 200
},
"match": true
}
}
}
}
}