[GLBC] Support backside re-encryption (#519)
Support backside re-encryption
nicksardo committed Apr 18, 2017
1 parent 7f37635 commit 642cb74
Showing 21 changed files with 1,033 additions and 420 deletions.
32 changes: 27 additions & 5 deletions controllers/gce/README.md
@@ -360,15 +360,14 @@ You just instructed the loadbalancer controller to quit, however if it had done

#### Health checks

Currently, all service backends must satisfy *either* of the following requirements to pass the HTTP health checks sent to it from the GCE loadbalancer:
Currently, all service backends must satisfy *either* of the following requirements to pass the HTTP(S) health checks sent to it from the GCE loadbalancer:
1. Respond with a 200 on '/'. The content does not matter.
2. Expose an arbitrary URL as a `readiness` probe on the pods backing the Service.

The Ingress controller looks for a compatible readiness probe first; if it finds one, it adopts it as the GCE loadbalancer's HTTP health check. If there's no readiness probe, or the readiness probe requires special HTTP headers, or HTTPS, the Ingress controller points the GCE loadbalancer's HTTP health check at '/'. [This is an example](examples/health_checks/README.md) of an Ingress that adopts the readiness probe from the endpoints as its health check.
The Ingress controller looks for a compatible readiness probe first; if it finds one, it adopts it as the GCE loadbalancer's HTTP(S) health check. If there's no readiness probe, or the readiness probe requires special HTTP headers, the Ingress controller points the GCE loadbalancer's HTTP health check at '/'. [This is an example](examples/health_checks/README.md) of an Ingress that adopts the readiness probe from the endpoints as its health check.
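
For illustration, here is a minimal sketch of a pod whose readiness probe the controller could adopt; the pod name, image, and probe path are hypothetical, not part of this commit:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: echo
  labels:
    app: echo
spec:
  containers:
  - name: echo
    image: gcr.io/my-project/echoserver:1.0  # hypothetical image
    ports:
    - containerPort: 8080
    readinessProbe:
      httpGet:
        path: /healthz   # adopted as the health check's request path
        port: 8080
      periodSeconds: 5
      timeoutSeconds: 3
```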

## TLS

You can secure an Ingress by specifying a [secret](http://kubernetes.io/docs/user-guide/secrets) that contains a TLS private key and certificate. Currently the Ingress only supports a single TLS port, 443, and assumes TLS termination. This controller does not support SNI, so it will ignore all but the first cert in the TLS configuration section. The TLS secret must [contain keys](https://github.com/kubernetes/kubernetes/blob/master/pkg/api/types.go#L2696) named `tls.crt` and `tls.key` that contain the certificate and private key to use for TLS, eg:
## Frontend HTTPS
For encrypted communication between the client and the load balancer, you can secure an Ingress by specifying a [secret](http://kubernetes.io/docs/user-guide/secrets) that contains a TLS private key and certificate. Currently the Ingress only supports a single TLS port, 443, and assumes TLS termination. This controller does not support SNI, so it will ignore all but the first cert in the TLS configuration section. The TLS secret must [contain keys](https://github.com/kubernetes/kubernetes/blob/master/pkg/api/types.go#L2696) named `tls.crt` and `tls.key` that contain the certificate and private key to use for TLS, e.g.:

```yaml
apiVersion: v1
```

@@ -399,6 +398,29 @@ spec:

This creates 2 GCE forwarding rules that use a single static IP. Both `:80` and `:443` will direct traffic to your backend, which serves HTTP requests on the target port mentioned in the Service associated with the Ingress.
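
Putting the pieces together, a sketch of an Ingress enabling frontend HTTPS; the secret and service names are placeholders:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: echo-ingress            # placeholder name
spec:
  tls:
  - secretName: my-tls-secret   # must contain tls.crt and tls.key
  backend:
    serviceName: my-echo-svc    # placeholder Service
    servicePort: 80
```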

## Backend HTTPS
For encrypted communication between the load balancer and your Kubernetes service, you need to decorate the service's port as expecting HTTPS. There's an alpha [Service annotation](examples/backside_https/app.yaml) for specifying the expected protocol per service port. Upon seeing the protocol as HTTPS, the ingress controller will assemble a GCP L7 load balancer with an HTTPS backend service and an HTTPS health check.

The annotation value is a stringified JSON map of port name to "HTTPS" or "HTTP". Ports not listed in the map default to "HTTP".
```yaml
apiVersion: v1
kind: Service
metadata:
name: my-echo-svc
annotations:
service.alpha.kubernetes.io/app-protocols: '{"my-https-port":"HTTPS"}'
labels:
app: echo
spec:
type: NodePort
ports:
- port: 443
protocol: TCP
name: my-https-port
selector:
app: echo
```
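
The pods selected by such a Service must actually terminate TLS on the targeted port; a minimal sketch, with a hypothetical image that serves HTTPS itself:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: echo
  labels:
    app: echo        # matched by the Service selector above
spec:
  containers:
  - name: echo
    image: gcr.io/my-project/echo-tls:1.0  # hypothetical TLS-serving image
    ports:
    - containerPort: 443   # the Service's port 443 targets this by default
```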

#### Redirecting HTTP to HTTPS

To redirect traffic from `:80` to `:443` you need to examine the `x-forwarded-proto` header inserted by the GCE L7, since the Ingress does not support redirect rules. In nginx, this is as simple as adding the following lines to your config:
167 changes: 116 additions & 51 deletions controllers/gce/backends/backends.go
@@ -27,6 +27,7 @@ import (

compute "google.golang.org/api/compute/v1"
"k8s.io/apimachinery/pkg/util/sets"
api_v1 "k8s.io/client-go/pkg/api/v1"

"k8s.io/ingress/controllers/gce/healthchecks"
"k8s.io/ingress/controllers/gce/instances"
@@ -75,6 +76,7 @@ type Backends struct {
nodePool instances.NodePool
healthChecker healthchecks.HealthChecker
snapshotter storage.Snapshotter
prober probeProvider
// ignoredPorts are a set of ports excluded from GC, even
// after the Ingress has been deleted. Note that invoking
// a Delete() on these ports will still delete the backend.
@@ -86,6 +88,12 @@ func portKey(port int64) string {
return fmt.Sprintf("%d", port)
}

// ServicePort pairs a service node port with its application protocol
type ServicePort struct {
Port int64
Protocol utils.AppProtocol
}

// NewBackendPool returns a new backend pool.
// - cloud: implements BackendServices and syncs backends with a cloud provider
// - healthChecker: is capable of producing health checks for backends.
@@ -134,6 +142,11 @@ func NewBackendPool(
return backendPool
}

// Init sets the probeProvider interface value
func (b *Backends) Init(pp probeProvider) {
b.prober = pp
}

// Get returns a single backend.
func (b *Backends) Get(port int64) (*compute.BackendService, error) {
be, err := b.cloud.GetBackendService(b.namer.BeName(port))
@@ -144,44 +157,34 @@ func (b *Backends) Get(port int64) (*compute.BackendService, error) {
return be, nil
}

func (b *Backends) create(igs []*compute.InstanceGroup, namedPort *compute.NamedPort, name string) (*compute.BackendService, error) {
// Create a new health check
if err := b.healthChecker.Add(namedPort.Port); err != nil {
return nil, err
}
hc, err := b.healthChecker.Get(namedPort.Port)
if err != nil {
return nil, err
func (b *Backends) ensureHealthCheck(sp ServicePort) (string, error) {
hc := b.healthChecker.New(sp.Port, sp.Protocol)
if b.prober != nil {
probe, err := b.prober.GetProbe(sp)
if err != nil {
return "", err
}
if probe != nil {
glog.V(2).Infof("Applying httpGet settings of readinessProbe to health check on port %+v", sp)
applyProbeSettingsToHC(probe, hc)
}
}
errs := []string{}

return b.healthChecker.Sync(hc)
}

func (b *Backends) create(igs []*compute.InstanceGroup, namedPort *compute.NamedPort, hcLink string, protocol utils.AppProtocol, name string) (*compute.BackendService, error) {
var errs []string
// We first try to create the backend with balancingMode=RATE. If this
// fails, it's most likely because there are existing backends with
// balancingMode=UTILIZATION. This failure mode throws a googleapi error
// which wraps an HTTP 400 status code. We handle it in the loop below
// and come around to retry with the right balancing mode. The goal is to
// switch everyone to using RATE.
for _, bm := range []BalancingMode{Rate, Utilization} {
backends := getBackendsForIGs(igs)
for _, b := range backends {
switch bm {
case Rate:
b.MaxRate = maxRPS
default:
// TODO: Set utilization and connection limits when we accept them
// as valid fields.
}
b.BalancingMode = string(bm)
}
// Create a new backend
backend := &compute.BackendService{
Name: name,
Protocol: "HTTP",
Backends: backends,
HealthChecks: []string{hc.SelfLink},
Port: namedPort.Port,
PortName: namedPort.Name,
}
if err := b.cloud.CreateBackendService(backend); err != nil {
bs := newBackendService(igs, bm, namedPort, []string{hcLink}, protocol, name)
if err := b.cloud.CreateBackendService(bs); err != nil {
// This is probably a failure because we tried to create the backend
// with balancingMode=RATE when there are already backends with
// balancingMode=UTILIZATION. Just ignore it and retry setting
@@ -198,31 +201,83 @@ func (b *Backends) create(igs []*compute.InstanceGroup, namedPort *compute.Named
return nil, fmt.Errorf("%v", strings.Join(errs, "\n"))
}

func newBackendService(igs []*compute.InstanceGroup, bm BalancingMode, namedPort *compute.NamedPort, healthCheckLinks []string, protocol utils.AppProtocol, name string) *compute.BackendService {
backends := getBackendsForIGs(igs)
for _, b := range backends {
switch bm {
case Rate:
b.MaxRatePerInstance = maxRPS
default:
// TODO: Set utilization and connection limits when we accept them
// as valid fields.
}
b.BalancingMode = string(bm)
}

return &compute.BackendService{
Name: name,
Protocol: string(protocol),
Backends: backends,
HealthChecks: healthCheckLinks,
Port: namedPort.Port,
PortName: namedPort.Name,
}
}

// Add will get or create a Backend for the given port.
func (b *Backends) Add(port int64) error {
func (b *Backends) Add(p ServicePort) error {
// We must track the port even if creating the backend failed, because
// we might've created a health-check for it.
be := &compute.BackendService{}
defer func() { b.snapshotter.Add(portKey(port), be) }()
defer func() { b.snapshotter.Add(portKey(p.Port), be) }()

igs, namedPort, err := b.nodePool.AddInstanceGroup(b.namer.IGName(), p.Port)
if err != nil {
return err
}

igs, namedPort, err := b.nodePool.AddInstanceGroup(b.namer.IGName(), port)
// Ensure health check for backend service exists
hcLink, err := b.ensureHealthCheck(p)
if err != nil {
return err
}
be, _ = b.Get(port)

pName := b.namer.BeName(p.Port)
be, _ = b.Get(p.Port)
if be == nil {
glog.Infof("Creating backend for %d instance groups, port %v named port %v",
len(igs), port, namedPort)
be, err = b.create(igs, namedPort, b.namer.BeName(port))
glog.V(2).Infof("Creating backend for %d instance groups, port %v named port %v", len(igs), p.Port, namedPort)
be, err = b.create(igs, namedPort, hcLink, p.Protocol, pName)
if err != nil {
return err
}
}

existingHCLink := ""
if len(be.HealthChecks) == 1 {
existingHCLink = be.HealthChecks[0]
}

if be.Protocol != string(p.Protocol) || existingHCLink != hcLink {
glog.V(2).Infof("Updating backend protocol %v (%v) for change in protocol (%v) or health check", pName, be.Protocol, string(p.Protocol))
be.Protocol = string(p.Protocol)
be.HealthChecks = []string{hcLink}
if err = b.cloud.UpdateBackendService(be); err != nil {
return err
}
}

// If previous health check was legacy type, we need to delete it.
if existingHCLink != hcLink && strings.Contains(existingHCLink, "/httpHealthChecks/") {
if err = b.healthChecker.DeleteLegacy(p.Port); err != nil {
return err
}
}

// We won't find any igs until the node pool syncs nodes.
if len(igs) == 0 {
return nil
}
if err := b.edgeHop(be, igs); err != nil {
if err = b.edgeHop(be, igs); err != nil {
return err
}
return err
@@ -231,7 +286,7 @@ func (b *Backends) Add(port int64) error {
// Delete deletes the Backend for the given port.
func (b *Backends) Delete(port int64) (err error) {
name := b.namer.BeName(port)
glog.Infof("Deleting backend %v", name)
glog.V(2).Infof("Deleting backend service %v", name)
defer func() {
if utils.IsHTTPErrorCode(err, http.StatusNotFound) {
err = nil
@@ -241,15 +296,11 @@ func (b *Backends) Delete(port int64) (err error) {
}
}()
// Try deleting health checks even if a backend is not found.
if err = b.cloud.DeleteBackendService(name); err != nil &&
!utils.IsHTTPErrorCode(err, http.StatusNotFound) {
if err = b.cloud.DeleteBackendService(name); err != nil && !utils.IsHTTPErrorCode(err, http.StatusNotFound) {
return err
}
if err = b.healthChecker.Delete(port); err != nil &&
!utils.IsHTTPErrorCode(err, http.StatusNotFound) {
return err
}
return nil

return b.healthChecker.Delete(port)
}

// List lists all backends.
@@ -306,7 +357,7 @@ func (b *Backends) edgeHop(be *compute.BackendService, igs []*compute.InstanceGr
}

// Sync syncs backend services corresponding to ports in the given list.
func (b *Backends) Sync(svcNodePorts []int64) error {
func (b *Backends) Sync(svcNodePorts []ServicePort) error {
glog.V(3).Infof("Sync: backends %v", svcNodePorts)

// create backends for new ports, perform an edge hop for existing ports
@@ -319,14 +370,14 @@ func (b *Backends) Sync(svcNodePorts []int64) error {
}

// GC garbage collects services corresponding to ports in the given list.
func (b *Backends) GC(svcNodePorts []int64) error {
func (b *Backends) GC(svcNodePorts []ServicePort) error {
knownPorts := sets.NewString()
for _, port := range svcNodePorts {
knownPorts.Insert(portKey(port))
for _, p := range svcNodePorts {
knownPorts.Insert(portKey(p.Port))
}
pool := b.snapshotter.Snapshot()
for port := range pool {
p, err := strconv.Atoi(port)
p, err := strconv.ParseUint(port, 10, 16)
if err != nil {
return err
}
@@ -345,7 +396,7 @@ func (b *Backends) GC(svcNodePorts []int64) error {
// Shutdown deletes all backends and the default backend.
// This will fail if one of the backends is being used by another resource.
func (b *Backends) Shutdown() error {
if err := b.GC([]int64{}); err != nil {
if err := b.GC([]ServicePort{}); err != nil {
return err
}
return nil
@@ -365,3 +416,17 @@ func (b *Backends) Status(name string) string {
// TODO: State transition are important, not just the latest.
return hs.HealthStatus[0].HealthState
}

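// applyProbeSettingsToHC copies the probe's httpGet settings onto the given health check.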
func applyProbeSettingsToHC(p *api_v1.Probe, hc *healthchecks.HealthCheck) {
healthPath := p.Handler.HTTPGet.Path
// GCE requires a leading "/" for health check urls.
if !strings.HasPrefix(healthPath, "/") {
healthPath = "/" + healthPath
}

hc.RequestPath = healthPath
hc.Host = p.Handler.HTTPGet.Host
hc.Description = "Kubernetes L7 health check generated with readiness probe settings."
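// The resulting check interval is the probe's period plus the controller's default interval.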
hc.CheckIntervalSec = int64(p.PeriodSeconds) + int64(healthchecks.DefaultHealthCheckInterval.Seconds())
hc.TimeoutSec = int64(p.TimeoutSeconds)
}