Commit 2c51281 (parent f6e8b3b): Refactor provider's implementers guide
14 changed files with 279 additions and 240 deletions.
# Controllers and Reconciliation

Right now, you can create objects with our API types, but those objects don't have any effect on your Mailgun infrastructure.
Let's fix that by implementing controllers and reconciliation for your API objects.

From the [kubebuilder book][controller]:

> Controllers are the core of Kubernetes, and of any operator.

[controller]: https://book.kubebuilder.io/cronjob-tutorial/controller-overview.html#whats-in-a-controller

Here too, the controllers and reconcilers generated by Kubebuilder are just a shell.
It is up to you to fill them in with the actual implementation.
# Let's see the Code
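Kubebuilder has already generated `controllers/mailguncluster_controller.go` for us. Roughly, and glossing over details that vary between kubebuilder versions, the scaffold looks like this: a reconciler struct, a couple of RBAC markers, and an empty `Reconcile` function waiting for our logic:

```go
// MailgunClusterReconciler reconciles a MailgunCluster object
type MailgunClusterReconciler struct {
	client.Client
	Log logr.Logger
}

// +kubebuilder:rbac:groups=infrastructure.cluster.x-k8s.io,resources=mailgunclusters,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=infrastructure.cluster.x-k8s.io,resources=mailgunclusters/status,verbs=get;update;patch

func (r *MailgunClusterReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	_ = r.Log.WithValues("mailguncluster", req.NamespacedName)

	// your logic here

	return ctrl.Result{}, nil
}
```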
## RBAC Roles

Before looking at `(add) your logic here`, let's focus for a moment on the markers before the `Reconcile` func.

The `// +kubebuilder...` lines tell kubebuilder to generate [RBAC] roles so the manager we're writing can access its own managed resources. These should already exist in `controllers/mailguncluster_controller.go`:

```go
// +kubebuilder:rbac:groups=infrastructure.cluster.x-k8s.io,resources=mailgunclusters,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=infrastructure.cluster.x-k8s.io,resources=mailgunclusters/status,verbs=get;update;patch
```

We also need to add rules that will let it retrieve (but not modify) `Cluster` objects.
So we'll add another annotation for that, right below the other lines; it mirrors the `Machine` rule shown further down:

```go
// +kubebuilder:rbac:groups=cluster.x-k8s.io,resources=clusters;clusters/status,verbs=get;list;watch
```

Make sure to add this annotation to `MailgunClusterReconciler`.
Also, our `MailgunMachineReconciler` needs access to the Cluster API `Machine` object, so you must add this annotation in `controllers/mailgunmachine_controller.go`:

```go
// +kubebuilder:rbac:groups=cluster.x-k8s.io,resources=machines;machines/status,verbs=get;list;watch
```

Whenever you change these markers, regenerate the RBAC manifests by running `make manifests`.

[RBAC]: https://kubernetes.io/docs/reference/access-authn-authz/rbac/#role-and-clusterrole
## Reconciliation

Let's focus on the `MailgunClusterReconciler` struct first.
A word of warning: no guarantees are made about parallel access, either on one machine or across multiple machines.
That means you should not store any important state in memory: if you need it, write it into a Kubernetes object and store it there.
We're going to add a Mailgun client and a mail recipient to the reconciler, so that `Reconcile` can send mail. With those fields (they match how the reconciler is initialized in `main.go` below), the struct looks like this:

```go
// MailgunClusterReconciler reconciles a MailgunCluster object
type MailgunClusterReconciler struct {
	client.Client
	Log       logr.Logger
	Mailgun   mailgun.Mailgun
	Recipient string
}
```
Now it's time for our `Reconcile` function.
Reconcile is only passed a name, not an object, so let's retrieve ours.

Here's a naive example:

```go
func (r *MailgunClusterReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	_ = r.Log.WithValues("mailguncluster", req.NamespacedName)

	var cluster infrav1.MailgunCluster
	if err := r.Get(ctx, req.NamespacedName, &cluster); err != nil {
		return ctrl.Result{}, err
	}

	return ctrl.Result{}, nil
}
```
By returning an error, you request that `Reconcile()` gets called again for this object.
That may not always be what you want - what if the object's been deleted? So let's check for that:

```go
var cluster infrav1.MailgunCluster
if err := r.Get(ctx, req.NamespacedName, &cluster); err != nil {
	// import apierrors "k8s.io/apimachinery/pkg/api/errors"
	if apierrors.IsNotFound(err) {
		return ctrl.Result{}, nil
	}
	return ctrl.Result{}, err
}
```
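As a side note beyond the original text: returning an error is not the only way to get another pass. controller-runtime also lets you return a requeue interval, which schedules another `Reconcile()` without treating this one as a failure:

```go
// import "time"
// Ask for Reconcile to run again in a minute, without reporting an error.
return ctrl.Result{RequeueAfter: 1 * time.Minute}, nil
```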
Now, if this were any old `kubebuilder` project you'd be done, but in our case there is one more object to retrieve.
Cluster API splits a cluster into two objects: the [`Cluster` defined by Cluster API itself][cluster] and our provider-specific `MailgunCluster`.
We'll want to retrieve the `Cluster` as well.
Luckily, Cluster API [provides a helper for us][getowner]:

```go
cluster, err := util.GetOwnerCluster(ctx, r.Client, &mg)
if err != nil {
	return ctrl.Result{}, err
}
```
### The fun part

_More Documentation: [The Kubebuilder Book][book] has some excellent documentation on many things, including [how to write good controllers!][implement]_

[book]: https://book.kubebuilder.io/
[implement]: https://book.kubebuilder.io/cronjob-tutorial/controller-implementation.html

Now that you have all the objects you care about, it's time to do something with them!
This is where your provider really comes into its own.
In our case, let's try sending some mail:

```go
subject := fmt.Sprintf("[%s] New Cluster %s requested", mgCluster.Spec.Priority, cluster.Name)
body := fmt.Sprintf("Hello! One cluster please.\n\n%s\n", mgCluster.Spec.Request)

msg := mailgun.NewMessage(mgCluster.Spec.Requester, subject, body, r.Recipient)
_, _, err = r.Mailgun.Send(msg)
if err != nil {
	return ctrl.Result{}, err
}
```
### Idempotency

But wait, this isn't quite right.
`Reconcile()` gets called periodically for updates, and any time any updates are made, so as written we would keep sending that mail.
This is an important thing about controllers: they need to be idempotent. This means a controller must be able to run repeatedly over the same inputs without unwanted side effects.

So in our case, we'll store the result of sending a message, and then check to see if we've sent one before.
```go
if mgCluster.Status.MessageID != nil {
	// We already sent a message, so skip reconciliation
	return ctrl.Result{}, nil
}

subject := fmt.Sprintf("[%s] New Cluster %s requested", mgCluster.Spec.Priority, cluster.Name)
body := fmt.Sprintf("Hello! One cluster please.\n\n%s\n", mgCluster.Spec.Request)

msg := mailgun.NewMessage(mgCluster.Spec.Requester, subject, body, r.Recipient)
_, msgID, err := r.Mailgun.Send(msg)
if err != nil {
	return ctrl.Result{}, err
}

// patch from sigs.k8s.io/cluster-api/util/patch
helper, err := patch.NewHelper(&mgCluster, r.Client)
if err != nil {
	return ctrl.Result{}, err
}
mgCluster.Status.MessageID = &msgID
if err := helper.Patch(ctx, &mgCluster); err != nil {
	return ctrl.Result{}, errors.Wrapf(err, "couldn't patch cluster %q", mgCluster.Name)
}

return ctrl.Result{}, nil
```

[cluster]: https://godoc.org/sigs.k8s.io/cluster-api/api/v1beta1#Cluster
[getowner]: https://godoc.org/sigs.k8s.io/cluster-api/util#GetOwnerCluster
### A note about the status

Usually, the `Status` field should only hold values that can be _computed from existing state_.
Things like whether a machine is running can be retrieved from an API, and cluster status can be queried by a healthcheck.
If you have a backup of your cluster and you want to restore it, Kubernetes does not guarantee that the `Status` field is restored along with the `Spec`, so anything you cannot recompute from observation should live in `Spec` instead.

We use the MessageID as a `Status` here to illustrate how one might issue status updates in a real application.
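For reference, here is a minimal sketch of the status type these snippets assume; only `MessageID` is actually used on this page, so treat anything else you might add as your own design choice rather than part of the guide:

```go
// MailgunClusterStatus defines the observed state of MailgunCluster.
type MailgunClusterStatus struct {
	// MessageID is recorded once the notification mail has been sent, so that
	// Reconcile can skip re-sending it. It cannot be recomputed from
	// observation, which is exactly the trade-off discussed above.
	MessageID *string `json:"messageID,omitempty"`
}
```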
## Update `main.go`

Since you added fields to the `MailgunClusterReconciler`, you now need to update `main.go` to set those fields when the reconciler is initialized.

Right now, it probably looks like this:

```go
if err = (&controllers.MailgunClusterReconciler{
	Client: mgr.GetClient(),
	Log:    ctrl.Log.WithName("controllers").WithName("MailgunCluster"),
}).SetupWithManager(mgr); err != nil {
	setupLog.Error(err, "Unable to create controller", "controller", "MailgunCluster")
	os.Exit(1)
}
```
Let's add our configuration.
We're going to use environment variables for this:

```go
domain := os.Getenv("MAILGUN_DOMAIN")
if domain == "" {
	setupLog.Info("missing required env MAILGUN_DOMAIN")
	os.Exit(1)
}

apiKey := os.Getenv("MAILGUN_API_KEY")
if apiKey == "" {
	setupLog.Info("missing required env MAILGUN_API_KEY")
	os.Exit(1)
}

recipient := os.Getenv("MAIL_RECIPIENT")
if recipient == "" {
	setupLog.Info("missing required env MAIL_RECIPIENT")
	os.Exit(1)
}

mg := mailgun.NewMailgun(domain, apiKey)

if err = (&controllers.MailgunClusterReconciler{
	Client:    mgr.GetClient(),
	Log:       ctrl.Log.WithName("controllers").WithName("MailgunCluster"),
	Mailgun:   mg,
	Recipient: recipient,
}).SetupWithManager(mgr); err != nil {
	setupLog.Error(err, "Unable to create controller", "controller", "MailgunCluster")
	os.Exit(1)
}
```

If you have some other state, you'll want to initialize it here!
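The same pattern applies to the machine controller: the `MailgunMachineReconciler` that the scaffold also registers in `main.go` needs whatever fields you gave it. A minimal sketch, assuming it only has the generated `Client` and `Log` fields:

```go
if err = (&controllers.MailgunMachineReconciler{
	Client: mgr.GetClient(),
	Log:    ctrl.Log.WithName("controllers").WithName("MailgunMachine"),
}).SetupWithManager(mgr); err != nil {
	setupLog.Error(err, "Unable to create controller", "controller", "MailgunMachine")
	os.Exit(1)
}
```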