The GKE Cluster module is used to administer the cluster master for a Google Kubernetes Engine (GKE) cluster. The module is adapted from Gruntwork's GKE module.

- Modules: The modules directory contains the main modules that should be used in your code:
  - gke-node-pool: Module for creating GKE node pools
- Examples: Examples of how to use this module

Report issues, questions, and feature requests in the issues section. Full contributing guidelines are covered here.
The GKE Cluster, or "cluster master", runs the Kubernetes control plane processes, including the Kubernetes API server, scheduler, and core resource controllers. The master is the unified endpoint for your cluster; it's the "hub" through which all other components, such as nodes, interact. Users can interact with the cluster via Kubernetes API calls, such as by using kubectl. The GKE cluster is responsible for running workloads on nodes, as well as scaling/upgrading nodes.
A "node" is a worker machine in Kubernetes; in GKE, nodes are provisioned as Google Compute Engine VM instances.
GKE Node Pools are groups of nodes that share the same configuration, defined as a NodeConfig. Node pools also control the autoscaling of their nodes; autoscaling configuration is done inline, alongside the node config definition. A GKE cluster can have multiple node pools defined.
Node pools are configured directly with the google_container_node_pool Terraform resource by providing a reference to the cluster you configured with this module as the cluster field.
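For example (the module instance name gke_cluster and all values below are placeholders, not part of this module), a node pool can reference the cluster via the module's name output:

```hcl
# Illustrative sketch: assumes an instance of this module named "gke_cluster".
resource "google_container_node_pool" "node_pool" {
  provider = google-beta

  name     = "main-pool"
  project  = "my-project"
  location = "europe-west1"

  # Pass the cluster created by this module as the cluster field.
  cluster = module.gke_cluster.name

  initial_node_count = 1

  # Autoscaling is configured inline, alongside the node config.
  autoscaling {
    min_node_count = 1
    max_node_count = 3
  }

  node_config {
    machine_type = "e2-standard-4"
  }
}
```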
You must explicitly specify the network and subnetwork of your GKE cluster using the network and subnetwork fields; this module will not implicitly use the default network with an automatically generated subnetwork.
The modules in the terraform-google-network Gruntwork module are a useful tool for configuring your VPC network and subnetworks in GCP.
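A minimal sketch of providing these fields explicitly (resource names, CIDRs, and the module source path are illustrative; the terraform-google-network modules can create equivalent network resources for you):

```hcl
resource "google_compute_network" "vpc" {
  name                    = "gke-network"
  auto_create_subnetworks = false
}

resource "google_compute_subnetwork" "subnet" {
  name          = "gke-subnet"
  region        = "europe-west1"
  network       = google_compute_network.vpc.self_link
  ip_cidr_range = "10.0.0.0/16"

  # Secondary range the cluster can use for pod IPs.
  secondary_ip_range {
    range_name    = "cluster-pods"
    ip_cidr_range = "10.1.0.0/16"
  }
}

module "gke_cluster" {
  source = "./modules/gke-cluster" # illustrative path

  # ... other required inputs ...

  network    = google_compute_network.vpc.self_link
  subnetwork = google_compute_subnetwork.subnet.self_link
}
```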
A VPC-native cluster is a GKE Cluster that uses alias IP ranges, in that it allocates IP addresses from a block known to GCP. When using an alias range, pod addresses are natively routable within GCP, and VPC networks can ensure that the IP range the cluster uses is reserved.
While using a secondary IP range is recommended in order to separate cluster master and pod IPs, when using a network in the same project as your GKE cluster you can specify a blank range name to draw alias IPs from your subnetwork's primary IP range. If you are using a shared VPC network (a network from another GCP project), an explicit secondary range is required.
See considerations for cluster sizing for more information on sizing secondary ranges for your VPC-native cluster.
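A sketch of wiring secondary ranges into the module's inputs (the range names are placeholders and must match secondary ranges defined on your subnetwork):

```hcl
module "gke_cluster" {
  source = "./modules/gke-cluster" # illustrative path

  # ... other required inputs ...

  # Explicit secondary ranges; required when using a shared VPC network.
  cluster_secondary_range_name  = "cluster-pods"
  services_secondary_range_name = "cluster-services"
}
```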
In a private cluster, the nodes have internal IP addresses only, which ensures that their workloads are isolated from the public Internet. Private nodes do not have outbound Internet access, but Private Google Access provides private nodes and their workloads with limited outbound access to Google Cloud Platform APIs and services over Google's private network.
If you want your cluster nodes to be able to access the Internet, for example to pull images from external container registries, you will have to set up Cloud NAT. See Example GKE Setup for further information.
You can create a private cluster by setting enable_private_nodes to true. Note that with a private cluster, setting the master CIDR range with master_ipv4_cidr_block is also required.
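A sketch of the private-cluster inputs (the CIDR below is a placeholder; it must be a /28 that does not overlap any other range in use within the cluster's network):

```hcl
module "gke_cluster" {
  source = "./modules/gke-cluster" # illustrative path

  # ... other required inputs ...

  enable_private_nodes   = true
  master_ipv4_cidr_block = "10.5.0.0/28"
}
```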
In a private cluster, the master has two endpoints:
- Private endpoint: This is the internal IP address of the master, behind an internal load balancer in the master's VPC network. Nodes communicate with the master using the private endpoint. Any VM in your VPC network, and in the same region as your private cluster, can use the private endpoint.
- Public endpoint: This is the external IP address of the master. You can disable access to the public endpoint by setting disable_public_endpoint to true.
You can relax these restrictions by authorizing certain address ranges to access the endpoints with the input variable master_authorized_networks_config.
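As a sketch (the CIDR and display name are placeholders), following the example format documented for the master_authorized_networks_config input:

```hcl
module "gke_cluster" {
  source = "./modules/gke-cluster" # illustrative path

  # ... other required inputs ...

  master_authorized_networks_config = [{
    cidr_blocks = [{
      cidr_block   = "10.0.0.0/8"
      display_name = "internal"
    }],
  }]
}
```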
Stackdriver Kubernetes Engine Monitoring is enabled by default when using this module. It provides improved support for both Stackdriver Monitoring and Stackdriver Logging in your cluster, including a GKE-customized Stackdriver Console with a fine-grained breakdown of resources, including namespaces and pods. Learn more in the official documentation.
Although Stackdriver Kubernetes Engine Monitoring is enabled by default, you can use the legacy Stackdriver options by modifying your configuration. See the differences between GKE Stackdriver versions for the differences between legacy Stackdriver and Stackdriver Kubernetes Engine Monitoring.
Prometheus monitoring for your cluster is ready to go through GCP's Stackdriver Kubernetes Engine Monitoring service. If you've configured your GKE cluster with Stackdriver Kubernetes Engine Monitoring, you can follow Google's guide to using Prometheus to configure your cluster with Prometheus.
Private clusters have the following restrictions and limitations:
- The size of the RFC 1918 block for the cluster master must be /28.
- The nodes in a private cluster must run Kubernetes version 1.8.14-gke.0 or later.
- You cannot convert an existing, non-private cluster to a private cluster.
- Each private cluster you create uses a unique VPC Network Peering.
- Deleting the VPC peering between the cluster master and the cluster nodes, deleting the firewall rules that allow ingress traffic from the cluster master to nodes on port 10250, or deleting the default route to the default Internet gateway, causes a private cluster to stop functioning.
If you want to enable Google Groups for use with RBAC, you have to provide a G Suite domain name using the input variable var.gsuite_domain_name. If a value is provided, the cluster will be initialized with a security group gke-security-groups@[yourdomain.com].
In G Suite, you will have to:
- Create a G Suite Google Group in your domain, named gke-security-groups@[yourdomain.com]. The group must be named exactly gke-security-groups.
- Create groups, if they do not already exist, that represent the sets of users who should have different permissions on your clusters.
- Add these groups (not users) to the membership of gke-security-groups@[yourdomain.com].
After the cluster has been created, you are ready to create Roles, ClusterRoles, RoleBindings, and ClusterRoleBindings that reference your G Suite Google Groups. Note that you cannot enable this feature on existing clusters.
For more information, see https://cloud.google.com/kubernetes-engine/docs/how-to/role-based-access-control#google-groups-for-gke.
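As a sketch (the domain and group names are placeholders), enabling the feature and then binding a Google Group to a cluster role might look like this, using the Terraform kubernetes provider for the binding (a kubectl-applied manifest works equally well):

```hcl
module "gke_cluster" {
  source = "./modules/gke-cluster" # illustrative path

  # ... other required inputs ...

  gsuite_domain_name = "yourdomain.com"
}

# Grant read-only access to members of a Google Group that is itself a
# member of gke-security-groups@yourdomain.com.
resource "kubernetes_cluster_role_binding" "group_viewers" {
  metadata {
    name = "group-viewers"
  }

  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "ClusterRole"
    name      = "view"
  }

  subject {
    api_group = "rbac.authorization.k8s.io"
    kind      = "Group"
    name      = "dev-viewers@yourdomain.com"
  }
}
```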
Name | Version |
---|---|
terraform | >= 0.12.26 |
Name | Version |
---|---|
google | n/a |
google-beta | n/a |
No modules.
Name | Type |
---|---|
google-beta_google_container_cluster.cluster | resource |
google_container_engine_versions.location | data source |
Name | Description | Type | Default | Required |
---|---|---|---|---|
alternative_default_service_account | Alternative Service Account to be used by the Node VMs. If not specified, the default compute Service Account will be used. Provide if the default Service Account is no longer available. | string | null | no |
basic_auth_password | The password used for basic auth; set both this and basic_auth_username to "" to disable basic auth. | string | "" | no |
basic_auth_username | The username used for basic auth; set both this and basic_auth_password to "" to disable basic auth. | string | "" | no |
cluster_secondary_range_name | The name of the secondary range within the subnetwork for the cluster to use | string | n/a | yes |
description | The description of the cluster | string | "" | no |
disable_public_endpoint | Control whether the master's internal IP address is used as the cluster endpoint. If set to 'true', the master can only be accessed from internal IP addresses. | bool | false | no |
enable_client_certificate_authentication | Whether to enable authentication by x509 certificates. With ABAC disabled, these certificates are effectively useless. | bool | false | no |
enable_legacy_abac | Whether to enable legacy Attribute-Based Access Control (ABAC). RBAC has significant security advantages over ABAC. | bool | false | no |
enable_network_policy | Whether to enable Kubernetes NetworkPolicy on the master, which is required to be enabled to be used on Nodes. | bool | true | no |
enable_private_nodes | Control whether nodes have internal IP addresses only. If enabled, all nodes are given only RFC 1918 private addresses and communicate with the master via private networking. | bool | false | no |
enable_pubsub_notification | Option to enable GKE Pub/Sub notifications | bool | null | no |
enable_vertical_pod_autoscaling | Whether to enable Vertical Pod Autoscaling | string | false | no |
enable_workload_identity | Enable Workload Identity on the cluster | bool | false | no |
gsuite_domain_name | The domain name for use with Google security groups in Kubernetes RBAC. If a value is provided, the cluster will be initialized with security group gke-security-groups@[yourdomain.com]. | string | null | no |
horizontal_pod_autoscaling | Whether to enable the horizontal pod autoscaling addon | bool | true | no |
http_load_balancing | Whether to enable the http (L7) load balancing addon | bool | true | no |
identity_namespace | Workload Identity Namespace. Default sets project-based namespace [project_id].svc.id.goog | string | null | no |
ip_masq_link_local | Whether to masquerade traffic to the link-local prefix (169.254.0.0/16). | bool | false | no |
ip_masq_resync_interval | The interval at which the agent attempts to sync its ConfigMap file from the disk. | string | "60s" | no |
kubernetes_version | The Kubernetes version of the masters. If set to 'latest' it will pull the latest available version in the selected region. | string | "latest" | no |
location | The location (region or zone) to host the cluster in | string | n/a | yes |
logging_service | The logging service that the cluster should write logs to. Available options include logging.googleapis.com/kubernetes, logging.googleapis.com (legacy), and none | string | "logging.googleapis.com/kubernetes" | no |
maintenance_start_time | Time window specified for daily maintenance operations in RFC3339 format | string | "05:00" | no |
master_authorized_networks_config | The desired configuration options for master authorized networks. Omit the nested cidr_blocks attribute to disallow external access (except the cluster node IPs, which GKE automatically whitelists). Example format: master_authorized_networks_config = [{ cidr_blocks = [{ cidr_block = "10.0.0.0/8", display_name = "example_network" }], }] | list(any) | [] | no |
master_ipv4_cidr_block | The IP range in CIDR notation to use for the hosted master network. This range will be used for assigning internal IP addresses to the master or set of masters, as well as the ILB VIP. This range must not overlap with any other ranges in use within the cluster's network. | string | "" | no |
monitoring_service | The monitoring service that the cluster should write metrics to. Automatically send metrics from pods in the cluster to the Stackdriver Monitoring API. VM metrics will be collected by Google Compute Engine regardless of this setting. Available options include monitoring.googleapis.com/kubernetes, monitoring.googleapis.com (legacy), and none | string | "monitoring.googleapis.com/kubernetes" | no |
name | The name of the cluster | string | n/a | yes |
network | A reference (self link) to the VPC network to host the cluster in | string | n/a | yes |
network_project | The project ID of the shared VPC's host (for shared VPC support) | string | "" | no |
non_masquerade_cidrs | List of strings in CIDR notation that specify the IP address ranges that do not use IP masquerading. | list(string) | [...] | no |
project | The project ID to host the cluster in | string | n/a | yes |
pubsub_topic | Pub/Sub topic to publish GKE notifications to | string | null | no |
release_channel | (Optional) The release channel to get upgrades of your GKE clusters from | string | null | no |
resource_labels | The GCE resource labels (a map of key/value pairs) to be applied to the cluster. | map(any) | {} | no |
secrets_encryption_kms_key | The Cloud KMS key to use for the encryption of secrets in etcd, e.g. projects/my-project/locations/global/keyRings/my-ring/cryptoKeys/my-key | string | null | no |
services_secondary_range_name | The name of the secondary range within the subnetwork for the services to use | string | null | no |
stub_domains | Map of stub domains and their resolvers to forward DNS queries for a certain domain to an external DNS server | map(string) | {} | no |
subnetwork | A reference (self link) to the subnetwork to host the cluster in | string | n/a | yes |
Name | Description |
---|---|
client_certificate | Public certificate used by clients to authenticate to the cluster endpoint. |
client_key | Private key used by clients to authenticate to the cluster endpoint. |
cluster_ca_certificate | The public certificate that is the root of trust for the cluster. |
endpoint | The IP address of the cluster master. This is private if disable_public_endpoint is true. |
master_version | The Kubernetes master version. |
name | The name of the cluster master. This output is used for interpolation with node pools, other modules. |
private_endpoint | The Private IP address of the cluster master. |
public_endpoint | The Public IP address of the cluster master. |