This example demonstrates secure deployment of a simple website on GCP, using Terraform to create the infrastructure (project, VPC, load-balanced MIG cluster, etc.) and Chef to configure the VMs.
- Terraform is a popular tool for creation and management of Infrastructure as Code.
- Chef is a popular tool for configuration management of systems and applications.
- Private Google Access on VPC Subnets is enabled so that the VM instances with private IPs can reach Google APIs and Services.
- Firewall rules restrict inbound traffic so that it flows only through the Load Balancer; no traffic reaches the VMs directly.
The following resources are created using Terraform modules:
- Project
- VPC Network (subnets, routes, firewall rules)
- Service Account
- Managed Instance Group (MIG)
- Global HTTPS LB
- GCS Bucket
A basic understanding of the following tools is essential for this example:
- Terraform (must be installed already)
- Terraform GCP Provider
- Chef
- Git (must be installed already)
- Cloud SDK (must be installed already)
$ git clone https://github.com/r4hulgupta/simple-website-tf-chef.git
$ cd simple-website-tf-chef/
If you have not already done so, initialize gcloud to set the default project and authenticate to GCP.
$ gcloud init
<... Follow the interactive prompts to set values for project, zone defaults etc. ...>
$ gcloud auth login
$ gcloud auth application-default login
We will use a shell script to perform the initial seed project setup.
You can skip running this script if you already have a project that you can use as the seed project and it already has the configuration below.
You may need the following permissions to run this script:
- At Organization level:
- roles/billing.user
- roles/iam.securityAdmin
- roles/owner
The script will perform the following actions:
- Create a new seed project
- Assign billing account to seed project
- Create GCS bucket for Terraform state
- Create a CSR repo for source code management
- Create a GCP IAM service account to use with Terraform to perform deployment
- Create a service account key and save it locally
- Grant permissions to deployment service account
- On the organization level
- roles/resourcemanager.organizationViewer
- roles/resourcemanager.projectCreator
- roles/billing.user
- roles/compute.networkAdmin
- roles/iam.serviceAccountAdmin
- On the seed project level
- roles/compute.instanceAdmin.v1
- roles/storage.admin
- roles/resourcemanager.projectIamAdmin
- Enable required APIs
- cloudresourcemanager.googleapis.com
- cloudbilling.googleapis.com
- iam.googleapis.com
- admin.googleapis.com
- sourcerepo.googleapis.com
- storage-api.googleapis.com
Edit the /helpers/setup_seed_prj.env file to set the variables the script uses to set up your seed project.
ORG_ID="<YOUR_ORG_ID>"
BILLING_ACCOUNT="<YOUR_BILLING_ACCOUNT_ID>"
TF_BUCKET_NAME="<DESIRED_BUCKET_NAME>"
SEED_PROJECT="<DESIRED_PROJECT_NAME>"
SA_NAME="<DESIRED_SERVICE_ACCOUNT_NAME>"
KEY_FILE="<DESIRED_KEY_FILE_LOCAL_PATH>"
CODE_REPO_NAME="mychefrepo" # Do not change
$ helpers/setup_seed_prj.sh
We will use the values below for this example. Later commands will be easier to run if you set these variables in your shell.
If you change any of these values, make sure to use the same value consistently throughout the example.
CORE_PROJECT="my-core-prj"
GOLD_IMAGE_VM="ws-dev-gold"
VPC_NAME="my-core-vpc"
GCP_REGION_A="us-west1"
GCP_REGION_B="us-central1"
GCP_ZONE="us-west1-b"
APP_SA_NAME="svc-ws-dev"
HELPER_BUCKET="my-core-prj-deployment-helpers"
DEPLOYMENT_ID="my-dev-website"
CODE_REPO_NAME="mychefrepo"
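Since later commands reference these names with `$VAR` syntax, one option is to export them in your shell, for example:

```shell
# Export the example values so later gcloud/gsutil commands can expand them.
export CORE_PROJECT="my-core-prj"
export GOLD_IMAGE_VM="ws-dev-gold"
export VPC_NAME="my-core-vpc"
export GCP_REGION_A="us-west1"
export GCP_REGION_B="us-central1"
export GCP_ZONE="us-west1-b"
export APP_SA_NAME="svc-ws-dev"
export HELPER_BUCKET="my-core-prj-deployment-helpers"
export DEPLOYMENT_ID="my-dev-website"
export CODE_REPO_NAME="mychefrepo"

# Subnet names used later follow the pattern <env>-<vpc>-<region>:
echo "dev-${VPC_NAME}-${GCP_REGION_A}"   # dev-my-core-vpc-us-west1
```

Note that exported variables only last for the current shell session; re-export them if you open a new terminal.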
We will use the Terraform Project Factory module to create a project and the Network module to create a custom network. The Project Factory module has an `auto_create_network` option, which will be set to `false` so that the default network is not created.
Terraform will create the following components:
- Project Name: $CORE_PROJECT
- VPC Name: $VPC_NAME
- Subnets:
- dev-$VPC_NAME-$GCP_REGION_A
- prd-$VPC_NAME-$GCP_REGION_A
- sse-$VPC_NAME-$GCP_REGION_A
- tst-$VPC_NAME-$GCP_REGION_A
- dev-$VPC_NAME-$GCP_REGION_B
- prd-$VPC_NAME-$GCP_REGION_B
- sse-$VPC_NAME-$GCP_REGION_B
- tst-$VPC_NAME-$GCP_REGION_B
- Private Google Access enabled on all the subnets
- Firewall rules
- Allow HTTP health-check traffic from the LB to instances with the network tag `allow-healthcheck`
- Allow API traffic to GCP private endpoints (199.36.153.4/30)
- Routes
- Route to `199.36.153.4/30` with the internet gateway as the next hop
- GCS Bucket to be used for deployment automation helper scripts and installers
- Helper bucket name: $HELPER_BUCKET
- Set the Terraform backend config.
  - Edit the file /single-project-vpc/core/backend.tf.
  - Replace `<TF_BUCKET_NAME>` with the value of `$TF_BUCKET_NAME`.
  - Save the file.
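After the edit, the backend block in backend.tf should look roughly like the following sketch (the bucket name and `prefix` shown here are illustrative; keep whatever prefix the file already defines):

```hcl
terraform {
  backend "gcs" {
    # <TF_BUCKET_NAME> replaced with the state bucket created during seed setup
    bucket = "my-tf-state-bucket"        # illustrative value
    prefix = "single-project-vpc/core"   # illustrative; keep the file's existing prefix
  }
}
```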
- Make a copy of the terraform.tfvars template.
  $ cd simple-website-tf-chef/single-project-vpc/core
  $ cp terraform.tfvars.template terraform.tfvars
- Set the values in terraform.tfvars as needed and save the file.
  organization_id       = "<ORG_ID>"
  credentials_file_path = "<KEY_FILE>"
  billing_account_id    = "<BILLING_ACCOUNT>"
  project_name          = "<CORE_PROJECT>"
  network_name          = "<VPC_NAME>"
  subnet_region_and_cidr = {
    "subnet_01_region" = "<GCP_REGION_A>"
    "subnet_02_region" = "<GCP_REGION_B>"
    "subnet_01_prd"    = "10.30.10.0/24"
    "subnet_01_dev"    = "10.30.11.0/24"
    "subnet_01_tst"    = "10.30.12.0/24"
    "subnet_01_sse"    = "10.30.13.0/24"
    "subnet_02_prd"    = "10.40.10.0/24"
    "subnet_02_dev"    = "10.40.11.0/24"
    "subnet_02_tst"    = "10.40.12.0/24"
    "subnet_02_sse"    = "10.40.13.0/24"
  }
- Run `terraform init` and `terraform plan` within the /single-project-vpc/core directory.
  $ terraform init
  $ terraform plan
  <...terraform plan output will be shown here...>
- Review the plan output to see what resources will be created, then run `terraform apply` to create them.
  $ terraform apply
  <...terraform will show the plan to be applied and ask for confirmation before applying it...>
- Edit /helpers/startup-scripts/ws_bootstrap.sh.
  - Set the value of the variable `SEED_PROJECT` to your value of `$SEED_PROJECT`.
- Upload the edited ws_bootstrap.sh to the GCS bucket.
  $ cd simple-website-tf-chef/
  $ gsutil cp -r helpers/startup-scripts/ gs://$HELPER_BUCKET/
  # To check whether the files were uploaded successfully
  $ gsutil ls -r gs://$HELPER_BUCKET
- Upload the Chef repo to CSR.
  Get the source repository, set the git remote to the CSR repo that was created in SEED_PROJECT, and push the code from mychefrepo to it. See the Cloud Source Repositories documentation for more on pushing code from an existing repository to CSR.
  $ cp -r simple-website-tf-chef/mychefrepo/ ./mychefrepo
  $ cd mychefrepo/
  $ CODE_REPO_URL="https://source.developers.google.com/p/${SEED_PROJECT}/r/${CODE_REPO_NAME}"
  $ git config --global credential.'https://source.developers.google.com'.helper gcloud.sh
  $ git init
  $ git add . && git commit -am 'initial commit'
  $ git remote add google "${CODE_REPO_URL}"
  $ git push --all google
- Create a golden image with dependencies pre-installed.
  For high availability of the website, the cluster must be able to autoscale quickly, so all dependencies must be pre-installed and only the basic website configuration should happen during bootstrapping. Also, since the environment will not have internet access, installing dependencies at deployment time would require additional work to enable internet access via a proxy.
- Create a GCE instance with public internet access to make it easy to install packages and dependencies. This example uses a CentOS 7 Linux base image.
  $ gcloud compute instances create $GOLD_IMAGE_VM \
      --project=$CORE_PROJECT \
      --zone=$GCP_ZONE \
      --machine-type=n1-standard-1 \
      --subnet="dev-${VPC_NAME}-${GCP_REGION_A}" \
      --tags=allow-ssh-from-internet \
      --image-family=centos-7 \
      --image-project=centos-cloud \
      --boot-disk-size=10GB \
      --boot-disk-type=pd-standard \
      --boot-disk-device-name=$GOLD_IMAGE_VM \
      --service-account=project-service-account@${CORE_PROJECT}.iam.gserviceaccount.com
- Once the instance is in RUNNING status, SSH into it to install packages.
  $ gcloud compute ssh $GOLD_IMAGE_VM --zone=$GCP_ZONE --project=$CORE_PROJECT
- Install Git so that cookbooks and app code can be pulled from the repository.
  $ sudo yum install git
- Install the Apache web server.
  $ sudo yum install httpd
- Configure Git to use gcloud as the credential helper.
  This step is needed when using Cloud Source Repositories (CSR) for code management. It lets git commands interact with CSR to pull code during bootstrap.
  $ sudo git config --global credential.'https://source.developers.google.com'.helper gcloud.sh
- Install the Chef client.
  We are using version 15.0.300 for this example.
  $ curl -L https://omnitruck.chef.io/install.sh | sudo bash -s -- -v 15.0.300
- Install the Stackdriver Logging and Monitoring agents.
  # Monitoring agent
  $ curl -sSO https://dl.google.com/cloudagents/install-monitoring-agent.sh
  $ sudo bash install-monitoring-agent.sh
  # Logging agent
  $ curl -sSO https://dl.google.com/cloudagents/install-logging-agent.sh
  $ sudo bash install-logging-agent.sh
- Log out and stop the instance after installing the packages, so that an image can be created from it.
  $ gcloud compute instances stop $GOLD_IMAGE_VM --zone=$GCP_ZONE \
      --project=$CORE_PROJECT
- Create the GCE instance image using the gcloud command or the GCP Console. Image creation typically takes 5-10 minutes to complete.
  $ gcloud compute images create $GOLD_IMAGE_VM \
      --project=$CORE_PROJECT \
      --description="Gold Image for Web Servers" \
      --family='centos-7' \
      --source-disk=$GOLD_IMAGE_VM \
      --source-disk-zone=$GCP_ZONE
- Check the status of the image with the command below. It should be in READY state once creation completes.
  $ gcloud compute images list --filter="name='$GOLD_IMAGE_VM'" --project=$CORE_PROJECT
- Create a service account to be used by the MIG VMs.
We will use Terraform to create the service account in CORE_PROJECT and grant it required permissions on GCP.
Terraform will create the following components:
- Service Account in CORE_PROJECT
- Permissions granted to the service account:
- Permissions on CORE_PROJECT
- roles/storage.objectViewer
- roles/monitoring.metricWriter
- roles/logging.logWriter
- Permissions on SEED_PROJECT
- roles/source.reader
Steps:
- Set the Terraform backend config.
  - Edit the file /service-account/dev/backend.tf.
  - Replace `<TF_BUCKET_NAME>` with the value of `$TF_BUCKET_NAME`.
  - Save the file.
- Make a copy of /service-account/dev/terraform.tfvars.template.
  $ cd simple-website-tf-chef/service-account/dev
  $ cp terraform.tfvars.template terraform.tfvars
- Set the values in terraform.tfvars as needed and save the file.
  project_id            = "<CORE_PROJECT>"
  credentials_file_path = "<KEY_FILE>"
  service_account_id    = "<APP_SA_NAME>"
  code_repo_project     = "<SEED_PROJECT>"
- Run `terraform init` and `terraform plan` within the /service-account/dev directory.
  $ terraform init
  $ terraform plan
  <...terraform plan output will be shown here...>
- Review the plan output to see what resources will be created, then run `terraform apply` to create them.
  $ terraform apply
  <...terraform will show the plan to be applied and ask for confirmation before applying it...>
We will use the Terraform VM module to create the GCE MIG without public IPs and the LB module to create the global load balancer. A startup script will trigger the Chef client in local mode on each instance, which performs the OS and application configuration and then brings the website up.
Terraform will create the following components:
- A GCE Managed Instance Group (MIG) running CentOS 7.
  - The instances will pull `ws_bootstrap.sh` from GCS based on the value of the metadata key `startup-script-url` and run it locally. Since Private Google Access is enabled, the instances fetch the script from the GCS bucket over GCP's private network.
- Bootstrapping process triggered by `ws_bootstrap.sh`:
  - Pull the Chef cookbooks from the CSR repo with git.
  - Run the Chef client in local mode to start the configuration. Chef then applies the website code and brings the Apache web server up.
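The repository's actual ws_bootstrap.sh is the source of truth; as a rough illustration of the flow above, the bootstrap amounts to something like the following sketch (the clone path and recipe name are hypothetical):

```sh
#!/bin/bash
# Illustrative sketch only -- see /helpers/startup-scripts/ws_bootstrap.sh for the real script.
set -euo pipefail

SEED_PROJECT="<YOUR_SEED_PROJECT>"   # set during the earlier edit step
CODE_REPO_NAME="mychefrepo"

# Pull the Chef cookbooks from CSR (reachable via Private Google Access);
# git on the golden image is preconfigured to use gcloud as its credential helper.
git clone "https://source.developers.google.com/p/${SEED_PROJECT}/r/${CODE_REPO_NAME}" "/opt/${CODE_REPO_NAME}"

# Run Chef in local mode against the pulled repo; the run-list name is hypothetical.
cd "/opt/${CODE_REPO_NAME}"
sudo chef-client --local-mode --override-runlist 'recipe[website]'
```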
- A Global HTTPS Load Balancer to front and load-balance traffic between end users and the web servers.
  - A self-signed cert will be created and attached to the GLB to serve SSL connections.
- Set the Terraform backend config.
  - Edit the file /ws-deploy/dev/backend.tf.
  - Replace `<TF_BUCKET_NAME>` with the value of `$TF_BUCKET_NAME`.
  - Save the file.
- Make a copy of /ws-deploy/dev/terraform.tfvars.template.
  $ cd simple-website-tf-chef/ws-deploy/dev
  $ cp terraform.tfvars.template terraform.tfvars
- Set the values in terraform.tfvars as needed and save the file.
  project_id           = "<CORE_PROJECT>"
  credential_file_path = "<KEY_FILE>"
  region               = "<GCP_REGION_A>"
  network_name         = "<VPC_NAME>"
  deployment_id        = "<DEPLOYMENT_ID>" # an identifier used for naming resources

  /*************************
   * MIG Template Variables
   *************************/
  machine_type         = "n1-standard-1"
  tags                 = ["allow-healthcheck"]
  source_image         = "<GOLD_IMAGE_VM>"
  source_image_project = "<CORE_PROJECT>"
  auto_delete_disk     = "true"
  disk_size_gb         = "10"
  disk_type            = "pd-standard"
  metadata = {
    "startup-script-url" = "gs://<HELPER_BUCKET>/startup-scripts/ws_bootstrap.sh"
  }
  service_account = {
    email  = "<APP_SA_NAME>@<CORE_PROJECT>.iam.gserviceaccount.com"
    scopes = ["cloud-platform"]
  }
  subnetwork_name = "dev-<VPC_NAME>-<GCP_REGION_A>"

  /*************************
   * MIG Cluster Variables
   *************************/
  min_replicas       = "2"
  enable_autoscaling = "true"

  /*************************
   * Load Balancer Variables
   *************************/
  healthcheck_port_name   = "http"
  healthcheck_port_number = "80"
  healthcheck_target_path = "/root/home/index.html"
- Run `terraform init` and `terraform plan` within the /ws-deploy/dev directory.
  $ terraform init
  $ terraform plan
  <...terraform plan output will be shown here...>
- Review the plan output to see what resources will be created, then run `terraform apply` to create them.
  $ terraform apply
  <...terraform will show the plan to be applied and ask for confirmation before applying it...>
There are several ways to verify that the website is live:
- Check the website URL given by Terraform as output.
- Go to GCP Console > Network Services > Load Balancing, select your load balancer, and ensure that the health checks are passing. When they pass, a green check mark is shown under Backends.
- Check the Stackdriver logs and serial port output for the GCE instances that are part of the Managed Instance Group (MIG).
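For the URL check, a quick test from your workstation might look like the following sketch (the Terraform output name `load-balancer-ip` is an assumption; run `terraform output` in /ws-deploy/dev to see the actual name; `-k` is needed because the certificate is self-signed):

```sh
$ cd simple-website-tf-chef/ws-deploy/dev
$ EXTERNAL_IP="$(terraform output -raw load-balancer-ip)"   # output name is an assumption
$ curl -k "https://${EXTERNAL_IP}/"
```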
Follow the steps below to clean up all the resources created during this example:
- Remove the website deployment.
  $ cd simple-website-tf-chef/ws-deploy/dev/
  $ terraform destroy
  <...review the plan of what will be destroyed and type 'yes' to proceed...>
- Remove the service account.
  $ cd simple-website-tf-chef/service-account/dev/
  $ terraform destroy
- Remove the GCE instance that was used to create the instance image.
  $ gcloud compute instances delete $GOLD_IMAGE_VM --zone=$GCP_ZONE --project=$CORE_PROJECT
- Remove the GCE instance image.
  $ gcloud compute images delete $GOLD_IMAGE_VM --project=$CORE_PROJECT
- Remove the CSR code repository.
  $ gcloud source repos delete $CODE_REPO_NAME --project $SEED_PROJECT
- Remove the core project and VPC.
  $ cd simple-website-tf-chef/single-project-vpc/core/
  $ terraform destroy