This module automatically forms an LXD cluster on AWS. This Terraform module will:
- Set up networking
- Set up a multi-AZ public subnet
- Set up a bastion node
- Set up compute instances
- Set up private key access
- Automatically form a cluster
- Destroy a cluster
- Enable graceful removal of specific nodes
- Protect against database-leader deletion

Together, these features enable the user to fully manage an LXD cluster using IaC (infrastructure as code).
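As a sketch of how the module might be invoked, assuming it is published at the registry address the internal submodules reference (`upmaru/instellar/aws`), with placeholder values for the required inputs:

```hcl
# Hypothetical usage sketch; the source address, version, and all values
# are assumptions for illustration, not confirmed by this README.
module "lxd_cluster" {
  source  = "upmaru/instellar/aws"
  version = "~> 0.9"

  # Required inputs
  identifier        = "example-cluster"        # name of your cluster
  blueprint         = "compute"                # identifier of the blueprint
  region            = "us-east-1"
  vpc_id            = "vpc-0123456789abcdef0"  # placeholder VPC id
  vpc_ip_range      = "10.0.0.0/16"
  public_subnet_ids = ["subnet-aaa", "subnet-bbb"]  # placeholder subnet ids

  # Optional inputs shown with their documented defaults
  node_size    = "t3a.medium"
  storage_size = 40
  ssh_keys     = ["my-key"]
}
```

Running `terraform init` and `terraform apply` against a configuration like this would provision the networking, bastion, and compute nodes and form the cluster.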
## Requirements

Name | Version |
---|---|
terraform | >= 1.0.0 |
aws | ~> 5.0 |
tls | 4.0.4 |
## Providers

Name | Version |
---|---|
aws | 5.43.0 |
cloudinit | 2.3.3 |
ssh | 2.7.0 |
terraform | n/a |
tls | 4.0.4 |
## Modules

Name | Source | Version |
---|---|---|
balancer | upmaru/instellar/aws//modules/balancer | ~> 0.9 |
global_accelerator | upmaru/instellar/aws//modules/global-accelerator | ~> 0.9 |
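These submodules appear to be toggled by the `balancer` and `global_accelerator` boolean inputs documented below; a hedged sketch of enabling both (the module label, source address, and elided arguments are placeholders):

```hcl
# Sketch only: enabling the load-balancer and global-accelerator
# submodules via the documented boolean inputs.
module "lxd_cluster" {
  source = "upmaru/instellar/aws"  # assumed registry address

  # ... required inputs (identifier, blueprint, region, vpc_id,
  # vpc_ip_range, public_subnet_ids) omitted for brevity ...

  balancer                     = true
  balancer_deletion_protection = true  # default; guards the balancer from deletion
  global_accelerator           = true
}
```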
## Inputs

Name | Description | Type | Default | Required |
---|---|---|---|---|
ami_architecture | The architecture of the AMI | string | "amd64" | no |
balancer | Enable Load Balancer | bool | false | no |
balancer_deletion_protection | Enable balancer deletion protection | bool | true | no |
balancer_ssh | Enable SSH port on balancer | bool | true | no |
bastion_size | Bastion instance type | string | "t3a.micro" | no |
bastion_ssh | Enable SSH port on bastion | bool | true | no |
blueprint | Identifier of the blueprint | string | n/a | yes |
cluster_topology | How many nodes do you want in your cluster? | list(object({ | [] | no |
global_accelerator | Enable Global Accelerator | bool | false | no |
identifier | Name of your cluster | string | n/a | yes |
network_dependencies | value | list | [] | no |
node_detail_revision | The revision of the node detail | number | 1 | no |
node_monitoring | Enable / disable detailed monitoring | bool | false | no |
node_size | Instance type for compute nodes | string | "t3a.medium" | no |
protect_leader | Protect the database leader node | bool | true | no |
public_subnet_ids | Public subnet ids to pass in if block type is compute | list(string) | n/a | yes |
publicly_accessible | Make the cluster publicly accessible? If you use a load balancer this can be false. | bool | true | no |
region | AWS region | string | n/a | yes |
ssh_keys | List of SSH key names | list(string) | [] | no |
ssm | Enable SSM | bool | false | no |
storage_size | Storage size (GB) per node | number | 40 | no |
volume_type | Type of EBS volume to use | string | "gp3" | no |
vpc_id | VPC id to pass in if block type is compute | string | n/a | yes |
vpc_ip_range | VPC IP range | string | n/a | yes |
## Outputs

Name | Description |
---|---|
balancer | Load balancer details |
bastion_access | Bastion access output for passing into other modules |
bastion_security_group_id | Bastion security group id |
bootstrap_node | Bootstrap node details |
cluster_address | Bootstrap node public ip |
identifier | Identifier of the cluster |
nodes | Compute nodes details |
nodes_iam_role | IAM Role for nodes and bootstrap node |
nodes_security_group_id | Nodes security group id |
subnet_ids | Subnet IDs |
trust_token | Trust token for the cluster |
vpc_id | VPC id |
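A sketch of consuming these outputs from a root configuration; the module label `lxd_cluster` and the downstream module path are placeholders, not defined by this README:

```hcl
# Hypothetical consumption of this module's outputs.
output "cluster_address" {
  description = "Public IP of the bootstrap node"
  value       = module.lxd_cluster.cluster_address
}

# bastion_access is documented as intended for passing into other modules.
module "app" {
  source         = "./modules/app"  # placeholder local module
  bastion_access = module.lxd_cluster.bastion_access
  vpc_id         = module.lxd_cluster.vpc_id
}
```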