Module: ec2-autoscale-group

Terraform module to provision an Auto Scaling Group and a Launch Template on AWS.

The module also creates AutoScaling Policies and CloudWatch Metric Alarms that monitor CPU utilization on the EC2 instances and scale the number of instances in the AutoScaling Group up or down. If you don't want to use this functionality, or want to provide your own policies, disable it by setting the variable autoscaling_policies_enabled to false.

At present, although you can set the created AutoScaling Policy type to any legal value, in practice only SimpleScaling is supported. To use a StepScaling or TargetTrackingScaling policy, create it yourself and pass its ARN in the alarm_actions field of custom_alarms, as sketched below.
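
A minimal sketch of an externally managed StepScaling policy, assuming the module instance from the Usage example below and the module's autoscaling_group_name output; the resource name step_scale_up is illustrative:

resource "aws_autoscaling_policy" "step_scale_up" {
  name                   = "step-scale-up"
  autoscaling_group_name = module.autoscale_group.autoscaling_group_name
  policy_type            = "StepScaling"
  adjustment_type        = "ChangeInCapacity"

  # Add one instance whenever the triggering alarm's metric is at or above its threshold
  step_adjustment {
    scaling_adjustment          = 1
    metric_interval_lower_bound = 0
  }
}

Its ARN, aws_autoscaling_policy.step_scale_up.arn, can then be listed in the alarm_actions field of a custom_alarms entry.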

Usage

locals {
  userdata = <<-USERDATA
    #!/bin/bash
    cat <<"__EOF__" > /home/ec2-user/.ssh/config
    Host *
      StrictHostKeyChecking no
    __EOF__
    chmod 600 /home/ec2-user/.ssh/config
    chown ec2-user:ec2-user /home/ec2-user/.ssh/config
  USERDATA
}

module "autoscale_group" {
  source = "cloudposse/ec2-autoscale-group/aws"
  # Cloud Posse recommends pinning every module to a specific version
  # version = "x.x.x"

  namespace   = var.namespace
  stage       = var.stage
  environment = var.environment
  name        = var.name

  image_id                    = "ami-08cab282f9979fc7a"
  instance_type               = "t2.small"
  security_group_ids          = ["sg-xxxxxxxx"]
  subnet_ids                  = ["subnet-xxxxxxxx", "subnet-yyyyyyyy", "subnet-zzzzzzzz"]
  health_check_type           = "EC2"
  min_size                    = 2
  max_size                    = 3
  wait_for_capacity_timeout   = "5m"
  associate_public_ip_address = true
  user_data_base64            = base64encode(local.userdata)

  # All inputs to `block_device_mappings` have to be defined
  block_device_mappings = [
    {
      device_name  = "/dev/sda1"
      no_device    = "false"
      virtual_name = "root"
      ebs = {
        encrypted             = true
        volume_size           = 200
        delete_on_termination = true
        iops                  = null
        kms_key_id            = null
        snapshot_id           = null
        volume_type           = "standard"
      }
    }
  ]

  tags = {
    Tier              = "1"
    KubernetesCluster = "us-west-2.testing.cloudposse.co"
  }

  # Auto-scaling policies and CloudWatch metric alarms
  autoscaling_policies_enabled           = true
  cpu_utilization_high_threshold_percent = "70"
  cpu_utilization_low_threshold_percent  = "20"
}

To enable custom alarms, the custom_alarms map needs to be defined like so:

alarms = {
  alb_scale_up = {
    alarm_name                = "${module.default_label.id}-alb-target-response-time-for-scale-up"
    comparison_operator       = "GreaterThanThreshold"
    evaluation_periods        = var.alb_target_group_alarms_evaluation_periods
    metric_name               = "TargetResponseTime"
    namespace                 = "AWS/ApplicationELB"
    period                    = var.alb_target_group_alarms_period
    statistic                 = "Average"
    threshold                 = var.alb_target_group_alarms_response_time_max_threshold
    dimensions_name           = "LoadBalancer"
    dimensions_target         = data.aws_lb.alb.arn_suffix
    treat_missing_data        = "missing"
    ok_actions                = var.alb_target_group_alarms_ok_actions
    insufficient_data_actions = var.alb_target_group_alarms_insufficient_data_actions
    alarm_description         = "ALB Target response time is over ${var.alb_target_group_alarms_response_time_max_threshold} for more than ${var.alb_target_group_alarms_period}"
    alarm_actions             = [module.autoscale_group.scale_up_policy_arn]
  }
}
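
The map is then passed to the module through the custom_alarms input. A minimal sketch, assuming the map above is assigned to a local named alarms:

module "autoscale_group" {
  source = "cloudposse/ec2-autoscale-group/aws"
  # version = "x.x.x"

  # ... all other inputs as in the Usage example above ...

  custom_alarms = local.alarms
}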

All of these keys are required to be present, so if the alarm you are configuring does not need one or more of them, set those values to empty rather than removing the keys; otherwise you could get a confusing merge error caused by the maps being of different sizes. See the example below.
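
For instance, an alarm that needs no OK or insufficient-data actions keeps those keys with empty values. A hedged sketch, where the metric, thresholds, and alarm name are illustrative:

alarms = {
  cpu_scale_up = {
    alarm_name                = "cpu-high-for-scale-up"
    comparison_operator       = "GreaterThanThreshold"
    evaluation_periods        = "2"
    metric_name               = "CPUUtilization"
    namespace                 = "AWS/EC2"
    period                    = "300"
    statistic                 = "Average"
    threshold                 = "80"
    dimensions_name           = "AutoScalingGroupName"
    dimensions_target         = module.autoscale_group.autoscaling_group_name
    treat_missing_data        = "missing"
    ok_actions                = [] # unused, but the key must remain
    insufficient_data_actions = [] # unused, but the key must remain
    alarm_description         = "CPU utilization is over 80% for more than 10 minutes"
    alarm_actions             = [module.autoscale_group.scale_up_policy_arn]
  }
}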