Publishing Cloudfront distributions with Terraform

Bryan Arellano
11 min read · Jul 5, 2022

Have you heard about Content Delivery Networks (CDNs)? If you haven’t, here is a short explanation: a CDN is a set of servers distributed around the world to deliver web pages, images, videos, and other content. Suppose you are in Japan and you want to check a web page hosted in New York. Without a CDN you are going to see a lot of latency when you navigate that page, because you are establishing a direct connection to the origin. But don’t worry, because the CDN comes to save the day: the original page still lives in New York, but you access it through the nearest node (server), so you can check your favorite content without the latency. Streaming services like Netflix use CDNs to deliver their content this way.

There are different CDN providers: Cloudflare, AWS, Azure, Google, etc. Today I’m going to talk about CloudFront, which is the CDN service from AWS.

CloudFront manages distributions to deliver your content; you can add distributions to serve a web page, images, videos, and so on. A distribution tells CloudFront where the content you want to deliver lives and what kind of rules you want to apply; for example, you can add geo-restrictions, TTLs, whitelists, etc.

With CloudFront you can create distributions manually, but that is not a good practice: you can forget some configuration, such as a rule or a restriction, or point at the wrong content. Here is where Terraform comes in, as it is a good tool to create distributions automatically; you can define the resource, rules, restrictions, and so on as code.

Here, I’m going to explain how to implement a distribution with S3 and CloudFront through Terraform. The idea is to use S3 to store your content and CloudFront to deliver it, with everything published through Terraform.

Maybe you have heard that S3 can be used to publish websites. That is a good option for content that is not requested from around the world; for example, a page that is used only in your country can easily be served straight from S3. But if you want to deliver your content quickly everywhere, you need CloudFront.

First of all, you need to create a project with your application code and your infrastructure code. I recommend a structure similar to this:
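(The original diagram is not reproduced here; the tree below is an illustrative layout, with directory and module names of my own choosing.)

```text
.
├── app/                    # React application (source and build output)
└── infrastructure/
    ├── main.tf             # Root module: providers, backend, module wiring
    ├── variables.tf
    └── modules/
        ├── bucket/         # S3 bucket and its outputs
        ├── certificates/   # ACM certificate and validation
        ├── domain/         # Route53 records
        └── distribution/   # CloudFront distribution and bucket policy
```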

In this example, I created a web page with the default content of React, because the content is not important here; I want to focus on the infrastructure. Here you can find an explanation of the structure that I used to create the infrastructure for my project.

The first thing is to add your S3 bucket, where you will upload your compiled code.

# ----------------------------------------------------------------------------------------------------------------------
# S3 BUCKET TO STORE WEBSITE FILES
# @param bucket Bucket name
# @param acl The canned ACL to apply
# @param cors_rule A rule of Cross-Origin Resource Sharing
# ----------------------------------------------------------------------------------------------------------------------
resource "aws_s3_bucket" "bucket" {
  bucket = "${var.environment}-${var.bucket_name}-distribution"
  acl    = "public-read"

  cors_rule {
    allowed_headers = ["*"]
    allowed_methods = ["PUT", "POST", "GET"]
    allowed_origins = ["*"]
    max_age_seconds = 3000
  }

  force_destroy = true

  website {
    index_document = "index.html"
    error_document = "index.html"
  }
}

A little note here: with newer releases of the Terraform AWS provider (v4 or later), the website settings move to a separate resource called aws_s3_bucket_website_configuration. You can still create the bucket with the inline blocks above, but you are going to see some deprecation warnings in your console when you deploy your resources.
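For reference, a sketch of the split-out form under AWS provider v4 could look like this (the resource names here are illustrative):

```hcl
resource "aws_s3_bucket" "bucket" {
  bucket        = "${var.environment}-${var.bucket_name}-distribution"
  force_destroy = true
}

# Website settings live in their own resource in provider v4+
resource "aws_s3_bucket_website_configuration" "bucket" {
  bucket = aws_s3_bucket.bucket.id

  index_document {
    suffix = "index.html"
  }

  error_document {
    key = "index.html"
  }
}
```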

With this configuration, we create a bucket whose name is based on the environment, with rules to allow access to the content (ACL). The cors_rule block allows cross-origin requests on the page; if your page needs to call any API you need these rules, otherwise the browser is going to block the requests. Finally, the website block sets the main file used to access your website, in this case index.html, but you can change it to another file.

Additionally, we need to define outputs, because information from the bucket, such as the name, id, ARN, etc., is necessary to create the distribution.

output "bucket_information" {
  value = {
    bucket_regional_domain_name : aws_s3_bucket.bucket.bucket_regional_domain_name
    bucket : aws_s3_bucket.bucket.bucket
    name : "${var.environment}-${var.bucket_name}-distribution"
    id : aws_s3_bucket.bucket.id
    website_endpoint : aws_s3_bucket.bucket.website_endpoint
    arn : aws_s3_bucket.bucket.arn
  }
}

Now, we need to set the policies that restrict access to the bucket. With this policy, the only way to get the content of the web page is through the CloudFront distribution.

resource "aws_s3_bucket_public_access_block" "bucket_restriction" {
  bucket = var.bucket_information.id

  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true

  depends_on = [
    aws_s3_bucket_policy.web_distribution
  ]
}

With this, we have a bucket to store the compiled code and restrictions to prevent access through the URL generated by S3. If you didn’t know: when you create a bucket to publish a website, S3 automatically generates a URL for access, but we don’t want people to reach the webpage with something like http://dev-my-page.s3-website-us-east-1.amazonaws.com.

The next part is to create the SSL certificate to enable an encrypted connection; with this certificate your website can use the HTTPS protocol. Don’t worry about the cost: public certificates from ACM are free, you don’t pay anything unless you want to use the private certificate authority.

# ----------------------------------------------------------------------------------------------------------------------
# ACM (Amazon Certificate Manager)
# The ACM certificate resource allows requesting and management
# of certificates from the Amazon Certificate Manager.
# @param provider Credentials to execute request, always must be in us-east-1
# @param domain_name A domain name for which the certificate should be issued
# @param validation_method Which method to use for validation. DNS or EMAIL are valid
# @param lifecycle The lifecycle block and its contents are meta-arguments, available for all resource blocks regardless of type.
# @param subject_alternative_names Set of domains that should be SANs in the issued certificate (uncomment if you need to renew the certificates)
# ----------------------------------------------------------------------------------------------------------------------

resource "aws_acm_certificate" "cert" {
  provider          = aws.east
  domain_name       = var.domain_name
  validation_method = "DNS"

  lifecycle {
    create_before_destroy = true
  }

  subject_alternative_names = var.cert_sans
}

# ----------------------------------------------------------------------------------------------------------------------
# ACM Validation (Amazon Certificate Manager)
# The ACM certificate resource allows requesting and management
# of certificates from the Amazon Certificate Manager.
# @param provider Credentials to execute request, always must be in us-east-1
# @param certificate_arn The ARN of the certificate that is being validated.
# @param validation_record_fqdns List of FQDNs that implement the validation. Only valid for DNS validation method ACM certificates
# ----------------------------------------------------------------------------------------------------------------------

resource "aws_acm_certificate_validation" "cert" {
  provider                = aws.east
  certificate_arn         = aws_acm_certificate.cert.arn
  validation_record_fqdns = var.cert_validation_fqdn.*.fqdn
}

An important thing is that you need to create the certificates in us-east-1; at the time I write this guide, CloudFront requires ACM certificates to live in the us-east-1 region. So you need to add a secondary provider pinned to that region.

provider "aws" {
  alias      = "east"
  region     = "us-east-1"
  access_key = var.aws_access_key_id
  secret_key = var.aws_secret_access_key
}

Now, we need to associate the certificates with a domain. Here you need to identify your case: you could already have a domain, maybe with Google, in which case you need to migrate the management of your domain to AWS. You can add your domain manually with Route53 and reference the resource in your configuration, or you can create the domain with Terraform. If you don’t have a validated domain (hosted zone), you could get an error when you try to access your web page, so be careful with that. In this example I already had a domain; I referenced it in my root configuration, and after that you need to import it with a command like this: terraform import aws_route53_record.myrecord Z4KAPRWWNC7JR_dev.example.com_NS_dev

# ----------------------------------------------------------------------------------------------------------------------
# AWS IMPORTED RESOURCES
# ----------------------------------------------------------------------------------------------------------------------

resource "aws_route53_zone" "hosted_zone" {
  name = var.domain

  lifecycle {
    prevent_destroy = true
  }
}

Here you need to add prevent_destroy to stop Terraform from deleting your domain (hosted zone). For example, you could make a wrong configuration and want to delete everything to start again, but without this line Terraform could delete your zone, especially if you are using a profile with broad permissions. You could also create a profile with limited permissions that denies deleting records in Route53, but it is better to prevent possible mistakes in both places.

To associate the certificates with the domain you need to use this:

# ----------------------------------------------------------------------------------------------------------------------
# Route53 record
# @param zone_id The ID of the hosted zone to contain this record.
# @param name The name of the record.
# @param type The record type.
# @param alias An alias block, conflicts with ttl & records.
# ----------------------------------------------------------------------------------------------------------------------

resource "aws_route53_record" "sub_domain" {
  zone_id = var.hosted_zone_id
  name    = var.sub_domain
  type    = "A"

  alias {
    name                   = var.aws_cloudfront_distribution.domain_name
    zone_id                = var.aws_cloudfront_distribution.hosted_zone_id
    evaluate_target_health = false
  }
}

# ----------------------------------------------------------------------------------------------------------------------
# Route53 Here the certificates are added to hosted zone
# @allow_overwrite Allows the overwrite to replace the record, uncomment if you need to renew the certificate
# ----------------------------------------------------------------------------------------------------------------------

resource "aws_route53_record" "cert_validations" {
  count = length(var.cert_sans) + 1

  zone_id         = var.hosted_zone_id
  name            = element(var.domain_certificates.cert.domain_validation_options.*.resource_record_name, count.index)
  type            = element(var.domain_certificates.cert.domain_validation_options.*.resource_record_type, count.index)
  records         = [element(var.domain_certificates.cert.domain_validation_options.*.resource_record_value, count.index)]
  allow_overwrite = true
  ttl             = 60
}

The final steps are to add the distribution and set the permissions.

# ----------------------------------------------------------------------------------------------------------------------
# CLOUDFRONT DISTRIBUTION
# @param origin One or more origins for this distribution
# @param default_root_object By default, show the index.html file
# @param enabled Whether the distribution is enabled to accept end user requests
# @param custom_error_response If there is a 404, return index.html with an HTTP 200 response
# @param default_cache_behavior The default cache behavior for this distribution
# @param price_class Distributes content to US and Europe
# @param restrictions Restricts who is able to access this content
# @param viewer_certificate SSL certificate for the service
# ----------------------------------------------------------------------------------------------------------------------

resource "aws_cloudfront_distribution" "distribution" {
  origin {
    origin_id   = var.sub_domain
    domain_name = var.bucket_information.bucket_regional_domain_name

    s3_origin_config {
      origin_access_identity = aws_cloudfront_origin_access_identity.web_distribution.cloudfront_access_identity_path
    }
  }

  aliases = local.aliases

  enabled             = true
  default_root_object = "index.html"

  default_cache_behavior {
    allowed_methods  = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = var.sub_domain

    forwarded_values {
      query_string = false

      cookies {
        forward = "none"
      }
    }

    viewer_protocol_policy = "allow-all"
    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
  }

  ordered_cache_behavior {
    path_pattern     = "/*"
    allowed_methods  = ["GET", "HEAD", "OPTIONS"]
    cached_methods   = ["GET", "HEAD", "OPTIONS"]
    target_origin_id = var.sub_domain

    forwarded_values {
      query_string = false
      headers      = ["Origin"]

      cookies {
        forward = "none"
      }
    }

    min_ttl                = 0
    default_ttl            = 86400
    max_ttl                = 31536000
    compress               = true
    viewer_protocol_policy = "redirect-to-https"
  }

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  viewer_certificate {
    acm_certificate_arn = var.certificate_arn
    ssl_support_method  = "sni-only"
  }

  price_class = "PriceClass_100"

  custom_error_response {
    error_caching_min_ttl = 86400
    error_code            = 404
    response_code         = 200
    response_page_path    = "/index.html"
  }
}

resource "aws_cloudfront_origin_access_identity" "web_distribution" {
  comment = "Managed by Terraform"
}

Here there are a lot of configurations, but the most important options are:

  • Domain Name: Reference to the domain created with Route53.
  • Origin Id: Reference to the S3 bucket where the files are located.
  • Allowed Methods: Allowed HTTP methods.
  • TTL (Time To Live) configurations: Maximum time to cache the data in each node.
  • Restrictions: Set of rules to restrict access; you can set rules by IP, geo-location, etc.
  • Viewer Certificate: SSL certificate.
  • Price Class: Here you choose the set of edge locations that distribute your content, so be careful with your plan.
  • Default Root Object: Path to the web page’s main file.
  • Response Page Path: Path to the page shown in case of an error like a 404.

Finally, you need to set the permissions:

# ----------------------------------------------------------------------------------------------------------------------
# S3 IAM POLICY
# Provides an IAM role.
# https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html
# ----------------------------------------------------------------------------------------------------------------------

data "aws_iam_policy_document" "distribution_bucket" {
  statement {
    actions = [
      "s3:GetObject"
    ]
    principals {
      type = "AWS"
      identifiers = [
        var.cloudfront_distribution_oai_iam_arn
      ]
    }
    resources = [
      "${var.bucket_information.arn}/*"
    ]
  }

  statement {
    actions = [
      "s3:ListBucket"
    ]
    resources = [
      var.bucket_information.arn
    ]

    principals {
      type = "AWS"
      identifiers = [
        var.cloudfront_distribution_oai_iam_arn
      ]
    }
  }
}

resource "aws_s3_bucket_policy" "web_distribution" {
  bucket = var.bucket_information.id
  policy = data.aws_iam_policy_document.distribution_bucket.json
}

resource "aws_s3_bucket_public_access_block" "bucket_restriction" {
  bucket = var.bucket_information.id

  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true

  depends_on = [
    aws_s3_bucket_policy.web_distribution
  ]
}

In this file, there are two resources and a data source. The data source is used to define a policy that is then referenced from a resource; it is better to define your policies as data sources and reference them in your resources. In this case, the data source (policy) is defined and attached through the bucket policy, and after that we add a restriction to lock down public access to the bucket.

A brief note: I’m using remote storage to upload my terraform.tfstate file.

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }

  backend "s3" {
    //you need to set the bucket name here
    bucket               = ""
    key                  = "terraform.tfstate"
    region               = "us-east-1"
    workspace_key_prefix = "env:"
  }
}

If you don’t want to use a bucket to store your state you need to change this configuration.
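For example, to keep the state on disk instead of in a bucket, you could switch to a local backend like this:

```hcl
terraform {
  # State is written to a file next to your configuration
  backend "local" {
    path = "terraform.tfstate"
  }
}
```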

About my variables: I defined a couple of variables that you need to set as environment variables before starting, and I was working with workspaces called prod and dev, so you need to create your workspace before deploying your resources.

# ----------------------------------------------------------------------------------------------------------------------
# AWS CREDENTIALS
# ----------------------------------------------------------------------------------------------------------------------

variable "aws_access_key_id" {
  description = "AWS access key credential"
}

variable "aws_secret_access_key" {
  description = "AWS secret access key credential"
}

variable "region" {
  default = "us-east-1"
}

variable "aws_account_id" {
  description = "AWS account id"
}

# ----------------------------------------------------------------------------------------------------------------------
# DOMAIN DEFINITIONS
# ----------------------------------------------------------------------------------------------------------------------
//you need to set your domain here
variable "domain" {
  default = ""
}

locals {
  prod_certs = [
    "www.app.${var.domain}",
    "app.${var.domain}"
  ]
  dev_certs = [
    "app.${terraform.workspace}.${var.domain}"
  ]
  environment   = terraform.workspace
  is_production = local.environment == "prod"
  cert_sans     = local.is_production ? local.prod_certs : local.dev_certs
  domains = {
    "root" = var.domain
    "prod" = "app.${var.domain}"
    "dev"  = "app.demo.${var.domain}"
  }
}

variable "backend" {
  default = "s3"
}

To export your variables you need to use this:

export TF_VAR_aws_account_id=
export TF_VAR_aws_access_key_id=
export TF_VAR_aws_secret_access_key=

And to start a new workspace (environment) you can use this:

terraform workspace new <env>

Now you can start with the deployment. First of all, you need to check your configuration with the command terraform plan, and if everything looks good you can run terraform apply -auto-approve.
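Putting the commands together, a typical run from the infrastructure directory looks like this (assuming your credentials are already exported as shown above):

```shell
terraform init                 # download providers and configure the backend
terraform workspace new dev    # or: terraform workspace select dev
terraform plan                 # review the changes before applying
terraform apply -auto-approve  # create the resources
```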

If you are using a new user from AWS, maybe you are going to see some errors related to permissions. In that case, you need to check the policies related to your user and try again.

Finally, your distribution is created and associated with the bucket and the certificates, but you don’t have anything in your bucket yet, so you need to run the build command and upload the content. In this project, you can build with npm run build and upload the content with aws s3 cp build/ s3://bucket-name/ --recursive.

When your content is uploaded, you may still see an error when you try to access your website:

This error is not related to the configuration; you need to invalidate the distribution’s cache.
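You can create the invalidation from the console or with the AWS CLI; for example (the distribution ID below is a placeholder):

```shell
aws cloudfront create-invalidation \
  --distribution-id E2EXAMPLE123 \
  --paths "/*"
```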

And when the process is done you are going to see your content:

My page is available now, but maybe in the future could be removed 😅.

You can see the code of the project here: https://github.com/ridouku/terraform-cloudfront-distribution.git


Questions? Comments? Contact me at ridouku@gmail.com
