
Monday, January 18, 2021

Terraform S3 Hosted Site

In the last post we did a Terraform resource dissection of the aws_s3_bucket resource.

In today's fun-filled excitement we are going to look a little farther down that rabbit hole and configure a simple hosted site using the S3 bucket. This episode differs from the attached video by adding the DNS section to the code, but everything else remains the same.

 

Again, the catches to having a bucket:

  • Bucket names must be DNS compatible: no special characters (see the variable sketch after this list).
  • New bucket endpoints use the s3.region format, where older ones used s3-region.
  • Bucket names must be 100% globally unique; you cannot use the same name as another account's bucket.
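
Because the bucket name comes from a variable later on, one hedged option (not from the original post, and it assumes Terraform 0.13+ and the var.project_name used below) is to add a validation block so a non-DNS-compatible name fails at plan time:

variable "project_name" {
  description = "Project / bucket name, must be DNS compatible"
  type        = string

  # Lowercase letters, numbers, and hyphens only, 3-63 characters (S3 naming rules)
  validation {
    condition     = can(regex("^[a-z0-9][a-z0-9-]{1,61}[a-z0-9]$", var.project_name))
    error_message = "project_name must be 3-63 characters of lowercase letters, numbers, and hyphens."
  }
}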

 


resource "aws_s3_bucket" "reference_name" {}

Starting with a general resource

Replace “reference_name” with how you want to reference the bucket in other locations. 
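
Once the resource exists, "reference_name" is how you point at it from other resources or outputs. As a quick sketch (not part of the post's raw code), an output using that reference would look like:

output "bucket_arn" {
  value = aws_s3_bucket.reference_name.arn
}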

 

  bucket = var.project_name
  acl    = "public-read"

In our block we want to start by giving the bucket a name and Access Control List (ACL) permissions.

We have set the ACL to "public-read", which allows everyone to read the contents of the bucket and any web pages you have in it. Remote web browsers cannot view the general contents of a bucket, only explicit files directly.

For the policy section we are going to use "templatefile" to substitute a custom value in for the bucket name.

 

  policy = templatefile("site-policy.json", { project_name = var.project_name })

This will allow us to reuse the template for other projects because we are not hard-coding information.

For the template file, the call reads as templatefile("filename.json", { first_variable = var.value, second_variable = var.value2 }).

You are also able to use a map for the values in the template. More information can be found here
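
As a hedged sketch (the policy_vars variable name here is made up for illustration), passing a map instead of an inline object would look like:

variable "policy_vars" {
  type    = map(string)
  default = { project_name = "example-site" }
}

  policy = templatefile("site-policy.json", var.policy_vars)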


  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm = "AES256"
      }
    }
  }


If you have requirements that call for Server Side Encryption, SSE, here is where you would configure it. In this example we are going to use AES256, also known as the Amazon S3 default encryption (SSE-S3, which uses Amazon S3 managed keys).
You do not have to use the AWS default encryption, but for our example and budget, we are going to use AES-256. Another option is SSE-KMS, which uses AWS KMS Customer Master Keys. More can be read about them here
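
If you did want KMS instead, the block would look roughly like this; this is only a sketch, and aws_kms_key.site is an assumed key resource that is not defined anywhere in this post:

  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm     = "aws:kms"
        kms_master_key_id = aws_kms_key.site.arn
      }
    }
  }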


  website {
    index_document = "index.html"
    error_document = "error.html"
  }

In this section we have the "website" configuration. It names the files we will use for our default document, "index.html", and for our errors, or 404 page, "error.html".

Setting an "error_document" also prevents outside browsers from being handed a generic S3 error message, so visitors can get back to your web site/page more easily.
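
The index.html and error.html objects themselves are not created by this code. As a minimal sketch, assuming the files sit next to your Terraform configuration and using the "reference_name" placeholder from above, one could be uploaded with aws_s3_bucket_object:

resource "aws_s3_bucket_object" "index" {
  bucket       = aws_s3_bucket.reference_name.id
  key          = "index.html"
  source       = "index.html"
  acl          = "public-read"
  content_type = "text/html"   # without this, browsers may download the file instead of rendering it
  etag         = filemd5("index.html")
}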


#### RAW CODE ####

resource "aws_s3_bucket" "hostedsite01" {
  bucket = var.project_name
  acl    = "public-read"

  policy = templatefile("site-policy.json", { project_name = var.project_name })

  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm = "AES256"
      }
    }
  }

  website {
    index_document = "index.html"
    error_document = "error.html"
  }
  tags = var.standard_tags
}
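
The DNS section mentioned at the top is not in the raw code above. A minimal sketch, assuming a Route 53 hosted zone ID passed in as var.zone_id and a bucket name that matches the record name (which S3 website hosting requires), could look like this:

resource "aws_route53_record" "site" {
  zone_id = var.zone_id
  name    = var.project_name
  type    = "A"

  alias {
    name                   = aws_s3_bucket.hostedsite01.website_domain
    zone_id                = aws_s3_bucket.hostedsite01.hosted_zone_id
    evaluate_target_health = false
  }
}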


##### JSON FILE ##### 


{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicRead",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:GetObject",
        "s3:GetObjectVersion"
      ],
      "Resource": [
        "arn:aws:s3:::${project_name}/*"
      ]
    }
  ]
}



The JSON file above is the S3 bucket policy. It states that anyone (any principal) is able to read the objects in the bucket and, if versioning is enabled, read the objects' versions.

The "${project_name}" is the variable that was set above in the "templatefile" call.
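
As a made-up example, if var.project_name were set to "example-site", the rendered statement's Resource would come out as:

      "Resource": [
        "arn:aws:s3:::example-site/*"
      ]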

