
Monday, January 18, 2021

Terraform s3 Hosted site


 

 

In the last one we did a Terraform resource dissection of the aws_s3_bucket.

In today's fun-filled excitement we are going to look a little farther down that rabbit hole and configure a simple hosted site using the S3 bucket. This episode differs from the attached video by adding the DNS section to the code, but everything else remains the same.

 

Again, the catches to having a bucket:

  • Bucket names must be DNS compatible; no special characters
  • New bucket endpoints use the s3.region format, while older ones used s3-region
  • Bucket names must be 100% unique; you cannot use a name that already exists in another account

 


resource "aws_s3_bucket" "reference_name" {}

Starting with a general resource

Replace “reference_name” with how you want to reference the bucket in other locations. 

 

  bucket = var.project_name
  acl    = "public-read"

In our block we want to start by giving the bucket a name and an Access Control List (ACL) permission.

We have set the ACL to "public-read", which allows everyone to read the objects in the bucket, including any web pages you have in it. Remote web browsers cannot list the general contents of a bucket, only fetch explicit files directly.

For the policy section we are going to use "templatefile" to substitute a custom value in for the bucket name.

 

  policy = templatefile("site-policy.json", { project_name = var.project_name} )

This will allow us to use this template for other projects because we are not hard coding information. 

The templatefile call reads as templatefile("filename.json", { first_variable = var.value, second_variable = var.value2 }).

You are also able to use a map for the values in the template. More information can be found in the Terraform documentation for templatefile.
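For instance, a minimal sketch of handing a map to templatefile instead of inline values; the local name "policy_vars" is made up for illustration:

locals {
  # Hypothetical map of template values; our policy only uses project_name
  policy_vars = {
    project_name = var.project_name
  }
}

# Inside the bucket resource:
  policy = templatefile("site-policy.json", local.policy_vars)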


  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm = "AES256"
      }
    }
  }


If you have requirements that use Server Side Encryption (SSE), here is where you would configure it. With this example we are going to use AES256, also known as the Amazon default encryption.
You do not have to use the AWS default encryption, but for our example and budget, we are going to use AES-256. Some other options are AWS KMS-managed keys or Customer Master Keys (CMKs). More can be read about them here, https://docs.aws.amazon.com/AmazonS3/latest/dev/bucket-encryption.html
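If your requirements call for KMS keys instead, a minimal sketch of the same block could look like this; it assumes an aws_kms_key resource named "site_key" exists elsewhere in your code:

  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        # aws:kms instead of AES256; the key reference below is an assumption
        sse_algorithm     = "aws:kms"
        kms_master_key_id = aws_kms_key.site_key.arn
      }
    }
  }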

 

 

  website {
    index_document = "index.html"
    error_document = "error.html"
  }

In this section we have the "website" configuration. These are the files we will use for our default document, "index.html", and for our errors, or 404 page, "error.html".

Setting an "error_document" also prevents outside browsers from being handed a generic S3 error message, so visitors can find their way back to your web site/page more easily.
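The HTML files themselves still need to land in the bucket. If you want Terraform to push them too, a hedged sketch using the aws_s3_bucket_object resource looks like the following; the local file path is an assumption:

resource "aws_s3_bucket_object" "index" {
  bucket       = aws_s3_bucket.hostedsite01.id
  key          = "index.html"
  source       = "site/index.html"          # assumed local path to your page
  content_type = "text/html"
  etag         = filemd5("site/index.html") # re-uploads when the file changes
}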


#### RAW CODE ####

resource "aws_s3_bucket" "hostedsite01" {
  bucket = var.project_name
  acl    = "public-read"

  policy = templatefile("site-policy.json", { project_name = var.project_name} )

  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm = "AES256"
      }
    }
  }

  website {
    index_document = "index.html"
    error_document = "error.html"
  }
  tags = var.standard_tags
}


##### JSON FILE ##### 


{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicRead",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:GetObject",
        "s3:GetObjectVersion"
      ],
      "Resource": [
        "arn:aws:s3:::${project_name}/*"
      ]
    }
  ]
}



The JSON file above is the S3 bucket policy. It states that any principal (the public) is able to read the objects in the bucket, and if versioning is enabled, they will also be able to read the objects' versions.

The "${project_name}" is the variable that was passed in above when "templatefile" was called.
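As for the DNS section mentioned at the top, a minimal Route 53 sketch could look like the following. It assumes a var.zone_id variable for your hosted zone and that the bucket name matches the site's domain name, which S3 website hosting requires:

resource "aws_route53_record" "site" {
  zone_id = var.zone_id      # hypothetical variable holding your hosted zone ID
  name    = var.project_name # must match the bucket / domain name
  type    = "A"

  alias {
    name                   = aws_s3_bucket.hostedsite01.website_domain
    zone_id                = aws_s3_bucket.hostedsite01.hosted_zone_id
    evaluate_target_health = false
  }
}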


Sunday, January 10, 2021

Terraform deploy AWS S3 Bucket - Standard

First the video, second the full text.


 

 

Let's start with a simple S3 bucket and then progress to one for web hosting.
I know these are beginning steps, but S3 buckets are a foundational tool within the AWS infrastructure.

Let's start with the resource for an S3 bucket, 'aws_s3_bucket'.
Some quick caveats before starting:

  • Bucket names must be DNS compatible; no special characters
  • New bucket endpoints use the s3.region format, while older ones used s3-region
  • Bucket names must be 100% unique; you cannot use a name that already exists in another account


For our simple bucket we are going to use the code for a bucket that we would dump logs into, for an external application like Loggly or New Relic to pick up.
We will want to keep these for a limited time, and we do not need versioning, but we will turn it on for this example.

With Terraform we need to start with our resource name:

resource "aws_s3_bucket" "reference_name" {}


Replace “reference_name” with how you want to reference the bucket in other locations.

In our block, let's first give our bucket a name, and then the access control list.

bucket = "myfirstniftybucketforlogs"
acl    = "private"


You can give it any of the canned ACLs here, including private, public-read, public-read-write, or log-delivery-write.
Be aware that public-read-write can be dangerous; it is highly recommended to use explicit grant permissions instead, just to limit access.
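A minimal sketch of what grant blocks can look like inside the bucket resource (grant and acl cannot be combined on the same bucket, and this assumes a data "aws_canonical_user_id" "current" {} data source is declared):

  grant {
    id          = data.aws_canonical_user_id.current.id # your account's canonical user
    type        = "CanonicalUser"
    permissions = ["FULL_CONTROL"]
  }

  grant {
    type        = "Group"
    permissions = ["READ_ACP", "WRITE"]
    uri         = "http://acs.amazonaws.com/groups/s3/LogDelivery"
  }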


Versioning can be enabled (true) or disabled (false)

versioning {
  enabled = "true"
}


If you have requirements that use Server Side Encryption (SSE), here is where you would configure it. With this example we are going to use AES256, also known as the Amazon default encryption.
You do not have to use the AWS default encryption, but for our example and budget, we are going to use AES-256. Some other options are AWS KMS-managed keys or Customer Master Keys (CMKs). More can be read about them here, https://docs.aws.amazon.com/AmazonS3/latest/dev/bucket-encryption.html

 
  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm = "AES256"
      }
    }
  }




The next part is the magic part; it enables migrating data from one storage class to the next: the lifecycle_rule.

For our example here, we want two transition rules and an expiration rule.
Before setting the rules, you will need to make sure the rule is enabled and that it has a unique ID.
This will migrate the data from standard storage to infrequent-access storage after 30 days. Then after 60 days, it will migrate the data to Glacier storage.
The final step will be to remove the data completely from the bucket. Your data retention policies will dictate how long you need to keep the data, so consult your company's data retention policy before setting the expiration.

The storage_class can move from standard storage to the following storage classes:

  • STANDARD_IA - Standard storage with infrequent access
  • ONEZONE_IA - Single-zone infrequent access storage
  • GLACIER - Archive storage; objects take longer to retrieve from here
  • DEEP_ARCHIVE - Very infrequently accessed archive storage; retrieval takes much longer, but it is the lowest-cost option



lifecycle_rule {
  id      = "expire_data"
  enabled = "true"

  transition {
    days          = 30
    storage_class = "STANDARD_IA"
  }

  transition {
    days          = 60
    storage_class = "GLACIER"
  }

  expiration {
    days = 90
  }
}




#### RAW CODE ####

resource "aws_s3_bucket" "my_first_bucket" {
  bucket = "${var.project_name}-bucket"
  acl    = "private"

  versioning {
    enabled = "true"
  }

  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm = "AES256"
      }
    }
  }

  lifecycle_rule {
    id      = "expire"
    enabled = "true"

    transition {
      days          = 30
      storage_class = "STANDARD_IA"
    }

    transition {
      days          = 60
      storage_class = "GLACIER"
    }

    expiration {
      days = 90
    }
  }

  tags = var.standard_tags
}



Part 2 - The web hosting:
There will be another part for this shortly. It will go over how to use terraform to create a hosted AWS S3 web site.