First the video, then the full text below.
Let's start with a simple S3 bucket and then progress to one for web hosting.
I know these are beginning steps, but S3 buckets are a foundational tool within the AWS infrastructure.
We begin with the resource for an S3 bucket: `aws_s3_bucket`.
Some quick caveats before starting:
- Bucket names must be DNS compatible: no special characters.
- Newer S3 endpoints use the `s3.region` format, whereas older regions used `s3-region`.
- Bucket names must be globally unique; you cannot use a name that another account has already taken.
For our simple bucket, we will write the code for a bucket we can dump logs into, for an external application like Loggly or New Relic to pick up.
We only need to keep these logs for a limited time, and we do not need versioning, but we will turn it on for this example.
With Terraform, we need to start with our resource name.
Replace `reference_name` with how you want to reference the bucket in other parts of your configuration.
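As a minimal sketch, the empty resource block looks like this (`reference_name` is just a placeholder):

```hcl
resource "aws_s3_bucket" "reference_name" {
  # Bucket arguments will be filled in over the next steps.
}
```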
In our block, let's first give our bucket a name, and then the access control list.
You can use any of the canned ACLs here, including `private`, `public-read`, `public-read-write`, or `log-delivery-write`.
Be aware that `public-read-write` can be dangerous; it is highly recommended to use the `grant` permissions instead to limit access.
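As a sketch, both are set with plain arguments on the resource (the bucket name below is a made-up placeholder; yours must be globally unique):

```hcl
resource "aws_s3_bucket" "reference_name" {
  # DNS-compatible, globally unique bucket name (placeholder value).
  bucket = "my-example-log-bucket"

  # Private is the safest canned ACL for a log bucket.
  acl = "private"
}
```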
Versioning can be enabled (`true`) or disabled (`false`).
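Inside the `aws_s3_bucket` block, a sketch of the versioning block:

```hcl
  # Versioning is not required for short-lived logs,
  # but we enable it for this example.
  versioning {
    enabled = true
  }
```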
If you have requirements for Server-Side Encryption (SSE), here is where you would configure it. In this example we are going to use AES256, also known as the Amazon default encryption.
You do not have to use the AWS default encryption, which uses Amazon S3-managed keys (SSE-S3); the other option is SSE-KMS, which uses KMS Customer Master Keys. For our example and budget, though, AES-256 is enough. More can be read about them here: https://docs.aws.amazon.com/AmazonS3/latest/dev/bucket-encryption.html
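Inside the resource block, the SSE configuration for AES-256 looks like this:

```hcl
  # Server-side encryption with the S3-managed AES-256 default (SSE-S3).
  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm = "AES256"
      }
    }
  }
```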
The next part is the magic part: it migrates data from one storage tier to the next. This is the `lifecycle_rule` block.
For our example, we want two transition rules and an expiration rule.
Before setting the rules, make sure the rule is enabled and give it a unique ID. Ours will migrate the data from standard storage to infrequent-access storage after 30 days, then to Glacier storage after 60 days.
The final step is to remove the data from the bucket completely. Your data retention policies dictate how long you need to keep the data, so consult your company's retention policy before setting the expiration.
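Putting those rules together, a sketch of the lifecycle block looks like this (the rule ID and the 90-day expiration are example values; pick numbers that match your retention policy):

```hcl
  lifecycle_rule {
    id      = "log-rotation" # any unique ID
    enabled = true

    # After 30 days, move objects to infrequent-access storage.
    transition {
      days          = 30
      storage_class = "STANDARD_IA"
    }

    # After 60 days, move objects to Glacier.
    transition {
      days          = 60
      storage_class = "GLACIER"
    }

    # Example only: delete objects after 90 days.
    expiration {
      days = 90
    }
  }
```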
The `storage_class` can move objects from standard storage to the following storage classes:
- STANDARD_IA - standard storage with infrequent access
- ONEZONE_IA - infrequent-access storage kept in a single availability zone
- GLACIER - archive storage; retrievals take longer from here
- DEEP_ARCHIVE - for very rarely accessed data; retrievals take much longer, but it is better suited to large, long-lived archives
#### RAW CODE ####
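Assembled into one resource, the full example might look like the following. The bucket name, rule ID, and expiration days are placeholders; note that this inline style matches AWS provider 3.x, while newer provider versions split ACL, versioning, encryption, and lifecycle into separate resources.

```hcl
resource "aws_s3_bucket" "reference_name" {
  bucket = "my-example-log-bucket" # placeholder; must be globally unique
  acl    = "private"

  versioning {
    enabled = true
  }

  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm = "AES256"
      }
    }
  }

  lifecycle_rule {
    id      = "log-rotation"
    enabled = true

    transition {
      days          = 30
      storage_class = "STANDARD_IA"
    }

    transition {
      days          = 60
      storage_class = "GLACIER"
    }

    # Example retention only; consult your retention policy.
    expiration {
      days = 90
    }
  }
}
```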
Part 2 - The web hosting:
There will be another part for this shortly. It will go over how to use Terraform to create a hosted AWS S3 website.