
Tuesday, February 2, 2021

Terraform EC2 Instance

This is taken directly from the README on GitHub, so it is a bit redundant. 

There is also a YouTube Video Here  




EC2 Instance Example
This is a simple AWS EC2 instance deployment, following the Terraform guide.
It is an intentionally simplified deployment, with just a security group added for the basics.
 

Background
The instance will be deployed with the Terraform `aws_instance` resource, and it will be a basic instance. We also use a security group to allow HTTP and SSH access, ports 80 and 22.

We also get into the search feature of the `aws_ami` data source to pull a basic Amazon Linux 2 AMI.

Notice
If you are going to follow this template for any type of production use, please switch it to HTTPS instead of just HTTP. This was only done as an example.


The Security Group
We use an AWS security group to allow the access. For this example we will have two ingress rules and one egress rule.
The ingress rules will allow traffic from anywhere on ports 22 and 80.

Resource
We start with the `resource "aws_security_group"` block.
In the group, we will set the name, a description, and the VPC ID.
The `name` is what you will call the group, and it should be unique within the VPC.

The `description` should be something useful to help you find it later.

The `vpc_id` sets which VPC the group belongs to. You should set this explicitly if you have multiple VPCs under your account.

There are also `tags` under the main body; I like to put them at the bottom because the tags can change over time while the rest of the resource stays the same.

resource "aws_securty_group" "my_fancy_security_group" {}
  name =  "ssh-web"
  description = "SSH and web access"

  tags = {
      "Name" = "Value",
      "More Name" = "Another Value"
  }

Ingress
The ingress rules allow the flow of traffic in to the item behind this security group; in this example that is the EC2 instance.

Under the `ingress` block, the `cidr_blocks` list controls which source IPs the traffic can come from. This can be a single IP, a range, or a subnet.
You can group multiple CIDR blocks by separating them with commas.

cidr_blocks = [
    "10.1.1.0/24",
    "192.168.1.212/32",
    "192.168.2.215/32",
    "172.19.21.20/20"
]


The `description` is how you want to label this rule, and it is useful if this rule exists for a one-off reason you want to remember later.

The `from_port` and `to_port` are the ports that this rule covers; traffic is allowed on that port range. In general they should be the same port or the same range.

Next is the `protocol`, and this can be `tcp`, `udp`, `icmp`, `icmpv6`, or "-1", which stands for all protocols.

The `security_groups` section lists any other security groups whose traffic you wish to allow through this rule. It is optional.


  ingress = [
    {
      cidr_blocks = [
        "0.0.0.0/0",
      ]
      description      = ""
      from_port        = 22
      to_port          = 22
      protocol         = "tcp"
      security_groups  = []
    },
    {
      cidr_blocks = [
        "0.0.0.0/0",
      ]
      description      = ""
      from_port        = 80
      to_port          = 80
      protocol         = "tcp"
      security_groups  = []
    },
  ]

Egress
The `egress` items are the same as the `ingress` items listed above and they have been copied to reflect that.

For egress, the `cidr_blocks` control the destination IPs that traffic is allowed to go to. This can be a single IP, a range, or a subnet.
You can group multiple CIDR blocks by separating them with commas.

cidr_blocks = [
    "10.1.1.0/24",
    "192.168.1.212/32",
    "192.168.2.215/32",
    "172.19.21.20/20"
]


The `description` is how you want to label this rule, and it is useful if this rule exists for a one-off reason you want to remember later.

The `from_port` and `to_port` are the ports that this rule covers; traffic is allowed on that port range. In general they should be the same port or the same range.

Next is the `protocol`, and this can be `tcp`, `udp`, `icmp`, `icmpv6`, or "-1", which stands for all protocols.

The `security_groups` section lists any other security groups whose traffic you wish to allow through this rule. It is optional.

  egress = [
    {
      cidr_blocks = [
        "0.0.0.0/0",
      ]
      description      = ""
      from_port        = 0
      to_port          = 0
      protocol         = "-1"
      security_groups  = []
     }
  ]

AMI Data
For this we will be using a data source to look up which AMI we want for our basic instance; we will not be using many of the available options here.
We want to select the Amazon Linux 2 machine image.

Instead of `resource` we are going to use `data`, since we are looking up existing data and not creating any resources.

With a data source, the layout is similar to the way a resource is written.

data "aws_ami" "my_reference" {}


We want to select the most recent version of the AMI, so we want to set that flag to true.
most_recent = true

Next we want to use `filter {}` blocks to narrow down our choices. Multiple filter blocks can be used together to limit overlapping results.

You can find other filters by using the AWS CLI to describe images, or by looking at the AMI in the web console. They are the same filters you would use with the AWS CLI: `aws ec2 describe-images --owners self amazon --filters "Name=name,Values=amzn2-ami-hvm-2.0.2021*"`
filter {
    name   = "name"
    values = ["amzn2-ami-hvm-2.0.2021*"]
}

filter {
    name   = "architecture"
    values = ["x86_64"]
}
 

 

We will have to select an owner for the AMI that is used; this is a required value.
You can select multiple owners, but for this example there is no reason to list more than one.

owners = ["amazon"]


The EC2 Instance
This is the part that deploys a basic EC2 instance. It will not have anything on it; it is only a bare server, and you will have to manually install everything you want on it afterwards.

Again, we are using this just to deploy the instance; there are a lot of other options available with this resource.

The resource type will be `aws_instance`
resource "aws_instance" "my_fancy_instance" {}

Next is choosing the arguments to use, and since this is a simple deployment it will be straightforward.

For the `ami` we will use `data.aws_ami` from above, since we have that ID now.
ami = data.aws_ami.my_reference.id


With the instance type, choose what you really need for this instance, since over-provisioning can get expensive quickly.
If you use this as a module, or would like to reuse it with different instance types, it should be a variable.
A list of all AWS Instance types can be found here
instance_type = var.instance_type


Attach the security group from above at this point.
security_groups = [aws_security_group.my_fancy_security_group.id]


For the `key_name`, it will use a key pair that you have already provisioned and have access to. If you need to create a new key pair, it is recommended to do it before starting, being sure to download it and set the proper permissions as specified in the Amazon documentation.
key_name = "my_fancy_keypair"


This step is optional if you have a VPN configured or you do not need to reach this machine from outside the VPC, but we want to give this instance a public address so that we can connect to it remotely.
It is also not needed if you are behind an ELB and have your logs going to an S3 bucket or a central logging server. This value can be true or false.
associate_public_ip_address = true


We also need to bind this instance to a subnet. You can hard-code the subnet ID or use other tricks to look it up; in this case, we will use a variable, as we did with the VPC ID above.

subnet_id = "subnet-XXXXXXXXXX"

Finally, here is where the tags are set. Since we are using our standard tags, the raw code uses a variable, but they can also be entered manually:

tags = {
    "Name" = "MyInstance",
    "Location" = "us-west-2"
}



RAW CODE
Here is the raw code, which is also in the main.tf in the repository.

data "aws_ami" "aws_linux" {
  most_recent = true

  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-2.0.2021*"]
  }

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }

  owners = ["137112412989"]
}

resource "aws_security_group" "ec2_instance_sg" {
  name        = "ssh-web"
  description = "SSH and web access GLOBAL!"
  vpc_id      = var.vpc_id

  ingress = [
    {
      cidr_blocks = [
        "0.0.0.0/0",
      ]
      description      = ""
      from_port        = 22
      to_port          = 22
      ipv6_cidr_blocks = []
      prefix_list_ids  = []
      protocol         = "tcp"
      security_groups  = []
      self             = false
    },
    {
      cidr_blocks = [
        "0.0.0.0/0",
      ]
      description      = ""
      from_port        = 80
      to_port          = 80
      ipv6_cidr_blocks = []
      prefix_list_ids  = []
      protocol         = "tcp"
      security_groups  = []
      self             = false
    },
  ]

  egress = [
    {
      cidr_blocks = [
        "0.0.0.0/0",
      ]
      description      = ""
      from_port        = 0
      to_port          = 0
      ipv6_cidr_blocks = []
      prefix_list_ids  = []
      protocol         = "-1"
      security_groups  = []
      self             = false
    }
  ]

  tags = var.standard_tags
}

resource "aws_instance" "ec2_instance" {
  ami                         = data.aws_ami.aws_linux.id
  instance_type               = var.instance_type
  security_groups             = [aws_security_group.ec2_instance_sg.id]
  key_name                    = var.ssh_key_name
  associate_public_ip_address = true
  subnet_id                   = var.subnet_id

  tags = var.standard_tags
}
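
The raw code above references a handful of variables (vpc_id, instance_type, ssh_key_name, subnet_id, standard_tags) that live in a separate variables file. A minimal sketch of what those declarations could look like is below; the types and the t3.micro default are assumptions for illustration, not values pulled from the repository. An output for the public IP is included so you can grab the address to connect to after the apply.

variable "vpc_id" {
  description = "VPC to deploy the instance into"
  type        = string
}

variable "subnet_id" {
  description = "Subnet for the instance"
  type        = string
}

variable "instance_type" {
  description = "EC2 instance type"
  type        = string
  default     = "t3.micro"
}

variable "ssh_key_name" {
  description = "Name of an existing EC2 key pair"
  type        = string
}

variable "standard_tags" {
  description = "Tags applied to every resource"
  type        = map(string)
  default     = {}
}

# Handy for connecting once the apply finishes.
output "instance_public_ip" {
  value = aws_instance.ec2_instance.public_ip
}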

 



Monday, January 18, 2021

Terraform s3 Hosted site


 

 

In the last one we did a terraform resource dissection of the aws_s3_bucket

In today's fun-filled excitement we are going to look a little farther down that rabbit hole and configure a simple hosted site using the S3 bucket. This episode differs from the attached video by adding the DNS section to the code (sketched at the end of this post), but everything else remains the same. 

 

Again with the catches to having a bucket: 

  • Bucket names must be DNS compatible. No special characters
  • New bucket endpoints use the s3.region format, while older ones used s3-region 
  • Bucket names must be 100% unique; you cannot use a name that another account is already using

 


resource "aws_s3_bucket" "reference_name" {}

Starting with a general resource

Replace “reference_name” with how you want to reference the bucket in other locations. 

 

  bucket = var.project_name
  acl    = "public-read"

In our block we want to start by giving the bucket a name and an access control list (ACL) permission.  

We have set the ACL permissions to "public-read"; this will allow everyone to read the contents of the bucket and any web pages that you have in the bucket. Remote web browsers cannot list the general contents of a bucket, only fetch explicit files directly. 

For the policy section we are going to use a "templatefile" to put a custom attribute in for the bucket name. 

 

  policy = templatefile("site-policy.json", { project_name = var.project_name} )

This will allow us to use this template for other projects because we are not hard coding information. 

For the template file, the call reads as templatefile("filename.json", { first_variable = var.value, second_variable = var.value2 }). 

You are also able to use a map for the values in the template. More information can be found here
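
As a rough sketch of the map form, the values can be pulled out into a local (or a map variable) and passed straight to templatefile. The local name here is made up for illustration; the keys still have to match what the template references.

locals {
  site_policy_vars = {
    project_name = var.project_name
  }
}

# Inside the bucket resource, the policy line then becomes:
#   policy = templatefile("site-policy.json", local.site_policy_vars)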


  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm = "AES256"
      }
    }
  }


If you have requirements that use server-side encryption, SSE, here is where you would configure it. With this example we are going to use AES256, which is the Amazon default (SSE-S3, Amazon S3 managed keys).
You do not have to use the AWS default encryption, but for our example and budget, we are going to use AES-256. The other main option is AWS KMS customer master keys (aws:kms). More can be read about them here

 

 

  website {
    index_document = "index.html"
    error_document = "error.html"
  }

In this section we have the "website" files. These are the files we will use for our default document, "index.html", and for our errors, the 404 page, "error.html". 

Setting an "error_document" also keeps visitors from being handed a generic S3 error message, so they can get back to your web site/page more easily. 


#### RAW CODE ####

resource "aws_s3_bucket" "hostedsite01" {
  bucket = var.project_name
  acl    = "public-read"

  policy = templatefile("site-policy.json", { project_name = var.project_name} )

  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm = "AES256"
      }
    }
  }

  website {
    index_document = "index.html"
    error_document = "error.html"
  }
  tags = var.standard_tags
}


##### JSON FILE ##### 


{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicRead",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:GetObject",
        "s3:GetObjectVersion"
      ],
      "Resource": [
        "arn:aws:s3:::${project_name}/*"
      ]
    }
  ]
}



The JSON file above is the S3 bucket policy. It states that anyone is able to read the objects in the bucket, and if versioning is enabled, they will also be able to read the objects' versions. 

The "${project_name}" is the variable that was set above when the "templatefile".


Sunday, January 10, 2021

Terraform deploy AWS S3 Bucket - Standard

First the video, second the full text.


 

 

Let's start with a simple S3 bucket and then progress to one for web hosting.
I know these are beginning steps, but S3 buckets are a foundational tool within the AWS infrastructure.

Let's start with the resource for an S3 bucket, `aws_s3_bucket`.
Some quick caveats before starting,

  • Bucket names must be DNS compatible. No special characters
  • New bucket endpoints use the s3.region format, while older ones used s3-region 
  • Bucket names must be 100% unique; you cannot use a name that another account is already using


For our simple bucket we are going to use the code for a bucket that we would dump logs to, for an external application like Loggly or New Relic to pick up the logs.
We will want to keep these for a limited time, and we do not really need versioning, but we will turn it on for this example.

With Terraform we need to start with our resource block,

resource "aws_s3_bucket" "reference_name" {}


Replace “reference_name” with how you want to reference the bucket in other locations.

In our block, let's first give our bucket a name, and then the access control list.

bucket = "myfirstniftybucketforlogs"
acl    = "private"


You can give it any of the canned ACLs here, and that includes private, public-read, public-read-write, or log-delivery-write.  
Be aware that public-read-write can be dangerous; it is highly recommended to use explicit grant permissions instead, just to limit access (a rough sketch follows).
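
As a sketch of that grant-based approach (grants and a canned acl cannot be combined on the same bucket), public read access plus full control for your own account could look like this; the bucket name and the data source lookup are only for illustration:

data "aws_canonical_user_id" "current" {}

resource "aws_s3_bucket" "granted_bucket" {
  bucket = "my-granted-bucket"

  # Anyone may read objects, nothing more.
  grant {
    type        = "Group"
    permissions = ["READ"]
    uri         = "http://acs.amazonaws.com/groups/global/AllUsers"
  }

  # Your own account keeps full control.
  grant {
    id          = data.aws_canonical_user_id.current.id
    type        = "CanonicalUser"
    permissions = ["FULL_CONTROL"]
  }
}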


Versioning can be enabled (true) or disabled (false)

versioning {
   enabled = "true"
}


If you have requirements that use server-side encryption, SSE, here is where you would configure it. With this example we are going to use AES256, which is the Amazon default (SSE-S3, Amazon S3 managed keys).
You do not have to use the AWS default encryption, but for our example and budget, we are going to use AES-256. The other main option is AWS KMS customer master keys (aws:kms); a sketch of that variant is shown after the block below. More can be read about them here, https://docs.aws.amazon.com/AmazonS3/latest/dev/bucket-encryption.html

 
  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm = "AES256"
      }
    }
  }
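
If you needed KMS instead of the S3-managed default, the same block could point at a customer managed key. The aws_kms_key.log_bucket resource here is hypothetical; it is only to show where the key ARN goes:

  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm     = "aws:kms"
        kms_master_key_id = aws_kms_key.log_bucket.arn   # hypothetical KMS key resource
      }
    }
  }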




The next part is the magic part: it enables migrating data from one storage tier to the next, the `lifecycle_rule`.

For our example here, we want 2 transition rules and an expiration rule.
Before setting the rules, you will need to make sure the rule is enabled and that it has a unique ID. 
This will migrate the data from standard storage to infrequent-access storage after 30 days. Then after 60 days, it will migrate the data to Glacier storage.
The final step will be to remove the data completely from the bucket. Your company's data retention policy will dictate how long you need to keep the data, so consult it before setting the expiration.

The `storage_class` can move objects from standard storage to the following storage classes:

  • STANDARD_IA - Standard storage for infrequently accessed data
  • ONEZONE_IA - Single-availability-zone infrequent access storage
  • GLACIER - Archive storage; objects will take longer to retrieve from here 
  • DEEP_ARCHIVE - The lowest-cost archive tier; retrieval takes much longer, better for data you rarely, if ever, need back



lifecycle_rule {
  id      = "expire_data"
  enabled = "true"

  transition {
    days          = 30
    storage_class = "STANDARD_IA"
  }

  transition {
    days          = 60
    storage_class = "GLACIER"
  }

  expiration {
    days = 90
  }
}




#### RAW CODE ####

resource "aws_s3_bucket" "my_first_bucket" {
  bucket = "${var.project_name}-bucket"
  acl    = "private"

  versioning {
    enabled = "true"
  }

  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm = "AES256"
      }
    }
  }

  lifecycle_rule {
    id      = "expire"
    enabled = "true"

    transition {
      days          = 30
      storage_class = "STANDARD_IA"
    }

    transition {
      days          = 60
      storage_class = "GLACIER"
    }

    expiration {
      days = 90
    }
  }

  tags = var.standard_tags
}



Part 2 - The web hosting:
There will be another part for this shortly. It will go over how to use terraform to create a hosted AWS S3 web site. 


Thursday, September 10, 2020

Windows Application setup - using chocolatey

 


 

Today I am using Chocolatey to install the basic everyday apps I use on my Windows DevOps desktop.
Chocolatey is a package manager for Windows that works in the same fashion as brew does for macOS. It is a pretty lightweight set of PowerShell commands and uses pre-compiled packages for Windows. It also works on older versions of Windows, so if you are hanging onto that Windows 7 installation, this should help.

First we want to install Chocolatey itself, and that is done within PowerShell. 
You want to run PowerShell 'As Administrator' so that you have privileged access. 
The next step is to check that you are able to run signed code:

Get-ExecutionPolicy

If the above command returns "Restricted", run

Set-ExecutionPolicy AllSigned

This will allow the running of all signed code. 

(There is also a video walkthrough of this post here: https://youtu.be/xn5-1fbiTRY)

Now we are able to install Chocolatey, and to do that, we run the command

Set-ExecutionPolicy Bypass -Scope Process -Force; [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.ServicePointManager]::SecurityProtocol -bor 3072; iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))

Now we can install the apps. 
To install an application, you can use the following command

choco install vscode
 
 
The apps I am installing are:

Using Chocolatey allows you to install the apps from just the PowerShell prompt and makes it a bit simpler to use.
You can also stack the apps instead of installing one by one:
choco install git python awscli

This is a huge time saver as well since it goes directly from one app to the next. 
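
One small addition worth knowing: Chocolatey prompts for confirmation on each package unless you pass the -y flag, so for an unattended run the stacked command would be

choco install git python awscli -y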

 

Overall I think Chocolatey is a great resource for managing application installs on standalone workstations or servers. I have only used it for "one off" installs, so I cannot speak to it beyond that. 

 

Tuesday, August 4, 2020

MacMini - Day 1

I honestly forgot that I should write blog posts about my videos. Also I am not overly happy with how this video looks or feels so I will be playing with that as time goes on.



So in this video we do a few basic things: re-arrange the dock and install a few applications.
oh-my-zsh was installed using the curl command to fetch the installation script and then run it.

As an added bonus, if you are on a Mac, you will get an error about

Insecure completion-dependent directories detected:

To resolve that issue just run the command it gives you on the directory it shows. In my case it was

compaudit| xargs chmod g-w,o-w /usr/local/share/zsh

Then the error warning about not being able to load completion rules went away.
 
Install instructions:

$ sh -c "$(curl -fsSL https://raw.github.com/ohmyzsh/ohmyzsh/master/tools/install.sh)" 


With brew again we have a curl to a script for installation, and this one did not give any issues.

/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install.sh)"

Brew is used to install a lot of items we will need from the command line; it is a simple package manager kind of like apt or yum in the Linux world.

It comes in really handy and has a lot of support for installing just about any FOSS application.

Next we want to install the Amazon Web Services Command Line Interface (AWS CLI). It will be used more later, but for right now, we just want to install it.
Installing the AWS CLI from brew will also install Python 3.8.5. This can be handy later, or it could mess up anything you were already working on within Python.
brew install awscli


With Terraform we will be using it to deploy the environment to AWS. It will be used to limit mistakes and make results repeatable. I will also be uploading my code to GitHub after it is completed.

brew install terraform
With VS Code we will be using it as an Integrated Development Environment for things like Terraform and Docker. We will also use it for a lot of general editing.
brew cask install visual-studio-code

With Docker we did a standard install to get the desktop services, Docker Desktop; this is more for ease of use and container configuration.
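
If you would rather pull Docker Desktop in through Homebrew as well, instead of the manual download used here, the cask install (using the same older brew cask syntax shown above) would be

brew cask install docker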


  • Hardware
If you are curious about the hardware, it is a 2012 MacMini i7, 16GB RAM, 1TB SSD. This hardware should be good enough to run a few containers. There is a chance that this MacMini will get Boot Camp and have Windows 10 installed just to show counterpoints, but it depends on audience requests.

Saturday, May 16, 2020

Brush this thing off

So a little bit of an update. I am going to do a few videos for Terraform and see how that goes.
I have been on a keyboard kick lately, and ended up with another keyboard. I will post it on Twitter, since I should post more there. I will also be a little more active there and possibly Instagram also. Instagram will depend on if I can take pics and tag them correctly.
I should set up a user account on my computer to do all this devops stuff, but let's see if I do it, or remain lazy.
Also, while I am here, why not update this blog's theme. I mean it is my blog and it should reflect things that I enjoy. So let's get spicy.


Old Look:

(screenshot of the old blog theme)

Vs whatever I have chosen to go with around this.

There have been a few major life events lately, so let me take those as a reason to be more motivated; to quote Casey Neistat, "Do More".


There is also the question of why do I have a blog? I never update it. Well I think I will start doing that.
I am going to attempt to do a post every Monday and Thursday for the next 4 weeks. I think that will be a fun little challenge to see if I really want to go down this route.

Starting Monday, I will go ahead with the terraform VPC, and post the basics of that.


Wednesday, April 6, 2016

Editing from the past

Greetings! If this was 2008 this workstation would be really kick ass!
But it is not 2008, or even 2010; maybe in 2012 it would "just be enough" for some people, but this is how I will be doing the video edits from here on out.

First the system stats:
It is a custom-built system that was nice back in the day.

  • Motherboard GA-EP35-DS3L (link)
  • Intel Core 2 Duo E8400 (3GHz)
  • 6GB of Patriot RAM (2x2GB 2x1GB)
  • Toshiba 320GB 7200 HDD
  • Pioneer DVD-RW
  • Integrated Realtek ALC888 audio chipset
  • LEPA 500W PSU 
  • GeForce 9500 GT 1GB (here)
  • IOGear 4.0 USB Bluetooth (here)
  • Wireless: TP-Link USB Wireless 
  • Dell Keyboard (the generic one)
  • Logitech Laser Mouse 
  • Blue Snowball iCE (because BALLA!)
  • Microsoft LifeCam Cinema (Old School, sucks actually) 
  • Monitors: 2x Apple Cinema Display 20" (1680x1050 each)
  • Monster Heat sink (NZXT no link) 
  • Various case fans 
  • Cheap case 



Why would I try this with such old hardware? I mean seriously, not even an SSD! It is simple: I had this hardware laying around, and I spent only $10 on this update, which went to the wifi adapter.
I know, even back in 2008 when the system was built the first time, the parts total was around $500.
In fact, the only things that are not from the original system are the hard drive, the monster heat sink, and the power supply. Time killed all the original ones.

Another reason to do it: because I can. It is a very old system and it does show a few signs of wear and tear, but it should be a good start for some ghetto video editing and time wasting. YouTube, anyone?

So far the experience has been that it is slow. It crashes every once in a while, but it does work, and it does edit videos.


Did I mention that it is running El Capitan? Yes, fellow readers, this is a hackintosh in a pretty true sense of the word. There were some issues; the audio was a bit difficult to configure, but after that was done everything else "just worked".

The system definition used is a Macmini5,1 (2011 Mac mini), and the audio problem was handled by using the Clover bootloader and forcing it to load the kext on boot.
Audio issue:
  To solve the audio issue, I mounted the Clover partition, then added AppleHDA.kext directly to the EFI/CLOVER/kexts/10.11 folder.
After that, everything worked.
For Wi-Fi I am using the TP-Link USB wireless adapter; it was cheap and it connects to the internet.

As an added bonus, it takes 5 minutes to render a 2 minute video. Overall though, I don't think it is a bad little system for general use.