AWS Network and Terraform – Part One

The word out there is that the Public Cloud is the solution to all problems… a bit too strong, right? As network engineers we tend to have a soft spot for our on-premises datacenter, where we have control over all network matters, but I strongly believe we should embrace the Cloud: it is an interesting, reliable and fun solution to learn and to use in MANY different use-cases. AWS is by far the leading IaaS provider in the market, and that’s why I decided to learn my way into the Public Cloud with them. This post won’t cover AWS networking fundamentals; for that I recommend the two-part blog series by Nick Matthews, “Amazon VPC for On-Premises Network Engineers” – Part 1 and Part 2.

Terraform is a simple and powerful Infrastructure as Code tool that can be used to template, plan and create infrastructure across a multitude of providers, including but not limited to AWS. I suggest a look at the Terraform Getting Started guide for a better idea of how it works; the documentation is excellent, and there’s no need to be a programmer to understand it.

Let’s play around with it so we can learn more.

Defining your Modules

Modules can be thought of as functions: you define a module once and reuse it as many times as you want, which avoids repetitive code and gives us a cleaner, easier-to-manage configuration. In our example, I will have two main folders: one called modules and another called projects, which is where each new project will be placed, with all its resources sourced from the modules folder. Let’s start by defining the module. Under “/modules/network-stack” we will create the Terraform files (.tf) that contain our templated code to spin up the AWS networking stack.
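
As a rough sketch, the folder layout used throughout this post looks like this (file names match the snippets shown below):

```
.
├── modules
│   └── network-stack
│       ├── vpc.tf
│       ├── network.tf
│       ├── subnet.tf
│       └── security.tf
└── projects
    └── eu-west-1
        └── main.tf
```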

“../modules/network-stack/vpc.tf”

resource "aws_vpc" "vpc" {
  cidr_block           = "${var.vpc_cidr}"
  enable_dns_support   = true
  enable_dns_hostnames = true
  tags {
    Name = "${var.vpc_name}"
  }
}


“../modules/network-stack/network.tf”

# Declare the data source; this returns all AZs available in the specified region
data "aws_availability_zones" "available" {}

# Define the resources: IGW, NAT GWs and route tables
resource "aws_internet_gateway" "igw" {
  vpc_id = "${aws_vpc.vpc.id}"
  tags {
    Name = "igw-terraform-lab"
  }
}

# Define the public route tables, pointing the default route at the IGW.
# Note: the IGW is attached per VPC, not per AZ, so only one is needed.
resource "aws_route_table" "rt-public-a" {
  vpc_id = "${aws_vpc.vpc.id}"
  tags {
    Name = "rt-public-a"
  }
  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = "${aws_internet_gateway.igw.id}"
  }
}
resource "aws_route_table" "rt-public-b" {
  vpc_id = "${aws_vpc.vpc.id}"
  tags {
    Name = "rt-public-b"
  }
  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = "${aws_internet_gateway.igw.id}"
  }
}

# Define the NAT GWs to be used by the private subnets; they are placed in the
# public subnets, one NAT GW per AZ.
# First create the EIPs to be attached to the NAT GWs.
resource "aws_eip" "eip-nat-a" {
  vpc = true
}
resource "aws_eip" "eip-nat-b" {
  vpc = true
}
resource "aws_nat_gateway" "ngw-a" {
  allocation_id = "${aws_eip.eip-nat-a.id}"
  subnet_id     = "${aws_subnet.net-public-a.id}"
  depends_on    = ["aws_internet_gateway.igw"]
}
resource "aws_nat_gateway" "ngw-b" {
  allocation_id = "${aws_eip.eip-nat-b.id}"
  subnet_id     = "${aws_subnet.net-public-b.id}"
  depends_on    = ["aws_internet_gateway.igw"]
}

# Define the private route tables, pointing the default route at the NAT GWs.
resource "aws_route_table" "rt-private-a" {
  vpc_id = "${aws_vpc.vpc.id}"
  tags {
    Name = "rt-private-a"
  }
  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = "${aws_nat_gateway.ngw-a.id}"
  }
}
resource "aws_route_table" "rt-private-b" {
  vpc_id = "${aws_vpc.vpc.id}"
  tags {
    Name = "rt-private-b"
  }
  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = "${aws_nat_gateway.ngw-b.id}"
  }
}

“../modules/network-stack/subnet.tf”

# Create each subnet, pin it to an AZ and associate it with a route table.
resource "aws_subnet" "net-public-a" {
  vpc_id            = "${aws_vpc.vpc.id}"
  cidr_block        = "${var.subnet-public-a}"
  availability_zone = "${data.aws_availability_zones.available.names[0]}"
  tags {
    Name = "subnet-public-a"
  }
}
resource "aws_route_table_association" "pub-rt-subnet-a" {
  subnet_id      = "${aws_subnet.net-public-a.id}"
  route_table_id = "${aws_route_table.rt-public-a.id}"
}
resource "aws_subnet" "net-public-b" {
  vpc_id            = "${aws_vpc.vpc.id}"
  cidr_block        = "${var.subnet-public-b}"
  availability_zone = "${data.aws_availability_zones.available.names[1]}"
  tags {
    Name = "subnet-public-b"
  }
}
resource "aws_route_table_association" "pub-rt-subnet-b" {
  subnet_id      = "${aws_subnet.net-public-b.id}"
  route_table_id = "${aws_route_table.rt-public-b.id}"
}
resource "aws_subnet" "net-private-a" {
  vpc_id            = "${aws_vpc.vpc.id}"
  cidr_block        = "${var.subnet-private-a}"
  availability_zone = "${data.aws_availability_zones.available.names[0]}"
  tags {
    Name = "subnet-private-a"
  }
}
resource "aws_route_table_association" "priv-rt-subnet-a" {
  subnet_id      = "${aws_subnet.net-private-a.id}"
  route_table_id = "${aws_route_table.rt-private-a.id}"
}
resource "aws_subnet" "net-private-b" {
  vpc_id            = "${aws_vpc.vpc.id}"
  cidr_block        = "${var.subnet-private-b}"
  availability_zone = "${data.aws_availability_zones.available.names[1]}"
  tags {
    Name = "subnet-private-b"
  }
}
resource "aws_route_table_association" "priv-rt-subnet-b" {
  subnet_id      = "${aws_subnet.net-private-b.id}"
  route_table_id = "${aws_route_table.rt-private-b.id}"
}

# Output variables
output "subnet_pub_a_id" {
  value = "${aws_subnet.net-public-a.id}"
}
output "subnet_priv_a_id" {
  value = "${aws_subnet.net-private-a.id}"
}
output "subnet_pub_b_id" {
  value = "${aws_subnet.net-public-b.id}"
}
output "subnet_priv_b_id" {
  value = "${aws_subnet.net-private-b.id}"
}

“../modules/network-stack/security.tf”

# Every VPC requires a NACL. NACLs are stateless, which makes them harder to
# manage, so we allow everything here and keep the more granular control in the
# security groups, which are stateful.
resource "aws_network_acl" "nacl-all" {
  vpc_id = "${aws_vpc.vpc.id}"
  egress {
    protocol   = "-1"
    rule_no    = 2
    action     = "allow"
    cidr_block = "0.0.0.0/0"
    from_port  = 0
    to_port    = 0
  }
  ingress {
    protocol   = "-1"
    rule_no    = 1
    action     = "allow"
    cidr_block = "0.0.0.0/0"
    from_port  = 0
    to_port    = 0
  }
  tags {
    Name = "nacl-terraform-lab"
  }
}

# Define a SG to be applied to our public jump box instance; it allows inbound SSH only.
resource "aws_security_group" "sg-untrust" {
  name        = "frontend-terraform"
  description = "allow inbound ssh only"
  vpc_id      = "${aws_vpc.vpc.id}"
  tags {
    Name = "sg-untrust"
  }
  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

# Define a trust SG that will be applied to our private instances; it allows
# traffic from the whole VPC CIDR.
resource "aws_security_group" "sg-trust" {
  name        = "backend-terraform"
  description = "only connections from the local vpc"
  vpc_id      = "${aws_vpc.vpc.id}"
  tags {
    Name = "sg-trust"
  }
  ingress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["${var.vpc_cidr}"]
  }
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

output "sg_id" {
  value = "${aws_security_group.sg-untrust.id}"
}
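
One piece not shown above is the declaration of the variables the module expects. A minimal variables.tf for the module could look like the sketch below; the variable names match the var.* references used in the snippets, while the descriptions are my own additions:

```hcl
# "../modules/network-stack/variables.tf"
variable "vpc_cidr" {
  description = "CIDR block for the VPC"
}
variable "vpc_name" {
  description = "Name tag applied to the VPC"
}
variable "subnet-public-a" {
  description = "CIDR for the public subnet in the first AZ"
}
variable "subnet-public-b" {
  description = "CIDR for the public subnet in the second AZ"
}
variable "subnet-private-a" {
  description = "CIDR for the private subnet in the first AZ"
}
variable "subnet-private-b" {
  description = "CIDR for the private subnet in the second AZ"
}
```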

With that we have a complete network-stack module, with all the pieces necessary to bring up an AWS VPC: two public and two private subnets across two different AZs, an IGW providing Internet access to the public subnets, NAT gateways providing outbound Internet access for private instances, and security groups adding Layer 3/4 security to the instances. We are now ready to use this module as many times as we want in our projects.

To demonstrate that, let’s create a folder called eu-west-1 inside our projects folder. There we are going to call the network-stack module we created above, pass in variable values and run Terraform over this project, which should create our AWS network stack in a matter of minutes.

“../projects/eu-west-1/main.tf”

# Defines the provider to be used and the region in which to create the stack
provider "aws" {
  region = "eu-west-1"
}
module "network-stack" {
  # Configuration parameters
  vpc_cidr         = "10.250.0.0/24"
  vpc_name         = "netoops-lab"
  subnet-public-a  = "10.250.0.0/26"
  subnet-public-b  = "10.250.0.64/26"
  subnet-private-a = "10.250.0.128/26"
  subnet-private-b = "10.250.0.192/26"

  # Do not edit this part; it points to the network-stack module folder.
  # All the resources defined in the module will receive the variable values
  # above and create the stack for us.
  source = "../../modules/network-stack"
}
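
The outputs we defined in subnet.tf and security.tf become available to the project as module attributes. As a hypothetical example (the aws_instance resource and its AMI value are illustrative, not part of the stack above), a jump box could be placed into the first public subnet like this:

```hcl
# "../projects/eu-west-1/jumpbox.tf" (illustrative, not part of the stack above)
resource "aws_instance" "jumpbox" {
  ami                    = "ami-12345678" # placeholder AMI ID
  instance_type          = "t2.micro"
  subnet_id              = "${module.network-stack.subnet_pub_a_id}"
  vpc_security_group_ids = ["${module.network-stack.sg_id}"]
  tags {
    Name = "jumpbox-terraform-lab"
  }
}
```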

That’s all. All we need to do now is run terraform init, then terraform plan / terraform apply, and the base AWS network stack will be ready for use in a matter of minutes!


In upcoming posts I want to show how to use Terraform to manage our route tables, VPNs, cross-region (X)Swan IPsec tunnels, VPC peering, etc. Stay tuned.

The Git repo with all the configs can be found here.
