Terraform for Azure: Basics (11)


Additional Environments and Improved Modules

Hi there! Welcome to this, the last post in our series on using Terraform and Azure DevOps to deploy resources into Microsoft Azure.

In case you’ve not been here before: when I’m writing these posts I assume that you’re following along with me and that your code is in the same place as mine, which generally means you’ve worked through the previous episodes:

  1. Prerequisites
  2. Repositories and Pipelines
  3. Build Pipeline and Resource Deployment
  4. Pipeline Security and Governance
  5. State File Storage Security
  6. Pipeline Refinement
  7. Modules
  8. Directories and Stages
  9. YAML Pipeline Templates
  10. For / Each Loops

If something in this post doesn’t make sense, or is confusing, go back over the previous posts and see if the answer’s there. Alternatively, I’m human, I get things wrong, I forget things, and I always appreciate suggestions on how to improve my posts, so leave me a comment!

I’m just going to add my usual preface here: I’m writing this series as a memory aid for myself, with the hope that others on the same learning path can use it too. It’s not a detailed guide to Terraform, DevOps or Azure; it’s just an aid to understanding all the moving parts, which will help with understanding and troubleshooting infrastructure-as-code environments. It probably doesn’t meet all the best practices that a DevOps engineer would use, but it works for me. I’m writing in the middle of 2024, so by the time you read this things might have moved on a little, but I suspect the core concepts will remain the same.

This post is building on the previous ones where we created modules and prepared stages ready for new environments. We will add a new development environment and its variables, and we will build on our existing resources, adding more resource types and features such as tagging.

Add a Development Environment

If you were following along in our post on directories and stages, you may have already created a development .tfvars file. If so, it can be deleted, as we need to create a new one based on our production .tfvars. This is because the pipelines will fail if we don’t have variable blocks for each resource type we have code for in our root directory. So copy your production .tfvars file to a place that makes sense to you and give it a sensible name:

Now we need to decide what resources we will deploy in that environment. I’m just going to go for a single resource group for now, so I’ll change the relevant details for the first resource, then delete all other resource information (I could just comment it out), and make sure I leave the resource type blocks:

###########################################
# Resource Groups
###########################################

resource_groups = {
  #  First UKS Development Resource Group
  dev_uks_rg = {
    rg_name          = "example"
    rg_location      = "UK South"
    rg_region_prefix = "uks"
    rg_environment   = "dev"
    rg_id_suffix     = "01"
  }
}


###########################################
# VNets
###########################################

vnets = {

}

Although we are not creating any VNets, the block needs to remain, as it is declared in our variables .tf file and a value is expected (even if that value is just an empty map).
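As a reminder of why, the root variables .tf file declares these arrays along the following lines (based on the pattern used earlier in the series; your exact descriptions may differ):

```hcl
# Root variables .tf declarations that each environment's .tfvars must satisfy

variable "resource_groups" {
  description = "Array of Resource Groups to create"
  type        = any
  default     = {}
}

variable "vnets" {
  description = "Array of VNets to create"
  type        = any
  default     = {}
}
```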

We know that all our Terraform code is abstracted from the environments and their values, so we don’t need to change anything there, but we need to configure our DevOps pipelines to both validate the new environment and apply it. We do this by adding a stage where appropriate in each of our pipeline files. First of all we need a plan stage in our validation pipeline. We will copy the whole production plan stage to use as a template, and paste it in as the first stage (I like Dev to come before Prod, but choose what works for you), immediately after the “stages:” command:

We now need to edit the fields that make it specific to our environment. These would be:

  • Initial comment
  • Stage name
  • Environment
  • Commandoptions

The initial comment and stage name should be relatively simple:

stages:
# Development Plan Stage
- stage: devplan
  jobs:

The next, and most important, part is the back-end state file. This is the key to allowing us to have multiple environments created using the same repository and pipeline. We’re calling this “azurermkeyname”, and it currently refers to an existing .tfstate file that was created the first time our pipeline was run. Here, we need to replace the file name. When the pipeline runs, it will look for the file; if it doesn’t exist, it will create it, and if it does exist, it will use that state file to record the changes required to our environment. So let’s change the file name so that on the next pipeline run, a new .tfstate file is created:

azurermkeyname: 'terraformstatefiles/terraform_series_dev_environment.tfstate'

The environment should again be straightforward, but our commandoptions line needs amending in a couple of places, both the path to the .tfvars file and the output name. I’m also just adding a line break for clarity between our two stages:

                environment: DEV
                commandoptions: -var-file=config/development/do_series_dev.tfvars -lock=false -out=devplan

Everything else in our stage remains the same. If you now save the file, check your formatting, then commit and sync, you should see in your validation pipeline job a new stage run, where you can check the output of the plan task to make sure your resource group will be created:

So that’s our stage added to our validation pipeline, and a positive result in our validation plan jobs. What do we need to do to our apply pipeline? To be honest, it’s a little bit of the same, we copy our production apply stage and amend some details, but we also just copy the development plan stage we just created in our validation pipeline and put it in above the plan stage of our apply pipeline. Let’s start with that:

The other thing I’ve done is change the name of the production plan stage from “plan” to “prdplan”, and update my comments a little if you’ve been following along. The important point, though, is that I literally copied the whole development plan stage from my validation pipeline file into my apply pipeline file. Straight away I’ve now got a development plan stage and a production plan stage, before the production apply stage. As I mentioned before, it’s now a case of copying that production apply stage immediately below the production plan stage and making some edits. The edits are to the following values:

  • Comment(s)
  • Stage name
  • Azurermkeyname
  • Environment
  • Commandoptions
  • Applycommandoptions

The comment and stage name have been updated to show that this is a development stage:

# Development Apply Stage
- stage: devapply

The azurermkeyname, environment and commandoptions have been copied directly from the development plan stage. The applycommandoptions value matches the output of commandoptions:

                azurermkeyname: 'terraformstatefiles/terraform_series_dev_environment.tfstate'
                environment: DEV
                commandoptions: -var-file=config/development/do_series_dev.tfvars -lock=false -out=devplan
                applycommandoptions: devplan

So this new stage is now looking at the development back end state file and the development .tfvars file. We should now be in a state where we can save, check formatting, commit and sync, then if you’re happy with your plan, pull to main and watch our development environment’s resource group get created! Remember that you now have two stages that call for manual approval, and each one will have to be separately approved.

Adding Tags

So now we’re all knowledgeable about how to deploy resources into different environments in Azure. We know about pipelines and governance, automation, templates, modules, and all the other lovely stuff that Terraform gives us. It’s time to think about the little details now, the things that set Infrastructure as Code (IaC) apart, that save us time and make sure we don’t miss anything. I think a nice example of that is our tags. Tags make our searching easier, they help with cost management and control, they help us understand who’s responsible for certain resources and can give us high-level information about those resources. I want to work through creating a few tags, remembering to keep them abstracted from our core code, and maybe add a couple of other little bits of learning into the mix.

We’ll work in our development environment seeing as that’s got the fewest resources right now, and we’ll make it easier to understand what we’re doing by first of all destroying that single resource group. Comment out the resource group details in your development .tfvars file then save, commit and pull to main to make sure it doesn’t exist anymore:

resource_groups = {
#   #  First UKS Development Resource Group
#   dev_uks_rg = {
#     rg_name          = "example"
#     rg_location      = "UK South"
#     rg_region_prefix = "uks"
#     rg_environment   = "dev"
#     rg_id_suffix     = "01"
#   }
}

We’re going to be using variables to store our tag values, so as always we need to declare them in the variables .tf file in our root folder. While we’re there, I’m going to add in a variable for our environment, which might come in handy when naming resources. I’m just going to declare them under a commented section heading that I’m calling “Environment Variables”; the heading is only a comment and doesn’t even need to be there if you don’t want it:

####################################################
#  Environment variables
####################################################

variable "environment" {
  description = "Name of the environment"
  type        = string
}

variable "global_tags" {
  description = "Default tags which are merged into resource tags"
  type        = map(string)
  default     = {}
}

variable "environment_tags" {
  description = "Environment specific tags which are merged into resource tags"
  type        = map(string)
  default     = {}
}

Note a couple of things here. Firstly, I’ve provided a type for each of the variables, which, although it’s not required, really helps when somebody’s following after and wants to understand what type of data is allowed. I’m using the “string” type and the “map(string)” type, which is a collection of key/value pairs where each value is a string. It wouldn’t hurt to research each of the allowed types in Terraform and what they do. Secondly, I’ve added the environment variable at the top, then I’ve added variables for both “global_tags” and “environment_tags”. I want to keep separate the tags that I’ll be using across my project, and those that will vary depending on the environment.

After declaring the variables, we need to add some values in our development .tfvars file. Again I’ve put nice big comment blocks so that people understand what’s going on, and I’ve put in some sensible values – the values can be anything that will be of use to you:

##################
# Global Values
##################

global_tags = {
  Deployment-method = "Terraform"
}


###############################
# Environment Specific Values
###############################

environment = "dev"

environment_tags = {
  Business-unit           = "TBC"
  Cost-centre             = "TBC"
  Data-classification     = "Unrestricted"
  Environment-type        = "Development"
  Environment-owner       = "[email protected]"
}

Because you’ve declared your variables in the root directory, you’re also going to have to make sure you have entries in your production .tfvars file. I’ve just copied the blocks over from development and made a couple of changes to the values for now; we can put the correct values in when we’re ready to deploy a production environment:

Back to our development .tfvars file, and we need to un-comment our resource group data so that we can create our resource group again. Before we save the file though, let’s take advantage of our new “environment” variable. We currently have the following in our first resource group’s array block:

rg_environment   = "dev"

We have a variable for our whole environment declaring it as “dev”, so we can remove that line, leaving us with:

resource_groups = {
  #  First UKS Development Resource Group
  dev_uks_rg = {
    rg_name          = "example"
    rg_location      = "UK South"
    rg_region_prefix = "uks"
    rg_id_suffix     = "01"
  }
}

We now need to amend our code to take advantage of the new variables we’ve created. First up is the resource groups .tf file in our root directory. We will be amending the module block to send through information on the tags, and make use of that environment variable. Before we apply tags, let’s adjust the format of the resource group name. Instead of using the “each.value.rg_environment” string, we can use the environment variable of “var.environment” in its place:

rg_name     = format("%s-%s-%s-rg-%s", each.value.rg_region_prefix, var.environment, each.value.rg_name, each.value.rg_id_suffix)

For our tags, we’re just going to send through both variable values to our module, by adding the following two lines to our module block:

  environment_tags = var.environment_tags
  global_tags      = var.global_tags
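Taken together, and assuming the module source path and remaining argument names from earlier posts, the amended module block looks something like this:

```hcl
# Root resource groups .tf (sketch; source path and rg_location name assumed from earlier posts)
module "resource_groups" {
  source           = "./modules/resource_groups"
  for_each         = var.resource_groups
  rg_name          = format("%s-%s-%s-rg-%s", each.value.rg_region_prefix, var.environment, each.value.rg_name, each.value.rg_id_suffix)
  rg_location      = each.value.rg_location
  environment_tags = var.environment_tags
  global_tags      = var.global_tags
}
```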

Note that in my screenshot these lines are underlined in red. This is because our module is not expecting them at this point, and doesn’t have them declared as variables within itself. So this is our next task, to add the new variables to the variables .tf file in our resource groups module:

variable "global_tags" {
  description = "Default tags which are merged into resource tags"
  type        = map(string)
  default     = {}
}

variable "environment_tags" {
  description = "Environment specific tags which are merged into resource tags"
  type        = map(string)
  default     = {}
}

I’ve copied the variable declarations from our variables .tf file in our root directory. Before we actually get to the module code, we want to merge our two “tags” variables together, so we just have one set of tags to deploy. We do this using “local” variables, or “locals”. To learn more about these, have a look at the locals section of the Terraform documentation; it’s definitely worth understanding their power and how they can be used. At the top of our module’s variables .tf file, we will declare a local variable that merges both the “global_tags” and “environment_tags”:

locals {
  tags = merge(var.global_tags, var.environment_tags)
}

This new local “tags” variable can then be referenced in our module code. Before we do that though, I want to bring in a new tag which I find useful, of “Deployment Date”. We can use a bit of code to give us the exact timestamp a resource was deployed or created:

"Deployment Date" = formatdate("YYYY-MM-DD hh:mm ZZZ", timestamp())

If we use that as it is though, every time we make a change to the resource, it will be updated with a new timestamp. We therefore need to make use of another handy feature in Terraform: the lifecycle block. Using the “ignore_changes” option of that block, we effectively say “if this is a new deployment, add the tag; if it’s a change or update to the resource, don’t touch the tag”. What we will do in our module code, then, is add a new option for “tags”, with the values merging our local tags variable and the deployment date code, whilst adding our lifecycle block, leaving us with:

  tags     = merge(local.tags, { "Deployment Date" = formatdate("YYYY-MM-DD hh:mm ZZZ", timestamp()) })
  lifecycle {
    ignore_changes = [tags["Deployment Date"]]
  }

Note how the tags variable is called as “local.tags”, rather than “var.tags”. This is because it is declared under “locals” and is understood to be a variable that is only available to the local module.
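Putting those pieces together, and assuming the variable names used for this module in earlier posts, the resource groups module’s resource block now looks something like this:

```hcl
# Resource groups module (sketch; rg_name and rg_location variable names assumed)
resource "azurerm_resource_group" "rg" {
  name     = var.rg_name
  location = var.rg_location
  tags     = merge(local.tags, { "Deployment Date" = formatdate("YYYY-MM-DD hh:mm ZZZ", timestamp()) })

  lifecycle {
    ignore_changes = [tags["Deployment Date"]]
  }
}
```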

Another quick note here about modules. As always, any .tf files in the same folder are read as a single file by Terraform. It is common practice to put the “outputs” blocks into a separate “outputs.tf” file in the modules folder rather than having them in the main module .tf file. For readability, I prefer to keep them in the same file, but as always, choose what works for you!

Save your code, check your formatting, then commit and sync. Check your development plan stage in your validation pipeline’s job to see what will be deployed:

Also take a look at your production plan stage; note how the existing resource groups will have tags added, as you defined them when copying over the relevant sections to your production .tfvars file:

If you now pull this code to main and let the apply pipeline run, your new resource group will be created. This resource group will have all its tags including the deployment date, but although the existing resource groups will now have tags, because they were just changed, they won’t have that deployment date tag.

It’s probably a good time now to just match up your production .tfvars resource group creation data with development, by removing the extra “rg_environment” key from each entry, as that’s not used in name generation anymore:

VNets, Subnets and NSGs

As this is the last in our “basics” series, let’s just get everything up to a good level in both our development and production environments, and apply our learning to all our resource types, including a couple of new ones!

First of all let’s get a new VNet in our development environment, and update our VNets module with what we’ve learned with our resource groups. Our tag variables have already been declared in our root variables file, so we can skip that step. In our .tfvars file though, we need some data that can be used to create the VNet. We won’t just copy what’s in production, because we’ve since learned about name formatting, so we’ll build the entry out from scratch. Let’s start at the beginning inside our VNets array, by adding a comment and an array item name:

  #  UK South Development Hub VNet
  uks_dev_hub_vnet = {

We then want to have the unique part of the name:

    vnet_name           = "hub"

Next comes the address space and the region:

    vnet_address_space  = ["10.10.50.0/24"]
    vnet_location       = "UK South"

We’ll add our prefix and suffix for the naming convention:

    vnet_region_prefix  = "uks"
    vnet_id_suffix      = "01"

Lastly we’ll add the reference to the required resource group:

    vnet_resource_group = "dev_uks_rg"
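Putting all of those lines together, the complete VNets array in our development .tfvars file looks like this:

```hcl
vnets = {
  #  UK South Development Hub VNet
  uks_dev_hub_vnet = {
    vnet_name           = "hub"
    vnet_address_space  = ["10.10.50.0/24"]
    vnet_location       = "UK South"
    vnet_region_prefix  = "uks"
    vnet_id_suffix      = "01"
    vnet_resource_group = "dev_uks_rg"
  }
}
```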

Now we need to update the VNets .tf file in our root directory. As with our resource groups we want to make the VNet name a combination of multiple values, that matches our naming convention:

vnet_name           = format("%s-%s-%s-vnet-%s", each.value.vnet_region_prefix, var.environment, each.value.vnet_name, each.value.vnet_id_suffix)

We’ll also add our tags keys:

  environment_tags    = var.environment_tags
  global_tags         = var.global_tags
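Assembled, and assuming the source path and pass-through argument names from earlier posts, the root VNets module block looks something like this:

```hcl
# Root vnets .tf (sketch; source path and pass-through argument names assumed)
module "vnets" {
  source              = "./modules/vnets"
  for_each            = var.vnets
  vnet_name           = format("%s-%s-%s-vnet-%s", each.value.vnet_region_prefix, var.environment, each.value.vnet_name, each.value.vnet_id_suffix)
  vnet_address_space  = each.value.vnet_address_space
  vnet_location       = each.value.vnet_location
  vnet_resource_group = module.resource_groups[each.value.vnet_resource_group].resource_group_name
  environment_tags    = var.environment_tags
  global_tags         = var.global_tags
  depends_on          = [module.resource_groups]
}
```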

Now to the variables .tf file in our VNets module, where we’ll declare the tags variables (including the local variables):

locals {
  tags = merge(var.global_tags, var.environment_tags)
}

variable "global_tags" {
  description = "Default tags which are merged into resource tags"
  type        = map(string)
  default     = {}
}

variable "environment_tags" {
  description = "Environment specific tags which are merged into resource tags"
  type        = map(string)
  default     = {}
}

And finally we have our main VNets module .tf file, where we’ll modify the resource block. The first thing we’ll do is add in our tags block:

  tags     = merge(local.tags, { "Deployment Date" = formatdate("YYYY-MM-DD hh:mm ZZZ", timestamp()) })
  lifecycle {
    ignore_changes = [tags["Deployment Date"]]
  }

I like to enable VNet encryption on all my VNets so that if any virtual machines in there support it and have accelerated networking enabled, they will use it. To do this, after those tags details are added we will put in the code which enables VNet encryption:

  encryption {
    enforcement = "AllowUnencrypted"
  }
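With the tags, lifecycle and encryption details added, and assuming the module’s variable names from earlier posts, the VNet resource block looks something like this:

```hcl
# VNets module resource block (sketch; variable names assumed from earlier posts)
resource "azurerm_virtual_network" "vnet" {
  name                = var.vnet_name
  address_space       = var.vnet_address_space
  location            = var.vnet_location
  resource_group_name = var.vnet_resource_group
  tags                = merge(local.tags, { "Deployment Date" = formatdate("YYYY-MM-DD hh:mm ZZZ", timestamp()) })

  encryption {
    enforcement = "AllowUnencrypted"
  }

  lifecycle {
    ignore_changes = [tags["Deployment Date"]]
  }
}
```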

Another thing we can do when creating VNets is add custom DNS servers, rather than using the default Azure DNS server. You might want to do this where you have a DNS private resolver or domain controllers that you want to use to get your DNS information. You can do this either by adding the information to the .tfvars array and putting their creation in this resource block, or you can do it later using the Terraform “azurerm_virtual_network_dns_servers” resource block. I prefer the latter, but that means we need the ID of each VNet created, which in turn means we need an output statement in this module file:

output "vnet_id" {
  value       = azurerm_virtual_network.vnet.id
  description = "The id of the created VNet"
}
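Although we won’t wire it up in this post, using that output with the “azurerm_virtual_network_dns_servers” resource would look something like this (the DNS server addresses here are purely illustrative):

```hcl
# Hypothetical custom DNS assignment using the module's vnet_id output
resource "azurerm_virtual_network_dns_servers" "dns" {
  virtual_network_id = module.vnets["uks_dev_hub_vnet"].vnet_id
  dns_servers        = ["10.10.50.4", "10.10.50.5"] # illustrative addresses only
}
```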

We will also need to refer to the VNet’s name and resource group when creating other resources, so rather than naming them separately, we’ll output those too:

output "vnet_rg" {
  value       = azurerm_virtual_network.vnet.resource_group_name
  description = "The name of the created VNet's Resource Group"
}

output "vnet_name" {
  value       = azurerm_virtual_network.vnet.name
  description = "The name of the created VNet"
}

Don’t forget to update the format of your production .tfvars VNets array to match the new requirements of our naming code! Once that’s done, save, check formatting, commit and sync, then check that your plan stages all run OK.

Subnets

So that’s our VNets created, but we can’t really utilise them until they have subnets configured. We have a couple of options for creating the subnets: we can either add them in as parameters of our VNet creation, or we can create them separately and assign them to VNets as part of the creation. After trying both methods I feel that creating them separately works best for me, not necessarily for the actual creation and simplification of code, but for the re-use of subnet information and using the output of the VNets module to get that information. Again, what works for you is always best, and remember that there are lots of modules already out there that you can clone or refer to; this series is just about learning what goes on under the hood.

So what will the process be of creating subnets? As usual, we’ll be declaring variables, creating an array in our .tfvars along with the appropriate data, creating a main .tf file which will call a new module that we’ll also create. Let’s start with that variable declaration. We need to open the variables .tf file in our root directory and declare a variable for the array’s name:

###########################################
# Subnet Variables
###########################################

# Subnets Array Variable
variable "subnets" {
  description = "Array of Subnets to create"
  type        = any
  default     = {}
}

Next we need to create our array in our .tfvars file, with details of each subnet we want. We’ll start with the array itself, add the following to your development .tfvars file under your VNets array:

###########################################
# Subnets
###########################################

subnets = {

}

Remember to add the same block to your production .tfvars file so that your validate pipeline doesn’t fail! We then need to add the following keys and associated values into our development .tfvars subnets array:

  • Array entry name
  • Subnet name
  • Subnet address range
  • Subnet’s associated VNet
  • Subnet’s associated service endpoints

If you look at the requirements for the subnet creation resource blocks, you will find more information about each of these keys. I’m going to put in details for the first subnet as below:

  # UK South Development Hub VNet Azure Firewall Subnet
  uks_dev_hub_vnet_afw_subnet = {
    snet_name       = "AzureFirewallSubnet"
    snet_address    = ["10.10.50.0/26"]
    snet_vnet       = "uks_dev_hub_vnet"
    snet_sendpoints = []
  }

I’m then going to add another couple of subnets to the array, by copying the first one and just changing the item name, the subnet name and the address range:

  # UK South Development Hub VNet Azure Firewall Management Subnet
  uks_dev_hub_vnet_afwmgmt_subnet = {
    snet_name       = "AzureFirewallManagementSubnet"
    snet_address    = ["10.10.50.64/26"]
    snet_vnet       = "uks_dev_hub_vnet"
    snet_sendpoints = []
  }
  # UK South Development Hub VNet Gateway Subnet
  uks_dev_hub_vnet_gateway_subnet = {
    snet_name       = "GatewaySubnet"
    snet_address    = ["10.10.50.128/26"]
    snet_vnet       = "uks_dev_hub_vnet"
    snet_sendpoints = []
  }

Now we have all the information we need, we’re going to have to create a .tf file in our root directory that will call a new subnets module as part of a for / each loop. The information our loop in the .tf file will need is:

  • Subnet name (from .tfvars)
  • Subnet address range (from .tfvars)
  • Subnet VNet (query to the VNets module)
  • VNet’s resource group (query to the VNets module)
  • Subnet service endpoints (from .tfvars)
  • Dependency (needs information from the VNets module)

So we’ll put in a sensible comment at the top, then open a module block called “subnets”. In that we’ll put a source of a new subfolder of our modules folder again called “subnets”, that we’ll create shortly. We’ll then start our for / each loop by calling the array name:

# Terraform for / each loop to deploy subnets defined in an array

# Subnets For / Each loop
module "subnets" {
  source           = "./modules/subnets"
  for_each         = var.subnets

We now need to get the values for the subnets’ names and address ranges from .tfvars:

  snet_name        = each.value.snet_name
  snet_address     = each.value.snet_address

Next is where our output data from our VNets module comes in handy. For each subnet created, we’ll get the vnet array name from .tfvars and ask the VNets module what the output was for its name and resource group:

  snet_vnet        = module.vnets[each.value.snet_vnet].vnet_name
  vnet_rg          = module.vnets[each.value.snet_vnet].vnet_rg

After that, it’s another call to .tfvars for the service endpoints details, before adding the dependency on the VNets module and closing the for / each loop:

  snet_sendpoints  = each.value.snet_sendpoints
  depends_on       = [module.vnets]
}
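For reference, the complete subnets .tf file in the root directory then reads:

```hcl
# Terraform for / each loop to deploy subnets defined in an array

# Subnets For / Each loop
module "subnets" {
  source          = "./modules/subnets"
  for_each        = var.subnets
  snet_name       = each.value.snet_name
  snet_address    = each.value.snet_address
  snet_vnet       = module.vnets[each.value.snet_vnet].vnet_name
  vnet_rg         = module.vnets[each.value.snet_vnet].vnet_rg
  snet_sendpoints = each.value.snet_sendpoints
  depends_on      = [module.vnets]
}
```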

That should be all the information we need to create subnets from our array, so now we need the module. Create the new “subnets” subdirectory under “modules” and in there, create a new variables .tf file. In this file we will declare all the variables that the module will use:

# Subnet Variables

variable "snet_name" {
  description = "Required field.  Subnet name."
}

variable "snet_address" {
  description = "Required field.  Subnet address space."
}

variable "snet_vnet" {
  description = "Required field.  Subnet's associated VNet."
}

variable "vnet_rg" {
  description = "Required field.  VNet's Resource Group."
}

variable "snet_sendpoints" {
  description = "Required field.  Subnet's associated service endpoints"
}

It’s probably worth pointing out at this time that subnets don’t use tags, which is why we’ve not put in any variables for those. Now it’s time to create the module itself. Remember these are all tasks that we’ve already performed for resource groups and VNets, so we should have a reasonable understanding of the mechanics now. We need to create a .tf file in our subnets subdirectory, then add a resource block for azurerm_subnet with an appropriate name and all the values required to create a subnet and associate it with a VNet, using the variables we’ve already created:

#####################################################
# Module code to deploy a subnet to an existing VNet
#####################################################

resource "azurerm_subnet" "subnet" {
  name                 = var.snet_name
  address_prefixes     = var.snet_address
  virtual_network_name = var.snet_vnet
  resource_group_name  = var.vnet_rg
  service_endpoints    = var.snet_sendpoints
}

When the subnets are created, we’re going to want to refer to them later for other resources, so we’ll add output blocks for the subnet ID, the subnet name, the associated VNet and the address prefixes:

output "subnet_id" {
  value       = azurerm_subnet.subnet.id
  description = "The ID of the newly created subnet"
}

output "subnet_name" {
  value       = azurerm_subnet.subnet.name
  description = "The name of the newly created subnet"
}

output "subnet_vnet" {
  value       = azurerm_subnet.subnet.virtual_network_name
  description = "The VNet of the newly created subnet"
}

output "subnet_address_prefixes" {
  value       = azurerm_subnet.subnet.address_prefixes
  description = "The address prefixes of the subnet"
}

At this point, we should be in a position to save everything, check our formatting, commit and sync then make sure the plan tasks are OK in our validation pipeline job. If you’re happy with those, pull to main and make sure your apply goes OK, and look at your shiny new VNet with its own subnets!

NSGs

Last up, and to finish off this series, let’s just create some NSGs and associate them with the subnets. The steps are going to be very similar to what we did for our subnets module, so I’m not going to go into as much detail, I’ll just give you the code I’ve used and explain any major changes. There are going to be two for / each loops, one for creating the NSGs, and the other for assigning them to the subnets. One of the for / each loops will use a module block, the other a resource block.

Before we start I’m going to create two new subnets for our development hub VNet to which I’ll be attaching the NSGs. The code for these is just a simple copy and paste with a bit of name and address editing in our development .tfvars file:

  # UK South Development Hub Resource Subnet 1
  uks_dev_hub_resource_subnet_1 = {
    snet_name       = "uks-dev-resource-sn-01"
    snet_address    = ["10.10.50.192/27"]
    snet_vnet       = "uks_dev_hub_vnet"
    snet_sendpoints = ["Microsoft.KeyVault"]
  }
  # UK South Development Hub Resource Subnet 2
  uks_dev_hub_resource_subnet_2 = {
    snet_name       = "uks-dev-resource-sn-02"
    snet_address    = ["10.10.50.224/27"]
    snet_vnet       = "uks_dev_hub_vnet"
    snet_sendpoints = ["Microsoft.KeyVault"]
  }

Now to the NSG creation and assignment. First of all, the root variables .tf:

###########################################
# NSG Variables
###########################################

# NSG Array Variable
variable "nsgs" {
  description = "List of NSGs to create"
  type        = any
  default     = {}
}

###########################################
# NSG Subnet Association Variables
###########################################

# NSG Subnet Association Array Variable
variable "nsg_subnet_association" {
  description = "List of NSGs and their associated subnets"
  type        = any
  default     = {}
}

Next, our development .tfvars file, not forgetting to put a blank array for each in the production .tfvars file:

###########################################
# NSGs
###########################################

nsgs = {
  # UK South Dev Hub VNet Resource Subnet 1 NSG
  uks_dev_hub_resource_subnet_1_nsg = {
    nsg_region         = "UK South"
    nsg_resource_group = "dev_uks_rg"
    nsg_subnet         = "uks_dev_hub_resource_subnet_1"
  }
  # UK South Dev Hub VNet Resource Subnet 2 NSG
  uks_dev_hub_resource_subnet_2_nsg = {
    nsg_region         = "UK South"
    nsg_resource_group = "dev_uks_rg"
    nsg_subnet         = "uks_dev_hub_resource_subnet_2"
  }
}

###########################################
# NSG Associations
###########################################

nsg_subnet_association = {
  # UK South Dev Hub VNet Resource Subnet 1 NSG
  assoc1 = {
    assoc_nsg    = "uks_dev_hub_resource_subnet_1_nsg"
    assoc_subnet = "uks_dev_hub_resource_subnet_1"
  }
  # UK South Dev Hub VNet Resource Subnet 2 NSG
  assoc2 = {
    assoc_nsg    = "uks_dev_hub_resource_subnet_2_nsg"
    assoc_subnet = "uks_dev_hub_resource_subnet_2"
  }
}
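For completeness, the matching entries in the production .tfvars file just need to be empty maps (maps rather than lists, because for_each requires a map) so the loops have nothing to iterate over until production NSGs are defined:

```hcl
###########################################
# NSGs
###########################################

nsgs = {}

###########################################
# NSG Associations
###########################################

nsg_subnet_association = {}
```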

Now we need a for / each loop for each of the arrays. We can just create a single NSGs .tf file in our root directory and add both loops to the same file. Remember that NSGs have tags but the associations won’t. As mentioned earlier, we will have a module block (for creating the NSGs which will call an NSG creation module) and a resource block (which will simply assign the NSGs to the chosen subnets):

##########################################################
# Terraform code to deploy Network Security Groups (NSGs)
##########################################################

module "nsgs" {
  source              = "./modules/nsgs"
  for_each            = var.nsgs
  name                = format("%s-%s-nsg", module.subnets[each.value.nsg_subnet].subnet_vnet, module.subnets[each.value.nsg_subnet].subnet_name)
  resource_group_name = module.resource_groups[each.value.nsg_resource_group].resource_group_name
  location            = each.value.nsg_region
  environment_tags    = var.environment_tags
  global_tags         = var.global_tags
  depends_on          = [module.subnets]
}


################################################
# Terraform code to Associate NSGs with Subnets
################################################

resource "azurerm_subnet_network_security_group_association" "nsg_association" {
  for_each                  = var.nsg_subnet_association
  subnet_id                 = module.subnets[each.value.assoc_subnet].subnet_id
  network_security_group_id = module.nsgs[each.value.assoc_nsg].nsg_id
  depends_on                = [module.nsgs]
}

Now we just need a module in a new “nsgs” subdirectory of “modules”, and we need to make sure it outputs “nsg_id” so we can use it in our association loop. First up, as always, is the module’s variables .tf file:

###########################################
# NSG Variables
###########################################

locals {
  tags = merge(var.global_tags, var.environment_tags)
}

variable "name" {
  description = "Required field. NSG name."
  type        = string
}

variable "resource_group_name" {
  description = "Required field. NSG's resource group."
  type        = string
}

variable "location" {
  description = "Required field. NSG region."
  type        = string
}

variable "global_tags" {
  description = "Default tags which are merged into resource tags"
  type        = map(string)
  default     = {}
}

variable "environment_tags" {
  description = "Environment specific tags which are merged into resource tags"
  type        = map(string)
  default     = {}
}
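One detail worth knowing about the locals block above: merge() gives later arguments precedence, so an environment tag wins over a global tag with the same key. A quick illustration with made-up tag values:

```hcl
# Hypothetical tag values to show merge() precedence - later maps win on
# key collisions, which is why environment_tags comes second.
locals {
  example_global      = { "Owner" = "Platform", "CostCentre" = "1001" }
  example_environment = { "CostCentre" = "2002", "Environment" = "Dev" }

  # Result: { Owner = "Platform", CostCentre = "2002", Environment = "Dev" }
  example_merged = merge(local.example_global, local.example_environment)
}
```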

Then we have the module’s main .tf file which will run the resource block and provide the outputs:

###########################################
# Module code to deploy an NSG
###########################################

resource "azurerm_network_security_group" "nsg" {
  name                = var.name
  resource_group_name = var.resource_group_name
  location            = var.location
  tags                = merge(local.tags, { "Deployment Date" = formatdate("YYYY-MM-DD hh:mm ZZZ", timestamp()) })
  lifecycle {
    ignore_changes = [tags["Deployment Date"]]
  }
}

####################################################
# Output data from the creation of NSGs
####################################################

output "resource_group_name" {
  description = "Resource group of the newly created NSG"
  value       = azurerm_network_security_group.nsg.resource_group_name
}

output "nsg_name" {
  description = "Name of the newly created NSG"
  value       = azurerm_network_security_group.nsg.name
}

output "nsg_id" {
  description = "ID of the newly created NSG"
  value       = azurerm_network_security_group.nsg.id
}

And that is literally it. Save, check your formatting, commit and sync, check your plans, merge your pull request into main, check the apply jobs, and enjoy your new resources! Why not create a new module to add rules to NSGs? How about one for route tables, or VNet peers? There's so much to play with even if you just stick to networking.
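If you do fancy the NSG rules idea, the building block is the azurerm_network_security_rule resource, which attaches to an existing NSG by name rather than by ID. A minimal sketch to get you started — the rule values, resource group and NSG names here are hypothetical, not part of the series' code:

```hcl
# Hypothetical inbound rule allowing HTTPS into a subnet. Note the rule
# references its NSG by resource group name and NSG name, not by ID.
resource "azurerm_network_security_rule" "allow_https_in" {
  name                        = "Allow-HTTPS-Inbound"
  priority                    = 200       # 100-4096, lower numbers evaluated first
  direction                   = "Inbound"
  access                      = "Allow"
  protocol                    = "Tcp"
  source_port_range           = "*"
  destination_port_range      = "443"
  source_address_prefix       = "VirtualNetwork"
  destination_address_prefix  = "*"
  resource_group_name         = "example-rg"   # assumed resource group name
  network_security_group_name = "example-nsg"  # assumed NSG name
}
```

A rules module would loop over a map of these with for_each in exactly the same way as the NSGs themselves.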

Summary

So there we are, we’ve reached the end of our basics series, and anything more you learn now will be in the realms of intermediate or advanced (at least in my humble opinion!). If you’ve been with me since the beginning of this journey, well done. You’ve learned how to prepare your systems, build repositories with different branches and secure them with governance. You’ve learned to build pipelines, with templates and code abstraction. You’ve learned about variables, and how to store them in a secure library. You’ve learned how to write Terraform code, abstract your data from the code, build modules, refer to data output from other modules, build fields by merging and formatting other fields, and use different types of Terraform variable. You can now deploy multiple environments using the same repository in stages, you understand how to automate the running of your pipelines, and most importantly, you’ve deployed infrastructure as code, making it repeatable, re-usable and less error-prone.

What that means ultimately is that you now have the tools to deploy more and different types of resources. You can create more of the same types of resources just by adding them to the bottom of a list. You can also follow somebody else’s code and understand what’s happening. You can look at a pipeline’s job and troubleshoot problems, or understand what’s being deployed. You’re ready to start your journey into DevOps engineering!

I’d love to hear if this series has helped you, please leave me a comment and let me know what you thought worked and went well, or what maybe could have been done a little differently!

Thank you!

– The Zoo Keeper

