Terraform for Azure: Basics (2)

Repositories and Pipelines

If you’re following this series on the basics of Terraform for Azure, this is the second post and it deals with repositories (repos) and pipelines. Just to reiterate, this series is designed as a memory aid for myself, and hopefully it can help others at the same time. Any information in here probably doesn’t meet best practice, or work how a professional developer might do it, but it’s how I’m learning and at my (very basic) level of DevOps knowledge, it’s right for me.

So if you want to follow along with me, you’ll first need to install and configure your prerequisites, as per part 1. By the end of this post, you should be able to create a repository in Azure DevOps with a basic pipeline that can be used to deploy resources. The repo will then be synchronised to your computer, and your code synchronised back to Azure DevOps to deploy a simple resource group in Azure. As we work through the series, we’ll be covering different concepts and using functions that might seem a little more complex, but by then you should understand the different moving parts and see that they simplify and improve your code.

Note that I’m writing this in the middle of 2024, and technology moves on quickly – things may well have changed by the time you’re reading this. Once again I’d like to thank James Meegan for his original documentation which I’ve used as a foundation for this series.

Create and Clone a Repository

Create the Repo

In part 1 of this series, we created a new project in Azure DevOps. When this was done, a new default repository (repo) was created with the same name as the project. I’ve been informed that common practice is to ignore this repo and create dedicated new ones as required, with a descriptive naming convention. To do this, open Azure DevOps in a browser, select “Repos”, then at the top, click the down arrow next to the default repository name and select “New repository”:

Ensure that the “Repository type” is “Git”, and that “Add a README” is selected (this ensures we can connect a pipeline to the repository, which can’t be done to an empty repo). Give the repo a suitable name according to your conventions, then click “Create”:
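
If you prefer working at the command line, the Azure CLI’s azure-devops extension can create repos too. Here’s a minimal sketch, assuming you’ve already signed in with az login; the organisation, project and repo names are placeholders for your own:

# Add the Azure DevOps extension if you don't already have it
az extension add --name azure-devops

# Create a new Git repo in an existing project (placeholder names)
az repos create --name "terraform-series-repo" --project "MyProject" --organization "https://dev.azure.com/MyOrg"

Note that repos created this way start empty, so remember the README.md caveat above.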

Clone the Repo

Although it’s possible to edit and work with code directly in the Azure DevOps portal, it tends to be better if individuals take a copy of the code to their own computer and work on it there in an editor of their choice (mine is Visual Studio Code). You’d normally have a folder for each of your projects (or customers, with different folders for their projects within), somewhere on your computer, into which you’d clone the relevant repos. Start by clicking the “Clone” button in the new repo:

In the resulting dialogue box, click “Clone in VS Code”:

Agree to any prompts to open VS Code or access the URI, then select your chosen target folder for the repo. You *may* be prompted to sign in to Azure DevOps again at this point, or potentially to generate Git credentials. Your repo is now cloned and you are ready to work on it locally on your computer:
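
If you’d rather clone from the terminal instead, the same dialogue box also gives you an HTTPS URL that you can pass straight to git. A quick sketch, with placeholder organisation, project and repo names:

# Clone the repo into a sub-folder of the current directory (placeholder URL)
git clone https://dev.azure.com/MyOrg/MyProject/_git/terraform-series-repo

# Open the cloned folder in VS Code
code terraform-series-repo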

Repository Choices

Although not needed to follow this series, it is worth considering how you want to structure your repositories. Do you want all your environments in a single repository, or separate repositories for different resources? For example, you might keep Azure Firewalls in one repository and Firewall Policies in another, so that a specific team can be delegated access to edit the policies without being given access to the underlying infrastructure itself.

Common choices are to keep production and non-production environments in separate repos, or to use different repos for each spoke in a standard Azure hub-and-spoke environment. If you separate your repositories, best practice dictates a dedicated .tfstate file for each environment.
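
To illustrate (using the pipeline syntax we’ll meet later in this post), each environment’s pipeline would simply point its init step at a different state file path within the same storage container. The paths here are hypothetical:

# Production pipeline
backendAzureRmKey: 'terraformstatefiles/production.tfstate'

# Development pipeline
backendAzureRmKey: 'terraformstatefiles/development.tfstate'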

Terraform Provider

This series is not an in-depth tutorial on Terraform. To understand what Terraform providers are and what they do, it’s worth spending some time with an online learning service. You don’t need it to follow the series, but it would help your overall understanding of the steps that follow.

Create the Required Files

In the root folder of your repo on your local computer, create four files:

  • providers.tf
  • main.tf
  • variables.tf
  • terraform.tfvars

You can do this in Windows Explorer, or within VS Code itself:
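
If you prefer the terminal, you can create all four files in one go. A small sketch, using touch in bash or zsh, or New-Item in PowerShell:

# bash / zsh
touch providers.tf main.tf variables.tf terraform.tfvars

# PowerShell
New-Item providers.tf, main.tf, variables.tf, terraform.tfvars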

The filenames are those used throughout the series, but they are not fixed: as long as the suffixes (.tf and .tfvars) remain, you can choose filenames that help you logically separate out your code. Terraform treats all the .tf files within the same folder as a single configuration. We’ll discuss it properly in a future post, but to make your code re-usable, any sensitive, project-specific or client-specific variables should be held in the .tfvars file.
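
As a taste of what’s to come, here’s a minimal, hypothetical example of how the variable files work together: the variable is declared in variables.tf, and its project-specific value lives in terraform.tfvars (the names and values are made up for illustration):

# variables.tf - declare the variable
variable "location" {
  description = "The Azure region to deploy into"
  type        = string
}

# terraform.tfvars - assign the project or client specific value
location = "uksouth"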

Add the Terraform Provider

To understand more about providers, you can research the information on HashiCorp’s website; the specific Azure Resource Manager (AzureRM) provider information can be found here. In essence, you are adding a provider so that Terraform knows you are working with Azure. It is important to declare the provider and associate it with the repo before starting to build our code and our pipelines: it passes on information such as the remote location of the .tfstate file and the types of resources available, without which our commands will fail.

HashiCorp regularly update their provider versions, and unless you need a specific recent feature, it is good practice to use a release that is approximately six months old and isn’t immediately followed by a release containing lots of bug fixes. You can review the version history for AzureRM here. The version I will be using for this series is 3.100.0: it is a stable release and, although more recent than I’d normally choose, it contains a specific feature that I use within the code we will deploy.

Add the following code to your providers.tf file:

# Configure the Microsoft Azure Provider
provider "azurerm" {
  features {}
}

# Set the Azure Provider source and version being used
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "=3.100.0"
    }
  }
}

Note that comments have been added to help understand what each code block is for and what it is doing. We could put this code block in any .tf file in the root folder, but by keeping it in a separate file called providers.tf, we are giving a logical structure to our data, helping us further down the line when there is lots of code to handle. Terraform will now know that it will be using Azure Resource Manager, and that the version of the provider will be 3.100.0.

As mentioned before, Terraform will need to know that the .tfstate file is being held remotely, in an Azure storage account. We achieve this by adding a “backend” statement to the “terraform {}” code block. At the same time, it is a good idea to mandate the minimum version of Terraform that will be used (as with the provider, try to go for a stable release around six months old; in mid-2024 I’m using version 1.6.6), again in the “terraform {}” code block:

  # Note that the .tfstate file is held remotely in Azure
  backend "azurerm" {
  }
  # Set the Terraform version
  required_version = ">=1.6.6"
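
Putting those additions together, the complete “terraform {}” code block in providers.tf should now look like this:

# Set the Azure Provider source and version being used
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "=3.100.0"
    }
  }
  # Note that the .tfstate file is held remotely in Azure
  backend "azurerm" {
  }
  # Set the Terraform version
  required_version = ">=1.6.6"
}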

Synchronise the Repository

When you’ve added the code, save all your new files. It’s now time to confirm that our cloned repository has a connection to our Azure DevOps project, and that it’s working as expected. Before doing that, you need to tell Git who you are… In Visual Studio Code, open a Terminal (Terminal / New Terminal), and provide the following information:

git config --global user.email "youremail@address"
git config --global user.name "Your Name"

A handy tip I’ve found is to check your code for formatting errors before any commit. You can do this in the Terminal window by running the command:

terraform fmt -recursive

Any files in your repository that don’t meet Terraform’s formatting standards will be corrected and re-saved. In your VS Code window, select the “Source Control” icon. Enter a comment in the box at the top to describe what you’ve done, then select the drop-down arrow next to “Commit” and select “Commit & Sync”:

It’s possible to achieve all this using git commands in the Terminal window, and that’s worth researching if you want to do things that way, but I prefer to use the GUI and am writing this series with that in mind. If you now look in your Azure DevOps repository in the portal, you should see your files:
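
For reference, the rough terminal equivalent of “Commit & Sync” is just three commands (the commit message is only an example):

# Stage all changed files
git add .

# Commit the staged changes with a descriptive message
git commit -m "Add Terraform provider and backend configuration"

# Push the commit up to Azure DevOps
git push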

Create a Pipeline

This section of our post talks about YAML pipelines, how to create them and how to connect them to your repository. In summary we’re going to:

  • Create a basic pipeline in the Azure DevOps portal using a wizard.
  • Use the assistant to add Terraform related commands to that pipeline.
  • Connect the pipeline code to the code repository.
  • Synchronise the code from the repository to your local computer.

Once we have our pipeline code, we can copy it and use it elsewhere with minimal editing, rather than building new ones from scratch. As your knowledge improves, you can add to, adjust and refine the pipelines, and also perform troubleshooting if (when!) things go wrong.

Pipelines *MUST* be connected to a repository. A repo can be connected to multiple pipelines, but a pipeline can only be connected to one repo. As mentioned before, a pipeline cannot be connected to an empty repo, so if you ever create a pipeline before synchronising Terraform files, ensure you have that README.md file. As per best practice, we will create two pipelines for our repository: one to validate the code and advise what would happen if it were deployed, and a second to actually build or destroy the infrastructure in Azure.

In the Azure DevOps portal, select “Pipelines” on the left then select “Create Pipeline”:

As we’re using the repositories in Azure DevOps, select “Azure Repos Git”, then select your repository followed by “Starter Pipeline”. This results in a block of starter code provided by Microsoft. We don’t need this code and it can be deleted, but it’s worth looking through it to understand the syntax and layout. YAML uses indentation to denote sub-sections, whereas other languages might use brackets. With all Terraform and YAML, you will come to learn the importance of proper spacing and indentation. Again, comments are supported and are identified with the hash symbol (#). This file is going to be our validation pipeline, so we will rename it to something more intuitive. Above the code, select the file name and rename it to anything you feel fits (you can change it later if you like). I am using validation.yml:

As always, we should comment our code to help our future selves and anybody else who might want to understand what’s going on. As this pipeline will run the Terraform initialisation and plan commands to validate the code, we’ll replace the default comments with something more appropriate:

Note the next code block: “trigger”. We’ll cover branches in a later post in the series, but it’s worth noting that when a new repo is created in Azure DevOps, it is created with a default “main” branch, which is where all the code for deployment is run from. We will be creating additional branches from this main branch where we can perform testing and validation before the actual deployment. When we are happy with the code in, say, a “development” branch, we “pull” that code from there into “main”. Best practice dictates that the “main” branch is only ever updated using this “pull” process, and never synchronised to directly, which we’ll try to govern later in this post. The “trigger” code block as shown in the image above effectively says that if a change is detected in the “main” branch of the code, the pipeline should run. This is not ideal while we’re setting things up, so we’ll change it to “none”, which means the pipeline must be run manually. We’ll also comment the section with its purpose:
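
After the change, the block looks like this:

# When to run the code - "none" defines that the pipeline must be run manually
trigger:
- none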

I’ll reiterate the importance of spacing and indentation here. If your spacing, your indentation, or indeed your hyphens at the beginning of lines are not accurate, the pipeline will not run. The next block of code is “pool”. This is the operating system of the Microsoft-hosted agent that will run your code, and in this example it is the latest version of Ubuntu Linux. You can research the other options, but for this series that’s absolutely fine, so we’ll leave that line as it is, but with a comment as to its purpose:
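
The commented block looks like this:

# The operating system of the DevOps agent run by Microsoft
pool:
  vmImage: ubuntu-latest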

It’s worth noting at this point that you can also choose to self-host the DevOps agents in your own infrastructure. Although it’s outside the scope of this series, it wouldn’t hurt to do a bit of research on how to do it and why you might want to (or need to!). This now leaves us with the “steps” code block. We don’t need these, so we can delete everything after the line that says “steps:”, leaving us with the following file contents:

# Validation Pipeline
# This pipeline runs the Terraform initialisation (init) and plan stages in order to validate the code

# When to run the code - "none" defines that the pipeline must be run manually
trigger:
- none

# The operating system of the DevOps agent run by Microsoft
pool:
  vmImage: ubuntu-latest

steps:

Add the Validation Steps

We now need to build out the validation steps of our pipeline, which will include the “init” and “plan” Terraform commands. Although we could write this code ourselves (with the appropriate indentation!), Microsoft provide a handy assistant to help build the blocks for us. In your code, select the line under “steps” then click the “Show assistant” button:

Type “Terraform” in the search bar and select the “Terraform” entry, which will open the wizard screen for our required Terraform commands:

The command we require first is “init”, so we can scroll down to the “AzureRM backend configuration” section. This is where we will add the details of our .tfstate backend storage account that was created in our prerequisites. Note that under “Azure subscription”, it’s important that you select the service connection you previously created rather than the actual Azure subscription name. Also, the field labelled “key” is a little (OK, a lot!) misleading. Rather than a key or password, if you click the (i) icon you’ll see that it wants a folder path for your .tfstate file. Enter or select the required details and select “Add”:

Another important point to note is that storing this information directly within the pipeline is not best practice. We will secure this later but for now it is good to understand the code and what is being read by Terraform. After adding the obligatory comments, this gives us the following code:
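
# Terraform Initialisation
- task: TerraformTaskV4@4
  inputs:
    provider: 'azurerm'
    command: 'init'
    backendServiceArm: 'terraform-series-sc'
    backendAzureRmResourceGroupName: 'uks-tfstatefiles-rg-01'
    backendAzureRmStorageAccountName: 'ukstfstatefilessa01'
    backendAzureRmContainerName: 'tfstatefiles'
    backendAzureRmKey: 'terraformstatefiles/terraform_series.tfstate'

(The values above are mine, from part 1’s prerequisites; yours will reflect your own service connection, resource group and storage account names.)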

You can ignore the red underlining of the word “inputs”; Azure DevOps doesn’t always get it right, but it does try to make sure there are no issues with your code, and if you saw a full line underlined, it would be worth checking. Right now, though, we have a basic YAML pipeline that will perform a Terraform initialisation step. If you click “Save and Run”, add a descriptive message, then click “Save and Run” again, we can check to see if our pipeline has worked:

Click on “Job” to bring up the output page from the Microsoft agent:

If you select “TerraformTaskV4” (not descriptive, and may be different for you, but we’ll fix that later!), you should see a successful initialisation of Terraform:

If you select “Pipelines” on the left (or in the breadcrumb trail at the top of the page), you’ll see that your pipeline has been given the same name as your repository, which is not ideal as we will have multiple pipelines and we want them to be named more descriptively. At the right side of your pipeline, select the three dots (more options) icon and select “Rename/move”:

Give your pipeline a more descriptive name (I’ve chosen “validation_pipeline”), and if you wish, you can create a new folder in which to store it, then click “Save”:

Select the “more options” icon against your newly renamed pipeline and select “Edit”:

We said earlier that we’d make the task name more descriptive when looking at the output from the agent. To do this we add a “displayName” value to the code block. Be sure that your indentations are correct:
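
# Terraform Initialisation
- task: TerraformTaskV4@4
  # The new displayName line sits at the same indentation level as "task"
  displayName: Run Terraform Init
  inputs:
    provider: 'azurerm'
    command: 'init'
    # ...the backend inputs remain exactly as before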

OK, we’ve now initialised Terraform, but that’s about it, so we need another step. This time we’re going to follow the assistant’s wizard screen and instead of the “init” command, we’re going to select “plan”. Select the line under your “init” task, click “Show assistant”, search for and select “Terraform”, then under “Command”, select “plan”. Choose your service connection then click “Add”:

Once the code block has been added, just as we did above, add a sensible comment and a “displayName” line (don’t spell it wrong, use the correct case on all letters, and make sure it’s properly indented or the pipeline will fail), and we have our finished pipeline with all the required steps. Here’s my full pipeline code:

# Validation Pipeline
# This pipeline runs the Terraform initialisation (init) and plan stages in order to validate the code

# When to run the code - "none" defines that the pipeline must be run manually
trigger:
- none

# The operating system of the DevOps agent run by Microsoft
pool:
  vmImage: ubuntu-latest

# Steps to perform as part of the pipeline operations
steps:
# Terraform Initialisation
- task: TerraformTaskV4@4
  displayName: Run Terraform Init
  inputs:
    provider: 'azurerm'
    command: 'init'
    backendServiceArm: 'terraform-series-sc'
    backendAzureRmResourceGroupName: 'uks-tfstatefiles-rg-01'
    backendAzureRmStorageAccountName: 'ukstfstatefilessa01'
    backendAzureRmContainerName: 'tfstatefiles'
    backendAzureRmKey: 'terraformstatefiles/terraform_series.tfstate'

# Terraform Plan
- task: TerraformTaskV4@4
  displayName: Run Terraform Plan
  inputs:
    provider: 'azurerm'
    command: 'plan'
    environmentServiceNameAzureRM: 'terraform-series-sc'

Click “Validate and Save”, which will run a validation against your code. Again, add a sensible, descriptive comment before hitting “Save”:

You can now click “Run” (and “Run” again), click on “Job” and see the output from the agent that’s run your pipeline, with descriptive display names for each step:

And that’s it… We’ve got a repository cloned to our local computer, we’ve created an Azure DevOps pipeline linked to that repo, and it runs Terraform Initialisation and Plan steps.

Next time, we’ll see if we can actually get resources deployed into Azure.

Until then

– The Zoo Keeper
