Deploy to Multiple Azure Environments with Azure DevOps Pipeline and Terraform
Author: Saif Segni
Introduction
HashiCorp Terraform is an infrastructure-as-code tool that lets you define both cloud and on-prem resources in human-readable configuration files that you can version, reuse, and share.
In this lab, we will demonstrate how to automate infrastructure deployments in multiple Azure environments using HashiCorp Terraform and Azure Pipelines. We will also cover the creation of an Azure Storage account to host Terraform state files, the setup of a deployment Service Principal, and the configuration of RBAC permissions for your Azure subscription.

Steps
This lab consists of three steps:
- Step 1: Prepare Our Environment: First, we will create an Azure Storage account and a container to host the Terraform state files. We will also create the Azure Service Principal to be used as an Azure DevOps service connection.
- Step 2: Terraform Code: In this step, we will focus on the Terraform code and its structure.
- Step 3: Azure DevOps Pipeline: In this last step, we will explore how to use Azure Pipelines to deploy Azure resources to development, staging, and production environments.
Prerequisites
- An Azure account and subscription.
- An Azure DevOps organization.
- The Terraform extension installed in your Azure DevOps organization.
- The Azure CLI installed on your local machine.
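Before running the commands in the next step, sign in with the Azure CLI and select the subscription you want to work in. A minimal sketch (the subscription ID below is a placeholder):

# Sign in to Azure interactively
az login
# Select the subscription that will host the Terraform backend and the deployed resources
az account set --subscription "00000000-0000-0000-0000-000000000000"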
Step 1: Prepare Our Environment
By default, Terraform stores state locally in a file named terraform.tfstate. When working with Terraform in a team, using a local file makes Terraform usage complicated. With remote state, Terraform writes the state data to a remote data store, which can then be shared between all members of the team. In this lab, we will use Azure Storage to track the Terraform state.
Before you use Azure Storage as a backend, you must create a storage account. Below, we describe how to perform the following tasks:
- First, create an Azure resource group:
az group create --name rg-tf-backend --location eastus
- Create the Azure storage account (note that storage account names must be 3-24 lowercase letters and digits, and globally unique):
az storage account create --resource-group rg-tf-backend --name sttfbackend --sku Standard_LRS --encryption-services blob
- Create the blob container:
az storage container create --name terraform-state --account-name sttfbackend
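You can optionally verify that the container exists before going further (a quick check; depending on your setup you may need an account key or --auth-mode login):

az storage container show --name terraform-state --account-name sttfbackend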
- Create the Azure Service Principal. A Service Principal (SP) is required to allow the Azure DevOps agent running Terraform to authenticate against the Azure subscription and create Azure resources.
Use the Azure CLI to create an SP with the Contributor role on the Azure subscription:
az ad sp create-for-rbac --name terraform-sp --role Contributor \
--scopes /subscriptions/00000000-0000-0000-0000-000000000000
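The command prints the credentials needed for the service connection; the output looks roughly like this (all values below are placeholders):

{
  "appId": "11111111-1111-1111-1111-111111111111",
  "displayName": "terraform-sp",
  "password": "<client-secret>",
  "tenant": "22222222-2222-2222-2222-222222222222"
}

Here, appId is the Application ID, password is the Client Secret, and tenant is the Tenant ID.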
After the SP is created, the next step is to configure an Azure DevOps service connection. To do so, follow these steps:
- In your Azure DevOps organization, go to "Project Settings" > "Service connections."
- Create a new service connection, selecting "Azure Resource Manager" as the service connection type.
- Fill in the details using the Application ID, Tenant ID, and Client Secret of "terraform-sp", and give the connection the name "terraform-deploy".
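If you prefer scripting over the portal UI, the service connection can presumably also be created with the azure-devops CLI extension; a hedged sketch (the SP secret is read from an environment variable, and all IDs, organization, and project names are placeholders):

# The SP secret is picked up from this environment variable by the CLI
export AZURE_DEVOPS_EXT_AZURE_RM_SERVICE_PRINCIPAL_KEY="<client-secret>"
az devops service-endpoint azurerm create \
  --name terraform-deploy \
  --azure-rm-service-principal-id "11111111-1111-1111-1111-111111111111" \
  --azure-rm-subscription-id "00000000-0000-0000-0000-000000000000" \
  --azure-rm-subscription-name "my-subscription" \
  --azure-rm-tenant-id "22222222-2222-2222-2222-222222222222" \
  --organization https://dev.azure.com/<your-org> \
  --project <your-project>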
Step 2: Terraform Code
In this step, we will focus on the Terraform code and its structure. For this simple lab, the resources we will deploy include an Azure Resource Group and an Azure Virtual Network in each environment (development, staging, and production).
Note that this lab does not cover the principles of Terraform modules. A detailed article on that topic will be published soon, so be sure to check back for updates.
The code is stored in Azure DevOps Git repositories. The code structure includes a “networking” directory at the root level, which consists of:
├── main.tf
├── output.tf
├── variables.tf
├── dev
│ ├── backend.hcl
│ └── variables.tfvars
├── prod
│ ├── backend.hcl
│ └── variables.tfvars
├── staging
│ ├── backend.hcl
│ └── variables.tfvars
- main.tf: This is the configuration file where we define the resource blocks (the resources to create).
locals {
  common_tags = {
    Environment = var.env_name
    createdWith = "Terraform"
  }
}

# Create an Azure Resource Group
resource "azurerm_resource_group" "rg" {
  name     = "rg-network-${var.env_name}"
  location = var.location

  tags = merge(
    local.common_tags,
    {
      composant = "rg"
    }
  )

  lifecycle {
    prevent_destroy = true
  }
}

# Create an Azure Virtual Network
resource "azurerm_virtual_network" "vnet" {
  name                = "vnet-${var.env_name}"
  location            = var.location
  resource_group_name = azurerm_resource_group.rg.name
  address_space       = var.vnet_address_space

  tags = merge(
    local.common_tags,
    {
      composant = "vnet"
    }
  )

  lifecycle {
    prevent_destroy = true
  }
}
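Note that for terraform init to configure the azurerm backend, the working directory must also declare the backend and the provider. This is not shown in the listing above; a minimal sketch of what it could look like, for example in a providers.tf file next to main.tf (the actual backend values are injected at init time via -backend-config):

terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.0"
    }
  }
  # Left empty on purpose: the values come from <env>/backend.hcl at init time
  backend "azurerm" {}
}

provider "azurerm" {
  features {}
}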
- variables.tf: This file contains the input variables.
variable "env_name" {
  description = "The environment where the resources will be created."
  type        = string
}

variable "location" {
  description = "(Required) The location/region where the resource is created. Changing this forces a new resource to be created."
  type        = string
  default     = "francecentral"
}

variable "vnet_address_space" {
  description = "The address space that is used in the virtual network. More than one address space can be provisioned."
  type        = list(string)
}
- output.tf: This file is used for defining output values, which are similar to return values in programming languages.
output "rg_name" {
  value = azurerm_resource_group.rg.name
}

output "rg_id" {
  value = azurerm_resource_group.rg.id
}

output "vnet_id" {
  value = azurerm_virtual_network.vnet.id
}
- Finally, we have three subfolders named dev, prod, and staging. Each subfolder contains two files: backend.hcl, which holds the Azure Terraform backend configuration (the location where the Terraform state file is stored), and variables.tfvars, which lets us manage variable assignments per environment. Let's start with the dev backend.hcl file:
resource_group_name  = "rg-tf-backend"
storage_account_name = "sttfbackend"
container_name       = "terraform-state"
key                  = "network-dev.tfstate"
subscription_id      = "xxxxxxxxxxxxxxxxxxxxxxxxx"
tenant_id            = "xxxxxxxxxxxxxxxxxxxxxxxxx"
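The other environments point to the same storage account and container; only the key changes, which gives each environment its own isolated state file. For example, prod/backend.hcl would presumably differ only in this line:

key = "network-prod.tfstate"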
Now let's take a look at one of the .tfvars files (dev/variables.tfvars):
env_name = "dev"
vnet_address_space = ["10.0.1.0/24"]
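Before automating anything, you can exercise these per-environment files locally from the networking directory (a minimal sketch, assuming you are signed in with the Azure CLI):

# Point Terraform at the dev state file
terraform init -reconfigure -backend-config="dev/backend.hcl"
# Preview the changes for the dev environment
terraform plan -var-file="dev/variables.tfvars"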
Step 3: Azure DevOps Pipeline
The Azure DevOps pipeline definition is hosted at the root level of the repository. It uses a parameter rendered as a dropdown menu, allowing users to select the environment in which to deploy the Terraform code when they run the pipeline.
The pipeline consists of three stages: validate, plan, and apply. In the first stage, we execute the terraform validate command. During the plan stage, we run the terraform plan command and store the output in a plan file. Finally, the apply stage executes the terraform apply command against the plan file, deploying the resources in Azure. The apply stage is gated by a manual validation.
trigger: none

pool:
  vmImage: 'ubuntu-latest'

# Declaration of parameters associated with the execution of the pipeline
parameters:
  # Dropdown menu.
  - name: environment
    type: string
    displayName: Environment
    default: Development
    values:
      - 'Development'
      - 'Production'
      - 'Staging'

variables:
  ## Declaration of variables to specify the environment
  ## A condition is applied to the environment parameter declared above
  - ${{ if eq(parameters.environment, 'Development') }}:
      - name: environment
        value: dev
  - ${{ if eq(parameters.environment, 'Production') }}:
      - name: environment
        value: prod
  - ${{ if eq(parameters.environment, 'Staging') }}:
      - name: environment
        value: staging

stages:
  - stage: Validation
    displayName: 'Terraform Validate'
    jobs:
      - job: Validate
        steps:
          - task: ms-devlabs.custom-terraform-tasks.custom-terraform-installer-task.TerraformInstaller@0
            displayName: 'install'
            inputs:
              terraformVersion: 'latest'
          - task: TerraformCLI@0
            displayName: 'init'
            inputs:
              backendType: azurerm
              command: 'init'
              commandOptions: -upgrade -reconfigure -backend-config="$(environment)/backend.hcl"
              backendServiceArm: 'terraform-deploy'
              workingDirectory: '$(System.DefaultWorkingDirectory)/networking'
          - task: TerraformCLI@0
            displayName: 'validate'
            inputs:
              backendType: azurerm
              command: 'validate'
              environmentServiceName: 'terraform-deploy'
              workingDirectory: '$(System.DefaultWorkingDirectory)/networking'
              allowTelemetryCollection: true
  - stage: Plan
    displayName: 'Terraform Plan'
    jobs:
      - job: Plan
        steps:
          # Each job runs on a fresh agent, so Terraform must be installed
          # and initialized again before planning.
          - task: ms-devlabs.custom-terraform-tasks.custom-terraform-installer-task.TerraformInstaller@0
            displayName: 'install'
            inputs:
              terraformVersion: 'latest'
          - task: TerraformCLI@0
            displayName: 'init'
            inputs:
              backendType: azurerm
              command: 'init'
              commandOptions: -upgrade -reconfigure -backend-config="$(environment)/backend.hcl"
              backendServiceArm: 'terraform-deploy'
              workingDirectory: '$(System.DefaultWorkingDirectory)/networking'
          - task: TerraformCLI@0
            displayName: 'plan'
            inputs:
              command: 'plan'
              commandOptions: -var-file="$(environment)/variables.tfvars" -out=tfplan
              environmentServiceName: 'terraform-deploy'
              workingDirectory: '$(System.DefaultWorkingDirectory)/networking'
              allowTelemetryCollection: true
  - stage: 'Apply'
    displayName: 'Terraform Apply'
    jobs:
      - job: waitForValidation
        displayName: Wait for external validation
        pool: server
        timeoutInMinutes: 4320 # job times out in 3 days
        steps:
          - task: ManualValidation@0
            inputs:
              notifyUsers: ''
              instructions: 'Execute Terraform APPLY?'
      - job: deploy
        dependsOn: waitForValidation
        steps:
          - task: ms-devlabs.custom-terraform-tasks.custom-terraform-installer-task.TerraformInstaller@0
            displayName: 'install'
            inputs:
              terraformVersion: 'latest'
          - task: TerraformCLI@0
            displayName: 'init'
            inputs:
              backendType: azurerm
              command: 'init'
              commandOptions: -upgrade -reconfigure -backend-config="$(environment)/backend.hcl"
              backendServiceArm: 'terraform-deploy'
              workingDirectory: '$(System.DefaultWorkingDirectory)/networking'
          # The plan file produced in the Plan stage does not persist to this
          # job's fresh agent, so the plan is regenerated here before applying.
          - task: TerraformCLI@0
            displayName: 'plan'
            inputs:
              command: 'plan'
              commandOptions: -var-file="$(environment)/variables.tfvars" -out=tfplan
              environmentServiceName: 'terraform-deploy'
              workingDirectory: '$(System.DefaultWorkingDirectory)/networking'
          - task: TerraformCLI@0
            displayName: 'apply'
            inputs:
              command: 'apply'
              commandOptions: tfplan
              environmentServiceName: 'terraform-deploy'
              workingDirectory: '$(System.DefaultWorkingDirectory)/networking'
              allowTelemetryCollection: true
If we take the development environment as an example (where you selected "Development" during pipeline execution), the environment variable, and therefore $(environment), will be equal to "dev." Consequently, the pipeline will execute the following commands:
terraform init -upgrade -reconfigure -backend-config="dev/backend.hcl"
terraform validate
terraform plan -var-file="dev/variables.tfvars" -out=tfplan
terraform apply tfplan
That's all, folks 👏! Thanks for reading 🙏