Implementing Azure Hub-and-Spoke Architecture with Firewall and NAT Gateway Integration Using Terraform

Author

Saif Segni

Introduction

For production environments, it is advisable to use a hub-and-spoke network configuration, with the firewall placed in a dedicated virtual network. The workload servers should reside in virtual networks that are peered with the hub virtual network containing the firewall, and the NAT gateway should be configured directly on the Azure Firewall subnet. In this setup, the NAT gateway can provide outbound connectivity from the hub virtual network to all peered spoke virtual networks.

Terraform is a powerful infrastructure-as-code tool that allows you to define, provision, and manage cloud infrastructure in a consistent and version-controlled way.

In this tutorial, you will learn how to use Terraform to deploy an Azure hub-and-spoke architecture with Azure Firewall and NAT gateway integration.

[Architecture diagram]

Prerequisites

  • An Azure account and subscription.
  • Terraform installed on your machine (version >= 0.12).
  • Azure CLI installed on your machine.

Azure Resources to Provision

In this tutorial, we will use Terraform to create these Azure resources:

  • Two Azure resource groups
  • Two Azure VNets
  • An Azure Firewall in a dedicated subnet in the hub virtual network
  • A virtual network peering
  • A route table
  • A firewall network rule collection
  • A Bastion host
  • A NAT gateway
  • A virtual machine

Deployment With Terraform

You can find the Terraform configuration code in my Git repo: https://github.com/segni-saifeddine/azure-hub-spoke-with-nat_gateway-firewall

Note that this lab does not cover Terraform modules or remote Terraform state.

Terraform Configuration files

First, let’s check the different Terraform .tf files:

  • Rg.tf file: This configuration file creates the Azure resource groups. A resource group is a logical container in Azure that allows you to manage and organize resources as a unit. In this file, we define two resource groups named rg-hub-network and rg-spoke-network, which organize the network resources for our infrastructure.
resource "azurerm_resource_group" "hub" {
  name     = "rg-hub-network"
  location = var.location
  tags = merge(
    local.common_tags,
    {
      composant = "rg"
    }
  )

  lifecycle {
    ignore_changes  = [tags]
    prevent_destroy = true
  }
}

resource "azurerm_resource_group" "spoke" {
  name     = "rg-spoke-network"
  location = var.location
  tags = merge(
    local.common_tags,
    {
      composant = "rg"
    }
  )

  lifecycle {
    ignore_changes  = [tags]
    prevent_destroy = true
  }
}
  • Vnet.tf : First, we create two virtual networks (VNets) in Azure, which provide the fundamental building blocks for private networking. The first network is the Hub VNet, and the second is the Spoke VNet.

The hub network includes two subnets: one for the Azure Firewall and one for the Azure Bastion host.

Additionally, we define two azurerm_virtual_network_peering resource blocks to establish virtual network peering between the two VNets. This peering enables seamless communication between the Hub and Spoke networks.

resource "azurerm_virtual_network" "hub" {
  name                = "vnet-hub"
  location            = azurerm_resource_group.hub.location
  resource_group_name = azurerm_resource_group.hub.name
  address_space       = var.vnet_hub_address_space
  tags = merge(
    local.common_tags,
    {
      composant = "vnet"
    }
  )
}

resource "azurerm_virtual_network" "spoke" {
  name                = "vnet-spoke"
  location            = azurerm_resource_group.spoke.location
  resource_group_name = azurerm_resource_group.spoke.name
  address_space       = var.vnet_spoke_address_space
  tags = merge(
    local.common_tags,
    {
      composant = "vnet"
    }
  )
}

resource "azurerm_virtual_network_peering" "hub" {
  name                      = "peer-hub-to-spoke"
  resource_group_name       = azurerm_resource_group.hub.name
  virtual_network_name      = azurerm_virtual_network.hub.name
  remote_virtual_network_id = azurerm_virtual_network.spoke.id
}

resource "azurerm_virtual_network_peering" "spoke" {
  name                      = "peer-spoke-to-hub"
  resource_group_name       = azurerm_resource_group.spoke.name
  virtual_network_name      = azurerm_virtual_network.spoke.name
  remote_virtual_network_id = azurerm_virtual_network.hub.id
}

resource "azurerm_subnet" "fw" {
  name                 = "AzureFirewallSubnet"
  resource_group_name  = azurerm_resource_group.hub.name
  virtual_network_name = azurerm_virtual_network.hub.name
  address_prefixes     = ["10.2.1.0/24"]
}

resource "azurerm_subnet" "bastion" {
  name                 = "AzureBastionSubnet"
  resource_group_name  = azurerm_resource_group.hub.name
  virtual_network_name = azurerm_virtual_network.hub.name
  address_prefixes     = ["10.2.2.0/24"]
}

resource "azurerm_subnet" "vm_spoke" {
  name                 = "subnet-private"
  resource_group_name  = azurerm_resource_group.spoke.name
  virtual_network_name = azurerm_virtual_network.spoke.name
  address_prefixes     = ["10.1.1.0/24"]
}
  • Firewall.tf : Create an Azure Firewall with a public IP and configure a network rule to allow outbound internet traffic on ports 80 and 443 from the spoke VNet.
resource "azurerm_public_ip" "fw" {
  name                = "pip-fw"
  location            = azurerm_resource_group.hub.location
  resource_group_name = azurerm_resource_group.hub.name
  allocation_method   = "Static"
  sku                 = "Standard"
}

resource "azurerm_firewall" "fw" {
  name                = "fw-hub"
  location            = azurerm_resource_group.hub.location
  resource_group_name = azurerm_resource_group.hub.name
  sku_name            = "AZFW_VNet"
  sku_tier            = "Standard"
  tags = merge(
    local.common_tags,
    {
      composant = "fw"
    }
  )
  ip_configuration {
    name                 = "configuration"
    subnet_id            = azurerm_subnet.fw.id
    public_ip_address_id = azurerm_public_ip.fw.id
  }

}
# Network rule collections
resource "azurerm_firewall_network_rule_collection" "fw_net_rule" {
  azure_firewall_name = azurerm_firewall.fw.name
  resource_group_name = azurerm_firewall.fw.resource_group_name
  action              = "Allow"
  name                = "spoke-to-internet"
  priority            = 100
  rule {
    name = "allow-web"

    source_addresses = [
      "10.1.0.0/16",
    ]

    destination_ports = [
      "80", "443"
    ]

    destination_addresses = ["*"]

    protocols = [
      "TCP",
    ]
  }
}
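The rule above admits any TCP flow from 10.1.0.0/16 to ports 80 and 443, and nothing else. As a quick standalone sanity check (a Python sketch modelling the rule's match logic, not part of the Terraform deployment), you can confirm which flows the source prefix and ports cover:

```python
import ipaddress

# Prefix and ports taken from the "allow-web" network rule above
ALLOWED_SOURCE = ipaddress.ip_network("10.1.0.0/16")
ALLOWED_PORTS = {80, 443}

def rule_allows(src_ip: str, dst_port: int) -> bool:
    """Return True if the firewall network rule would match this TCP flow."""
    return ipaddress.ip_address(src_ip) in ALLOWED_SOURCE and dst_port in ALLOWED_PORTS

print(rule_allows("10.1.1.4", 443))  # spoke VM to HTTPS: allowed
print(rule_allows("10.1.1.4", 22))   # SSH is not in the rule: denied
print(rule_allows("10.2.2.4", 443))  # hub source, outside 10.1.0.0/16: denied
```

Traffic that matches no rule collection is dropped by Azure Firewall's implicit deny, which is why only the explicitly listed ports leave the spoke.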

  • Bastion.tf : Azure Bastion lets you use your browser to connect to VMs in your virtual network over Secure Shell (SSH) or Remote Desktop Protocol (RDP) using their private IP addresses.
resource "azurerm_public_ip" "bastion" {
  name                = "pip-bastion"
  location            = azurerm_resource_group.hub.location
  resource_group_name = azurerm_resource_group.hub.name
  allocation_method   = "Static"
  sku                 = "Standard"
}

resource "azurerm_bastion_host" "bastion" {
  name                = "bastion"
  location            = azurerm_resource_group.hub.location
  resource_group_name = azurerm_resource_group.hub.name

  ip_configuration {
    name                 = "configuration"
    subnet_id            = azurerm_subnet.bastion.id
    public_ip_address_id = azurerm_public_ip.bastion.id
  }
} 

Hourly pricing starts from the moment Bastion is deployed. In a lab context, I advise deleting this resource once you finish using it.

  • Nat_gateway.tf : In this step we create the Azure NAT gateway and an Azure public IP, then associate the NAT gateway with the public IP and with the Azure Firewall subnet.
resource "azurerm_public_ip" "nat" {
  name                = "pip-nat"
  location            = azurerm_resource_group.hub.location
  resource_group_name = azurerm_resource_group.hub.name
  allocation_method   = "Static"
  sku                 = "Standard"
}

resource "azurerm_nat_gateway" "nat" {
  name                = "NatGateway"
  location            = azurerm_resource_group.hub.location
  resource_group_name = azurerm_resource_group.hub.name
  sku_name            = "Standard"
}

resource "azurerm_nat_gateway_public_ip_association" "nat" {
  nat_gateway_id       = azurerm_nat_gateway.nat.id
  public_ip_address_id = azurerm_public_ip.nat.id
}

resource "azurerm_subnet_nat_gateway_association" "nat" {
  subnet_id      = azurerm_subnet.fw.id
  nat_gateway_id = azurerm_nat_gateway.nat.id
}

The Need for a NAT Gateway in This Architecture 🤔 Azure NAT Gateway eliminates the need for a public IP on the NVA. By associating a NAT gateway with the NVA's public subnet, all outbound internet traffic is routed through it, enhancing security and enabling scalable SNAT with multiple public IPs or IP prefixes.
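To make "scalable SNAT" concrete, here is a back-of-the-envelope capacity sketch in Python. The per-IP figure of 64,512 SNAT ports comes from Azure's NAT Gateway documentation at the time of writing; treat it as an assumption to verify against current docs:

```python
# Azure NAT Gateway SNAT capacity sketch (assumed figure: 64,512 ports per public IP)
SNAT_PORTS_PER_PUBLIC_IP = 64_512

def total_snat_ports(public_ip_count: int) -> int:
    """Concurrent SNAT ports available with N public IPs attached to the gateway."""
    return public_ip_count * SNAT_PORTS_PER_PUBLIC_IP

print(total_snat_ports(1))  # single pip-nat, as in this lab
print(total_snat_ports(2))  # adding a second public IP doubles the pool
```

Compare this with SNAT through the firewall's single public IP alone: attaching extra public IPs (or an IP prefix) to the NAT gateway scales the port pool linearly without touching the firewall configuration.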

  • Route-tb.tf : Create the route table to force all inter-spoke and internet egress traffic through the firewall in the hub virtual network.
resource "azurerm_route_table" "rt" {
  name                          = "rt-spoke-to-hub"
  location                      = azurerm_resource_group.spoke.location
  resource_group_name           = azurerm_resource_group.spoke.name
  disable_bgp_route_propagation = false

  route {
    name                   = "route-to-hub"
    address_prefix         = "0.0.0.0/0"
    next_hop_type          = "VirtualAppliance"
    next_hop_in_ip_address = azurerm_firewall.fw.ip_configuration[0].private_ip_address
  }

  tags = merge(
    local.common_tags,
    {
      composant = "rt"
    }
  )
}
resource "azurerm_subnet_route_table_association" "rt" {
  subnet_id      = azurerm_subnet.vm_spoke.id
  route_table_id = azurerm_route_table.rt.id
}
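Azure selects among overlapping routes by longest prefix match, which is why the 0.0.0.0/0 user-defined route above captures internet egress while intra-VNet traffic stays local. A small standalone Python sketch (a simplified model, not the real Azure route evaluation) illustrates the selection:

```python
import ipaddress

# Simplified effective routes for the spoke subnet after the association above:
# the VNet's own prefix stays local, everything else goes to the firewall.
routes = {
    "10.1.0.0/16": "VnetLocal",
    "0.0.0.0/0":   "VirtualAppliance",  # next hop: firewall private IP
}

def next_hop(dest: str) -> str:
    """Pick the next hop for a destination using longest-prefix match."""
    matches = [(ipaddress.ip_network(prefix), hop)
               for prefix, hop in routes.items()
               if ipaddress.ip_address(dest) in ipaddress.ip_network(prefix)]
    # Azure prefers the most specific (longest) matching prefix
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(next_hop("10.1.1.4"))  # another spoke address: stays in the VNet
print(next_hop("8.8.8.8"))   # internet destination: sent to the firewall
```

The same logic explains why adding more specific routes later (for example, per-spoke prefixes) would override the default route only for those destinations.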
  • Vm.tf : A Debian virtual machine is used to test outbound internet traffic through the NAT gateway. The random_password resource block creates a random VM password. The VM's network interface is attached to the vm_spoke subnet.
resource "random_password" "vm" {
  length           = 16
  special          = true
  override_special = "!#$%&*()-_=+[]{}<>:?"
  # Use a keeper to ensure the password remains the same as long as this value doesn't change
  keepers = {
    constant = "fixed-value"
  }
}

resource "azurerm_network_interface" "vm" {
  name                = "nic-vm-spoke"
  location            = azurerm_resource_group.spoke.location
  resource_group_name = azurerm_resource_group.spoke.name

  ip_configuration {
    name                          = "internal"
    subnet_id                     = azurerm_subnet.vm_spoke.id
    private_ip_address_allocation = "Dynamic"
  }
}

resource "azurerm_linux_virtual_machine" "vm" {
  name                = "vm-spoke"
  resource_group_name = azurerm_resource_group.spoke.name
  location            = azurerm_resource_group.spoke.location
  size                = "Standard_F2"
  admin_username      = "adminuser"
  network_interface_ids = [
    azurerm_network_interface.vm.id,
  ]

  # Password authentication must be explicitly enabled when admin_password is set
  disable_password_authentication = false
  admin_password                  = random_password.vm.result

  os_disk {
    caching              = "ReadWrite"
    storage_account_type = "Standard_LRS"
  }

  source_image_reference {
    publisher = "Debian"
    offer     = "debian-11"
    sku       = "11"
    version   = "latest"
  }
}

  • variables.tf file: where we define our input variables and their default values.
variable "location" {
  description = "(Required) The location/region where the resource is created. Changing this forces a new resource to be created."
  type        = string
  default     = "francecentral"
}

variable "vnet_hub_address_space" {
  default     = ["10.2.0.0/16"]
  description = "The address space that is used in the virtual network. More than one address space can be provisioned"
  type        = list(string)
}
variable "vnet_spoke_address_space" {
  default     = ["10.1.0.0/16"]
  description = "The address space that is used in the virtual network. More than one address space can be provisioned"
  type        = list(string)
}
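Before applying, it is worth sanity-checking the address plan: peered VNets must not have overlapping address spaces, and each subnet must sit inside its VNet's range. A standalone Python check using the standard `ipaddress` module (not part of the Terraform code; the vm_spoke subnet is meant to live in the spoke VNet) could look like this:

```python
import ipaddress

# Address plan from variables.tf and the subnet resources above
hub = ipaddress.ip_network("10.2.0.0/16")
spoke = ipaddress.ip_network("10.1.0.0/16")
hub_subnets = ["10.2.1.0/24", "10.2.2.0/24"]  # AzureFirewallSubnet, AzureBastionSubnet
spoke_subnets = ["10.1.1.0/24"]               # subnet-private (vm_spoke)

# Peered VNets must not have overlapping address spaces
assert not hub.overlaps(spoke)

# Every subnet prefix must be contained in its VNet's address space
assert all(ipaddress.ip_network(s).subnet_of(hub) for s in hub_subnets)
assert all(ipaddress.ip_network(s).subnet_of(spoke) for s in spoke_subnets)

print("address plan OK")
```

This is also a cheap way to catch copy-paste mistakes (for example, a spoke-range prefix accidentally declared inside the hub VNet) before Terraform or Azure rejects the configuration.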

Terraform Workflow

Finally, let’s run the Terraform workflow, sit back, and watch Terraform work its magic to deploy our Azure resources!

$ terraform init 
$ terraform validate
$ terraform plan -out=tfplan
$ terraform apply tfplan

Waiting for Terraform to wave its wand and make infrastructure happen!


You can test and verify that outbound internet traffic leaves through the NAT gateway by connecting to the virtual machine (from the Bastion host), running the command “curl ifconfig.me”, and checking that the IP address returned matches the public IP address of the NAT gateway.

That’s all folks 👏! That’s all for this lab, thanks for reading 🙏