
What are you looking to accomplish?

I want to scale the number of 'VM resources' up or down together with their matching 'OS disk resources'. For example, if we need to provision 5, 10, or 20 VMs, we also need to provision the matching 5, 10, or 20 OS disks.

Currently I know how to use count.index for the VM resource (see the code snippet below), but I don't know how to implement it for the OS disk resource. In other words, if we need to provision 200 VMs I can define those 200 VMs by setting that amount via the count parameter in the VM resource block, but I can't do the same for the 200 OS disks (disclaimer: once again, because I don't know how to).

Do you have a code example to share?

resource "azurerm_network_interface" "network_card_resource" {
  count                     = 1
  name                      = "network_card_${count.index}"
  location                  = "removed"
  resource_group_name       = removed

  ip_configuration {
    name                          = "internal"
    subnet_id                     = removed.outputs.removed.subnet.removed.main.subnets["removed-XX.XX.XX.XX_XX"].id
    private_ip_address_allocation = "Dynamic"
  }
}

resource "azurerm_windows_virtual_machine" "vm_resource" {
  count                     = 10
  name                      = "vm_name_${count.index}"
  resource_group_name       = removed
  location                  = "removed"
  size                      = "Standard_D4_v3"


  admin_username            = var.admin_username
  admin_password            = var.admin_password
  network_interface_ids     = [
    element(azurerm_network_interface.network_card_resource.*.id, count.index)
  ]

  os_disk {
    count                   = 10
    name                    = "disk_name_${count.index}"
    caching                 = "ReadWrite"
    storage_account_type    = "Standard_LRS"
    resource_group_name     = removed
    disk_size_gb            = 180
  }
}

The above code failed to build the 10 OS disks via count.index.

Is there an approach to tackle this?

alexis19apl
  • hello @alexis19apl, only one OS disk can be attached to a VM, so as you are creating 10 VMs, each VM will get its own single OS disk; you don't have to give the count in the OS disk block. May I know if you want to attach 10 data disks to each VM as well? – Ansuman Bal Jan 11 '22 at 05:45
  • Hello @alexis19apl, I added an update to the answer, please check it. – Ansuman Bal Jan 11 '22 at 09:21

1 Answer


It is not possible to add 10 OS disks to a single VM; you can have only 1 OS disk per VM. Beyond that, you can attach data disks (2 data disks per VM vCPU).

So, as a solution, you can use for_each instead of count for better functionality of the code, and then you can use the script below to create the VMs with additional data disks as per your requirement.

In the code below I am creating 3 VMs with 1 OS disk and 5 data disks each:

provider "azurerm" {
  features {}
}

data "azurerm_resource_group" "example" {
  name = "ansumantest"
}

resource "azurerm_virtual_network" "example" {
  name                = "example-network"
  address_space       = ["10.0.0.0/16"]
  location            = data.azurerm_resource_group.example.location
  resource_group_name = data.azurerm_resource_group.example.name
}


resource "azurerm_subnet" "test_subnet" {
  name                 = "VM-subnet"
  resource_group_name  = data.azurerm_resource_group.example.name
  virtual_network_name = azurerm_virtual_network.example.name
  address_prefixes     = ["10.0.0.0/24"]
}

resource "azurerm_public_ip" "myterraformpublicip" {
  for_each            = toset(var.instances)
  name                = "myPublicIP-${each.key}"
  location            = data.azurerm_resource_group.example.location
  resource_group_name = data.azurerm_resource_group.example.name
  allocation_method   = "Dynamic"
}

resource "azurerm_network_interface" "myterraformnic" {
  for_each            = toset(var.instances)
  name                = "myNIC-${each.key}"
  location            = data.azurerm_resource_group.example.location
  resource_group_name = data.azurerm_resource_group.example.name

  ip_configuration {
    name                          = "myNicConfiguration"
    subnet_id                     = azurerm_subnet.test_subnet.id
    private_ip_address_allocation = "Dynamic"
    public_ip_address_id          = azurerm_public_ip.myterraformpublicip[each.key].id
  }
}
variable "instances" {
  default = ["vm-test-1", "vm-test-2", "vm-test-3"]
}

variable "nb_disks_per_instance" {
  default = 5
}
locals {
  vm_datadiskdisk_count_map = { for k in toset(var.instances) : k => var.nb_disks_per_instance }
  datadisk_lun_map = flatten([
    for vm_name, num_disks in local.vm_datadiskdisk_count_map : [
      for i in range(num_disks) : {
        datadisk_name = format("datadisk_%s_disk%02d", vm_name, i)
        lun           = i
      }
    ]
  ])
  luns = { for k in local.datadisk_lun_map : k.datadisk_name => k.lun }
}
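To make those locals easier to follow, here is a hypothetical debug output (my own addition, not part of the original answer) that shows what `local.datadisk_lun_map` evaluates to with the variable defaults above:

```hcl
# Optional: inspect the generated disk/LUN map with `terraform console`,
# or with a temporary output block like this one.
output "datadisk_lun_map_debug" {
  value = local.datadisk_lun_map
}

# With instances = ["vm-test-1", "vm-test-2", "vm-test-3"] and
# nb_disks_per_instance = 5, the map is a flat list of 15 objects such as:
#   { datadisk_name = "datadisk_vm-test-1_disk00", lun = 0 }
#   { datadisk_name = "datadisk_vm-test-1_disk01", lun = 1 }
#   ...
#   { datadisk_name = "datadisk_vm-test-3_disk04", lun = 4 }
```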

resource "azurerm_windows_virtual_machine" "example" {
  for_each = toset(var.instances)
  name                = "example-machine${each.key}"
  resource_group_name = data.azurerm_resource_group.example.name
  location            = data.azurerm_resource_group.example.location
  size                = "Standard_F2"
  admin_username      = "adminuser"
  admin_password      = "P@$$w0rd1234!"
  network_interface_ids = [azurerm_network_interface.myterraformnic[each.key].id]

  os_disk {
    name                 = "${each.key}-OSDISK"
    caching              = "ReadWrite"
    storage_account_type = "Standard_LRS"
    disk_size_gb         = 180
  }

  source_image_reference {
    publisher = "MicrosoftWindowsServer"
    offer     = "WindowsServer"
    sku       = "2016-Datacenter"
    version   = "latest"
  }
}

resource "azurerm_managed_disk" "example" {
  for_each             = toset([for j in local.datadisk_lun_map : j.datadisk_name])
  name                 = each.key
  location             = data.azurerm_resource_group.example.location
  resource_group_name  = data.azurerm_resource_group.example.name
  storage_account_type = "Standard_LRS"
  create_option        = "Empty"
  disk_size_gb         = 10
}

resource "azurerm_virtual_machine_data_disk_attachment" "example" {
  for_each           = toset([for j in local.datadisk_lun_map : j.datadisk_name])
  managed_disk_id    = azurerm_managed_disk.example[each.key].id
  virtual_machine_id = azurerm_windows_virtual_machine.example[element(split("_", each.key), 1)].id
  lun                = lookup(local.luns, each.key)
  caching            = "ReadWrite"
}
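As a side note (my own illustration, not from the original answer), the attachment resource recovers the owning VM from the generated disk name; this only works because the instance names themselves contain no underscores:

```hcl
# For each.key = "datadisk_vm-test-1_disk00":
#   split("_", each.key)             => ["datadisk", "vm-test-1", "disk00"]
#   element(split("_", each.key), 1) => "vm-test-1"  # the for_each key of the VM
#   lookup(local.luns, each.key)     => 0            # the LUN generated for this disk
```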

Output:

(screenshots of the created VMs, OS disks, and data disks omitted)


Update: Matching the OS disks to the VMs does not require another count inside the os_disk block.

For example, I created 2 VMs using Terraform, and by default 2 OS disks were also created, like below:

resource "azurerm_linux_virtual_machine" "myterraformvm" {
  count               = 2
  name                = "testpoc0${count.index + 1}"
  location            = data.azurerm_resource_group.example.location
  resource_group_name = data.azurerm_resource_group.example.name
  # values() is needed here because the NIC resource above uses for_each;
  # count.index (not count.index + 1) keeps each VM paired with its own NIC.
  network_interface_ids = [element(values(azurerm_network_interface.myterraformnic)[*].id, count.index)]
  size                = "Standard_DS1_v2"

  os_disk {
    # count is not required here: this is a child block inside the VM resource,
    # so as many VMs as are created, that many OS disks will also be created.
    name                 = "OsDisk${count.index + 1}"
    caching              = "ReadWrite"
    storage_account_type = "Premium_LRS"
  }

  source_image_reference {
    publisher = "Canonical"
    offer     = "UbuntuServer"
    sku       = "18.04-LTS"
    version   = "latest"
  }

  computer_name                   = "testpoc0${count.index + 1}"
  admin_username                  = "azureuser"
  disable_password_authentication = true

  admin_ssh_key {
    username   = "azureuser"
    public_key = file("~/.ssh/id_rsa.pub")
  }
}

(screenshots of the 2 VMs and their OS disks omitted)

Ansuman Bal
  • Hello @AnsumanBal-MT, first of all, thanks for jumping to the rescue. I have corrected the initial explanation to make explicit that the desired outcome is to create as many **OS disks** as there are **VMs**, i.e. if **10 VMs** are provisioned they will also require **10 OS disks** (one each). With that being said, is it possible to adapt your code somehow to provision **OS disks** instead of **managed disks**? – alexis19apl Jan 11 '22 at 08:48
  • hello @alexis19apl, if that's the case then there is no need to use count in the os_disk block; by default, if you use count on the VM block itself, you can just use `count.index` in the naming convention of the OS disk per VM. – Ansuman Bal Jan 11 '22 at 09:05
  • in that case you can use count as well; let me paste the code in the answer – Ansuman Bal Jan 11 '22 at 09:06
  • Hi @AnsumanBal-MT, thanks! It worked. I did notice that the **network interface count** starts from zero while the other resources start from one; is there a way to make the count start from one so that it matches the numbering order of the **VMs and OS disks**? Also, if possible, could you point me to documentation/videos/courses for learning more about Terraform? There are plenty of considerations that can break the code and make you invest a lot of time trying to figure it out. – alexis19apl Jan 11 '22 at 10:45
  • @alexis19apl, you can add +1 in the name, like `${count.index + 1}`. – Ansuman Bal Jan 11 '22 at 10:51
  • The answer from @AnsumanBal-MT solved this case, therefore it is solved! – alexis19apl Jan 28 '22 at 08:23
  • glad to be of help @alexis19apl :) – Ansuman Bal Jan 28 '22 at 08:24
  • How can I follow you @AnsumanBal-MT? Also, I work a lot with MSFT components; if I can help with anything, please let me know anytime. – alexis19apl Jan 28 '22 at 08:26