
Is it possible to provision an AKS cluster with Terraform where each agent VM has additional storage attached?

Currently, PV/PVC based on the SMB protocol is a bit of a joke, so my plan is to use Rook or GlusterFS. I would then like to provision my cluster with Terraform so that each of my nodes also handles a proper amount of storage, instead of creating separate regular nodes just to handle it.

Best Regards.

jelmer
MrHetii

2 Answers


I'm afraid you cannot achieve that in AKS through Terraform. AKS is a managed service, so you cannot customize much of it yourself.

Given your requirements, I would suggest using aks-engine, which lets you manage the cluster yourself, even the master nodes. You can use the diskSizesGB property in agentPoolProfiles. The description:

Describes an array of up to 4 attached disk sizes. Valid disk size values are between 1 and 1024.

More details are in clusterdefinitions. You can also take a look at the example for diskSizesGB here.
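
For illustration, a minimal agentPoolProfiles entry in an aks-engine cluster definition (the JSON apimodel) could look like the sketch below; the pool name, count, and VM size are placeholder values, and each entry in diskSizesGB becomes an extra empty data disk attached to every VM in the pool:

{
  "agentPoolProfiles": [
    {
      "name": "agentpool1",
      "count": 3,
      "vmSize": "Standard_DS2_v2",
      "diskSizesGB": [512, 512]
    }
  ]
}

With this, every agent VM would get two raw 512 GB data disks that Rook or GlusterFS can consume directly.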

Charles Xu
  • Thx a lot for this tip. Btw, in the regular AKS service there is no charge for cluster management, i.e. the master nodes. Is it the same when I use aks-engine? – MrHetii Oct 23 '19 at 08:46
  • @MrHetii No, the nodes in aks-engine are not free; you pay for them just like regular VMs. Additionally, you can mark the answer if it helps you solve the problem. – Charles Xu Oct 23 '19 at 08:50

As using aks-engine just to get extra storage was a bit of a waste of resources for me, I finally found a way to add extra storage to each AKS node, entirely in Terraform:

resource "azurerm_subnet" "subnet" {
  name                = "aks-subnet-${var.name}"
  resource_group_name = "${azurerm_resource_group.rg.name}"
  # Uncomment this line if you use terraform < 0.12
  network_security_group_id = "${azurerm_network_security_group.sg.id}"
  address_prefix            = "10.1.0.0/24"
  virtual_network_name      = "${azurerm_virtual_network.network.name}"
}

resource "azurerm_kubernetes_cluster" "cluster" {
  ...
  agent_pool_profile {
    ...
    count           = "${var.kubernetes_agent_count}"
    vnet_subnet_id  = "${azurerm_subnet.subnet.id}"
  }
}

# Find all agent nodes by extracting their VM names from the NICs attached
# to the subnet assigned to the cluster:
data "azurerm_virtual_machine" "aks-node" {
  # This data source represents each node created for the AKS cluster.
  # Each entry in ip_configurations is an ID like
  # "/subscriptions/.../networkInterfaces/<nic-name>/ipConfigurations/...",
  # so element 8 of the "/"-split is the NIC name; removing its "nic-"
  # part yields the VM name.
  count = var.kubernetes_agent_count
  name  = distinct([for x in azurerm_subnet.subnet.ip_configurations : replace(element(split("/", x), 8), "/nic-/", "")])[count.index]

  resource_group_name = azurerm_kubernetes_cluster.cluster.node_resource_group

  depends_on = [
    azurerm_kubernetes_cluster.cluster
  ]
}

# Create one managed disk per AKS node:
resource "azurerm_managed_disk" "aks-extra-disk" {
  count                = var.kubernetes_agent_count
  name                 = "${azurerm_kubernetes_cluster.cluster.name}-disk-${count.index}"
  location             = azurerm_kubernetes_cluster.cluster.location
  resource_group_name  = azurerm_kubernetes_cluster.cluster.resource_group_name
  storage_account_type = "Standard_LRS"
  create_option        = "Empty"
  disk_size_gb         = 10
}

# Attach one disk to each agent node:
resource "azurerm_virtual_machine_data_disk_attachment" "aks-disk-attachment" {
  count              = var.kubernetes_agent_count
  managed_disk_id    = azurerm_managed_disk.aks-extra-disk[count.index].id
  virtual_machine_id = data.azurerm_virtual_machine.aks-node[count.index].id
  lun                = 10
  caching            = "ReadWrite"
}
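
To sanity-check the result, you can map each node to its disk in an output; this is a minimal sketch assuming the resources above, with a hypothetical output name:

# Hypothetical output mapping each agent VM name to its extra disk ID:
output "aks_extra_disks" {
  value = {
    for i in range(var.kubernetes_agent_count) :
    data.azurerm_virtual_machine.aks-node[i].name => azurerm_managed_disk.aks-extra-disk[i].id
  }
}

After terraform apply, each disk shows up on its node as an unformatted device (on Azure Linux VMs typically under /dev/disk/azure/scsi1/lun10), ready to be consumed by Rook or GlusterFS.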
MrHetii
  • "AKS is a managed service, and manipulation of the IaaS resources is not supported.", even if you can add the data disk to the VM of the AKS. It will cause the issue, so when you do this, you need to bear the corresponding risks. – Charles Xu Nov 19 '19 at 07:17