
I’m using openstack_compute_instance_v2 to create instances in OpenStack. The resource has a lifecycle setting create_before_destroy = true, and it works just fine when I e.g. change the volume size, where instances need to be replaced.
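
Roughly, the relevant part of the resource looks like this (a simplified sketch; the names, image ID and network here are placeholders, not my real configuration):

variable "flavor_name" {
  default = "FLAVOR_1"
}

resource "openstack_compute_instance_v2" "server" {
  count       = 2
  name        = "server-${count.index}"
  image_id    = "ad091b52-742f-469e-8f3c-fd81cadf0743"
  flavor_name = var.flavor_name

  network {
    name = "my_network"
  }

  # create_before_destroy works as expected for changes that force replacement
  # (e.g. volume size), but a flavor change is applied as an in-place resize.
  lifecycle {
    create_before_destroy = true
  }
}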

But when I do a flavor change, which can be done with the resize instance operation in OpenStack, Terraform does just that, an in-place resize, without any regard for HA. All instances in the cluster are unavailable for 20-30 seconds before the resize finishes.

How can I change this behaviour?

Something like serial from Ansible, or some other option, would come in handy, but I can’t find anything. I’m looking for any solution that would let me say “at least half of the instances need to be online at all times”.

Terraform version: 12.20.

TF plan: https://pastebin.com/ECfWYYX3

  • What does the plan output look like when you make this change? Can you edit it into your question ideally with some Terraform code creating a [mcve] please? – ydaetskcoR Apr 16 '20 at 08:40
  • I've included the terraform plan in the question. It might be hard for me to create a minimal reproducible example, as we have a lot of stuff added just to make it work with our OpenStack and GitLab pipeline settings. But in the plan, you can see that there are 2 instances. The only change is the flavor, and they are both unavailable at the same time. – n0zz Apr 16 '20 at 09:22

2 Answers


The Openstack Terraform provider knows that it can update the flavor by using a resize API call instead of having to destroy the instance and recreate it.

Unfortunately there is currently no lifecycle option that forces an otherwise in-place change to go through a destroy/create (or create/destroy, when combined with the create_before_destroy lifecycle customisation), so you can't easily force Terraform to replace the instance instead.

One option in these circumstances is to find a parameter that can't be modified in place (these are marked with the ForceNew flag on the resource's schema in the underlying provider source code) and then have a change to the mutable parameter also cascade a change to the immutable parameter.

A common example here is replacing an AWS autoscaling group when the launch template (which is mutable, unlike the immutable launch configurations) changes, so you can roll out the changes immediately instead of waiting for the ASG to slowly replace the instances over time. A simple example would look something like this:

variable "ami_id" {
  default = "ami-123456"
}

resource "random_pet" "ami_random_name" {
  keepers = {
    # Generate a new pet name each time we switch to a new AMI id
    ami_id = var.ami_id
  }
}

resource "aws_launch_template" "example" {
  name_prefix            = "example-"
  image_id               = var.ami_id
  instance_type          = "t2.small"
  vpc_security_group_ids = ["sg-123456"]
}

resource "aws_autoscaling_group" "example" {
  name                = "${aws_launch_template.example.name}-${random_pet.ami_random_name.id}"
  vpc_zone_identifier = ["subnet-123456"]
  min_size            = 1
  max_size            = 3

  launch_template {
    id      = aws_launch_template.example.id
    version = "$Latest"
  }

  lifecycle {
    create_before_destroy = true
  }
}

In the above example, a change to the AMI triggers a new random pet name, which changes the ASG name. The ASG name is an immutable field, so this triggers replacing the ASG. Because the ASG has the create_before_destroy lifecycle customisation, Terraform will create a new ASG, wait for the minimum number of instances to pass EC2 health checks and only then destroy the old ASG.

For your case you can also use the name parameter on the openstack_compute_instance_v2 resource as that is an immutable field as well. So a basic example might look like this:

variable "flavor_name" {
  default = "FLAVOR_1"
}

resource "random_pet" "flavor_random_name" {
  keepers = {
    # Generate a new pet name each time we switch to a new flavor
    flavor_name = var.flavor_name
  }
}

resource "openstack_compute_instance_v2" "example" {
  name            = "example-${random_pet.flavor_random_name.id}"
  image_id        = "ad091b52-742f-469e-8f3c-fd81cadf0743"
  flavor_name     = var.flavor_name
  key_pair        = "my_key_pair_name"
  security_groups = ["default"]

  metadata = {
    this = "that"
  }

  network {
    name = "my_network"
  }
}
  • Okay, that looked promising, until I saw that it requires some random naming of the instances, which I can't do, because the naming schema and formats are already decided and many other things depend on instance names, hostnames etc. I'm going to check whether there are other immutables that I can use for this. I'm also reading about blue-green and canary deployments with Terraform: https://www.hashicorp.com/blog/terraform-feature-toggles-blue-green-deployments-canary-test/ – n0zz Apr 16 '20 at 10:09
  • But it's a bit of overkill in my opinion. It would be nice to have something simpler. Also, with this solution everyone on the team would have to remember that you can't simply change flavors, you need to go for a blue-green deployment. – n0zz Apr 16 '20 at 10:12
  • In that case you might be stuck unless you can make spurious changes to other immutable fields such as the user data or personality. Or you do blue/green at a higher level, with configuration to flip back and forth or with an external orchestrator, which adds complexity. Is the naming schema completely fixed? You can't add suffixes to things for any reason? – ydaetskcoR Apr 16 '20 at 10:17
  • Oh. I've dug into the openstack provider sources and the name field is also ForceNew=false. Makes sense, since in openstack you can simply rename a running instance without issues. Reading about personality, but I can't find any good explanation of what it does/means and how it works. I may as well try changing user_data, as it might be the easiest way to go here. – n0zz Apr 16 '20 at 10:24
  • I don't use OpenStack so I might be missing something to complete the answer here. If you find something that works directly along the lines of the answer, an edit would be appreciated as it will help other users with your same issue. If you go a completely different route (e.g. use a conditional variable to do blue-green, or an orchestrator outside of Terraform) then a separate answer would be better. – ydaetskcoR Apr 16 '20 at 10:41

So, at first I started digging into how to use a random instance name, as @ydaetskcoR proposed.

Name wasn't an option, both because in openstack it is a mutable parameter and because I have an already-decided naming schema which I can't change.

I started looking for other parameters that I could modify to force the instance to be recreated instead of modified in place. I found personality: https://www.terraform.io/docs/providers/openstack/r/compute_instance_v2.html#instance-with-personality

But it didn't work either, mainly because personality is apparently no longer supported:

The use of personality files is deprecated starting with the 2.57 microversion. Use metadata and user_data to customize a server instance. https://docs.openstack.org/api-ref/compute/

I'm not sure whether Terraform doesn't support it or there are other issues, but I went with user_data instead. I already use user_data in the compute instance module, so adding some flavor data there shouldn't be an issue.

So, within user_data I've added the following:

  user_data          = "runcmd:\n - echo ${var.host["flavor"]} > /tmp/tf_flavor"

No need for random pet names, no need to change instance names. Just change their "personality" by adding the flavor name somewhere. This does force the instance to be recreated when the flavor changes.
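
Wired into the compute instance resource, it looks roughly like this (a simplified sketch; var.host and the other values stand in for our module's real inputs):

variable "host" {
  default = {
    flavor = "FLAVOR_1"
  }
}

resource "openstack_compute_instance_v2" "server" {
  name            = "server-0"
  image_id        = "ad091b52-742f-469e-8f3c-fd81cadf0743"
  flavor_name     = var.host["flavor"]
  security_groups = ["default"]

  # Embedding the flavor in user_data means a flavor change also changes
  # user_data, which is an immutable (ForceNew) field on this resource,
  # so Terraform replaces the instance instead of resizing it in place.
  user_data = "runcmd:\n - echo ${var.host["flavor"]} > /tmp/tf_flavor"

  network {
    name = "my_network"
  }

  lifecycle {
    create_before_destroy = true
  }
}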

So, instead of simply:

  # module.instance.openstack_compute_instance_v2.server[0] will be updated in-place
  ~ resource "openstack_compute_instance_v2" "server" {

I have now:

-/+ destroy and then create replacement
+/- create replacement and then destroy

Terraform will perform the following actions:

  # module.instance.openstack_compute_instance_v2.server[0] must be replaced
+/- resource "openstack_compute_instance_v2" "server" {