
I want to write tests for an adapter over Google Compute Engine and Google Container Engine. For example:

node = gce.managedInstanceGroup("myGroup").createNode()
// do something with node
node.delete()

When I use the resize operation on a managed instance group to add a node, I get back an operation like this:

{
  "kind": "compute#operation",
  "id": --,
  "name": "operation---",
  "zone": "https://www.googleapis.com/compute/v1/projects/--/zones/us-east1-d",
  "operationType": "compute.instanceGroupManagers.resize",
  "targetLink": "https://www.googleapis.com/compute/v1/projects/--/zones/us-east1-d/instanceGroupManagers/---grp",
  "targetId": "--",
  "status": "DONE",
  "user": "--@--.iam.gserviceaccount.com",
  "progress": 100,
  "insertTime": "2016-05-10T04:40:28.281-07:00",
  "startTime": "2016-05-10T04:40:28.283-07:00",
  "endTime": "2016-05-10T04:40:28.283-07:00",
  "selfLink": "https://www.googleapis.com/compute/v1/projects/--/zones/us-east1-d/operations/operation---"
}

I don't see a way to extract which nodes are being created by it. I can list all the nodes and see which ones have status CREATING, but that doesn't mean they were created by MY operation. I can't just delete them; I don't know where they came from.

Is there a way to determine exactly which nodes my resize operation created in the managed instance group?
If there isn't, how can I tell when my operation specifically has completed, and how can I clean up my nodes?
Alternatively, is there a way to add nodes to a managed instance group other than the resize operation?

nathan g
  • Why do you need to know which new nodes were created? – Grzenio May 11 '16 at 08:28
  • I want to run tests on a cluster. The tests may run concurrently with another instance of the tests from another developer, CI, or a nightly run. I don't want the overhead of creating a cluster for each test if I can avoid it. I want each test to be isolated, so I want to track the machines created for a test, use only those in the test, and clean up only those after the test. – nathan g May 15 '16 at 07:58
  • In this case I would suggest creating a new managed instance group for every instance of the tests (see the sketch after these comments). The overhead of creating an empty managed instance group is negligible (it is effectively just a record in a database) and you don't get charged for it (you pay only for the VMs). This is how managed instance groups were designed to work: one MIG per workload. – Grzenio May 15 '16 at 08:03
  • Grzenio: oh, that's great to know! I'll do that then. Thanks. – nathan g May 15 '16 at 08:04
  • Oh wait, I got confused. Since I'm actually working over GKE, and that's what creates the managed instance group, there probably IS an overhead for the Kubernetes master and things like that... – nathan g May 15 '16 at 08:05
  • I can't imagine the overhead being more than a couple of seconds, and it should really be subsecond. It is the creation of the VMs that takes time, but you would pay that either way... Also, I would be shocked if you got charged for it (you definitely don't pay for MIGs, only for the underlying instances). – Grzenio May 15 '16 at 08:09
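
Following that suggestion, a throwaway group per test run could look something like this. This is only a minimal sketch using the Python client for the Compute Engine API; the project, zone, group, and template names are placeholder assumptions, and it presumes an instance template such as test-node-template already exists:

from googleapiclient import discovery

compute = discovery.build('compute', 'v1')

# Placeholder names for illustration.
PROJECT, ZONE = 'my-project', 'us-east1-d'
GROUP = 'test-run-1234'  # unique per test run

# Create a group that exists only for this test run.
compute.instanceGroupManagers().insert(
    project=PROJECT, zone=ZONE,
    body={
        'name': GROUP,
        'instanceTemplate': 'global/instanceTemplates/test-node-template',
        'baseInstanceName': GROUP,
        'targetSize': 1,
    }).execute()

# ... run the isolated test against this group's instances ...

# Deleting the manager also deletes the instances it created.
compute.instanceGroupManagers().delete(
    project=PROJECT, zone=ZONE, instanceGroupManager=GROUP).execute()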

2 Answers


To tell which nodes were added, you can list the instances in the instance group before and after the resize and see which nodes exist afterwards that didn't exist before.
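
For example, here is a minimal sketch with the Python client for the Compute Engine API (the project, zone, and group names are placeholders). Note that it is inherently racy if anything else resizes the group concurrently, which is exactly the caveat raised in the comments below:

from googleapiclient import discovery

compute = discovery.build('compute', 'v1')
PROJECT, ZONE, GROUP = 'my-project', 'us-east1-d', 'my-grp'  # placeholders

def list_instance_urls():
    """Return the set of instance URLs currently in the group."""
    resp = compute.instanceGroupManagers().listManagedInstances(
        project=PROJECT, zone=ZONE, instanceGroupManager=GROUP).execute()
    return {mi['instance'] for mi in resp.get('managedInstances', [])}

before = list_instance_urls()
compute.instanceGroupManagers().resize(
    project=PROJECT, zone=ZONE, instanceGroupManager=GROUP,
    size=len(before) + 1).execute()
# ... wait for the resize to take effect (see the polling below) ...
after = list_instance_urls()
new_instances = after - before  # whatever appeared since the first listing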

To tell when the operation is done, poll the operation returned by the resize request: you can do a GET directly on the URL in the "selfLink" field. In the example you show, it is already done, judging by its "status" field.
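
In code, the polling might look like this (a sketch; it re-fetches the operation by name via zoneOperations().get(), which is equivalent to a GET on the selfLink URL):

import time

op = compute.instanceGroupManagers().resize(
    project=PROJECT, zone=ZONE, instanceGroupManager=GROUP, size=3).execute()

while op['status'] != 'DONE':
    time.sleep(2)  # simple fixed interval; tune or back off as needed
    op = compute.zoneOperations().get(
        project=PROJECT, zone=ZONE, operation=op['name']).execute()

if 'error' in op:
    raise RuntimeError('resize failed: %s' % op['error'])

Bear in mind that for a managed instance group, DONE on the resize operation only means the new target size was accepted; the VMs themselves may still be creating (see the other answer).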

I'm not aware of a way to add nodes to a managed instance group other than the resize operation, and I'm honestly not sure why you'd want one.

Alex Robinson
  • Polling before and after is not a good idea; there could be other clients adding nodes, or multiple operations occurring concurrently. – nathan g May 11 '16 at 07:09
  • 1
    I'm a little confused why that would matter, given that all instances created by a managed instance group are identically configured. If you want to treat the VMs differently based on who created them, it sounds like you'd want them to be in separate instance groups, or not in instance groups at all. – Alex Robinson May 11 '16 at 07:38
  • They are identically configured, but they don't have the same state; different things run on each instance. So if in my test I start a node and then run something, I would like to run it on that node, and then terminate that node. I want some level of isolation, preferably without starting a new cluster per test. – nathan g May 15 '16 at 08:00

Managed instance groups operate with 'intent-based' semantics, i.e. you tell the instance group what your desired state of the group is and the group will optimally get to that state. The only thing the operations do is set the target state. In the example operation you pasted you can see it has "status": "DONE". Managed instance groups do not track which instances were created as a result of which operation, because that gets really messy conceptually when you have multiple (sometimes contradictory) operations running in parallel. Please remember that many people have autohealers and autoscalers connected, which independently change the target state.

The typical way to get newly created instances registered/configured/etc. is to define a startup script in the instance template that handles everything automatically. If you really need the list of instance names then, as Alex Robinson wrote, the only way is to list all instances before the operation and after it, and calculate the diff.
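
For instance, a template that makes every new instance register itself on boot could be defined roughly like this. A sketch only: the machine type, image, script path, and names are placeholder assumptions:

# Every instance the group creates runs the startup script, so
# registration/configuration happens without tracking instance names.
template_body = {
    'name': 'test-node-template',
    'properties': {
        'machineType': 'n1-standard-1',
        'disks': [{
            'boot': True,
            'autoDelete': True,
            'initializeParams': {
                'sourceImage':
                    'projects/debian-cloud/global/images/family/debian-8',
            },
        }],
        'networkInterfaces': [{'network': 'global/networks/default'}],
        'metadata': {'items': [{
            'key': 'startup-script',
            'value': '#! /bin/bash\n/opt/test-harness/register.sh',  # placeholder
        }]},
    },
}
compute.instanceTemplates().insert(project=PROJECT, body=template_body).execute()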

If you are running only one operation at a time, you can poll the list of managed instances and wait until all of them are running. There is also a handy gcloud command, gcloud compute instance-groups managed wait-until-stable, that does exactly that.
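
Programmatically, the equivalent is to poll listManagedInstances() until no instance has a pending action, roughly like this (a sketch):

import time

def wait_until_stable(compute, project, zone, group, poll_seconds=5):
    """Block until no instance in the group has a pending action."""
    while True:
        resp = compute.instanceGroupManagers().listManagedInstances(
            project=project, zone=zone, instanceGroupManager=group).execute()
        actions = {mi.get('currentAction')
                   for mi in resp.get('managedInstances', [])}
        if actions <= {'NONE'}:  # an empty group also counts as stable
            return
        time.sleep(poll_seconds)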

If you need better manual control over the instances in your instance group, you can always create a normal (unmanaged) instance group: https://cloud.google.com/compute/docs/instance-groups/unmanaged-groups

Grzenio
  • It's as I suspected, unfortunately. I'm actually trying to work over Kubernetes in Google Container Engine, and GKE creates the managed instance group, so I can't change the type of the group to unmanaged. And I can't manage the instances at my desired granularity at this level of abstraction. Sigh. – nathan g May 15 '16 at 08:02
  • I don't really know Kubernetes very well. However, managed instance groups were designed to be used for one 'workload' only (as already commented above). It is cheap to create an empty MIG, and it is really easy to delete it together with all its instances. I am pretty sure Kubernetes has a corresponding concept. – Grzenio May 15 '16 at 08:06