
I am using GCP preemptible VMs for my workers. However, GCP limits in-use external addresses to 8. I requested an increase, but it was rejected. I tried to use their NAT, but it seems to be a 1:1 NAT. I was under the impression that NATs allow many-to-one external connections.

Ironically, I need "external" access to hit an API hosted on Cloud Run, which is only reachable via public IP addresses.

Should I go back to AWS? I like preemptible VMs since I don't have to "bid" on a spot price like with AWS. I just want to use the reduced price for a VM for my workloads, which last between 5 minutes and 1 hour. I don't want to resubmit a spot request over and over and have it potentially fail. Plus, GCP VMs come up in 30-40 s compared to 1 minute or more for AWS EC2 instances.

Let me know if you have any suggestions or if I'm doing something wrong.

Thanks in advance!

Gary Leong
  • Configure a NAT Gateway to provide your spot instances Internet access. – John Hanley Nov 09 '21 at 07:19
  • Thanks for the input. I think I tried that. When I create a NAT gateway, it creates a 1:1 correspondence to each VM. If I create 8 VMs, it creates 8 public IPs, one mapped to each VM. I then run into the in-use addresses problem. – Gary Leong Nov 10 '21 at 00:11
  • A NAT Gateway uses one public IP address. You configured something else. A One-to-One NAT is not a NAT Gateway (the service/product name). – John Hanley Nov 10 '21 at 00:22
  • Sorry, I'm not sure if I used the right product. I'm more experienced with AWS. However, I used what GCP calls Cloud NAT. https://cloud.google.com/nat/docs/ports-and-addresses. I saw in the logs that it tries to map 1:1. – Gary Leong Nov 10 '21 at 02:08
  • "Automatic NAT IP address allocation. When you select this option, or choose Google Cloud defaults, Cloud NAT automatically adds regional external IP addresses to your gateway based on the number of VMs that use the gateway and the number of ports reserved for each VM. " – Gary Leong Nov 10 '21 at 02:09
  • I also read that on GKE it works differently since it uses ports. The ports allow more VMs to be masked. Maybe I'm reading this wrong. I just know it tried to allocate 1:1. I assigned a public IP (manual) to the NAT. Once it gets past 1 VM, it complains that it needs another external IP to assign to the subsequent VM. – Gary Leong Nov 10 '21 at 02:10
  • @GaryLeong Assuming the API hosted on Cloud Run is accessible only over an external IP and the preemptible VM workers are the ones connecting to the external IP of the API, a NAT gateway can be used. A NAT gateway allows multiple private VMs in a subnet to access external IPs. Please elaborate on the use case if I have not understood the scenario correctly. – Ramesh kollisetty Nov 10 '21 at 03:31
  • Essentially, I created a Cloud NAT with one designated external IP (not auto). I then created a bunch of VMs, but only the VM that got mapped to this one designated IP came up. The others failed because it said the designated IP was already taken. Let me test it further. Maybe I made an error with the Terraform template. I'll try it manually. – Gary Leong Nov 10 '21 at 04:32
  • When a NAT gateway is used, the worker VMs don't require a public IP (external IP). From your comment I noticed that a single manually designated IP is being used for both the NAT gateway and the VM worker nodes. This will obviously result in an IP conflict. You may want to modify the Terraform template so that the worker VMs don't have a public IP and the external IP is mapped only to the NAT gateway. Please refer to the [link](https://cloud.google.com/nat/docs/overview) for more about NAT gateways. – Ramesh kollisetty Nov 11 '21 at 10:57
  • Thanks Ramesh! That was the issue. I misread and thought the nat_ip needed to be specified for all the VMs. That nat_ip is for a 1:1 mapping. – Gary Leong Nov 14 '21 at 00:32
  • @GaryLeong Thanks for your update. I will post the same as an answer; please upvote & accept it. – Ramesh kollisetty Nov 15 '21 at 12:14

1 Answer


When a NAT gateway is used, the worker VMs don't require a public IP (external IP). From your comment I noticed that a single manually designated IP is being used for both the NAT gateway and the VM worker nodes. This will obviously result in an IP conflict. You may want to modify the Terraform template so that the worker VMs don't have a public IP and the external IP is mapped only to the NAT gateway. Please refer to the [link](https://cloud.google.com/nat/docs/overview) for more about NAT gateways.
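A minimal Terraform sketch of that layout might look something like the following (the resource names, region, network, machine type, and image are placeholders I've assumed, not the asker's actual template). The key points are that the single manually reserved external IP is attached only to the Cloud NAT gateway via nat_ips, and the worker instances omit the access_config block so they have no external IP of their own:

```hcl
# Regional external IP reserved for the Cloud NAT gateway only.
resource "google_compute_address" "nat_ip" {
  name   = "nat-ip"
  region = "us-central1"
}

# Cloud Router that the NAT gateway attaches to.
resource "google_compute_router" "router" {
  name    = "nat-router"
  network = "default"
  region  = "us-central1"
}

# Cloud NAT gateway using the one manually reserved IP for all workers.
resource "google_compute_router_nat" "nat" {
  name                               = "nat-gateway"
  router                             = google_compute_router.router.name
  region                             = "us-central1"
  nat_ip_allocate_option             = "MANUAL_ONLY"
  nat_ips                            = [google_compute_address.nat_ip.self_link]
  source_subnetwork_ip_ranges_to_nat = "ALL_SUBNETWORKS_ALL_IP_RANGES"
}

# Preemptible worker VMs with no external IP; outbound traffic egresses via Cloud NAT.
resource "google_compute_instance" "worker" {
  count        = 8
  name         = "worker-${count.index}"
  machine_type = "e2-medium"
  zone         = "us-central1-a"

  scheduling {
    preemptible         = true
    automatic_restart   = false
    on_host_maintenance = "TERMINATE"
  }

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-11"
    }
  }

  network_interface {
    network = "default"
    # No access_config block here, so the VM gets no external IP
    # and does not count against the in-use external address quota.
  }
}
```

With this arrangement, any number of workers in the subnet can reach the Cloud Run API's public address through the single NAT IP, subject to Cloud NAT's per-VM port allocation, and the 8-address quota is no longer a constraint on worker count.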