
Long story short: I need networking between projects so that each project can be billed separately.

I'd like to reach all the VMs in different projects from a single point that I will use for provisioning systems (let's call it coordinator node).

It looks like VPC Network Peering is a perfect solution for this. Unfortunately, one of the existing networks is a "legacy" network. Here's what the Google docs state about legacy networks:

From "About legacy networks": "Note: Legacy networks are not recommended. Many newer GCP features are not supported in legacy networks."

OK, so naturally the question arises: how do you migrate off a legacy network? The documentation does not address this topic. Is it not possible?

I have a bunch of VMs, and I'd be able to shut them down one by one:

  1. shutdown
  2. change something
  3. restart

Unfortunately, it does not seem possible to change the network, even when the VM is down.

EDIT: It has been suggested to recreate the VMs while keeping the same disks. I would still need a way to bridge the legacy network with the new VPC network to make the migration smooth. Any thoughts on how to do that using the GCE toolset?
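For reference, one way to temporarily bridge the two networks is a Classic (route-based) Cloud VPN tunnel between them, since legacy networks do support Cloud VPN. Below is a rough gcloud sketch for the legacy side only; all names, regions, addresses, and ranges are placeholders, the mirror-image setup is needed on the VPC side, and firewall rules must allow traffic from the peer ranges:

```shell
# Reserve a static IP and create a VPN gateway in the legacy network
# (all names/regions/ranges below are placeholders)
gcloud compute addresses create legacy-vpn-ip --region=us-central1
gcloud compute target-vpn-gateways create legacy-gw \
    --network=legacy-net --region=us-central1

# Forwarding rules that steer IPsec traffic (ESP, UDP 500/4500) to the gateway
gcloud compute forwarding-rules create legacy-gw-esp --region=us-central1 \
    --ip-protocol=ESP --address=legacy-vpn-ip --target-vpn-gateway=legacy-gw
gcloud compute forwarding-rules create legacy-gw-udp500 --region=us-central1 \
    --ip-protocol=UDP --ports=500 --address=legacy-vpn-ip --target-vpn-gateway=legacy-gw
gcloud compute forwarding-rules create legacy-gw-udp4500 --region=us-central1 \
    --ip-protocol=UDP --ports=4500 --address=legacy-vpn-ip --target-vpn-gateway=legacy-gw

# Route-based tunnel toward the new VPC's gateway IP, then a static route
# sending the VPC's range through the tunnel
gcloud compute vpn-tunnels create legacy-to-vpc --region=us-central1 \
    --peer-address=VPC_GATEWAY_IP --shared-secret=SECRET \
    --target-vpn-gateway=legacy-gw \
    --local-traffic-selector=0.0.0.0/0 --remote-traffic-selector=0.0.0.0/0
gcloud compute routes create route-to-vpc --network=legacy-net \
    --destination-range=10.128.0.0/16 \
    --next-hop-vpn-tunnel=legacy-to-vpc --next-hop-vpn-tunnel-region=us-central1
```

With the tunnel up in both directions, application servers left in the legacy network can keep reaching databases already moved to the VPC (and vice versa) by internal IP while VMs are migrated one by one.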

  • Shared VPC is another option for separating billing for projects. https://cloud.google.com/vpc/docs/shared-vpc – Dagang Dec 08 '17 at 21:39
  • Legacy networks in a host project are not shared with service projects. The shared VPC networks in a host project must be VPC networks. – Julius Žaromskis Dec 11 '17 at 08:58
  • 1
    I'm in a similar situation and I'm curious what your migration path ultimately looked like. Would you be able to post an update or answer that describes it? – Sammitch Jul 12 '18 at 17:45
  • 1
  • Sorry to disappoint, but I haven't done the migration. Seems like too much trouble for what it's worth to me. I was contemplating using some kind of IPsec VPN tunnel and bringing the VMs over one by one, as suggested by Kluyg below. – Julius Žaromskis Jul 13 '18 at 08:01
  • Use the VM's external IP during migration (+firewall rule). After migration, re-attach the VM NIC to the VPC network (requires VM shutdown) and revert to using internal name/IP. – rustyx Jun 26 '21 at 15:42

1 Answer


One possible solution - for each VM in the legacy network:

  1. Get VM parameters (API get method)
  2. Delete VM without deleting PD (persistent disk)
  3. Create VM in the new VPC network using parameters from step 1 (and existing persistent disk)

This way, stop-change-start is not so different from delete-recreate-with-changes. It's possible to write a script that fully automates this (migrating a whole network). I wouldn't be surprised if someone has already done that.
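The three steps above can be sketched with gcloud. This is a minimal sketch, assuming a single boot disk whose name matches the instance; all names, zones, and machine types are placeholders, and the remaining settings should be copied from the describe output:

```shell
# 1. Capture the VM's current configuration (placeholder names; adjust zone/project)
gcloud compute instances describe my-vm --zone=us-central1-a --format=json > my-vm.json

# 2. Make sure the boot disk survives deletion, then delete only the VM
gcloud compute instances set-disk-auto-delete my-vm --zone=us-central1-a \
    --disk=my-vm --no-auto-delete
gcloud compute instances delete my-vm --zone=us-central1-a --keep-disks=all

# 3. Recreate the VM in the new VPC network, attaching the existing persistent disk
gcloud compute instances create my-vm --zone=us-central1-a \
    --network=my-vpc --subnet=my-subnet \
    --disk=name=my-vm,boot=yes \
    --machine-type=n1-standard-1   # copy machine type, tags, etc. from my-vm.json
```

Note that the VM will get a new internal (and, unless reserved, external) IP address in the new network, so anything that referenced the old addresses needs updating.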

UPDATE

The https://github.com/googleinterns/vm-network-migration tool automates the above process, and it also supports migrating a whole instance group, load balancer, etc. Check it out.

  • Thank you for the answer. I would still need a way to temporarily bridge the legacy network with the new VPC network to make the migration smooth. For example, application servers would need to reach databases. Any thoughts on how to do that using the GCE toolset? – Julius Žaromskis Dec 13 '17 at 08:20
  • @JuliusŽaromskis what approach you finally took for the migration? I'm in a similar situation. – Mohit Gupta Apr 25 '19 at 17:08
  • 1
    @MohitGupta I haven't. It seemed like too much trouble to recreate all the VMs. – Julius Žaromskis Apr 26 '19 at 08:20
  • Is there still a way to bridge legacy network with new VPC network? I am looking for documentations with no luck to use data fusion & connect to a vm/db (that's running under legacy vpc network) – Logan Mar 30 '20 at 07:19