
I have two machines in the same VPC (in the same subnet range) in GCP. I want to ping a MAC address from one instance to another (i.e. a layer 2 connection). Is this supported in GCP?

If not, is GRE tunnel supported between the two VMs in the above configuration or any other tunneling?

My main goal is to establish a layer 2 connection.

  • AFAIK, you can't. The network is software managed and handles only layer 3 and above. – guillaume blaquiere Oct 07 '20 at 12:14
  • ICMP (ping) is supported in VPCs. ICMP is OSI Layer 3. VPCs support GRE. Layer 2 does not have the concept of "connections". What are you trying to accomplish? – John Hanley Oct 07 '20 at 14:44
  • I wanted to establish a VLAN-like environment where I could reach another VM's interface just by its MAC address. Is this possible? – Sankalpa Timilsina Oct 07 '20 at 15:22
  • I am essentially trying to run a packet generator on the 1st VM and set the packet's destination to the MAC of another VM's interface, without setting the destination IP. – Sankalpa Timilsina Oct 07 '20 at 15:27
  • I do not know the answer. However, I expect this to fail. 1) Part of the network stack is virtualized. 2) Google VPCs do not allow broadcast or multicast packets. How will you discover peers? 3) Allowing you to address packets at the MAC level might be a security vulnerability (which would be blocked). 4) For your use case, write some software and test. Linux does have interfaces (APIs) at Layer 2. – John Hanley Oct 07 '20 at 17:18
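As the last comment notes, Linux does expose layer 2 APIs (`AF_PACKET` raw sockets), so a packet generator can build a frame addressed purely by MAC. A minimal Python sketch of what such a generator would construct, using hypothetical MAC addresses; per the comments above, the VPC fabric is expected to drop a frame like this, so the actual send is left commented out:

```python
import struct

def build_ethernet_frame(dst_mac: str, src_mac: str,
                         ethertype: int, payload: bytes) -> bytes:
    """Build a raw Ethernet II frame: dst MAC, src MAC, EtherType, payload."""
    def mac_bytes(mac: str) -> bytes:
        return bytes(int(octet, 16) for octet in mac.split(":"))
    header = mac_bytes(dst_mac) + mac_bytes(src_mac) + struct.pack("!H", ethertype)
    return header + payload

# Hypothetical MAC addresses for illustration only.
frame = build_ethernet_frame("42:01:0a:80:00:03", "42:01:0a:80:00:02",
                             0x88B5,  # IEEE local experimental EtherType
                             b"hello-l2")

# Sending requires root on Linux. On a plain GCE NIC this is the part
# expected to fail silently, since the SDN only forwards L3 traffic:
# import socket
# s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW)
# s.bind(("ens4", 0))
# s.send(frame)
```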

2 Answers


You cannot get L2 connectivity out of the box. However, you can set up VXLAN or another kind of tunnel between the VMs if you really need L2 connectivity for some odd reason. I've written a blog post about how to do this: https://samos-it.com/posts/gce-vm-vxlan-l2-connectivity.html (copy-pasting the main pieces below).

  1. Create the VMs

In this section you will create two Ubuntu 20.04 VMs.

Let's start by creating vm-1

gcloud compute instances create vm-1 \
          --image-family=ubuntu-2004-lts --image-project=ubuntu-os-cloud \
          --zone=us-central1-a \
          --boot-disk-size 20G \
          --boot-disk-type pd-ssd \
          --can-ip-forward \
          --network default \
          --machine-type n1-standard-2

Repeat the same command creating vm-2 this time:

gcloud compute instances create vm-2 \
          --image-family=ubuntu-2004-lts --image-project=ubuntu-os-cloud \
          --zone=us-central1-a \
          --boot-disk-size 20G \
          --boot-disk-type pd-ssd \
          --can-ip-forward \
          --network default \
          --machine-type n1-standard-2

Verify that SSH to both VMs is available and up. You might need to be patient.

gcloud compute ssh root@vm-1 --zone us-central1-a --command "echo 'SSH to vm-1 succeeded'"
gcloud compute ssh root@vm-2 --zone us-central1-a --command "echo 'SSH to vm-2 succeeded'"
  2. Set up the VXLAN mesh between the VMs

In this section, you will create the VXLAN mesh between vm-1 and vm-2 that you just created.

Create bash variables that will be used for setting up the VXLAN mesh

VM1_VPC_IP=$(gcloud compute instances describe vm-1 \
               --format='get(networkInterfaces[0].networkIP)')
VM2_VPC_IP=$(gcloud compute instances describe vm-2 \
               --format='get(networkInterfaces[0].networkIP)')
echo $VM1_VPC_IP
echo $VM2_VPC_IP

Create the VXLAN device and mesh on vm-1

gcloud compute ssh root@vm-1 --zone us-central1-a  << EOF
set -x
ip link add vxlan0 type vxlan id 42 dev ens4 dstport 0
bridge fdb append to 00:00:00:00:00:00 dst $VM2_VPC_IP dev vxlan0
ip addr add 10.200.0.2/24 dev vxlan0
ip link set up dev vxlan0
EOF

Create the VXLAN device and mesh on vm-2

gcloud compute ssh root@vm-2 --zone us-central1-a  << EOF
set -x
ip link add vxlan0 type vxlan id 42 dev ens4 dstport 0
bridge fdb append to 00:00:00:00:00:00 dst $VM1_VPC_IP dev vxlan0
ip addr add 10.200.0.3/24 dev vxlan0
ip link set up dev vxlan0
EOF
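For reference, the `id 42` in the `ip link` commands above becomes a 24-bit VXLAN Network Identifier (VNI) carried in an 8-byte header on every encapsulated frame. A sketch of that header layout (per RFC 7348):

```python
import struct

def vxlan_header(vni: int) -> bytes:
    """8-byte VXLAN header per RFC 7348: flags byte, 24-bit VNI, reserved bits."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    # Flags: only the I bit (0x08) is set, marking the VNI field as valid.
    # The VNI occupies the top 24 bits of the final 32-bit word.
    return struct.pack("!B3xI", 0x08, vni << 8)

hdr = vxlan_header(42)  # the 'id 42' used in the ip link commands above
```

The all-zeros MAC in the `bridge fdb append` commands is the default ("flood") entry: any frame whose destination MAC is not yet learned is forwarded to the listed peer, which is what lets ARP work across the tunnel without multicast.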

Start a tcpdump on vm-1

gcloud compute ssh root@vm-1 --zone us-central1-a
tcpdump -i vxlan0 -n

In another session, ping vm-2 from vm-1 and take a look at the tcpdump output. Notice the ARP traffic.

gcloud compute ssh root@vm-1 --zone us-central1-a
ping 10.200.0.3
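One practical detail when using this tunnel: each inner frame travels inside an outer IP/UDP/VXLAN wrapper, so the usable MTU on vxlan0 is smaller than the VPC MTU. A quick back-of-the-envelope calculation, assuming GCE's historical 1460-byte default VPC MTU:

```python
# Per-packet overhead that VXLAN encapsulation adds around each inner frame.
OUTER_IP = 20    # outer IPv4 header
OUTER_UDP = 8    # outer UDP header
VXLAN = 8        # VXLAN header (flags + VNI)
INNER_ETH = 14   # the encapsulated Ethernet header

OVERHEAD = OUTER_IP + OUTER_UDP + VXLAN + INNER_ETH  # 50 bytes total

VPC_MTU = 1460           # historical GCE default; MTU is configurable today
vxlan0_mtu = VPC_MTU - OVERHEAD
print(vxlan0_mtu)        # 1410
```

The kernel applies this same 50-byte reduction automatically when it creates the vxlan0 device on top of ens4.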
Sam Stoelinga

Andromeda (Google's network virtualization stack) is a software-defined network (SDN). Andromeda's goal is to expose the raw performance of the underlying network while simultaneously exposing network function virtualization.

Hence, Andromeda itself is not a Cloud Platform networking product; rather, it is the basis for delivering Cloud Platform networking services with high performance, availability, isolation, and security. For example, Cloud Platform firewalls, routing, and forwarding rules all leverage the underlying internal Andromeda APIs and infrastructure.

Also, by default, the instances are configured with a 255.255.255.255 mask (to prevent instance ARP table exhaustion), and when a new connection is initiated, the packet is sent to the subnet's gateway MAC address, regardless of whether the destination IP is outside or within the subnet range. Thus, the instance might need to make an ARP request to resolve the gateway's MAC address first.
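The effect of that /32 configuration can be illustrated with Python's `ipaddress` module (the addresses below are hypothetical): with a 255.255.255.255 mask, even a peer in the same VPC subnet is off-link, so all traffic goes via the gateway rather than being addressed to the peer's MAC directly:

```python
import ipaddress

# A GCE NIC is configured as <address>/32, so no other host is "on-link".
iface = ipaddress.ip_interface("10.128.0.2/255.255.255.255")
peer = ipaddress.ip_address("10.128.0.3")  # hypothetical same-subnet VM

print(iface.network.prefixlen)  # 32
print(peer in iface.network)    # False -> next hop is always the gateway
```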

Unfortunately Google doesn't allow GRE traffic[1].

So, my recommendation is to run some tests such as iperf or MTR between them in order to validate connectivity.

blueboy1115