I have a GRE link set up on a VM using the following command:

ip tunnel add tap0 mode gre local <foo> remote <bar>

The counterpart on a different VM (in the same subnet) is exactly the same, except that <foo> and <bar> are swapped.
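For completeness, the full setup on each VM looks roughly like this (the 10.0.0.x underlay addresses are placeholders standing in for <foo> and <bar>, not my real addresses):

# on the first VM (local 10.0.0.1, remote 10.0.0.2)
ip tunnel add tap0 mode gre local 10.0.0.1 remote 10.0.0.2
ip link set tap0 up

# on the second VM, the mirror image
ip tunnel add tap0 mode gre local 10.0.0.2 remote 10.0.0.1
ip link set tap0 up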
I have created an eBPF tc program that calls bpf_clone_redirect to copy packets to the tunnel device on one of the hosts (i.e. duplicating the traffic to the tap0 link):
SEC("tc")
SEC("tc")
int tc_ingress(struct __sk_buff *skb) {
    __u32 key = 0;
    // destinations map and struct destination are defined elsewhere.
    struct destination *dest = bpf_map_lookup_elem(&destinations, &key);

    if (dest != NULL) {
        struct bpf_tunnel_key tunnel_key = {};
        int ret;

        tunnel_key.remote_ipv4 = dest->destination_ip;
        tunnel_key.tunnel_id = dest->iface_idx;
        tunnel_key.tunnel_tos = 0;
        tunnel_key.tunnel_ttl = 64;

        ret = bpf_skb_set_tunnel_key(skb, &tunnel_key, sizeof(tunnel_key), 0);
        if (ret < 0) {
            // Error setting the tunnel key; do not redirect, simply continue.
            return TC_ACT_OK;
        }

        // A zero flag means the socket buffer is cloned
        // to the egress path of the target interface.
        bpf_clone_redirect(skb, dest->iface_idx, 0);
    }
    return TC_ACT_OK;
}
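For reference, this is roughly how I attach the program on the ingress path (redirect.o and the section name are assumptions about my build; adjust to your object file):

tc qdisc add dev eth0 clsact
tc filter add dev eth0 ingress bpf da obj redirect.o sec tc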
I can see the traffic being passed to the GRE link by running tcpdump -i tap0, but I don't see the traffic on its remote counterpart...
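One way to check whether the encapsulated packets actually leave the host is to filter for GRE (IP protocol 47) on the underlying interface; eth0 here is an assumption about the underlay device:

tcpdump -ni eth0 ip proto 47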
- Is it necessary in such a scenario to define an address for the device (à la ip addr add <addr> dev tap0)?
- What is the proper way of defining such tunnels?
- If I have iptables rules set up on eth0, would they block traffic sent to the GRE link? If "yes", is there a way to bypass those?
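For the first question, to make it concrete, I mean something like the following (the 10.10.10.0/30 overlay range is just an arbitrary example, not from my actual setup):

ip addr add 10.10.10.1/30 dev tap0   # on the first VM
ip addr add 10.10.10.2/30 dev tap0   # on the second VM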