Setup
I am setting up an Azure VM (Standard E2as_v4 running Debian 10) to serve multiple services. I want to use a separate public IP address for each service. To test whether I can do this, I set up the following:
vm1
- nic1
  - vnet1, subnet1
  - ipconfig1: 10.0.1.1 <-> p.0.0.1
  - nsg1
    - allow: ssh (22)
- nic2
  - vnet1, subnet2
  - ipconfig2: 10.0.2.1 <-> p.0.0.2
  - nsg2
    - allow: http (80)
vnet1
- subnet1: 10.0.1.0/24
- subnet2: 10.0.2.0/24
- address space: [10.0.1.0/24, 10.0.2.0/24]
Where 10.x.x.x IPs are private and p.x.x.x IPs are public. nic1 (network interface) and its accompanying nsg1 (network security group) were created automatically when I created the VM; otherwise they are symmetrical to nic2, nsg2 (except for nsg2 allowing HTTP rather than SSH). Also, both NICs register fine on the VM.
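For reference, here is a rough Azure CLI sketch of how the nic2 side of this topology can be reproduced. The resource group name rg1 and public IP name pip2 are placeholders rather than my actual names, and flag spellings may differ slightly between CLI versions:

az network vnet subnet create --resource-group rg1 --vnet-name vnet1 --name subnet2 --address-prefixes 10.0.2.0/24

# public IP and NSG (allowing HTTP) for the second NIC
az network public-ip create --resource-group rg1 --name pip2
az network nsg create --resource-group rg1 --name nsg2
az network nsg rule create --resource-group rg1 --nsg-name nsg2 --name allow-http --priority 100 --direction Inbound --access Allow --protocol Tcp --destination-port-ranges 80

# second NIC in subnet2, associated with pip2 and nsg2
az network nic create --resource-group rg1 --name nic2 --vnet-name vnet1 --subnet subnet2 --public-ip-address pip2 --network-security-group nsg2

# attach nic2 to the (deallocated) VM
az vm deallocate --resource-group rg1 --name vm1
az vm nic add --resource-group rg1 --vm-name vm1 --nics nic2
az vm start --resource-group rg1 --name vm1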
Problem
I can connect to SSH via the public IP on nic1 (p.0.0.1). However, I fail to connect to HTTP via the public IP on nic2 (p.0.0.2).
Things I've tried
Listening on 0.0.0.0. To check whether it is a problem with my server, I had my HTTP server listen on 0.0.0.0. Then I allowed HTTP on nsg1, and added a secondary IP configuration on nic1 with another public IP (static 10.0.1.101 <-> p.0.0.3). I added the static private IP address manually in the VM's configuration (/run/network/interfaces.d/eth0; possibly not the right file to edit, but the IP was registered correctly). I was now able to connect via both public IPs associated with nic1 (p.0.0.1 and p.0.0.3) but still not via nic2 (p.0.0.2). This means I successfully set up two public IPs for two different services on the VM, but they share the same NIC.
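For completeness, the secondary configuration was added roughly like this (pip3 and the ipconfig name are placeholders):

az network public-ip create --resource-group rg1 --name pip3
az network nic ip-config create --resource-group rg1 --nic-name nic1 --name ipconfig1-secondary --private-ip-address 10.0.1.101 --public-ip-address pip3

And on the guest, the static private address was registered along these lines (an ifupdown alias interface is one way to do it on Debian; this is a sketch, not the exact file contents I used):

auto eth0:0
iface eth0:0 inet static
    address 10.0.1.101
    netmask 255.255.255.0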
Configuring a load balancer. I also tried to achieve the same setup using a load balancer. In this case I created a load balancer with two backend pools: backend-pool1 for nic1 and backend-pool2 for nic2. I diverted SSH traffic to backend-pool1 and HTTP traffic to backend-pool2. The results were similar to the above (SSH connected successfully; HTTP failed unless I used backend-pool1 rather than backend-pool2). I also tried direct inbound NAT rules, with the same effect.
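In rough Azure CLI terms, the load-balancer variant looked something like the following (names are placeholders, and I have omitted the health probes and the SSH rule for brevity):

az network public-ip create --resource-group rg1 --name pip-lb
az network lb create --resource-group rg1 --name lb1 --sku Basic --public-ip-address pip-lb --frontend-ip-name fe1 --backend-pool-name backend-pool1
az network lb address-pool create --resource-group rg1 --lb-name lb1 --name backend-pool2

# put each NIC's IP configuration into its own backend pool
az network nic ip-config address-pool add --resource-group rg1 --nic-name nic1 --ip-config-name ipconfig1 --lb-name lb1 --address-pool backend-pool1
az network nic ip-config address-pool add --resource-group rg1 --nic-name nic2 --ip-config-name ipconfig2 --lb-name lb1 --address-pool backend-pool2

# send HTTP traffic to backend-pool2 (nic2)
az network lb rule create --resource-group rg1 --lb-name lb1 --name http-rule --protocol Tcp --frontend-port 80 --backend-port 80 --frontend-ip-name fe1 --backend-pool-name backend-pool2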
Checking that communication via the subnet works. Finally, I created a VM on subnet2. I can communicate with the service using the private IP (10.0.2.1) regardless of the NSG configuration (I tried a port which isn't allowed on the NSG and it passed). However, it doesn't work when I use the public IP (p.0.0.2).
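Concretely, from that test VM (with p.0.0.2 standing in for the real public IP):

curl http://10.0.2.1/    # responds, even on ports the NSG does not allow
curl http://p.0.0.2/     # times out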
Question
What am I missing? Is there a setting I am not considering? What is the reason for not being able to connect to my VM via a public IP address configured on an additional NIC?
Related questions
- Configuring a secondary NIC in Azure with an Internet Gateway - the answer refers to creating a secondary public IP
- Multiple public IPs to Azure VM - the answer refers to creating a load balancer
Notes: I can try to provide the full command lines to recreate the setup (beyond the rough sketches above) if this is not enough information. The HTTP server I am running is:
sudo docker run -it --rm -p 10.0.2.1:80:80 nginx
For the subsequent tests (listening on 0.0.0.0), I replaced the 10.0.2.1 bind address with 0.0.0.0.
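i.e. for the later tests the command was effectively:

sudo docker run -it --rm -p 0.0.0.0:80:80 nginx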
Here's the final topology I used for testing.