
I have two VNets (VNET1 and VNET2). VNET1 terminates a number of site-to-site and point-to-site connections. VNET2 contains an internal load balancer and a set of VMs in that load balancer's backend pool. With the help of another post (linked below) I successfully set up peering, which lets all on-premises clients connected to VNET1 reach the internal load balancer in VNET2, but it also lets them reach the VMs in VNET2 directly, which I want to avoid.

Accessing resources from connected Azure VNETS via VPN

I'm trying to limit on-premises clients connected to VNET1 so they can only see the internal load balancer in VNET2, not the VMs in its backend pool. I found a similar question (below), but it involved two public load balancers, so I'm not sure it applies here since I'm using an internal load balancer.

Azure Vnet peering with public IP load balancer

I've tried setting up an NSG on the subnet where the VMs reside, with the following rules:

  1. Rule1: Allow LoadBalancer IP to VM subnet (backend VM pool).
  2. Rule2: Deny all other VnetInBound traffic (this overrides the default AllowVnetInBound).

These rules do stop VNET1 clients from seeing anything in VNET2, but they also block traffic sent through the load balancer, and I'm not sure why.
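Roughly, the two rules I created look like this with the Azure CLI (the resource group, NSG name, and the 10.1.0.0/24 backend subnet prefix are placeholders for my actual values):

```shell
# Rule1 (priority 100): allow traffic tagged as coming from the Azure
# load balancer to the backend VM subnet.
az network nsg rule create \
  --resource-group my-rg \
  --nsg-name vnet2-backend-nsg \
  --name AllowLoadBalancer \
  --priority 100 \
  --direction Inbound \
  --access Allow \
  --source-address-prefixes AzureLoadBalancer \
  --destination-address-prefixes 10.1.0.0/24 \
  --destination-port-ranges '*' \
  --protocol '*'

# Rule2 (priority 110): deny everything else from the virtual network,
# overriding the default AllowVnetInBound rule (priority 65000).
az network nsg rule create \
  --resource-group my-rg \
  --nsg-name vnet2-backend-nsg \
  --name DenyVnetInbound \
  --priority 110 \
  --direction Inbound \
  --access Deny \
  --source-address-prefixes VirtualNetwork \
  --destination-address-prefixes '*' \
  --destination-port-ranges '*' \
  --protocol '*'
```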

Anyone have any ideas on how this configuration could be implemented?

Geekn
  • If you do not want on-premises clients to see the VMs in VNET2, you could set up the internal load balancer with its frontend IP configuration in VNET1 and its backend pool in VNET2, without peering between VNET1 and VNET2. Then all traffic would flow from on-premises to VNET1 -> internal load balancer -> VNET2, and no traffic would go directly to VNET2. Is that what you want? – Charles Xu Jun 15 '18 at 07:43
  • That is correct. I originally tried to put the internal load balancer in VNET1 with its frontend IP configuration in VNET1's default subnet, but Azure would not let me build a backend pool with VMs from VNET2 (the documentation indicates they must be in the same VNet). Are you saying this configuration should be possible, even without peering? – Geekn Jun 15 '18 at 10:00
  • Yes, it's possible. A load balancer can sit between two VNets. When you create the load balancer you pick a VNet, which you can make VNET1. Then you configure the load balancer's backend pool; there are three types you can associate: an availability set, a single virtual machine, or a scale set, and you can take them from VNET2. It can be done, give it a try. – Charles Xu Jun 18 '18 at 01:21
  • I tried that with the internal load balancer, but the deployment never succeeded because the VMs were in the second VNet. Are you sure you're not thinking of the public load balancer? There's always an API error when deploying that configuration with an internal load balancer. I used a Standard internal load balancer in VNET1 and, in the UI, it lets me add VMs from VNET2 to the backend pool, but the deployment of that configuration always fails. I'll paste the error below. Should I open a bug on this? – Geekn Jun 18 '18 at 03:03
  • Deployment to resource group 'ihde_dev' failed. Additional details from the underlying API that might be helpful: At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/arm-debug for usage details. – Geekn Jun 18 '18 at 03:03
  • I tried the Basic internal load balancer and got a similar error, but it specifically calls out the VM's network interface not being in the same VNet as the load balancer. Error: Network interface uses internal load balancer but does not use the same VNET as the load balancer. Strange that the UI lets you target it, though. – Geekn Jun 18 '18 at 03:12
  • I read the documentation on the internal load balancer again, and you're right: the frontend and backend can only be in the same VNet. Even though the VMs can sit in different subnets, those subnets can still reach each other, so the ILB alone may not do what you want. But you can configure NSGs on the different subnets. NSG rules are evaluated by priority, with lower numbers evaluated first, so give Rule1 a higher priority than Rule2, for example 100 for Rule1 and 110 for Rule2. That way Rule1 is matched before Rule2 restricts the flow. – Charles Xu Jun 18 '18 at 08:41
  • Appreciate you validating that. The NSG approach has issues too. If we lock down the backend VM subnet so that nothing can reach it except traffic from the load balancer, it still denies load-balanced traffic from the very clients we restricted from direct access, because an internal load balancer rewrites only the destination IP and port, not the source IP and port. So, essentially, blocking a client blocks it whether it arrives directly or indirectly via the load balancer. https://serverfault.com/questions/916764/security-for-peered-vnets-with-internal-load-balancer – Geekn Jun 18 '18 at 20:29
  • Replacing the internal load balancer with an Application Gateway almost gets us there, since it allows a backend pool outside the VNet. I say "almost" because it's limited to HTTP/HTTPS traffic...ugh. You would think you could build the same setup as with an external load balancer for an intranet-only solution, but I'm not sure that's possible at this point. – Geekn Jun 18 '18 at 20:31
  • As far as I know, whether it's a load balancer, an Application Gateway, or Traffic Manager, the primary feature is balancing network traffic, so none of them will help you secure the VMs themselves. Maybe another Azure feature can. – Charles Xu Jun 19 '18 at 01:12

0 Answers