My AWS account has:
- 3 VPCs: dev, staging, production
- 2 VPNs: non-prod, prod
Most of our services and load balancers are deployed via EKS (Kubernetes), including the service and load balancer described below.
The prod VPN (CVPN endpoint) lives in the prod VPC and can only access prod resources. An important logging and metrics service lives in the prod VPC. The logs and metrics dashboards are served via an internal load balancer that is only accessible while on the VPN. It would be helpful for me to be able to access the logging dashboard from both the prod and non-prod VPNs.
The non-prod VPN (CVPN endpoint) lives in the dev VPC and can access dev and staging resources, like databases and EC2 instances. This is done via a peering connection (from dev to staging) and is fine for non-production. Our non-prod resources ship logs and metrics to the logging service in our prod VPC via a publicly accessible AWS load balancer that only allows receiving data, not viewing or exporting it.
Basically, what I'm looking for is a reliable way to route traffic to the internal LB through the non-prod VPN, so the service is reachable from both. I'd like to avoid the setup I have for non-prod, since I only want to grant access to this one service. The separation of access is helpful for preventing premature changes in production, but it's enough of a hassle to switch VPNs every time I need to check logs for deployed services.
Additional Notes:
- Last resort would be to spin up a separate logging/metrics service for non-prod, but I'd really like to avoid that, and I think there is a way to do this.
- Second-to-last resort would be to set up a Lambda (or similar) that checks the load balancer's IP addresses and updates the VPN route table, keeping everything running as expected. I don't love that solution, but it would be better than a second logging/metrics service.
- Making the load balancer public facing isn't a risk I'm willing to take at this time.
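For what it's worth, the Lambda idea from the notes above can be sketched roughly as follows. This is a minimal, hedged sketch, not a tested implementation: the endpoint ID, subnet ID, and DNS name are placeholders, and the Lambda would need VPC access (or split-horizon DNS) to resolve the internal ALB's name. It resolves the ALB's current IPs, then reconciles the Client VPN endpoint's route table with `/32` routes.

```python
# Sketch of a Lambda that keeps CVPN routes in sync with an internal ALB's IPs.
# All IDs and hostnames below are placeholders, not values from the question.
import socket

def resolve_ipv4(hostname):
    """Return the sorted set of IPv4 addresses a hostname currently resolves to."""
    infos = socket.getaddrinfo(hostname, 443, family=socket.AF_INET, type=socket.SOCK_STREAM)
    return sorted({info[4][0] for info in infos})

def as_host_cidrs(addresses):
    """CVPN routes are CIDR blocks, so each load balancer IP becomes a /32."""
    return [f"{ip}/32" for ip in addresses]

def lambda_handler(event, context):
    import boto3  # imported here so the pure helpers above stay testable offline
    ec2 = boto3.client("ec2")
    endpoint_id = "cvpn-endpoint-EXAMPLE"   # placeholder
    target_subnet = "subnet-EXAMPLE"        # placeholder: prod subnet associated with the CVPN
    alb_dns = "internal-logs-alb.example.internal"  # placeholder internal ALB name

    wanted = set(as_host_cidrs(resolve_ipv4(alb_dns)))
    existing = {
        r["DestinationCidr"]
        for r in ec2.describe_client_vpn_routes(ClientVpnEndpointId=endpoint_id)["Routes"]
    }
    for cidr in wanted - existing:
        ec2.create_client_vpn_route(
            ClientVpnEndpointId=endpoint_id,
            DestinationCidrBlock=cidr,
            TargetVpcSubnetId=target_subnet,
        )
    # Stale /32 routes left over from old ALB IPs could be cleaned up here with
    # delete_client_vpn_route; omitted to keep the sketch short.
```

Run on a schedule (e.g. an EventBridge rule every few minutes), this would keep the route table tracking the ALB's rotating IPs; an authorization rule for those CIDRs would still be needed on the endpoint.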
Thus far, I've tried:
- Adding the load balancer IPs to my CVPN route tables, but that only works temporarily as Load Balancer IP addresses periodically change.
- Creating a Network Load Balancer with EIPs, but EIPs can only be attached to internet-facing NLBs, and I'd like to keep this behind the firewall.
- Setting internal IP addresses via k8s annotations, which doesn't seem to work. I expected to get fixed internal IP addresses that I could add to the route table.
- Annotations used:
  - `service.beta.kubernetes.io/aws-load-balancer-private-ipv4-addresses: xxxxx`
  - `service.beta.kubernetes.io/aws-load-balancer-subnets: xxxxxx`
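One note on those annotations: as far as I know, `aws-load-balancer-private-ipv4-addresses` is only honored when the AWS Load Balancer Controller (not the legacy in-tree controller) provisions an *internal NLB*, with one subnet per AZ and one private IP per subnet, listed in matching order. A hedged sketch of a Service that should satisfy those conditions (all names, subnet IDs, and addresses are placeholders):

```yaml
# Sketch only: assumes the AWS Load Balancer Controller is installed in the cluster.
apiVersion: v1
kind: Service
metadata:
  name: logs-dashboard          # placeholder
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "external"       # hand off to the LB Controller
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: "ip"
    service.beta.kubernetes.io/aws-load-balancer-scheme: "internal"
    # One subnet per AZ, one private IP per subnet, same order:
    service.beta.kubernetes.io/aws-load-balancer-subnets: "subnet-aaa, subnet-bbb"
    service.beta.kubernetes.io/aws-load-balancer-private-ipv4-addresses: "10.1.1.10, 10.1.2.10"
spec:
  type: LoadBalancer
  ports:
    - port: 443
      targetPort: 8443
```

If that works, the fixed private IPs could go straight into the CVPN route table without any IP-watching automation.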