
I'm hoping someone here might have an idea what's causing this. I've got an AKS cluster with a number of microservices accessing resources in an Azure Postgres database server. I have firewall rules defined on the Postgres server for both the pod subnet and the Kubernetes service subnet. All access to the databases on this server comes from within these subnets; there is no access to this server from any external source. However, when my services start up, I always get hit by this exception:

2018-12-02 19:23:57.540  INFO [venus,,,] 1 --- [ost-startStop-1] o.s.b.w.servlet.ServletRegistrationBean  : Servlet dispatcherServlet mapped to [/]
2018-12-02 19:23:57.543  INFO [venus,,,] 1 --- [ost-startStop-1] o.s.b.w.servlet.ServletRegistrationBean  : Servlet complexGraphQLServlet mapped to [/graphql/*]
2018-12-02 19:23:57.545  INFO [venus,,,] 1 --- [ost-startStop-1] o.s.b.w.servlet.ServletRegistrationBean  : Servlet zuulServlet mapped to [/zuul/*]
2018-12-02 19:23:58.037  INFO [venus,,,] 1 --- [           main] o.f.core.internal.util.VersionPrinter    : Flyway Community Edition 5.0.7 by Boxfuse
2018-12-02 19:23:58.052  INFO [venus,,,] 1 --- [           main] com.zaxxer.hikari.HikariDataSource       : HikariPool-1 - Starting...
2018-12-02 19:23:58.577  WARN [venus,,,] 1 --- [           main] unknown.jul.logger                       : SQLException occurred while connecting to mydbserver.postgres.database.azure.com:5432

org.postgresql.util.PSQLException: FATAL: no pg_hba.conf entry for host "40.12.13.14", user "postgres", database "mydb", SSL on
        at org.postgresql.core.v3.ConnectionFactoryImpl.doAuthentication(ConnectionFactoryImpl.java:473)
        at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:205)
        at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:49)
        at org.postgresql.jdbc.PgConnection.<init>(PgConnection.java:195)
        at org.postgresql.Driver.makeConnection(Driver.java:452)
        at org.postgresql.Driver.connect(Driver.java:254)

I don't know where this external IP reference is coming from. I do have a couple of public IPs defined for my cluster, but neither one matches this 40.12.13.14 address. If I create a firewall rule for this address in my server settings, the exception goes away and my microservices have no trouble accessing their databases.

If this rule is needed I can obviously add it. The problem is that the cluster is created by a Python deployment script, which can't create a firewall rule for this IP address because it doesn't know what the address will be until the exception occurs. Each time I run my deployment script to create a new cluster, the IP address reported in the exception changes.

Does anyone know what this IP address is associated with and how I can determine what it's going to be so I can create the required firewall rule?
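For what it's worth, creating the rule itself from the deployment script would be easy if the address were known up front. Here's roughly what I'd do (just a sketch, shelling out to the az CLI; the resource group, server, and rule names below are placeholders, not what my script actually uses):

```python
import subprocess

def add_postgres_firewall_rule(ip_address: str) -> None:
    # Whitelist a single address on the Azure Postgres server.
    # Resource group, server, and rule names are placeholders.
    subprocess.run(
        ["az", "postgres", "server", "firewall-rule", "create",
         "--resource-group", "my-resource-group",
         "--server-name", "mydbserver",
         "--name", "aks-egress",
         "--start-ip-address", ip_address,
         "--end-ip-address", ip_address],
        check=True,
    )
```

The missing piece is the `ip_address` argument, which is exactly what I don't know how to determine ahead of time.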

Peter

user3280383
  • Are you trying to access Azure Managed Postgres? – 4c74356b41 Dec 02 '18 at 19:53
  • Yes I am--I should have made that clear. – user3280383 Dec 02 '18 at 19:58
  • you sure you configured vnet access properly? – 4c74356b41 Dec 02 '18 at 20:10
  • If you are asking whether I've defined any vnet rules, I have not. I've configured specific firewall rules and that has been enough to give access to the Postgres server. I did just do a quick test with vnet rules, however, and unfortunately I have only a Basic tier SKU for the db server, and vnet rules are not allowed. – user3280383 Dec 03 '18 at 01:53
  • I should add that I also have a VM as part of my installation, and I don't have any issues accessing the Postgres server from that VM. The only real difference is that the VM lives on subnet 10.0.0.0/24 and the pods (containers) live in subnet 10.0.2.0/24, both under the same vnet. The subnet used for the VM has a security group that allows port 22 (ssh) access, whereas the containers' subnet blocks port 22. – user3280383 Dec 03 '18 at 02:02
  • So I'm not sure about the question: you don't have a firewall exclusion on Postgres, no vnet integration, and you expect it to work somehow? – 4c74356b41 Dec 03 '18 at 04:45
  • I guess I'm misunderstanding something then. Like I said, it *does* work, as long as I add this additional firewall rule for this mystery IP address. No vnet rule is needed, and besides, with a Basic SKU I can't define a vnet rule anyway. My question is how do I programmatically determine what this public IP address is so I can create the firewall rule I need to resolve the error I'm getting. – user3280383 Dec 03 '18 at 05:38
  • Well, I at least know what this IP address is now. If I connect to one of my containers (via kubectl exec) and run `curl https://ipinfo.io/ip` I get `40.12.13.14`. So this is the IP address that external services see when a connection comes in from a container in my AKS cluster. That makes sense. The question is how I can ask Kubernetes what this address is without having to connect to a container and run a "what is my IP" request (a scripted version of this check is sketched after these comments). – user3280383 Dec 03 '18 at 15:49
  • I don't think you can. Do you have an external load balancer attached to the worker nodes? That should be its IP address if you do. – 4c74356b41 Dec 03 '18 at 16:00
  • I used to have an external load balancer and changed to using an application gateway. That was when this IP address issue started; I didn't put it together until just now. I've decided to change my pricing tier to GP and define the appropriate endpoints and vnet rules, and that has solved my problem. It's really annoying, though, that Azure doesn't let you define vnet rules for the Basic pricing tier. – user3280383 Dec 03 '18 at 17:04
  • For an application gateway you will get a random ephemeral external IP; with an external LB you can get a fixed external IP and whitelist it. – 4c74356b41 Dec 03 '18 at 17:15
  • I guess this is outside the scope of my original question, but when an application gateway is assigned a dynamic IP, under what conditions will it change to a new IP? – user3280383 Dec 03 '18 at 17:37
  • I'd have no idea; probably when you turn it off and on. I do have a fair bunch of these, and I always tie DNS to them, so I don't care ;) – 4c74356b41 Dec 03 '18 at 17:53
  • Yeah, the DNS name works great. Thx. – user3280383 Dec 03 '18 at 22:37
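For reference, the check described in the comments above can be scripted from the deployment side. A rough sketch (it assumes kubectl is already configured for the new cluster, the pod name and namespace are placeholders, the container image ships curl, and ipinfo.io is just one of many "what is my IP" services):

```python
import subprocess

def cluster_egress_ip(pod: str, namespace: str = "default") -> str:
    # Ask a running pod which public IP the outside world sees for its
    # outbound traffic. Pod name and namespace are placeholders.
    result = subprocess.run(
        ["kubectl", "exec", pod, "-n", namespace, "--",
         "curl", "-s", "https://ipinfo.io/ip"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()
```

As noted elsewhere in this thread, this egress address is not stable unless a fixed external load balancer IP sits in front of the nodes, so the vnet-rule approach below is the more robust fix.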

2 Answers


In this case the OP solved the problem by upgrading the Postgres pricing tier and implementing vnet rules to allow the traffic.
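If that part needs to be scripted as well, a rough sketch using the az CLI is below (resource group, vnet, subnet, server, and rule names are all placeholders; the pod subnet needs the Microsoft.Sql service endpoint enabled before the vnet rule can be created, and the server must be on the GP tier or above):

```python
import subprocess

def allow_pod_subnet(resource_group: str, vnet: str, subnet: str, server: str) -> None:
    # Enable the Microsoft.Sql service endpoint on the pod subnet,
    # which Azure requires before a Postgres vnet rule can reference it.
    subprocess.run(
        ["az", "network", "vnet", "subnet", "update",
         "--resource-group", resource_group,
         "--vnet-name", vnet,
         "--name", subnet,
         "--service-endpoints", "Microsoft.Sql"],
        check=True,
    )
    # Create the vnet rule on the Postgres server for that subnet.
    subprocess.run(
        ["az", "postgres", "server", "vnet-rule", "create",
         "--resource-group", resource_group,
         "--server-name", server,
         "--name", "allow-aks-pod-subnet",   # placeholder rule name
         "--vnet-name", vnet,
         "--subnet", subnet],
        check=True,
    )
```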

4c74356b41
  • Yes, this solution is now in place and working fine. The biggest issue is cost. Having to use a GP tier Postgres server costs $100+ per month. A Basic tier server can be as low as $25. When you have multiple engineers working independently this can add up. – user3280383 Dec 04 '18 at 13:49

Sadly, there is no way to programmatically get this IP.

user3280383
  • I found a somewhat ugly method to get this IP. I figured the Kubernetes built-in DNS pod would have this same IP and tried running `kubectl exec kube-dns-nnn -n kube-system -c kubedns -- wget -qO- https://ipinfo.io/ip`, and sure enough the IP returned by this is the same one I get in my containers. – user3280383 Dec 03 '18 at 23:21
  • If you don't have an external LB attached to the nodes, this IP will change from time to time. Also, consider accepting/upvoting my answer ;) – 4c74356b41 Dec 04 '18 at 05:54
  • 1
    Done. Thanks for the help on this. – user3280383 Dec 04 '18 at 14:01