I've discovered a number of lesser-documented configuration options, but I can't seem to configure my seed node so that it can both contact itself and be contacted by other nodes.
At the moment, I have it configured like so:
akka {
  cluster.multi-data-center.self-data-center = asia
  remote.netty.tcp.hostname = "xxx.xx.xx.5"
  remote.netty.tcp.public-hostname = "xx.xx.xxx.51"
  cluster.seed-nodes = ["akka.tcp://application@xxx.xx.xx.5:1551"]
  enforce-ip-family = false
  dns-use-ipv6 = false
}
In this configuration, the node can connect to itself as a seed node and function on its own, but other nodes that try to contact it at the public-hostname have their messages dropped:
2018-01-25 19:29:56,934 [ERROR]: akka.remote.EndpointWriter in application-akka.actor.default-dispatcher-5 - dropping message [class akka.actor.ActorSelectionMessage] for non-local recipient [Actor[akka.tcp://application@xx.xx.xxx.51:1551/]] arriving at [akka.tcp://application@xx.xx.xxx.51:1551] inbound addresses are [akka.tcp://application@xxx.xx.xx.5:1551]
My research indicated that the public-hostname setting exists to solve exactly this problem. Maybe not? I then tried the opposite: setting hostname to the public IP and configuring bind-hostname to the IP that lets the seed node connect to itself:
akka {
  cluster.multi-data-center.self-data-center = asia
  remote.netty.tcp.hostname = "xx.xx.xxx.51"
  remote.netty.tcp.bind-hostname = "xxx.xx.xx.5"
  cluster.seed-nodes = ["akka.tcp://application@xxx.xx.xx.5:1551"]
  enforce-ip-family = false
  dns-use-ipv6 = false
}
Then I encounter the same paradox in the opposite direction:
2018-01-25 19:39:08,207 [WARN ]: akka.cluster.JoinSeedNodeProcess in application-akka.actor.default-dispatcher-4 - Couldn't join seed nodes after [3] attempts, will try again. seed-nodes=[akka.tcp://application@xxx.xx.xx.5:1551]
2018-01-25 19:38:48,168 [ERROR]: akka.remote.EndpointWriter in application-akka.actor.default-dispatcher-6 - dropping message [class akka.actor.ActorSelectionMessage] for non-local recipient [Actor[akka.tcp://application@xxx.xx.xx.5:1551/]] arriving at [akka.tcp://application@xxx.xx.xx.5:1551] inbound addresses are [akka.tcp://application@xx.xx.xxx.51:1551]
The seed node is now unable to connect to itself, because xx.xx.xxx.51 has taken over as the inbound address.
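If I'm reading that error correctly, the seed-nodes entry would then need to reference the address the node now advertises, i.e. something roughly like the line below. That is only my guess, and I'm not sure the seed node can even reach its own public IP from behind the NAT:

akka.cluster.seed-nodes = ["akka.tcp://application@xx.xx.xxx.51:1551"]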
I have also attempted to use both public-hostname and bind-hostname together, with no success.
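For completeness, that combined attempt looked roughly like this (reconstructed from memory, so the seed-nodes value and the exact placement of the two hostname settings may be slightly off):

akka {
  cluster.multi-data-center.self-data-center = asia
  # the address I want other nodes to use to reach this node
  remote.netty.tcp.hostname = "xx.xx.xxx.51"
  remote.netty.tcp.public-hostname = "xx.xx.xxx.51"
  # the local interface the node can actually bind to
  remote.netty.tcp.bind-hostname = "xxx.xx.xx.5"
  cluster.seed-nodes = ["akka.tcp://application@xxx.xx.xx.5:1551"]
  enforce-ip-family = false
  dns-use-ipv6 = false
}

Is there a combination of these settings that lets the seed node both join itself and be reachable by the other nodes at the public address?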