It's been a rabbit hole, but I found a hacky way that does make the nodes find each other.
Trino borrows code from Airlift, notably this part. It has the configuration parameter `node.internal-address-source`, which is an enum with these possible values:
```java
public enum AddressSource
{
    HOSTNAME,
    FQDN,
    IP,
    IP_ENCODED_AS_HOSTNAME
}
```
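Judging by the worker logs shown below, `IP_ENCODED_AS_HOSTNAME` turns a pod IP like `10.42.1.186` into the hostname `10-42-1-186.ip`: dots become dashes and an `.ip` suffix is appended. A small sketch of that mapping (my reading of the observed behavior, not Airlift's actual code):

```python
def encode_ip_as_hostname(ip: str) -> str:
    """Encode a dotted IPv4 address the way the worker logs suggest:
    dots replaced by dashes, with an ".ip" suffix appended."""
    return ip.replace(".", "-") + ".ip"

def decode_hostname_to_ip(hostname: str) -> str:
    """Invert the encoding: strip the ".ip" suffix, dashes back to dots."""
    stem, suffix = hostname.rsplit(".", 1)
    if suffix != "ip":
        raise ValueError(f"not an encoded address: {hostname}")
    return stem.replace("-", ".")
```

For example, `encode_ip_as_hostname("10.42.1.186")` yields `10-42-1-186.ip`, which is exactly the host that shows up in the log line further down.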
So, in the `values.yaml` with which you install the Helm chart, add these lines:
```yaml
additionalConfigProperties:
  - node.internal-address-source=IP_ENCODED_AS_HOSTNAME
```
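If I understand the chart correctly, entries under `additionalConfigProperties` are rendered into the `config.properties` file on every node, so after the rollout each coordinator and worker should end up with a line like this:

```properties
node.internal-address-source=IP_ENCODED_AS_HOSTNAME
```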
Now, this is not enough yet, but you may see where this is going. The logs of the workers now show lines like these:

```
2023-03-16T14:40:01.342Z WARN http-client-node-manager-32 io.trino.metadata.RemoteNodeState Error fetching node state from http://10-42-1-186.ip:8080/v1/info/state returned status 503
```
I did not find a way to configure a different hostname suffix than `ip` in that address. So, if the mountain won't come to ...: instead of renaming `ip` to what I want, I renamed what I have to `ip`. Add the following headless service to your Trino installation:
```yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: trino
  name: ip
  namespace: trino
spec:
  clusterIP: None  # this makes it headless
  ports:
    - name: http
      port: 8080
      protocol: TCP
      targetPort: http
  selector:
    app: trino
  type: ClusterIP
```
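Why does this work? My understanding (an assumption about CoreDNS behavior, not something from the Trino docs): for a headless service whose backing pods have no explicit hostname, CoreDNS publishes one A record per endpoint named `<dashed-ip>.<service>.<namespace>.svc.cluster.local`, and each pod's `/etc/resolv.conf` search path includes `<namespace>.svc.cluster.local`, so the short name `10-42-1-186.ip` expands to a record that resolves to the pod IP. A toy simulation of that resolution path:

```python
# Search domains kubelet typically writes into a pod's /etc/resolv.conf
# for a pod in the "trino" namespace (assumed default cluster domain).
SEARCH_DOMAINS = ["trino.svc.cluster.local", "svc.cluster.local", "cluster.local"]

def dns_records_for_headless_service(service, namespace, endpoint_ips):
    """One A record per endpoint, named after the dashed endpoint IP
    (assumed CoreDNS behavior for headless services without pod hostnames)."""
    return {
        f"{ip.replace('.', '-')}.{service}.{namespace}.svc.cluster.local": ip
        for ip in endpoint_ips
    }

def resolve(name, records):
    """Mimic the resolver: try each search domain, then the literal name."""
    for domain in SEARCH_DOMAINS:
        candidate = f"{name}.{domain}"
        if candidate in records:
            return records[candidate]
    return records.get(name)
```

With the service named `ip`, the short name `10-42-1-186.ip` expands to `10-42-1-186.ip.trino.svc.cluster.local` via the first search domain, which is exactly the record CoreDNS created.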
Name that file `trino-headless.yaml` and apply it with `kubectl -n trino apply -f trino-headless.yaml`.
That's it. Now your workers and coordinator are all reachable under `A-B-C-D.ip`, where `A-B-C-D` is the pod IP with its dots replaced by dashes, as produced by `IP_ENCODED_AS_HOSTNAME`.
I think this lets you use Trino entirely inside a service mesh.