
I have a Kubernetes cluster in Google Cloud. Due to resource limits, I cannot run an app there that takes a large amount of memory. So I run the app on another cloud machine and use kubectl to forward the service port. This is my kubectl forward script:

#!/usr/bin/env bash

set -u

set -e

set -x

namespace=reddwarf-cache
kubectl config use-context reddwarf-kubernetes
POD=$(kubectl get pod -l app.kubernetes.io/instance=cruise-redis -n ${namespace} -o jsonpath="{.items[0].metadata.name}")
kubectl port-forward ${POD} 6380:6379 -n ${namespace}

With this I can connect to the Kubernetes cluster service from the remote server: the local app connects to a local machine port, and kubectl forwards the connection into the cluster. But sadly, I found the kubectl forward does not stay stable for long; after the app runs for some time, it always starts giving connection refused errors. Is it possible to fix this problem, so that I can connect to the Kubernetes cluster service in a stable way? For example, when I port-forward the Redis connection, it eventually throws this error:

E1216 11:58:43.452204    7756 portforward.go:346] error creating error stream for port 6379 -> 6379: write tcp 172.17.0.16:60112->106.14.183.131:6443: write: broken pipe
Handling connection for 6379
E1216 11:58:43.658372    7756 portforward.go:346] error creating error stream for port 6379 -> 6379: write tcp 172.17.0.16:60112->106.14.183.131:6443: write: broken pipe
Handling connection for 6379
E1216 11:58:43.670151    7756 portforward.go:346] error creating error stream for port 6379 -> 6379: write tcp 172.17.0.16:60112->106.14.183.131:6443: write: broken pipe
Handling connection for 6379

When I use this command to connect to the cluster's Redis service through the port forward and execute a command, it shows this error:

➜  ~ redis-cli -h 127.0.0.1 -p 6379 -a 'uoGTdVy3P7'
127.0.0.1:6379> info
Error: Connection reset by peer

I have read this GitHub issue, but it seems no one knows what is happening.

Wytrzymały Wiktor
Dolphin

1 Answer


Following the GitHub issue you posted, I stumbled upon a solution that might help you. First, let kubectl decide which host port to use by running:

kubectl port-forward ${POD} :6379 -n ${namespace}

Then, forward the same port on the host by running:

kubectl port-forward ${POD} 6379:6379 -n ${namespace}
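If the forward still drops, a common workaround (the one discussed in the linked issue) is simply to restart kubectl port-forward whenever it exits. A minimal sketch, assuming the same namespace and pod labels as your script; the 1-second delay is an arbitrary choice:

```shell
#!/usr/bin/env bash
# Sketch: keep re-establishing the port-forward whenever the tunnel dies.
# Assumes the namespace/labels from the question's script.
set -u

namespace=reddwarf-cache

while true; do
  # re-resolve the pod name each time, in case the pod was recreated
  POD=$(kubectl get pod -l app.kubernetes.io/instance=cruise-redis \
        -n "${namespace}" -o jsonpath="{.items[0].metadata.name}")
  # port-forward blocks until the connection breaks, then the loop restarts it
  kubectl port-forward "${POD}" 6380:6379 -n "${namespace}" || true
  echo "port-forward exited, restarting in 1s..." >&2
  sleep 1
done
```

Note that clients will still see one dropped connection each time the tunnel breaks, so the app needs to reconnect on failure.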

Another thing you could do is create a Service that maps your desired port to port 6379 on your Redis Pods. An example Service file would look like this:

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: redis
  ports:
    - protocol: TCP
      port: <port>
      targetPort: 6379
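Assuming the manifest above is saved as my-service.yaml (with `<port>` filled in) and your Redis Pods run in the reddwarf-cache namespace, it could be applied and checked like this; note that the `app: redis` selector must match your actual pod labels:

```shell
# Apply the Service manifest and confirm it selected the Redis pod(s)
kubectl apply -f my-service.yaml -n reddwarf-cache
kubectl get service my-service -n reddwarf-cache
kubectl get endpoints my-service -n reddwarf-cache   # should list the Redis pod IP(s)
```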
  

Apply the Service resource. Then, create an Ingress backed by that single Service to make Redis available from outside the cluster. An example file:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress
spec:
  defaultBackend:
    service:
      name: my-service
      port:
        number: <port>
    

Apply the Ingress resource and get its IP address by running:

kubectl get ingress test-ingress

After that, check your Redis connection from another machine using the Ingress IP address.
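For example, assuming `<INGRESS_IP>` is the ADDRESS shown by `kubectl get ingress` and `<port>` is the Service port you chose (and assuming your ingress controller can carry plain TCP traffic, since Redis is not HTTP), the check could look like:

```shell
# Hypothetical check from another machine; <INGRESS_IP> and <port> are placeholders
redis-cli -h <INGRESS_IP> -p <port> -a 'uoGTdVy3P7' ping
# a healthy connection answers PONG
```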

mdobrucki
  • I solved this problem using Marcel's answer from https://stackoverflow.com/questions/47484312/kubectl-port-forwarding-timeout-issue, but I did not figure out why it works. – Dolphin Dec 20 '21 at 10:30