
I previously had the Cloud SQL Proxy working in a sidecar pattern, but found the "Cloud SQL Proxy in a Kubernetes cluster" example in the cloudsql-proxy repo, so I decided to break it out on its own.

I immediately had problems: on the first connection, the container would crash. I decided to get back to as pure a test case as possible and add a livenessProbe.

I found that this recommended configuration crashes on its own:

❯❯❯ kubectl get pods
NAME                            READY     STATUS    RESTARTS   AGE
cloudsqlproxy-109958711-ks4bf   1/1       Running   5          2m

Deployment:

---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: cloudsqlproxy
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: cloudsqlproxy
    spec:
      containers:
      - image: gcr.io/cloudsql-docker/gce-proxy:1.09
        name: cloudsqlproxy
        command: "/cloud_sql_proxy", "--dir=/cloudsql",
                  "-instances=foo:us-central1:db=tcp:3306",
                  "-credential_file=/secrets/cloudsql/credentials.json"]
        ports:
        - name: port-db
          containerPort: 3306

        livenessProbe:
          exec:
            command: ["netcat", "-U", "/cloudsql/foo:us-central1:db=tcp:3306"]
          initialDelaySeconds: 5
          timeoutSeconds: 10

        volumeMounts:
          - name: cloudsql-instance-credentials
            mountPath: /secrets/cloudsql
            readOnly: true
          - name: ssl-certs
            mountPath: /etc/ssl/certs
          - name: cloudsql
            mountPath: /cloudsql
      volumes:
        - name: cloudsql-instance-credentials
          secret:
            secretName: cloudsql-instance-credentials
        - name: ssl-certs
          hostPath:
            path: /etc/ssl/certs
        - name: cloudsql
          emptyDir: {}

Service:

apiVersion: v1
kind: Service
metadata:
  name: cloudsqlproxy-service
spec:
  ports:
  - port: 3306
    targetPort: port-db
  selector:
    app: cloudsqlproxy

The logs show nothing except starting up and listening:

E  2017/10/09 13:51:35 Listening on 127.0.0.1:3306 for foo:us-central1:db
E  2017/10/09 13:51:35 Ready for new connections
E  2017/10/09 13:52:38 using credential file for authentication; email=cloud-sql-client@foo.iam.gserviceaccount.com
E  2017/10/09 13:52:38 Listening on 127.0.0.1:3306 for foo:us-central1:db
E  2017/10/09 13:52:38 Ready for new connections
E  2017/10/09 13:54:26 using credential file for authentication; email=cloud-sql-client@foo.iam.gserviceaccount.com
E  2017/10/09 13:54:26 Listening on 127.0.0.1:3306 for foo:us-central1:db
E  2017/10/09 13:54:26 Ready for new connections

What am I missing? Where should I be looking to find the reason for the crash? Do I have a config error?

kross

2 Answers


I think you should replace -instances=foo:us-central1:db=tcp:3306 in your configuration with -instances=foo:us-central1:db=tcp:0.0.0.0:3306.
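
With that change, the relevant part of the Deployment's container spec would look like this (a sketch based on the YAML in the question; foo:us-central1:db is the placeholder instance name from the question):

        command: ["/cloud_sql_proxy", "--dir=/cloudsql",
                  "-instances=foo:us-central1:db=tcp:0.0.0.0:3306",
                  "-credential_file=/secrets/cloudsql/credentials.json"]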

  • @kross This is the correct answer. By default, cloudsql-proxy only exposes the SQL port over localhost, which is one reason the sidecar pattern is the recommended methodology: it works with the default config. The default config accepting traffic only over localhost might be why your healthcheck logic failed; perhaps the logic you added was using the external IP and not 127.0.0.1. tcp:0.0.0.0:3306 makes it accept SQL traffic from both localhost and external sources. – neoakris Sep 08 '22 at 20:14
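
Once the proxy listens on 0.0.0.0, the liveness probe can also simply check the TCP port instead of running netcat against a Unix socket. A minimal sketch, reusing the port and probe timings from the question's Deployment:

        livenessProbe:
          tcpSocket:
            port: 3306
          initialDelaySeconds: 5
          timeoutSeconds: 10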

There appears to be a permissions issue in my environment, and possibly a bug in that the failure was not logged. I was only able to get the log when deployed as a sidecar:

2017/10/09 15:34:10 using credential file for authentication; email=cloud-sql-client@foo.iam.gserviceaccount.com
2017/10/09 15:34:10 Listening on 127.0.0.1:3306 for foo:us-central1:db
2017/10/09 15:34:10 Ready for new connections
2017/10/09 15:34:30 New connection for "foo:us-central1:db"
2017/10/09 15:34:30 couldn't connect to "foo:us-central1:db": ensure that the account has access to "foo:us-central1:db" (and make sure there's no typo in that name). Error during createEphemeral for foo:us-central1:db: googleapi: Error 403: The client is not authorized to make this request., notAuthorized
2017/10/09 15:34:30 New connection for "foo:us-central1:db"
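
Note that when a container is crash-looping, the error often only shows up in the logs of the previous (crashed) container instance, which kubectl can retrieve with the --previous flag (a general debugging tip; the pod name here is the one from the question):

❯❯❯ kubectl logs cloudsqlproxy-109958711-ks4bf --previous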

I'll recreate permissions at this point and retry from my sidecar prototype, then move back to an isolated deployment.
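
If the 403 notAuthorized error above is the cause, granting the proxy's service account the Cloud SQL Client role is the usual fix. A sketch, assuming the project ID and service account email shown in the log output above:

gcloud projects add-iam-policy-binding foo \
  --member serviceAccount:cloud-sql-client@foo.iam.gserviceaccount.com \
  --role roles/cloudsql.client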

kross