
I am creating a file-upload function for my website. When a user clicks upload, my web page calls an API that uploads the file to a MinIO node on my Kubernetes cluster and stores that file's metadata in a CockroachDB node on the same cluster.

The problem is that when I test this in my local environment, it works fine:

  • (web URL: http://localhost:5000, API URL: http://localhost:8080/upload)

but when I create a pod and run it on Kubernetes, it fails with a 503 Service Unavailable error:

  • (web URL: https://[myWebName].com, API URL: https://[myWebName].com/upload)

After debugging, I know the cause of the problem is the code I use to INSERT data into CockroachDB, but I don't know how to fix it, and I don't understand why it works in my local environment but fails once deployed to Kubernetes.

The function that causes the problem:

func cockroachUpload(data Workspace, w http.ResponseWriter) {
    // Works fine
    db, err := sql.Open("postgres",
        "postgresql://root@128.199.248.147:31037/goliath?ssl=true&sslmode=require&sslrootcert=certs/ca.crt&sslkey=certs/client.root.key&sslcert=certs/client.root.crt")
    if err != nil {
        w.Write([]byte(err.Error()))
        log.Fatal("error connecting to the database: ", err)
    }
    defer db.Close()

    // Causes the error
    query := "INSERT INTO workspace(name, permission) VALUES ($1, $2)"
    rows, err := db.Query(query, "test", true)
    if err != nil {
        log.Fatal(err)
    }
    defer rows.Close()
    fmt.Println("done workspace")
}

PS: I use NodePort to connect to my MinIO and CockroachDB services on Kubernetes.

    Can you share your Kubernetes manifests somewhere? Can you share the logs of the pods that are running your workload? `kubectl logs `. Thanks – Rico Jul 15 '20 at 18:13
  • Easy one, pal: your Docker container does not contain the certificate, so it will never upload the file. Locally you can do it, but on the cluster the container must have the certificate available. This happens when using a stripped-down Docker image such as Alpine; it can be solved in your Dockerfile. – Eddwin Paz Jul 16 '20 at 00:02

1 Answer

As far as I can evaluate, this is mostly a Dockerfile issue. It usually happens when you build from stripped-down Docker images such as Alpine. On localhost you don't need to validate certificates, but in production the server validates the certificate against your client, and the CA certificates won't be available inside the container. To fix this, add them back as in the following sample.

# FROM golang:1.14.1 AS builder
FROM golang:alpine as builder

RUN apk update && apk add --no-cache git

# Download and install the latest release of dep
ADD https://github.com/golang/dep/releases/download/v0.4.1/dep-linux-amd64 /usr/bin/dep
RUN chmod +x /usr/bin/dep

# Copy the code from the host and compile it
WORKDIR $GOPATH/src/mycontainer
COPY Gopkg.toml Gopkg.lock ./
RUN dep ensure --vendor-only

COPY . ./
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix nocgo -o /app .

# FROM scratch
FROM alpine:latest

RUN apk --no-cache add ca-certificates

COPY --from=builder /app ./
# COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
ENTRYPOINT ["./app"]
# Expose the application on port 9000
EXPOSE 9000
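One more thing to check, since the Go code loads `certs/ca.crt`, `certs/client.root.key`, and `certs/client.root.crt` by relative path: the runtime stage above copies only the compiled binary, so those client certificates would be missing in the final image. A fragment you could add to the runtime stage (assuming the `certs/` directory sits next to the source, and `WORKDIR` resolved to `/go/src/mycontainer` in the builder):

```dockerfile
# Runtime stage: copy the CockroachDB client certs next to the binary,
# so the relative paths in the connection string still resolve.
COPY --from=builder /go/src/mycontainer/certs/ ./certs/
```

Alternatively, mount the certificates into the pod as a Kubernetes Secret instead of baking them into the image.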

deployment.yml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: production-deployment
  namespace: staging
  labels:
    app: staging-customer
spec:
  replicas: 2
  selector:
    matchLabels:
      app: staging-customer
  template:
    metadata:
      labels:
        app: staging-customer
    spec:
      containers:
        - image: registry.gitlab.com/customer:000001
          name: staging-customer
          imagePullPolicy: Always
          ports:
            - containerPort: 9000
              protocol: TCP

service.yml

apiVersion: v1
kind: Service
metadata:
  name: production-service
  namespace: staging
spec:
  type: NodePort
  selector:
    app: staging-customer
  ports:
  - name: http
    port: 80
    targetPort: 9000

Also check that you have an ingress similar to this for the deployment.

ingress.yml

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: production-ingress
  namespace: staging
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: upload.mydomain.com
      http:
        paths:
          - path: /
            backend:
              serviceName: production-service
              servicePort: 80

By the way, you can map your database with a Kubernetes Service to avoid using a raw IP address. If you don't know how to use Kubernetes' built-in service discovery, I invite you to follow this: https://www.youtube.com/watch?v=fvpq4jqtuZ8&list=PLIivdWyY5sqL3xfXz5xJvwzFW_tlQB_GB&index=7&t=0s

Eddwin Paz