
My pods are stuck in the "CrashLoopBackOff" state; the setup is Jenkins with Kubernetes on GCP.

I have found a few answers indicating that my Dockerfile is wrong and that the container's main process needs to keep running in the foreground indefinitely.

But I already run the command ["sh", "-c", "app -port=8080"] in production.yaml to keep it in that state.

The exact same Dockerfile worked when I deployed the project to Kubernetes manually.

The project I'm trying to deploy looks like this:


The Dockerfile

FROM php:7.2.4-apache

COPY apache_default /etc/apache2/sites-available/000-default.conf
RUN a2enmod rewrite

COPY src /var/www/html/src
COPY public /var/www/html/public
COPY config /var/www/html/config
ADD composer.json /var/www/html
ADD composer.lock /var/www/html

# Install software
RUN apt-get update && apt-get install -y git
# Install unzip
RUN apt-get install -y unzip
# Install curl
RUN apt-get install -y curl

# Install dependencies
RUN php -r "readfile('http://getcomposer.org/installer');" | php -- --install-dir=/usr/bin/ --filename=composer

RUN cd /var/www/html && composer install --no-dev --no-interaction --optimize-autoloader
# install pdo for mysql
RUN docker-php-ext-install pdo pdo_mysql

COPY "memory-limit-php.ini" "/usr/local/etc/php/conf.d/memory-limit-php.ini"

RUN chmod 777 -R /var/www

# Production environment
ENV ENVIVORMENT=prod

EXPOSE 80

CMD apachectl -D FOREGROUND

CMD ["app"]

The Jenkinsfile

def project = '****'
def appName = 'wobbl-mobile-backend'
def imageTag = "gcr.io/${project}/${appName}"
def feSvcName = "wobbl-main-backend-service"

pipeline {
  agent {
    kubernetes {
      label 'sample-app'
      defaultContainer 'jnlp'
      yamlFile 'k8s/pod/pod.yaml'
    }
  }
  stages {
    // Deploy Image and push with image container builder
    stage('Build and push image with Container Builder') {
      steps {
        container('gcloud') {
          sh "PYTHONUNBUFFERED=1 gcloud container builds submit -t ${imageTag} ."
        }
      }
    }
    // Deploy to production
    stage('Deploy Production') {
      // Production branch
      steps{
        container('kubectl') {
        // Change deployed image in canary to the one we just built
          sh("sed -i.bak 's#gcr.io/cloud-solutions-images/wobbl-main:1.0.0#${imageTag}#' ./k8s/production/*.yaml")
          sh("kubectl --namespace=production apply -f k8s/services/")
          sh("kubectl --namespace=production apply -f k8s/production/")
          sh("echo http://`kubectl --namespace=production get service/${feSvcName} -o jsonpath='{.status.loadBalancer.ingress[0].ip}'` > ${feSvcName}")
        }
      }
    }
  }
}
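
(For debugging, a quick check that the sed substitution landed and the production deployment really points at the freshly pushed tag; the deployment name comes from the production manifest shown further below:)

# Show which image the production deployment is configured to run:
kubectl --namespace=production get deployment wobbl-main-backend-production \
  -o jsonpath='{.spec.template.spec.containers[0].image}'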

Then the Kubernetes YAML configurations:

pod.yaml

apiVersion: v1
kind: Pod
metadata:
  labels:
    component: ci
spec:
  # Use service account that can deploy to all namespaces
  serviceAccountName: default
  containers:
  - name: gcloud
    image: gcr.io/cloud-builders/gcloud
    command:
    - cat
    tty: true
  - name: kubectl
    image: gcr.io/cloud-builders/kubectl
    command:
    - cat
    tty: true

The service used, backend.yaml

kind: Service
apiVersion: v1
metadata:
  name: wobbl-main-backend-service
spec:
  ports:
  - name: http
    port: 8080
    targetPort: 8080
    protocol: TCP
  selector:
    role: backend
    app: wobbl-main

The deployment production.yaml

kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: wobbl-main-backend-production
spec:
  replicas: 1
  template:
    metadata:
      name: backend
      labels:
        app: wobbl-main
        role: backend
        env: production
    spec:
      containers:
      - name: backend
        image: gcr.io/cloud-solutions-images/wobbl-main:1.0.0
        resources:
          limits:
            memory: "500Mi"
            cpu: "100m"
        imagePullPolicy: Always
        readinessProbe:
          httpGet:
            path: /healthz
            port: 8080
        command: ["sh", "-c", "app -port=8080"]
        ports:
        - name: backend
          containerPort: 8080
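
(For reference, the service above selects pods with role: backend and app: wobbl-main, which are exactly the labels this deployment sets. Once the pods run, that wiring can be checked with, e.g.:)

# List the backend pods by the labels the deployment applies:
kubectl --namespace=production get pods -l app=wobbl-main,role=backend

# The service should list those pods as endpoints once they are Ready:
kubectl --namespace=production get endpoints wobbl-main-backend-service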

When I run kubectl describe pod **** -n production I get the following events:

Normal   Created  3m (x4 over 4m)  kubelet, gke-jenkins-cd-default-pool-83e2f18e-hvwp  Created container
Normal   Started  3m (x4 over 4m)  kubelet, gke-jenkins-cd-default-pool-83e2f18e-hvwp  Started container
Warning  BackOff  2m (x8 over 4m)  kubelet, gke-jenkins-cd-default-pool-83e2f18e-hvwp  Back-off restarting failed container

Any hints on how to debug this?

DaAmidza
  • `kubectl logs` is often a good first step; if the container is crashing on startup it might say something. – David Maze Oct 26 '18 at 12:54
  • @DavidMaze thank you, I got some indicators here. I'll dig a bit more and then I'll let you know. – DaAmidza Oct 26 '18 at 12:57
  • @DavidMaze I got `sh: 1: app: not found`, and I also removed the EXPOSE 80 from the Dockerfile since it was not the same port as in production. Any hints? – DaAmidza Oct 26 '18 at 13:02
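
For reference, a minimal sketch of the `kubectl logs` step suggested in the comments above; <pod-name> is a placeholder for the crashing pod's name:

# Logs from the current container:
kubectl --namespace=production logs <pod-name>

# Logs from the previous (crashed) attempt, usually where the startup error is:
kubectl --namespace=production logs <pod-name> --previous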

1 Answer


First, your Dockerfile says:

CMD ["app"]

And then within your deployment definition you have:

command: ["sh", "-c", "app -port=8080"]

This is redundant: the command in the deployment overrides the CMD from the image. I suggest you use only one of them.
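
(Roughly speaking, the pod spec's command maps to the Docker entrypoint, so what the kubelet ends up running can be approximated locally; <image-tag> is a placeholder:)

# The image's CMD ["app"] is only a default; specifying `command` in the pod
# spec replaces it, roughly equivalent to:
docker run --rm --entrypoint sh <image-tag> -c "app -port=8080"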

Secondly, I assume one of the install commands gets you the app binary. Make sure it's part of your $PATH.
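
(A quick way to verify that inside the built image; <image-tag> is a placeholder:)

# Check whether `app` exists and is on $PATH inside the image:
docker run --rm --entrypoint sh <image-tag> -c 'command -v app || echo "app not on PATH"'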

Also, you have both a pod manifest and a deployment manifest; make sure you are deploying only one of them, not both.

  • The configuration you are looking at was taken from https://github.com/GoogleCloudPlatform/continuous-deployment-on-kubernetes, so I'm not sure how Google can be wrong. But I'll try your answer to see if it works. Can you explain to me in detail what both lines do? – DaAmidza Oct 26 '18 at 13:38
  • A Docker image will have an `ENTRYPOINT` and/or a `CMD`; that is the entry point of the image, the process that runs as soon as a container starts from it. In this case the command `app` would run, and that alone determines how the container starts. In the deployment manifest you additionally specify `["sh", "-c", "app -port=8080"]`, which means that when K8S starts the container it runs that command instead of the `app` command from the image (the pod spec's `command` overrides it). –  Oct 27 '18 at 00:56
  • Thanks for the explanation. It helped me solve the issue. – DaAmidza Oct 27 '18 at 10:06