
I would appreciate your support: I need to decrease the pipeline execution time for running Python test cases.

The suite takes 3 minutes or less on my local machine but 8 minutes or more on the pipeline machine.

Local machine:

Processor: Intel(R) Core(TM) i7-4510U CPU @ 2.00GHz
Memory: 16 GiB system memory

Cloud Build pipeline machine:

8 vCPUs, 8 GiB RAM

I used the commands below to execute the test cases, but the run time did not change, so I think the test workers are not running in parallel: either the 8 CPU cores are not being used well, or the docker-compose step is not using the pipeline machine's full capacity.

pytest -n 4
pytest -n 6 
pytest -n auto 
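
One thing I want to verify is how many CPUs the django container actually sees, since -n auto sizes the worker pool from the detected CPU count (and pytest-xdist lists its gw0..gwN workers at startup). A quick check, assuming the compose file lives at ./docker/docker-compose.yml as in the Cloud Build step below (the -c argument goes to the bash entrypoint defined in the compose file):

docker-compose -f ./docker/docker-compose.yml run --rm django \
    -c "nproc; python -c 'import multiprocessing; print(multiprocessing.cpu_count())'"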

My docker-compose file:

version: '3'
services:
  redis:
    image: registry.hub.docker.com/library/redis:latest
    expose:
        - 6379
  postgres:
    image: registry.hub.docker.com/library/postgres:9.6
    restart: always
    command: -c max_connections=3000 -c fsync=off -c synchronous_commit=off -c full_page_writes=off -c max_locks_per_transaction=500
    volumes:
        - ./postgres/pgdata:/var/lib/postgresql/data
    environment:
        - POSTGRES_PASSWORD=my-postgres-pass
        - POSTGRES_USER=my-postgres-user
        - POSTGRES_DB=my-postgres-db
    expose:
        - "5432"
  datastore:
    image: registry.hub.docker.com/google/cloud-sdk
    entrypoint: bash
    command: gcloud beta emulators datastore start --host-port 0.0.0.0:8081
    environment:
        - CLOUDSDK_CORE_PROJECT=my-project
    expose:
        - "8081"
  pubsub:
    image: registry.hub.docker.com/google/cloud-sdk
    entrypoint: bash
    command: gcloud beta emulators pubsub start --host-port 0.0.0.0:8085
    environment:
        - CLOUDSDK_CORE_PROJECT=my-project
    expose:
        - "8085"
  django:
    working_dir: /opt/my-project
    image: $GUNICORN_IMAGE_NAME
    entrypoint: bash
    command: >
        -c "
        /usr/local/bin/pip install -r /opt/my-project/requirements.dev.txt &&
        python manage.py migrate_schemas &&
        pytest -n auto ."
    depends_on:
        - postgres
        - redis
        - datastore
        - pubsub
    environment:
        ......
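
I also wonder whether the Postgres bind mount adds disk I/O on the build VM. A variant I am considering (untested; tmpfs is a standard compose option that keeps a path in memory) would replace the volume like this:

  postgres:
    image: registry.hub.docker.com/library/postgres:9.6
    # keep the data directory in memory for CI instead of mounting ./postgres/pgdata
    tmpfs:
        - /var/lib/postgresql/data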

My Cloud Build file:

- name: "gcr.io/cloud-builders/docker"
  id: 'gunicorn_build'
  args: ['build', '-t', 'gcr.io/${PROJECT_ID}/${REPO_NAME}-gunicorn:${BRANCH_NAME}-${SHORT_SHA}', '.']


- name: 'gcr.io/$PROJECT_ID/docker-compose'
  id: 'test_cases'
  args: ['-f','./docker/docker-compose.yml','up', '--abort-on-container-exit' , '--exit-code-from', 'django' ]
  env:
  - 'GUNICORN_IMAGE_NAME=gcr.io/${PROJECT_ID}/${REPO_NAME}-gunicorn:${BRANCH_NAME}-${SHORT_SHA}'


options:
  machineType: 'E2_HIGHCPU_8'
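
I am also wondering how much of the time goes to the pip install that runs inside the test_cases step on every build. If the dev requirements were instead installed while building $GUNICORN_IMAGE_NAME in the gunicorn_build step (an assumption about the Dockerfile), the compose command could shrink to something like:

    command: >
        -c "
        python manage.py migrate_schemas &&
        pytest -n auto ."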

  • Have you checked the [best practices for speeding up builds in Cloud Build](https://cloud.google.com/build/docs/speeding-up-builds)? There are some good tips that you can follow. From my experience, depending on your project size, builds in Cloud Build can be 2-3 times longer than local, so the time it's taking for you is within the expected amount. Nevertheless, you can try those best practices to gain some time. – Ralemos Feb 22 '21 at 14:38
  • Thanks for your reply, but I don't see why the build should take 2-3 times longer than local, since the local and Cloud Build machines have the same resources or more. – Omar Mar 15 '21 at 10:46
