I am working on a Cloud Build script that builds a multi-stage Docker image for integration testing. To speed up the build I opted to use Kaniko. The relevant portions of the Dockerfile and cloudbuild.yaml are included below.
cloudbuild.yaml
steps:
# Build BASE image
- name: gcr.io/kaniko-project/executor:v0.17.1
  id: buildinstaller
  args:
  - --destination=gcr.io/$PROJECT_ID/<MY_REPO>-installer:$BRANCH_NAME
  - --destination=gcr.io/$PROJECT_ID/<MY_REPO>-installer:$SHORT_SHA
  - --cache=true
  - --cache-ttl=24h
  - --cache-repo=gcr.io/$PROJECT_ID/<MY_REPO>/cache
  - --target=installer
# Build TEST image
- name: gcr.io/kaniko-project/executor:v0.17.1
  id: buildtest
  args:
  - --destination=gcr.io/$PROJECT_ID/<MY_REPO>-test:$BRANCH_NAME
  - --destination=gcr.io/$PROJECT_ID/<MY_REPO>-test:$SHORT_SHA
  - --cache=true
  - --cache-ttl=24h
  - --cache-repo=gcr.io/$PROJECT_ID/<MY_REPO>/cache
  - --target=test-image
  waitFor:
  - buildinstaller
# --- REMOVED SOME CODE FOR BREVITY ---
# Build PRODUCTION image
- name: gcr.io/kaniko-project/executor:v0.17.1
  id: build
  args:
  - --destination=gcr.io/$PROJECT_ID/<MY_REPO>:$BRANCH_NAME
  - --destination=gcr.io/$PROJECT_ID/<MY_REPO>:$SHORT_SHA
  - --destination=gcr.io/$PROJECT_ID/<MY_REPO>:latest
  - --cache=true
  - --cache-ttl=24h
  - --cache-repo=gcr.io/$PROJECT_ID/<MY_REPO>/cache
  - --target=production-image
  waitFor:
  - test # TODO: this will run after the tests, which were omitted here for brevity
images:
- gcr.io/$PROJECT_ID/<MY_REPO>
Dockerfile
FROM ruby:2.5-alpine AS installer
# Expose port
EXPOSE 3000
# Set desired port
ENV PORT 3000
# Set the app directory variable
ENV APP_HOME /app
RUN mkdir -p ${APP_HOME}
WORKDIR ${APP_HOME}
# Install necessary packages
RUN apk add --update --no-cache \
build-base curl less libressl-dev zlib-dev git \
mariadb-dev tzdata imagemagick libxslt-dev \
bash nodejs
# Copy gemfiles to be able to bundle install
COPY Gemfile* ./
#############################
# STAGE 1.5: Test build     #
#############################
FROM installer AS test-image
# Set environment
ENV RAILS_ENV test
# Install gems to /bundle
RUN bundle install --deployment --jobs $(nproc) --without development local_gems
# Copy app files
COPY . .
RUN bundle install --with local_gems
#############################
# STAGE 2: Production build #
#############################
FROM installer AS production-image
# Set environment
ENV RAILS_ENV production
# Install gems to /bundle
RUN bundle install --deployment --jobs $(nproc) --without development test local_gems
# Copy app files
COPY . .
RUN bundle install --with local_gems
# Precompile assets
RUN DB_ADAPTER=nulldb bundle exec rake assets:precompile assets:clean
# Puma start command
CMD ["bundle", "exec", "puma", "-C", "config/puma.rb"]
Since my Docker image is a multi-stage build with two separate end stages that share a common base stage, I want to share the cache between the common portion and the two end builds. To accomplish this, I set all builds to share the same cache repository: --cache-repo=gcr.io/$PROJECT_ID/<MY_REPO>/cache. It has worked in all my tests so far. However, I have not been able to determine whether this is best practice, or whether another way of caching a base image would be recommended. Is this an acceptable implementation?
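In case it helps in judging the approach, a build step along these lines (a sketch only; the inspectcache id is a placeholder I made up) should list the cache-key tags Kaniko pushes to the shared repo, which would confirm that all three builds are writing to and reading from the same cache:

# Hypothetical debug step (sketch; the id is a placeholder). Kaniko tags each
# cached layer in the cache repo with a hash of the command that produced it,
# so listing the tags shows which layers the three builds actually share.
- name: gcr.io/cloud-builders/gcloud
  id: inspectcache
  args: ['container', 'images', 'list-tags', 'gcr.io/$PROJECT_ID/<MY_REPO>/cache']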
I have also come across the Kaniko warmer, but I have been unable to apply it to my situation.
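For reference, here is roughly how I would expect the warmer to fit in, based on its documented flags (a sketch only; the warmbase id and the /workspace/cache path are placeholders, and I am assuming the warmer image is published at the same tag as the executor):

# Hypothetical warm-up step (sketch). The warmer pre-pulls base images into a
# local cache directory; since /workspace persists between Cloud Build steps,
# the executor steps could then read it back via --cache-dir=/workspace/cache.
- name: gcr.io/kaniko-project/warmer:v0.17.1
  id: warmbase
  args:
  - --cache-dir=/workspace/cache
  - --image=ruby:2.5-alpine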