
I'm trying to use docker-compose inside a Bitbucket pipeline in order to build several microservices and run tests against them. However, I'm getting the following error:

Step 19/19 : COPY . .
Service 'app' failed to build: failed to copy files: failed to copy directory: Error processing tar file(exit status 1): Container ID 166535 cannot be mapped to a host ID

As of now, my docker-compose.yml looks like this:

version: '2.3'
services:
  app:
    build:
      context: .
      target: dev
    ports:
      - "3030:3030"
    image: myapp:dev
    entrypoint: "/docker-entrypoint-dev.sh"
    command: [ "npm", "run", "watch" ]
    volumes:
      - .:/app/
      - /app/node_modules
    environment:
      NODE_ENV: development
      PORT: 3030
      DATABASE_URL: postgres://postgres:@postgres/mydb

and my Dockerfile is as follows:

# ---- Base ----
#
FROM node:10-slim AS base
ENV PORT 80
ENV HOST 0.0.0.0
EXPOSE 80
WORKDIR /app
COPY ./scripts/docker-entrypoint-dev.sh /
RUN chmod +x /docker-entrypoint-dev.sh
COPY ./scripts/docker-entrypoint.sh /
RUN chmod +x /docker-entrypoint.sh
ENTRYPOINT ["/docker-entrypoint.sh"]
COPY package.json package-lock.json ./

# ---- Dependencies ----
#
FROM base as dependencies
RUN npm cache verify
RUN npm install --production=true
RUN cp -R node_modules node_modules_prod
RUN npm install --production=false

# ---- Development ----
#
FROM dependencies AS dev
ENV NODE_ENV development
COPY . .

# ---- Release ----
#
FROM dependencies AS release
ENV NODE_ENV production
COPY --from=dependencies /app/node_modules_prod ./node_modules
COPY . .
CMD ["npm", "start"]

And in my bitbucket-pipelines.yml I define my pipeline as:

image: node:10.15.3
pipelines:
  default:
    - step:
        name: 'install docker-compose, and run tests'
        script:
          - curl -L "https://github.com/docker/compose/releases/download/1.25.4/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
          - chmod +x /usr/local/bin/docker-compose
          - docker-compose -v
          - docker-compose run app npm run test
          - echo 'tests done'
        services:
          - docker

However, this setup works when I use docker directly, without docker-compose, defining my pipeline as:

pipelines:
  default:
    - step:
        name: 'install and run tests'
        script:
          - docker build -t myapp .
          - docker run --entrypoint="" myapp npm run test
          - echo 'done!'
        services:
          - postgres
          - docker

I found this issue (https://jira.atlassian.com/browse/BCLOUD-17319) in the Atlassian community, but I could not find a solution to fix my broken use case. Any suggestions?

Matheus Ianzer
  • I hope that you have solved this problem. For future visitors, please check again with the issue linked above and read through all of the suggestions. I have just added another answer (below) and at https://jira.atlassian.com/browse/BCLOUD-17319?focusedCommentId=2654226&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-2654226 – Jason Harrison Feb 08 '21 at 18:52

2 Answers


I would try using an image that already has docker-compose installed, instead of installing it during the pipeline.

image: node:10.15.3
pipelines:
  default:
    - step:
        name: 'run tests'
        script:
          - docker-compose -v
          - docker-compose run app npm run test
          - echo 'tests done'
        services:
          - docker

definitions:
    services:
        docker:
            image: docker/compose:1.25.4

Try adding this to your bitbucket-pipelines.yml.

If it doesn't work, rename docker to `custom-docker` (service names must be lowercase) in both the definitions and services sections.

If that doesn't work either, then since you don't need Node.js in the pipeline directly, try this approach:

image: docker/compose:1.25.4
pipelines:
  default:
    - step:
        name: 'run tests'
        script:
          - docker-compose -v
          - docker-compose run app npm run test
          - echo 'tests done'
        services:
          - docker

satanTime
  • Using nodejs image failed to build docker/compose service, and using docker/compose image instead of nodejs built successfully but failed with the same error "*Container ID 166535 cannot be mapped to a host ID*" at the same step "*COPY . .*" – Matheus Ianzer Mar 31 '20 at 20:59
  • crazy :) have you tried to change it to `COPY . ./` or `COPY ./. /app/.`? What is the purpose to use it exactly as `. .`? – satanTime Mar 31 '20 at 21:04
  • Found also this article: https://circleci.com/docs/2.0/high-uid-error/#solution – satanTime Mar 31 '20 at 21:10
  • Okay, yes. Looks like it's much deeper. I would say try the steps from https://jira.atlassian.com/browse/BCLOUD-17319?focusedCommentId=2259676&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-2259676 and then you should be able to detect where you have files with bad UID / GID, and it should be clearer how to fix them. – satanTime Mar 31 '20 at 21:18
  • Running with the docker/compose image was a great option for me – PaulIsLoud Aug 20 '20 at 08:10
  • Using definitions was intriguing, but in the end did not work. BTW, the service name must be lowercase, so instead of customDocker it should be `custom-docker`. But this didn't work for me, as the docker binary was not available. Using an image with docker built in works better. – Berend de Boer Nov 30 '21 at 18:52

TL;DR: Start from your base image and check for the ID that is causing the problem using commands run as pipeline steps. Use "problem_id = error_message_id - 100000 - 65536" to find the uid or gid that is not supported. Note that chown copies every file it modifies, inflating your Docker image.

The details:

We were using the base image tensorflow/tensorflow:2.2.0-gpu, and though we tried to find the problem ourselves, we were looking too late in our Dockerfile and making assumptions that were wrong. With help from Atlassian support, we found that /usr/local/lib/python3.6 contained many files belonging to group staff (gid = 50).

Assumption 1: Bitbucket pipelines have definitions for the standard "linux" user ids and group ids.

Reality: Bitbucket Pipelines only define a subset of the standard users and groups. Specifically, they do not define group "staff" with gid 50. Your Dockerfile base image may define group staff (in /etc/group), but the Bitbucket pipeline runs in a Docker container without that gid. Do NOT check for the ids with Dockerfile instructions such as

RUN cat /etc/group
RUN cat /etc/passwd

Instead, execute these commands as Bitbucket pipeline commands in your script, because what matters is the environment the pipeline itself runs in.
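For instance, a diagnostic step in bitbucket-pipelines.yml might look like this (a sketch; the step name is illustrative):

```yaml
pipelines:
  default:
    - step:
        name: 'inspect uids and gids available to the pipeline'
        script:
          - cat /etc/group
          - cat /etc/passwd
```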

Assumption 2: It was something we were installing that was breaking the build.

Reality: Although we could "move the build failure around" by adjusting which packages we installed, this was likely just a case of some packages overwriting the ownership of pre-existing files.

We were able to find the files by using the relationship between the id in the error message and the id inside the docker build:

problem_id = error_message_id - 100000 - 65536

And used the computed id value (50) to find the files early in our Dockerfile:

RUN find / -uid 50 -ls
RUN find / -gid 50 -ls

For example:

Error processing tar file(exit status 1): Container ID 165586 cannot be mapped to a host ID
50 = 165586 - 100000 - 65536
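The arithmetic above can be scripted when triaging; here the error ID is the one from the quoted message, and the offset 100000 + 65536 is assumed to be the pipeline's fixed uid-remapping base plus range:

```shell
#!/bin/sh
# ID taken from the error message "Container ID 165586 cannot be mapped to a host ID"
error_id=165586
# Subtract the assumed remapping base (100000) and range (65536)
problem_id=$(( error_id - 100000 - 65536 ))
echo "$problem_id"   # prints 50, i.e. gid of group "staff"
```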

Final solution (for us):

Adding this command early to our Dockerfile:

RUN chown -R root:root /usr/local/lib/python*

Fixed the Bitbucket pipeline build problem, but it also increases the size of our Docker image, because Docker makes a copy of every file that is modified (contents or filesystem flags). We will look again at multi-stage builds to reduce the size of our Docker images.
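A sketch of what such a multi-stage layout might look like (the package name and copied path are illustrative, not our actual build): doing the install and the chown in a builder stage, then copying the already-fixed tree into the final image in a single layer, so the chown does not duplicate files as an extra layer.

```dockerfile
# Builder stage: install packages and normalize ownership here.
FROM tensorflow/tensorflow:2.2.0-gpu AS builder
RUN pip install --no-cache-dir some-package \
 && chown -R root:root /usr/local/lib/python*

# Final stage: one COPY picks up the already-fixed files in a single layer.
FROM tensorflow/tensorflow:2.2.0-gpu
COPY --from=builder /usr/local/lib/python3.6 /usr/local/lib/python3.6
```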

Jason Harrison