
I want to analyse data with QIIME2 in a Docker container. Note that this is my first time using Docker. I built the image, created the container, and successfully started analysing a small subsample of data. However, one step of the pipeline systematically fails with the following error message complaining about disk space:

Plugin error from feature-classifier:
  [Errno 28] No space left on device
Debug info has been saved to /tmp_mount/qiime2-q2cli-err-8hbv6l2e.log

The log file does not tell me much more:

Traceback (most recent call last):
  File "/opt/conda/envs/qiime2-2018.8/lib/python3.5/site-packages/q2cli/commands.py", line 274, in __call__
    results = action(**arguments)
  File "<decorator-gen-326>", line 2, in classify_sklearn
  File "/opt/conda/envs/qiime2-2018.8/lib/python3.5/site-packages/qiime2/sdk/action.py", line 231, in bound_callable
    output_types, provenance)
  File "/opt/conda/envs/qiime2-2018.8/lib/python3.5/site-packages/qiime2/sdk/action.py", line 362, in _callable_executor_
    output_views = self._callable(**view_args)
  File "/opt/conda/envs/qiime2-2018.8/lib/python3.5/site-packages/q2_feature_classifier/classifier.py", line 215, in classify_sklearn
    confidence=confidence)
  File "/opt/conda/envs/qiime2-2018.8/lib/python3.5/site-packages/q2_feature_classifier/_skl.py", line 45, in predict
    for chunk in _chunks(reads, chunk_size)) for m in c)
  File "/opt/conda/envs/qiime2-2018.8/lib/python3.5/site-packages/sklearn/externals/joblib/parallel.py", line 789, in __call__
    self.retrieve()
  File "/opt/conda/envs/qiime2-2018.8/lib/python3.5/site-packages/sklearn/externals/joblib/parallel.py", line 699, in retrieve
    self._output.extend(job.get(timeout=self.timeout))
  File "/opt/conda/envs/qiime2-2018.8/lib/python3.5/multiprocessing/pool.py", line 644, in get
    raise self._value
  File "/opt/conda/envs/qiime2-2018.8/lib/python3.5/multiprocessing/pool.py", line 424, in _handle_tasks
    put(task)
  File "/opt/conda/envs/qiime2-2018.8/lib/python3.5/site-packages/sklearn/externals/joblib/pool.py", line 371, in send
    CustomizablePickler(buffer, self._reducers).dump(obj)
  File "/opt/conda/envs/qiime2-2018.8/lib/python3.5/site-packages/sklearn/externals/joblib/pool.py", line 240, in __call__
    for dumped_filename in dump(a, filename):
  File "/opt/conda/envs/qiime2-2018.8/lib/python3.5/site-packages/sklearn/externals/joblib/numpy_pickle.py", line 484, in dump
    NumpyPickler(f, protocol=protocol).dump(value)
  File "/opt/conda/envs/qiime2-2018.8/lib/python3.5/pickle.py", line 408, in dump
    self.save(obj)
  File "/opt/conda/envs/qiime2-2018.8/lib/python3.5/site-packages/sklearn/externals/joblib/numpy_pickle.py", line 278, in save
    wrapper.write_array(obj, self)
  File "/opt/conda/envs/qiime2-2018.8/lib/python3.5/site-packages/sklearn/externals/joblib/numpy_pickle.py", line 93, in write_array
    pickler.file_handle.write(chunk.tostring('C'))
OSError: [Errno 28] No space left on device

But there is plenty of space left...

My image is not big (6 GB), and inodes are fine too. The input file I want to analyse is small; it is just a test.

I've found some other topics with the same issue, but everything I could try has failed.

I tried:

- removing all exited containers and dangling images

- upgrading Docker

- setting a TMPDIR environment variable to a /custom_tmp/ directory I created myself within the container. I tried several ways: within the QIIME2 environment, within the container but outside the QIIME2 environment, and adding ENV TMPDIR="/custom_tmp/" to the Dockerfile before rebuilding the image.

- setting a TMPDIR environment variable to a /tmp_mount/ directory created on the host server and then mounted into the container as a volume
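
For completeness, this is roughly how the last attempt looks (the image tag qiime2/core:2018.8 is an assumption; substitute whatever image you actually built):

```shell
# Sketch of the volume-mount attempt: /tmp_mount on the host is
# bind-mounted into the container and exported as TMPDIR so the
# QIIME2 process writes its scratch files there.
docker run -it \
  -v /tmp_mount:/tmp_mount \
  -e TMPDIR=/tmp_mount/ \
  qiime2/core:2018.8 \
  bash
```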

The same issue appears in each case. My guess is that Docker may want to write into its own tmp dir, maybe one of those "tmpfs" mounts with only 64M left (see command results below), and that the TMPDIR variable cannot fix this, but I'm stuck there...
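
If that guess about the 64M tmpfs is right, one cheap way to test it is to enlarge the container's shared-memory segment, since /dev/shm defaults to 64M in Docker and `--shm-size` is a standard `docker run` flag (the image tag below is an assumption):

```shell
# If the library is spilling to /dev/shm (64M by default in Docker),
# a larger shared-memory segment would rule that in or out.
docker run -it --shm-size=2g qiime2/core:2018.8 bash
```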

Many thanks for your attention and suggestions.

OS: Ubuntu 14.04.

df -h

Filesystem      Size  Used Avail Use% Mounted on
none            197G   57G  131G  31% /
tmpfs            64M     0   64M   0% /dev
tmpfs            32G     0   32G   0% /sys/fs/cgroup
/dev/sda1       939G  772G  120G  87% /tmp_mount
/dev/dm-0       197G   57G  131G  31% /data
shm              64M     0   64M   0% /dev/shm
tmpfs            32G     0   32G   0% /proc/acpi
tmpfs            32G     0   32G   0% /proc/scsi
tmpfs            32G     0   32G   0% /sys/firmware

docker version

Client:
 Version:           18.06.1-ce
 API version:       1.38
 Go version:        go1.10.3
 Git commit:        e68fc7a
 Built:             Tue Aug 21 17:24:58 2018
 OS/Arch:           linux/amd64
 Experimental:      false

Server:
 Engine:
  Version:          18.06.1-ce
  API version:      1.38 (minimum version 1.12)
  Go version:       go1.10.3
  Git commit:       e68fc7a
  Built:            Tue Aug 21 17:23:24 2018
  OS/Arch:          linux/amd64
  Experimental:     false

env

CONDA_SHLVL=1
LC_ALL=C.UTF-8
CONDA_EXE=/opt/conda/bin/conda
XDG_CONFIG_HOME=/home/qiime2
LANG=C.UTF-8
CONDA_PREFIX=/opt/conda/envs/qiime2-2018.8
R_LIBS_USER=/opt/conda/envs/qiime2-2018.8/lib/R/library/
TINI_VERSION=v0.16.1
PYTHONNOUSERSITE=/opt/conda/envs/qiime2-2018.8/lib/python*/site-packages/
PWD=/
HOME=/home/qiime2
CONDA_PYTHON_EXE=/opt/conda/bin/python
DEBIAN_FRONTEND=noninteractive
TMPDIR=/tmp_mount/
CONDA_PROMPT_MODIFIER=(qiime2-2018.8)
TERM=xterm
MPLBACKEND=Agg
SHLVL=1
PATH=/opt/conda/envs/qiime2-2018.8/bin:~/miniconda3/bin:/opt/conda/envs/qiime2-2018.8/bin:/opt/conda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
CONDA_DEFAULT_ENV=qiime2-2018.8
_=/usr/bin/env
OLDPWD=/data
Flamingo
  • Can we see the `docker info` output too? Some very old Docker setups defaulted to storing all of their state in a limited-size disk image file (look for the magic word "devicemapper" in that output). – David Maze Oct 26 '18 at 13:26
  • Hi @DavidMaze, thank you for the information! I just found out what the problem was, and it was not a Docker issue! I found the explanation here: https://forum.qiime2.org/t/error-no-28-out-of-memory/5758/16 . Multithreading this task demands too much memory, so I tried with 1 thread and it worked... I can't believe I spent so many hours blaming Docker. Have a nice day! – Flamingo Oct 26 '18 at 14:38

1 Answer


I found the explanation here: forum.qiime2.org/t/error-no-28-out-of-memory/5758/16 . Multithreading this task demands too much memory, so I ran it with a single thread and it worked... Hope it can help someone one day.
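
A sketch of the single-threaded invocation, in case it helps (file names are placeholders; `--p-n-jobs` is the parallelism parameter of `classify-sklearn` in the 2018.8 CLI):

```shell
# Single-threaded classification: the parallel backend only spills
# large arrays to temporary scratch files when running multiple jobs,
# so one job sidesteps the ENOSPC. File names are placeholders.
qiime feature-classifier classify-sklearn \
  --i-classifier classifier.qza \
  --i-reads rep-seqs.qza \
  --p-n-jobs 1 \
  --o-classification taxonomy.qza
```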

Flamingo