
I've encountered a weird problem and I do not know how to proceed.

I have docker 18.09.2, build 6247962, on a VMware ESXi 6.5 virtual machine running Ubuntu 18.04, and docker 19.03.3, build a872fc2f86, on an Azure virtual machine running Ubuntu 18.04. I have the following little test script that I run on both hosts and in different docker containers:

#!/usr/bin/python3

import fcntl
import struct

image_path = 'foo.img'

f_obj = open(image_path, 'rb')
binary_data = fcntl.ioctl(f_obj, 2, struct.pack('I', 0))
bsize = struct.unpack('I', binary_data)[0]
print('bsize={0}'.format(bsize))
exit(0)

I run "ps -ef >foo.img" to get the foo.img file. The output of the above script on both virtual machines is bsize=4096.
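For reference, ioctl number 2 in the script is FIGETBSZ, which asks the underlying filesystem for its block size, so the result depends on the filesystem backing the file. As a cross-check that does not go through that ioctl, os.statvfs() also reports a block size (a minimal sketch; note that f_bsize comes from statfs and is not guaranteed to match what FIGETBSZ returns):

```python
#!/usr/bin/python3
# Sketch: read a block size via statvfs instead of the FIGETBSZ ioctl.
# f_bsize is the filesystem's preferred I/O block size and need not
# match what FIGETBSZ reports, so this is only a cross-check.

import os

image_path = 'foo.img'

# stand-in for "ps -ef >foo.img"
with open(image_path, 'w') as f:
    f.write('test data\n')

st = os.statvfs(image_path)
print('f_bsize={0}'.format(st.f_bsize))
```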

I have the following Dockerfile on both VMs:

FROM ubuntu:19.04

RUN apt-get update && \
    apt-get install -y \
        python \
        python3 \
        vim

WORKDIR /root
COPY testfcntl01.py foo.img ./
RUN chmod 755 testfcntl01.py

If I create a docker image with the above Dockerfile on the VM running docker 18.09.2, the above gives me the same results as the host.

If I create a docker image with the above Dockerfile on the VM running docker 19.03.3, the above gives me the following error:

root@d317404714a6:~# ./testfcntl01.py
Traceback (most recent call last):
  File "./testfcntl01.py", line 9, in <module>
    binary_data = fcntl.ioctl(f_obj, 2, struct.pack('I', 0))
OSError: [Errno 22] Invalid argument

I compared the docker directory structure, the daemon.json file, the logs, and the output of "docker info" between the hosts. They look to be identical. I tried with FROM ubuntu:18.04 as well as ubuntu:19.04, and with python2 as well as python3. Same results.

I do not know why the fcntl.ioctl call fails only in a docker container on the Azure VM running docker 19.03.3. Did something change in docker between 18 and 19 that might have caused this? Is there some configuration change that I need to make to get this to work? Something else I'm missing?

Any help would be greatly appreciated.

Thank you

Lewis Muhlenkamp


UPDATE01:
I was following the steps here to prepare my own custom Ubuntu 18.04 VHD for use in Azure. I started with a generic install of Ubuntu Server 18.04 using ubuntu-18.04.3-live-server-amd64.iso, which I just downloaded from Ubuntu's website. The test above works just fine on that freshly installed VM. Then I finish the step

sudo apt-get install linux-generic-hwe-18.04 linux-cloud-tools-generic-hwe-18.04

and then my test fails. So, I believe there is some issue with these hardware enablement packages.

Lewis M

1 Answer


I had a pretty similar error and found that if the file is in a mounted volume (at least one owned by the host), the ioctl doesn't fail. I.e.:

docker run -it -v $PWD:/these_work ubuntu:18.04 bash

Files under the /these_work directory in the container worked, however other files that were solely accessible from within the container resulted in [Errno 22] Invalid Argument.

I came here from a Yocto build error caused by a nearly identical way of reading the block size within filemap.py:

# Get the block size of the host file-system for the image file by calling
# the FIGETBSZ ioctl (number 2).
try:
    binary_data = fcntl.ioctl(file_obj, 2, struct.pack('I', 0))
except OSError:
    raise IOError("Unable to determine block size")
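Building on that snippet, a defensive workaround I've sketched (not part of filemap.py itself) is to fall back to os.statvfs() when the filesystem rejects the FIGETBSZ ioctl, as overlay-backed files inside the container apparently do:

```python
import fcntl
import os
import struct

FIGETBSZ = 2  # ioctl that asks the filesystem for its block size

def get_block_size(path):
    """Try the FIGETBSZ ioctl first; if the backing filesystem
    rejects it with EINVAL (as seen inside the container), fall
    back to the statvfs block size instead of failing."""
    with open(path, 'rb') as f_obj:
        try:
            binary_data = fcntl.ioctl(f_obj, FIGETBSZ, struct.pack('I', 0))
            return struct.unpack('I', binary_data)[0]
        except OSError:
            return os.statvfs(path).f_bsize
```

On a mounted volume both paths should agree; on a file only reachable through the container's own filesystem, only the fallback succeeds.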
– ibuntu
    I discovered the mounted volume does not necessarily have to be shared with the host; it can be just a docker volume (this was not clear to me from the answer). I also discovered that if I mount this volume to a non-empty directory in the container, Docker automatically copies the directory's content into the volume, so I don't have to make any additional changes in the container and everything works. Thank you for the workaround! – Honza Vojtěch Mar 18 '21 at 11:28