Background context (why):
We have a project written in Python, running on a containerised platform. We use an OpenStack-based Network File System to store images and mounts, which is amazingly useful for point-in-time restores, but the size of the Docker images is starting to kill us in deployments.
The python:2.7-slim image is about 180 MB (about 200 MB with our code and dependencies) and takes about 45 seconds to pull. (Lots of nodes!)
What I want to do:
I want to compile a static binary from the Python code which I can then run on a much smaller Alpine container.
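The end state I'm imagining is a minimal image along these lines (just a sketch; the binary name and entrypoint are placeholders):

FROM alpine:latest
COPY app /app
ENTRYPOINT ["/app"]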
Progress so far:
Running
$ cython --embed app.py
$ gcc -I /usr/include/python2.7 -o app app.c -lpython2.7
or alternatively
$ pyinstaller -F app.py
Either route yields an ELF binary that runs on a range of distributions (Ubuntu/Debian/Fedora/Kali/Arch). The first requires python-dev to be installed on the target; the second doesn't, but it typically runs around 1200% slower than the first and has difficulty with some of our code.
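For what it's worth, inspecting the Cython/GCC binary's dynamic dependencies on the build host shows where the python-dev requirement comes from (illustrative output, abridged; exact paths and addresses will vary):

$ ldd app
    linux-vdso.so.1 (0x...)
    libpython2.7.so.1.0 => /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0 (0x...)
    libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x...)
    libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x...)
    /lib64/ld-linux-x86-64.so.2 (0x...)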
Where be dragons:
Compiling and running the app on Debian works fine.
$ docker run -v `pwd`/app:/app debian:jessie /app
* The application has run successfully...
But Alpine seems to fail.
$ docker run -v `pwd`/app:/app alpine:latest /app
standard_init_linux.go:178: exec user process caused "no such file or directory"
I suspect this is because Alpine is musl-based, while the binary dynamically requires glibc's libc.so.6 and its loader (/lib64/ld-linux-x86-64.so.2), neither of which exists on Alpine; hence the "no such file or directory". But I cannot figure out how to get GCC to produce a genuinely static binary that doesn't dynamically require libc.
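For reference, the obvious attempt is to pass -static and link against the static archive that Debian's python-dev ships (libpython2.7.a), along with the libraries Python itself links against. A sketch of what I mean, untested and presumably incomplete:

$ gcc -static -I /usr/include/python2.7 -o app app.c \
      -lpython2.7 -lpthread -lm -lutil -ldl

Even if that links, I gather glibc's -static has caveats of its own (functions such as getaddrinfo still try to dlopen NSS libraries at runtime). Is there a clean way to do this?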