8

From this link I found that Google Cloud Dataflow uses Docker containers for its workers: "Image for Google Cloud Dataflow instances".

I see it's possible to find out the image name of the docker container.

But is there a way I can get this Docker container (i.e., from which repository do I get it?), modify it, and then tell my Dataflow job to use this new container?

The reason I ask is that we need to install various C++, Fortran, and other native libraries on our Docker containers so that the Dataflow jobs can call them, but these installations are very time-consuming, so we don't want to use the "resource" property option in Dataflow.
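
To give an idea of the slow path being avoided here: installing native libraries at worker startup typically means something like the setup.py custom-commands pattern from the Beam documentation. The sketch below is illustrative only, and the apt package names are placeholders rather than the actual dependencies in question.

    # setup.py -- rough sketch of installing native packages on each Dataflow
    # worker at startup (the Beam "custom commands" pattern). Package names
    # are placeholders.
    import subprocess

    import setuptools
    from distutils.command.build import build as _build

    CUSTOM_COMMANDS = [
        ['apt-get', 'update'],
        ['apt-get', '--assume-yes', 'install', 'gfortran', 'libexample-dev'],  # placeholders
    ]


    class build(_build):
        """Adds the custom commands to the normal build step."""
        sub_commands = _build.sub_commands + [('CustomCommands', None)]


    class CustomCommands(setuptools.Command):
        """Runs each entry in CUSTOM_COMMANDS in a subprocess."""
        user_options = []

        def initialize_options(self):
            pass

        def finalize_options(self):
            pass

        def run(self):
            for command in CUSTOM_COMMANDS:
                subprocess.check_call(command)


    setuptools.setup(
        name='my-dataflow-job',  # placeholder
        version='0.0.1',
        packages=setuptools.find_packages(),
        cmdclass={'build': build, 'CustomCommands': CustomCommands},
    )

Every worker repeats these installs at startup, which is why a prebuilt image is attractive.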

Jonathan Sylvester
  • Not technically an answer to your question but you can probably pull off what you want using Google Cloud Dataproc. Dataproc runs your code using Spark instead of Dataflow but essentially it accomplishes the exact same goal of writing a data pipeline. Dataproc also supports custom Docker images. – Jack Edmonds Dec 26 '18 at 15:15
  • See https://issues.apache.org/jira/browse/BEAM-6706?focusedCommentId=16773376&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16773376 about which SDKs allow what kind of containers. – ron Feb 21 '19 at 11:30

3 Answers

6

Update for May 2020

Custom containers are only supported within the Beam portability framework.

Pipelines launched within the portability framework currently must pass --experiments=beam_fn_api, either explicitly (as a user-provided flag) or implicitly (for example, all Python streaming pipelines pass it).

See the documentation here: https://cloud.google.com/dataflow/docs/guides/using-custom-containers?hl=en#docker

There will be more Dataflow-specific documentation once custom containers are fully supported by the Dataflow runner. For support of custom containers in other Beam runners, see: http://beam.apache.org/documentation/runtime/environments.
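
For illustration, launching against that setup from the Python SDK looked roughly like the sketch below. The project, bucket, and image names are placeholders, and --worker_harness_container_image was the flag name in use at the time (it has since been renamed in newer SDKs), so treat the exact option names as assumptions and check the linked docs.

    # Rough sketch: run a pipeline on Dataflow under the portability framework
    # with a custom SDK worker container. All names below are placeholders.
    import apache_beam as beam
    from apache_beam.options.pipeline_options import PipelineOptions

    options = PipelineOptions([
        '--runner=DataflowRunner',
        '--project=my-project',                # placeholder
        '--region=us-central1',
        '--temp_location=gs://my-bucket/tmp',  # placeholder
        '--experiments=beam_fn_api',           # opt in to the portability framework
        '--worker_harness_container_image=gcr.io/my-project/my-beam-worker:latest',  # placeholder image
    ])

    with beam.Pipeline(options=options) as p:
        p | beam.Create(['hello', 'world']) | beam.Map(print)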


Original answer:

The Docker containers used for the Dataflow workers are currently private and can't be modified or customized.

In fact, they are served from a private Docker repository, so I don't think you're able to pull them onto your machine.

Pablo
  • OK, but I found this option called "WorkerHarnessContainerImageFactory" (see: https://cloud.google.com/dataflow/java-sdk/JavaDoc/com/google/cloud/dataflow/sdk/options/DataflowPipelineWorkerPoolOptions.WorkerHarnessContainerImageFactory). Is it possible, therefore, that I can ssh into a running instance (see the first link in the original post), get the Docker image, modify it, upload the modified image to my private Google container registry, and then "somehow" update the WorkerHarnessContainerImageFactory parameter to point to this newly uploaded image? – Jonathan Sylvester Jun 11 '17 at 14:25
  • These containers are not meant to be modified. When a new worker is launched, the internal container is installed from the internal repository. – Pablo Jun 12 '17 at 18:06
  • Is this still valid today in 2020, or is it a feature that the Dataflow team will provide in the near future? – Dr. Fabien Tarrade Feb 14 '20 at 13:53
  • @Dr.FabienTarrade Custom containers are only supported within the Beam portability framework. Pipelines launched within the portability framework currently must pass --experiments=beam_fn_api, either explicitly (as a user-provided flag) or implicitly (for example, all Python streaming pipelines pass it). There will be more Dataflow-specific documentation once custom containers are fully supported by the Dataflow runner. For support of custom containers in other Beam runners, see: https://beam.apache.org/documentation/runtime/environments/. – Valentyn May 14 '20 at 15:58
  • @Valentyn Thanks for these details. I didn't have time to test it, but I will watch to see when this is fully supported by the Dataflow runner. The issue we will face is Python dependencies that need access to the internet, which is not authorized at our company. The only solution will be to use a custom container with all needed dependencies. – Dr. Fabien Tarrade May 19 '20 at 16:53
  • @Dr.FabienTarrade You can also stage required pipeline dependencies at pipeline startup; see https://beam.apache.org/documentation/sdks/python-pipeline-dependencies/ (a rough sketch of those options follows this comment thread). – Valentyn May 21 '20 at 14:10
  • I didn't know about this one and I will look at it. Thanks for the pointer. – Dr. Fabien Tarrade May 22 '20 at 06:05
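
A rough sketch of the dependency-staging options mentioned in the comments above, assuming the Beam Python SDK; the paths are placeholders, and a setup.py like the one sketched under the question can carry the native installs.

    # Placeholder paths: --requirements_file stages pure-Python dependencies,
    # --setup_file points at a setup.py (e.g. one with custom install commands)
    # that is built and installed on each worker at startup.
    from apache_beam.options.pipeline_options import PipelineOptions

    options = PipelineOptions([
        '--requirements_file=requirements.txt',
        '--setup_file=./setup.py',
    ])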
4

Update Jan 2021: Custom containers are now supported in Dataflow.

https://cloud.google.com/dataflow/docs/guides/using-custom-containers?hl=en#docker

Travis Webb
1

You can generate a template from your job (see https://cloud.google.com/dataflow/docs/templates/creating-templates for details), then inspect the template file to find the workerHarnessContainerImage used.

I just created one for a job using the Python SDK, and the image used there is dataflow.gcr.io/v1beta3/python:2.0.0.

Alternatively, you can run a job, then ssh into one of the instances and use docker ps to see all running docker containers. Use docker inspect [container_id] to see more details about volumes bound to the container etc.
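
For the first approach, a minimal template-creation sketch with the Python SDK might look like the following; the project and bucket names are placeholders.

    # Rough sketch: with --template_location set, running the pipeline stages a
    # template file to GCS instead of launching a job. All names are placeholders.
    import apache_beam as beam
    from apache_beam.options.pipeline_options import PipelineOptions

    options = PipelineOptions([
        '--runner=DataflowRunner',
        '--project=my-project',                                       # placeholder
        '--temp_location=gs://my-bucket/tmp',                         # placeholder
        '--staging_location=gs://my-bucket/staging',                  # placeholder
        '--template_location=gs://my-bucket/templates/my-template',   # placeholder
    ])

    p = beam.Pipeline(options=options)
    _ = p | beam.Create(['noop'])
    p.run()

    # Then inspect the generated file, e.g.:
    #   gsutil cat gs://my-bucket/templates/my-template
    # and look for the "workerHarnessContainerImage" field.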

Andreas
  • Hi, can you please elaborate more on this? I don't find any documentation on creating Dataflow templates for custom containers. I want to create a template and then, when triggering a job, pass the IMAGE path as a parameter. – Chaitanya Patil Aug 26 '21 at 16:04
  • Sorry, haven’t used that service in 5 years. Don’t remember what this was about – Andreas Aug 27 '21 at 19:17