8

My Apache Beam pipeline implements custom Transforms and ParDos in Python modules that in turn import other modules I wrote. On the local runner this works fine because all the files are available in the same path. With the Dataflow runner, the pipeline fails with a module import error.

How do I make my custom modules available to all the Dataflow workers? Please advise.

Below is an example of the error:

ImportError: No module named DataAggregation

    at find_class (/usr/lib/python2.7/pickle.py:1130)
    at find_class (/usr/local/lib/python2.7/dist-packages/dill/dill.py:423)
    at load_global (/usr/lib/python2.7/pickle.py:1096)
    at load (/usr/lib/python2.7/pickle.py:864)
    at load (/usr/local/lib/python2.7/dist-packages/dill/dill.py:266)
    at loads (/usr/local/lib/python2.7/dist-packages/dill/dill.py:277)
    at loads (/usr/local/lib/python2.7/dist-packages/apache_beam/internal/pickler.py:232)
    at apache_beam.runners.worker.operations.PGBKCVOperation.__init__ (operations.py:508)
    at apache_beam.runners.worker.operations.create_pgbk_op (operations.py:452)
    at apache_beam.runners.worker.operations.create_operation (operations.py:613)
    at create_operation (/usr/local/lib/python2.7/dist-packages/dataflow_worker/executor.py:104)
    at execute (/usr/local/lib/python2.7/dist-packages/dataflow_worker/executor.py:130)
    at do_work (/usr/local/lib/python2.7/dist-packages/dataflow_worker/batchworker.py:642)
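
For context, a minimal sketch of what the failing setup looks like (the DoFn and helper names here are made-up stand-ins; DataAggregation is the local module named in the traceback):

    import apache_beam as beam

    # DataAggregation.py sits next to the pipeline file, so the import works
    # locally, but the module is never shipped to the Dataflow workers.
    import DataAggregation

    class AggregateFn(beam.DoFn):  # hypothetical custom ParDo
        def process(self, element):
            yield DataAggregation.aggregate(element)  # hypothetical helper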

2 Answers

11

The issue is probably that you haven't grouped your files as a package. The Beam documentation has a section on it.

Multiple File Dependencies

Often, your pipeline code spans multiple files. To run your project remotely, you must group these files as a Python package and specify the package when you run your pipeline. When the remote workers start, they will install your package. To group your files as a Python package and make it available remotely, perform the following steps:

  1. Create a setup.py file for your project. The following is a very basic setup.py file.

    import setuptools

    setuptools.setup(
        name='PACKAGE-NAME',
        version='PACKAGE-VERSION',
        install_requires=[],
        packages=setuptools.find_packages(),
    )
    
  2. Structure your project so that the root directory contains the setup.py file, the main workflow file, and a directory with the rest of the files.

    root_dir/
        setup.py
        main.py
        other_files_dir/
    

See Juliaset for an example that follows this required project structure.

  3. Run your pipeline with the following command-line option:

    --setup_file /path/to/setup.py
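
For example, a full Dataflow invocation might look like this (the project ID and bucket are placeholders for your own values):

    python main.py \
        --runner DataflowRunner \
        --project YOUR-PROJECT-ID \
        --temp_location gs://YOUR-BUCKET/tmp \
        --setup_file ./setup.py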
    

Note: If you created a requirements.txt file and your project spans multiple files, you can get rid of the requirements.txt file and instead add all packages contained in requirements.txt to the install_requires field of the setup call (in step 1).
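
For illustration, if your requirements.txt pinned a couple of packages, the setup call from step 1 would become something like this (the package pins are hypothetical examples):

    import setuptools

    setuptools.setup(
        name='PACKAGE-NAME',
        version='PACKAGE-VERSION',
        # dependencies moved here from requirements.txt (example pins):
        install_requires=['numpy>=1.14.0', 'requests>=2.18.0'],
        packages=setuptools.find_packages(),
    )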

  • While this link may answer the question, it is better to include the essential parts of the answer here and provide the link for reference. Link-only answers can become invalid if the linked page changes. - [From Review](/review/low-quality-posts/20589073) – Brown Bear Aug 14 '18 at 19:45
  • That makes a lot of sense. I updated the answer with a quote of the relevant section since there's not much to condense from the section's instructions. – Michael Butler Aug 14 '18 at 21:00
  • 3
    Worth noting: if the `setup.py` file is in the same directory where you run your final `python` command, you must prepend `./` to it: `--setup_file ./setup.py` – Nicolás Ozimica Feb 27 '20 at 14:30
0

I ran into the same issue and unfortunately, the docs are not as verbose as they need to be. The problem, as it turns out, is that both root_dir and other_files_dir must contain an __init__.py file. When a directory contains an __init__.py file (even if it's empty), Python treats that directory as a package, which in this instance is what we want. So your final folder structure should look something like this:

root_dir/
    __init__.py
    setup.py
    main.py
    other_files_dir/
        __init__.py
        module_1.py
        module_2.py

And what you'll find is that Python builds an .egg-info folder that describes your package, including all pip dependencies. It also contains a top_level.txt file, which holds the name of the directory containing the modules (i.e., other_files_dir).
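
For reference, that generated metadata folder typically looks like this (these are the standard files setuptools produces; the exact set can vary by version):

    PACKAGE_NAME.egg-info/
        PKG-INFO
        SOURCES.txt
        dependency_links.txt
        requires.txt
        top_level.txt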

Then you can simply import the modules in main.py as below:

from other_files_dir import module_1
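
Putting it together, a minimal main.py might look like this (module_1 and its some_function are hypothetical stand-ins for your own code):

    import apache_beam as beam
    from apache_beam.options.pipeline_options import PipelineOptions

    from other_files_dir import module_1  # resolves on the workers once the package is installed

    def run():
        # PipelineOptions() picks up --runner, --setup_file, etc. from the command line
        options = PipelineOptions()
        with beam.Pipeline(options=options) as p:
            (p
             | 'Create' >> beam.Create(['a', 'b', 'c'])
             | 'Apply' >> beam.Map(module_1.some_function))  # hypothetical function

    if __name__ == '__main__':
        run()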