
I launch an EMR cluster with boto3 from a separate EC2 instance and use a bootstrap script that looks like this:

#!/bin/bash
############################################################################
# For all nodes, including the master
############################################################################

wget https://repo.anaconda.com/archive/Anaconda3-2019.10-Linux-x86_64.sh
bash Anaconda3-2019.10-Linux-x86_64.sh -b -p /mnt1/anaconda3

export PATH=/mnt1/anaconda3/bin:$PATH
echo "export PATH="/mnt1/anaconda3/bin:$PATH"" >> ~/.bash_profile

sudo sed -i -e '$a\export PYSPARK_PYTHON=/mnt1/anaconda3/bin/python' /etc/spark/conf/spark-env.sh
echo "export PYSPARK_PYTHON="/mnt1/anaconda3/bin/python3"" >> ~/.bash_profile

conda install -c conda-forge -y shap
conda install -c conda-forge -y lightgbm
conda install -c anaconda -y numpy
conda install -c anaconda -y pandas
conda install -c conda-forge -y pyarrow
conda install -c anaconda -y boto3

############################################################################
# For the master node only
############################################################################

if [ `grep 'isMaster' /mnt/var/lib/info/instance.json | awk -F ':' '{print $2}' | awk -F ',' '{print $1}'` = 'true' ]; then

sudo sed -i -e '$a\export PYSPARK_PYTHON=/mnt1/anaconda3/bin/python' /etc/spark/conf/spark-env.sh

echo "export PYSPARK_PYTHON="/mnt1/anaconda3/bin/python3"" >> ~/.bash_profile

sudo yum -y install git-core

conda install -c conda-forge -y jupyterlab
conda install -y jupyter
conda install -c conda-forge -y s3fs
conda install -c conda-forge -y nodejs

pip install spark-df-profiling


jupyter labextension install jupyterlab_filetree
jupyter labextension install @jupyterlab/toc

fi
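
For reference, the boto3 call that launches the cluster is along these lines (the bucket, subnet, roles, and instance settings below are placeholders, not the real values):

import boto3

conn = boto3.client("emr", region_name="us-east-1")

response = conn.run_job_flow(
    Name="my-cluster",                                   # placeholder name
    ReleaseLabel="emr-5.28.0",
    Applications=[{"Name": "Spark"}],
    LogUri="s3://MY_BUCKET/emr-logs/",
    Instances={
        "MasterInstanceType": "m5.xlarge",
        "SlaveInstanceType": "m5.xlarge",
        "InstanceCount": 3,
        "KeepJobFlowAliveWhenNoSteps": True,
        "Ec2SubnetId": "SUBNET_ID",
    },
    BootstrapActions=[{
        "Name": "install-anaconda",
        # points at the bootstrap script above, uploaded to S3
        "ScriptBootstrapAction": {"Path": "s3://MY_BUCKET/bootstrap.sh"},
    }],
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
    VisibleToAllUsers=True,
)
curr_cluster_id = response["JobFlowId"]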

Then I add a step programmatically to the running cluster using add_job_flow_steps:

action = conn.add_job_flow_steps(JobFlowId=curr_cluster_id, Steps=layer_function_steps)

The step is a spark-submit that is perfectly formed.
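
layer_function_steps is built roughly like this (the script locations and arguments below are placeholders):

layer_function_steps = [{
    "Name": "layer_function",
    "ActionOnFailure": "CONTINUE",
    "HadoopJarStep": {
        "Jar": "command-runner.jar",          # lets EMR run spark-submit as a step
        "Args": [
            "spark-submit",
            "--deploy-mode", "cluster",
            "--py-files", "s3://MY_BUCKET/code/deps.zip",
            "s3://MY_BUCKET/code/main.py",
        ],
    },
}]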

In one of the imported Python files I import boto3. The error I get is:

ImportError: No module named boto3

Clearly I am installing this library. If I SSH into the master node and run

python
import boto3

it works fine. Is there some kind of issue with spark-submit using the installed libraries, since I am doing a conda install?
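
For what it is worth, a small diagnostic job like the one below (a sketch, not my real code) prints which interpreter the driver and the executors actually pick up when submitted the same way:

# check_python.py - print which Python the driver and an executor are running
import sys
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("python-check").getOrCreate()
print("driver python:  ", sys.executable)
print("executor python:",
      spark.sparkContext.parallelize([0], 1).map(lambda _: sys.executable).collect()[0])
spark.stop()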

– B_Miner
  • You need to add `export PYSPARK_DRIVER_PYTHON="/mnt1/anaconda3/bin/python3"` in your environment as well, to tell the driver which Python to use – Snigdhajyoti Jun 13 '20 at 07:07
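
To illustrate the comment's suggestion, the interpreters can also be pinned per job on spark-submit itself; a minimal sketch using Spark's standard configuration keys and the paths from the script above:

# extra spark-submit arguments that force both driver and executors onto the conda interpreter;
# these would be spliced into the step's Args list before the application .py file
extra_args = [
    "--conf", "spark.pyspark.driver.python=/mnt1/anaconda3/bin/python3",
    "--conf", "spark.pyspark.python=/mnt1/anaconda3/bin/python3",
]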

1 Answer


AWS has a project, AWS Data Wrangler, that helps with launching EMR clusters.

This snippet should work to launch your cluster with Python 3 enabled:

import awswrangler as wr

cluster_id = wr.emr.create_cluster(
    cluster_name="wrangler_cluster",
    logging_s3_path=f"s3://BUCKET_NAME/emr-logs/",
    emr_release="emr-5.28.0",
    subnet_id="SUBNET_ID",
    emr_ec2_role="EMR_EC2_DefaultRole",
    emr_role="EMR_DefaultRole",
    instance_type_master="m5.xlarge",
    instance_type_core="m5.xlarge",
    instance_type_task="m5.xlarge",
    instance_ebs_size_master=50,
    instance_ebs_size_core=50,
    instance_ebs_size_task=50,
    instance_num_on_demand_master=1,
    instance_num_on_demand_core=1,
    instance_num_on_demand_task=1,
    instance_num_spot_master=0,
    instance_num_spot_core=1,
    instance_num_spot_task=1,
    spot_bid_percentage_of_on_demand_master=100,
    spot_bid_percentage_of_on_demand_core=100,
    spot_bid_percentage_of_on_demand_task=100,
    spot_provisioning_timeout_master=5,
    spot_provisioning_timeout_core=5,
    spot_provisioning_timeout_task=5,
    spot_timeout_to_on_demand_master=True,
    spot_timeout_to_on_demand_core=True,
    spot_timeout_to_on_demand_task=True,
    python3=True,                                        # Relevant argument
    spark_glue_catalog=True,
    hive_glue_catalog=True,
    presto_glue_catalog=True,
    bootstraps_paths=["s3://BUCKET_NAME/bootstrap.sh"],  # Relevant argument
    debugging=True,
    applications=["Hadoop", "Spark", "Ganglia", "Hive"],
    visible_to_all_users=True,
    key_pair_name=None,
    spark_jars_path=[f"s3://...jar"],
    maximize_resource_allocation=True,
    keep_cluster_alive_when_no_steps=True,
    termination_protected=False,
    spark_pyarrow=True,                                  # Relevant argument
    tags={
        "foo": "boo"
    }
)

bootstrap.sh content:

#!/usr/bin/env bash
set -e

echo "Installing Python libraries..."
sudo pip-3.6 install -U awswrangler
sudo pip-3.6 install -U LIBRARY1
sudo pip-3.6 install -U LIBRARY2
...
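
Once the cluster is up, a spark-submit step can be submitted through the same library; a minimal sketch (the command and bucket are placeholders, and it assumes awswrangler's wr.emr.submit_step helper and the wr import from above):

# submit a spark-submit step to the running cluster via AWS Data Wrangler
step_id = wr.emr.submit_step(
    cluster_id=cluster_id,
    name="my-spark-step",
    command="spark-submit --deploy-mode cluster s3://BUCKET_NAME/code/main.py",
)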
– Igor Tavares