
A simple piece of code takes around 130 seconds to write to S3 (MinIO), while the same write to local disk takes only 1 second. Is anything wrong?

I have followed this post, but it doesn't help: https://docs.min.io/docs/disaggregated-spark-and-hadoop-hive-with-minio.html

Running with 3 executors is faster -- 52 seconds -- but still not fast enough.

master('local[32]') gets it down to 21 seconds

master('local[1]') --> 130 seconds

Environment:

A single-node Kubernetes cluster is running on my local machine (16 cores / 32 GB), with an S3 MinIO pod (backed by local disk), a Spark driver pod, and some Spark executor pods.

iotop shows the network traffic between MinIO and Spark is only about 100 KB ~ 1 MB. CPU usage is also low.

The number of goroutines in MinIO is about 150~450 (max).

See the logs below: I found there are a lot of API calls retrieving S3 object status. Is that the reason?

2020-01-05 03:00:42,674 DEBUG   org.apache.hadoop.fs.s3a.S3AFileSystem   - object_delete_requests += 1  ->  24456
2020-01-05 03:00:42,676 DEBUG   org.apache.spark.sql.execution.datasources.SQLHadoopMapReduceCommitProtocol      - Committing files staged for absolute locations Map()
2020-01-05 03:00:42,676 DEBUG   org.apache.hadoop.fs.s3a.S3AFileSystem   - op_get_file_status += 1  ->  61698
2020-01-05 03:00:42,676 DEBUG   org.apache.hadoop.fs.s3a.S3AFileSystem   - Getting path status for s3a://dataplatform/tmp/test_pp_60m/.spark-staging-466619ae-8b30-4be3-9c92-49e079bd449c  (tmp/test_pp_60m/.spark-staging-466619ae-8b30-4be3-9c92-49e079bd449c)
2020-01-05 03:00:42,676 DEBUG   org.apache.hadoop.fs.s3a.S3AFileSystem   - object_metadata_requests += 1  ->  141711
2020-01-05 03:00:42,677 DEBUG   org.apache.hadoop.fs.s3a.S3AFileSystem   - object_metadata_requests += 1  ->  141712
2020-01-05 03:00:42,677 DEBUG   org.apache.hadoop.fs.s3a.S3AFileSystem   - object_list_requests += 1  ->  55793
2020-01-05 03:00:42,678 DEBUG   org.apache.hadoop.fs.s3a.S3AFileSystem   - Not Found: s3a://dataplatform/tmp/test_pp_60m/.spark-staging-466619ae-8b30-4be3-9c92-49e079bd449c
2020-01-05 03:00:42,678 DEBUG   org.apache.hadoop.fs.s3a.S3AFileSystem   - Couldn't delete s3a://dataplatform/tmp/test_pp_60m/.spark-staging-466619ae-8b30-4be3-9c92-49e079bd449c - does not exist
2020-01-05 03:00:42,678 INFO    org.apache.spark.sql.execution.datasources.FileFormatWriter      - Write Job 1a68dddd-fd88-49cd-957d-36e050d31de3 committed.
2020-01-05 03:00:42,679 INFO    org.apache.spark.sql.execution.datasources.FileFormatWriter      - Finished processing stats for write job 1a68dddd-fd88-49cd-957d-36e050d31de3.
2020-01-05 03:08:59,183 DEBUG   org.apache.spark.broadcast.TorrentBroadcast      - Unpersisting TorrentBroadcast 1
2020-01-05 03:08:59,184 DEBUG   org.apache.spark.storage.BlockManagerSlaveEndpoint       - removing broadcast 1
from pyspark.sql import Row
import random, time
from pyspark.sql import SparkSession


# Local session; the v2 file output committer algorithm is already enabled here.
spark = SparkSession.builder \
    .enableHiveSupport() \
    .config("spark.hadoop.mapreduce.fileoutputcommitter.algorithm.version", 2) \
    .master("local[1]") \
    .getOrCreate()

fixed_date = ['2019-01-01', '2019-01-02', '2019-01-03', '2019-01-04']
refs = ['0', '1', '2']
data = bytearray(random.getrandbits(8) for _ in range(100))  # 100 random bytes per row
start = int(time.time())
print("start=%s" % start)
rows = []

# 3 refs x 4 dates x 1 camera x 1000 rows = 12,000 rows in total
for ref_id in refs:
    for d in fixed_date:
        for camera_id in range(1):
            for c in range(1000):
                rows.append(Row(ref_id=ref_id,
                                camera_id="c_" + str(camera_id),
                                date=d,
                                data=data))

df = spark.sparkContext.parallelize(rows).toDF()
print("partition number=%s, row size=%s" % (df.rdd.getNumPartitions(), len(rows)))

# Write partitioned Parquet to MinIO through the s3a connector
df.write.mode("overwrite") \
    .partitionBy('ref_id', 'date', 'camera_id') \
    .parquet('s3a://mybucket/tmp/test_data')

Result Update

I think the Hadoop S3A part is slow (whether I use fast upload or the normal S3 transfer manager), especially when I write many files into S3: it costs around 80-100 API calls per file. Perhaps some S3 caching layer would help (Amazon EMR or Alluxio?).
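One way to reduce that per-file overhead would be to write fewer files in the first place. A minimal sketch, assuming the same df and bucket path as in the reproducer above: repartition by the partition columns before writing, so each task writes at most one file per (ref_id, date, camera_id) combination.

# Sketch only: fewer output files means fewer S3 metadata/commit calls.
df.repartition('ref_id', 'date', 'camera_id') \
  .write.mode("overwrite") \
  .partitionBy('ref_id', 'date', 'camera_id') \
  .parquet('s3a://mybucket/tmp/test_data')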

jon

1 Answer


For Parquet to pick up the new committer you need:

  1. a bit of bridging code underneath the normal Parquet committer
  2. the configuration to turn that on

Docs on #2 are in https://docs.cloudera.com/HDPDocuments/HDP3/HDP-3.1.0/bk_cloud-data-access/content/ch03s08s05.html

Item #1 is in Spark trunk; I don't think it's in any shipping ASF releases yet, though. It is in the HDP 3.0/3.1 Spark binaries if you want to try with them.
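Roughly, the wiring from those docs looks like the following in a PySpark session. This is a sketch only; it assumes the spark-hadoop-cloud bridging classes (PathOutputCommitProtocol, BindingParquetOutputCommitter) are on your classpath.

# Sketch of the committer configuration described in the linked docs; assumes
# the spark-hadoop-cloud bridging classes are available on the classpath.
spark = SparkSession.builder \
    .config("spark.hadoop.fs.s3a.committer.name", "directory") \
    .config("spark.sql.sources.commitProtocolClass",
            "org.apache.spark.internal.io.cloud.PathOutputCommitProtocol") \
    .config("spark.sql.parquet.output.committer.class",
            "org.apache.spark.internal.io.cloud.BindingParquetOutputCommitter") \
    .getOrCreate()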

Also, ask for a smaller block size:


fs.s3a.block.size=64M
fs.s3a.multipart.size=64M
fs.s3a.multipart.threshold=64M
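In a PySpark session these fs.s3a.* options can be passed with the spark.hadoop. prefix; a minimal sketch:

# Sketch: the block/multipart size options above, set via the spark.hadoop. prefix
spark = SparkSession.builder \
    .config("spark.hadoop.fs.s3a.block.size", "64M") \
    .config("spark.hadoop.fs.s3a.multipart.size", "64M") \
    .config("spark.hadoop.fs.s3a.multipart.threshold", "64M") \
    .getOrCreate()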
stevel