
I'm trying to integrate DynamoDB with Spark on EMR using the solution provided in this AWS blog post:

https://aws.amazon.com/blogs/big-data/analyze-your-data-on-amazon-dynamodb-with-apache-spark

I'm able to retrieve the results as expected, but the task calculator always shows the warning "The calculated max number of concurrent map tasks is less than 1, use 1 instead", and it takes more than 2 minutes to fetch the data.

$ spark-shell --jars /usr/share/aws/emr/ddb/lib/emr-ddb-hadoop.jar

import org.apache.hadoop.io.Text;

import org.apache.hadoop.dynamodb.DynamoDBItemWritable
/* Importing DynamoDBInputFormat and DynamoDBOutputFormat */
import org.apache.hadoop.dynamodb.read.DynamoDBInputFormat
import org.apache.hadoop.dynamodb.write.DynamoDBOutputFormat
import org.apache.hadoop.mapred.JobConf
import org.apache.hadoop.io.LongWritable

var jobConf = new JobConf(sc.hadoopConfiguration)
jobConf.set("dynamodb.servicename", "dynamodb")
jobConf.set("dynamodb.input.tableName", "customer")   // Pointing to DynamoDB table
jobConf.set("dynamodb.endpoint", "dynamodb.us-east-1.amazonaws.com")
jobConf.set("dynamodb.regionid", "us-east-1")
jobConf.set("dynamodb.throughput.read", "1")
jobConf.set("dynamodb.throughput.read.percent", "1")
jobConf.set("dynamodb.version", "2011-12-05")

jobConf.set("mapred.output.format.class", "org.apache.hadoop.dynamodb.write.DynamoDBOutputFormat")
jobConf.set("mapred.input.format.class", "org.apache.hadoop.dynamodb.read.DynamoDBInputFormat")

var customers = sc.hadoopRDD(jobConf, classOf[DynamoDBInputFormat], classOf[Text], classOf[DynamoDBItemWritable])

customers.count()

The cluster has 2 nodes of size m3.xlarge spot instance.

I'm not sure how to increase the hadoop map tasks.
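From what I can tell, the connector derives the number of map tasks from the table's read throughput, so with `dynamodb.throughput.read` set to `1` the computed value falls below 1 and is clamped. A sketch of what I tried next, assuming the same property names as the blog post (the value `100` is an assumption; it should match the table's actual provisioned read capacity):

```scala
// Sketch: raise the throughput hints so the input format can plan
// more than one concurrent map task. Property names follow the blog
// post above; verify them against your emr-ddb-hadoop version.
jobConf.set("dynamodb.throughput.read", "100")         // assumed: table's provisioned RCUs
jobConf.set("dynamodb.throughput.read.percent", "1.0") // fraction of capacity Spark may consume
```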

Any help would be appreciated.

Update: I created a Hive table that maps to the DynamoDB table and ran the same query from the Hive shell. Query performance there is normal.

select * from customer where custid='123456' -- Time taken is only 4 seconds
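For reference, the Hive table was defined along these lines using EMR's DynamoDB storage handler (a sketch; the column names and mappings here are assumptions based on my table):

```sql
-- External Hive table backed directly by the DynamoDB table.
CREATE EXTERNAL TABLE customer (custid string, name string)
STORED BY 'org.apache.hadoop.hive.dynamodb.DynamoDBStorageHandler'
TBLPROPERTIES (
  "dynamodb.table.name" = "customer",
  "dynamodb.column.mapping" = "custid:custid,name:name"
);
```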

  • I was able to solve the problem by increasing the number of nodes in the cluster to 4, as suggested by AWS support. – Sudhev Das Jun 04 '19 at 16:50
