When I run Hudi DeltaStreamer on EMR, the Hudi files are created in S3 as expected (I see a .hoodie/ directory and the expected Parquet files). The command looks something like:

spark-submit \
  --conf spark.hadoop.hive.metastore.client.factory.class=com.amazonaws.glue.catalog.metastore.AWSGlueDataCatalogHiveClientFactory \
  --deploy-mode cluster \
  --jars /usr/lib/spark/external/lib/spark-avro.jar,/usr/lib/hudi/hudi-spark-bundle.jar,/usr/lib/hudi/hudi-utilities-bundle.jar,/usr/lib/hudi/cli/lib/aws-java-sdk-glue-1.12.397.jar,/usr/lib/hive/auxlib/aws-glue-datacatalog-hive3-client.jar,/usr/lib/hadoop/hadoop-aws.jar,/usr/lib/hadoop/hadoop-aws-3.3.3-amzn-2.jar \
  --conf spark.sql.catalogImplementation=hive \
  --conf spark.sql.catalog.spark_catalog=org.apache.spark.sql.hudi.catalog.HoodieCatalog \
  --conf spark.sql.extensions=org.apache.spark.sql.hudi.HoodieSparkSessionExtension \
  --class org.apache.hudi.utilities.deltastreamer.HoodieDeltaStreamer /usr/lib/hudi/hudi-utilities-slim-bundle.jar \
  --table-type COPY_ON_WRITE \
  --source-class org.apache.hudi.utilities.sources.AvroDFSSource \
  --source-ordering-field id \
  --target-base-path s3a://my-bucket/data/my_database/my_target_table/ \
  --sync-tool-classes org.apache.hudi.aws.sync.AwsGlueCatalogSyncTool \
  --props file:///etc/hudi/conf/hudi-defaults.conf \
  --target-table my_target_table \
  --schemaprovider-class org.apache.hudi.utilities.schema.SchemaRegistryProvider \
  --enable-sync \
  --enable-hive-sync
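
For reference, the --props file passed above is where the schema-provider and Hive sync settings normally live. A minimal sketch of what my /etc/hudi/conf/hudi-defaults.conf contains; the registry URL is a placeholder for your own Schema Registry endpoint:

```properties
# Schema Registry endpoint used by SchemaRegistryProvider (URL is a placeholder)
hoodie.deltastreamer.schemaprovider.registry.url=http://schema-registry.example.com:8081/subjects/my_topic-value/versions/latest

# Hive sync settings picked up by the sync tool
hoodie.datasource.hive_sync.database=my_database
hoodie.datasource.hive_sync.table=my_target_table
hoodie.datasource.hive_sync.partition_extractor_class=org.apache.hudi.hive.NonPartitionedExtractor
```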

I see the data in hive:

beeline -u jdbc:hive2://ip-1-1-1-1:10000
Connecting to jdbc:hive2://ip-1-1-1-1:10000

show databases;
+-----------------------------------+
|           database_name           |
+-----------------------------------+
| my_database                       |
+-----------------------------------+

show tables;
+----------------------------------------------------+
|                      tab_name                      |
+----------------------------------------------------+
| my_target_table                                    |
+----------------------------------------------------+

I was expecting the table to sync to the AWS Glue Data Catalog since I passed the Glue sync tool via --sync-tool-classes. The job completes successfully with no errors, but the metadata isn't synced from Hive to the Data Catalog.
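
A quick way to check whether anything reached the Glue Data Catalog is the AWS CLI (database name here is a placeholder for your own):

```shell
# List tables registered in the Glue Data Catalog for this database;
# an empty list confirms the sync never happened
aws glue get-tables --database-name my_database --query 'TableList[].Name'
```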

I turned on debug logging in /etc/spark/conf/log4j2.properties and still didn't see anything useful about why the table isn't syncing from my EMR's Hive to the AWS Glue Data Catalog:

rootLogger.level = debug

1 Answer

For my EMR setup, I was missing the cluster configuration JSON below. Once I added it to my EMR cluster, the database and table appeared in the AWS Glue Data Catalog.

[
  {
    "Classification": "hive-site",
    "Properties": {
      "hive.metastore.client.factory.class": "com.amazonaws.glue.catalog.metastore.AWSGlueDataCatalogHiveClientFactory"
    }
  },
  {
    "Classification": "spark-hive-site",
    "Properties": {
      "hive.metastore.client.factory.class": "com.amazonaws.glue.catalog.metastore.AWSGlueDataCatalogHiveClientFactory"
    }
  }
]
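
One way to apply this JSON is at cluster creation time via the AWS CLI. The command below is a sketch; the file name, cluster name, release label, and instance settings are placeholders for your own setup:

```shell
# Create the EMR cluster with the classification JSON saved as configurations.json
aws emr create-cluster \
  --name "hudi-deltastreamer" \
  --release-label emr-6.9.0 \
  --applications Name=Spark Name=Hive \
  --instance-type m5.xlarge \
  --instance-count 3 \
  --use-default-roles \
  --configurations file://configurations.json
```

For an already-running cluster, the same classifications can be applied from the EMR console under the cluster's Configurations tab.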