I have been able to connect DocumentDB with Glue and ingest data from a CSV in S3. Here is the script that does that:
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

# Constants: the Data Catalog database and table created by the crawler
data_catalog_database = 'sample-db'
data_catalog_table = 'data'

## @params: [JOB_NAME]
args = getResolvedOptions(sys.argv, ['JOB_NAME'])
spark_context = SparkContext()
glue_context = GlueContext(spark_context)
job = Job(glue_context)
job.init(args['JOB_NAME'], args)
# Read from data source
## @type: DataSource
## @args: [database = "sample-db", table_name = "data"]
## @return: dynamic_frame
## @inputs: []
dynamic_frame = glue_context.create_dynamic_frame.from_catalog(
    database=data_catalog_database,
    table_name=data_catalog_table
)
documentdb_write_uri = 'mongodb://yourdocumentdbcluster.amazonaws.com:27017'
write_documentdb_options = {
    "uri": documentdb_write_uri,
    "database": "yourdbname",
    "collection": "yourcollectionname",
    "username": "###",
    "password": "###"
}
# Write the DynamicFrame to DocumentDB
glue_context.write_dynamic_frame.from_options(
    frame=dynamic_frame,
    connection_type="documentdb",
    connection_options=write_documentdb_options
)

job.commit()
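
If your DocumentDB cluster enforces TLS (the default for new clusters), the write options will likely also need the SSL flags. A minimal sketch, assuming the "ssl" and "ssl.domain_match" keys from the Glue DocumentDB connection options apply to your Glue version (verify against the current docs):

# Assumption: the cluster enforces TLS, so the Glue DocumentDB connector
# needs "ssl" enabled and hostname matching relaxed for the cluster endpoint.
write_documentdb_options = {
    "uri": documentdb_write_uri,
    "database": "yourdbname",
    "collection": "yourcollectionname",
    "username": "###",
    "password": "###",
    "ssl": "true",
    "ssl.domain_match": "false"
}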
In summary:
- Create a crawler that infers the schema of your data in S3 and creates a table for it in the Glue Data Catalog (see the sketch after this list).
- Use that catalog database and table in the Glue job above to ingest the data into your DocumentDB cluster.
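
For reference, a minimal sketch of creating and starting such a crawler with boto3. The role name, bucket path, and crawler name are placeholders I made up, and the sketch assumes the role already has Glue and S3 permissions:

import boto3

glue = boto3.client('glue')

# Assumption: 'AWSGlueServiceRole-sample' exists and can read the source bucket.
glue.create_crawler(
    Name='sample-csv-crawler',
    Role='AWSGlueServiceRole-sample',
    DatabaseName='sample-db',  # becomes data_catalog_database in the job
    Targets={'S3Targets': [{'Path': 's3://your-bucket/your-csv-prefix/'}]}
)

# Run the crawler; once it finishes, the 'data' table is available to the job above.
glue.start_crawler(Name='sample-csv-crawler')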