I am using a Python 3.5 with Spark notebook in Watson Studio.
I am trying to export a Spark DataFrame to Cloud Object Storage, and the export keeps failing.
The notebook does not give an error, and I have managed to export smaller DataFrames without issue.
When I check the object storage, only a partial DataFrame is there.
I exported with the following code:
from pyspark.sql import SQLContext
from ingest.Connectors import Connectors

sqlContext = SQLContext(sc)

# connection and target options for the Cloud Object Storage connector
S3saveoptions = {
    Connectors.BluemixCloudObjectStorage.URL: paid_credentials['endpoint'],
    Connectors.BluemixCloudObjectStorage.IAM_URL: paid_credentials['iam_url'],
    Connectors.BluemixCloudObjectStorage.RESOURCE_INSTANCE_ID: paid_credentials['resource_instance_id'],
    Connectors.BluemixCloudObjectStorage.API_KEY: paid_credentials['api_key'],
    Connectors.BluemixCloudObjectStorage.TARGET_BUCKET: paid_bucket,
    Connectors.BluemixCloudObjectStorage.TARGET_FILE_NAME: "name.csv",
    Connectors.BluemixCloudObjectStorage.TARGET_WRITE_MODE: "write",
    Connectors.BluemixCloudObjectStorage.TARGET_FILE_FORMAT: "csv",
    Connectors.BluemixCloudObjectStorage.TARGET_FIRST_LINE_HEADER: "true",
}

# write the DataFrame to the bucket using the connector
name = df.write.format('com.ibm.spark.discover').options(**S3saveoptions).save()
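
For what it's worth, one fallback I am considering is Spark's built-in CSV writer over the Stocator cos:// scheme instead of the ingest connector. This is only a sketch, assuming the ibmos2spark helper is available in the runtime and that my paid_credentials dict carries the keys used below; I have not confirmed whether it behaves any differently on the large DataFrame:

import ibmos2spark

# Stocator configuration; the credential keys here are assumptions based on
# the same paid_credentials dict used above
cos_credentials = {
    'endpoint': paid_credentials['endpoint'],
    'service_id': paid_credentials['resource_instance_id'],
    'iam_service_endpoint': paid_credentials['iam_url'],
    'api_key': paid_credentials['api_key'],
}

configuration_name = 'cos_export'  # arbitrary label for this configuration
cos = ibmos2spark.CloudObjectStorage(sc, cos_credentials,
                                     configuration_name, 'bluemix_cos')

# coalesce(1) forces a single CSV part-file; this routes all data through one
# executor, so it may itself be slow or fail for a very large DataFrame
df.coalesce(1).write \
    .format('csv') \
    .option('header', 'true') \
    .mode('overwrite') \
    .save(cos.url('name.csv', paid_bucket))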