I am getting the following error when I try to run one of the samples provided for the Apache Spark service on IBM Bluemix:
```
NameError                                 Traceback (most recent call last)
<ipython-input-5-7de9805c358e> in <module>()
----> 1 set_hadoop_config(credentials_1)

<ipython-input-2-e790e4773aec> in set_hadoop_config(credentials)
      1 def set_hadoop_config(credentials):
      2     prefix = "fs.swift.service." + credentials['name']
----> 3     hconf = sc._jsc.hadoopConfiguration()
      4     hconf.set(prefix + ".auth.url", credentials['auth_url']+'/v3/auth/tokens')
      5     hconf.set(prefix + ".auth.endpoint.prefix", "endpoints")

NameError: global name 'sc' is not defined
```
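If it matters, my understanding is that `sc` is the SparkContext a Bluemix notebook normally predefines. As a sanity check, this is a minimal sketch of how I could create one manually with the standard PySpark API (the app name is an arbitrary label of mine, not part of the sample):

```python
# Minimal sketch using the standard PySpark API; "swift-csv-test" is just
# an arbitrary app name I picked, not something from the Bluemix sample.
from pyspark import SparkConf, SparkContext

conf = SparkConf().setAppName("swift-csv-test")
sc = SparkContext.getOrCreate(conf)  # reuses an existing context if one exists
```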
I am loading a simple CSV file using the "Insert to code" option on the data sources palette. However, the credentials that are generated do not include a `name` attribute: `credentials['name']` is not among the key-value pairs produced after I click "Insert to code".
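The only workaround I can think of (a guess on my part, not something from the sample) is to add the missing key myself before calling the helper, since `name` appears to be just a label used to build the Hadoop configuration prefix:

```python
# Guessed workaround: supply the 'name' key the helper expects.
# 'keystone' is an arbitrary label I chose; it only needs to match the
# service name used later when reading via the swift:// scheme.
credentials_1['name'] = 'keystone'
set_hadoop_config(credentials_1)
```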
I want to know if there is any other way to load the data, or whether this is an IBM Bluemix issue.
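For context, this is how I expect the load to look once the configuration works, based on the Hadoop Swift URL scheme (the container and file names are placeholders for my own, and `keystone` assumes the label from the workaround above):

```python
# Placeholder container/object names; 'keystone' must match credentials['name'].
data = sc.textFile("swift://myContainer.keystone/myfile.csv")
print(data.take(5))  # quick check that the first few rows come back
```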