Questions tagged [data-science-experience]

IBM Data Science Experience is an interactive, collaborative, cloud-based environment where data scientists can use multiple tools to activate their insights.

Source: http://datascience.ibm.com/blog/welcome-to-the-data-science-experience/

261 questions
0 votes · 3 answers

Could not parse Master URL: 'spark.bluemix.net'

I'm trying to connect to IBM's Spark as a Service running on Bluemix from RStudio running on my desktop machine. I have copied the config.yml from the automatically configured RStudio environment running on IBM's Data Science Experience: default: …
Chris Snow
0 votes · 3 answers

Spark history server is not showing 'complete' applications

I am trying to performance-tune a slow-running DSX job. I have navigated to the Spark history server from the underlying Spark service on Bluemix (as per this question). I have executed a cell containing some basic Spark code: In [1]: x =…
Chris Snow
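A likely reason the application above never shows as "complete": the Spark history server generally only moves an application to the completed list once its SparkContext has been stopped, and a notebook keeps the context alive for the life of the kernel. A minimal Python sketch, assuming a notebook-managed context named sc:

    # Sketch, not a confirmed fix: run some work, then stop the context so the
    # history server can mark the application as completed.
    x = sc.parallelize(range(100)).sum()   # some basic Spark code, as in the question
    print(x)

    sc.stop()   # without this, a notebook app tends to stay listed as incomplete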
0 votes · 0 answers

Can I build N number of models and predictions

Let's consider a bank dataset (to predict loan approval) which contains the following attributes: > names(univ2) [1] "age" "inc" "family" "edu" "mortgage" "ccavg" "cc" "cd" "online" "securities" "infoReq" "loan" I have converted almost all…
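If the question is whether N models can be built programmatically: yes, by looping. A hypothetical Python sketch (the original excerpt is R; the file name, model choice, and feature subsets here are assumptions):

    # Hypothetical sketch: fit one model per feature subset and keep its predictions.
    # Column names mirror the question's bank dataset; the data itself is assumed.
    from itertools import combinations

    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    df = pd.read_csv("bank.csv")   # assumed file containing the columns above
    features = ["age", "inc", "family", "edu", "mortgage", "ccavg"]

    models, predictions = {}, {}
    for subset in combinations(features, 3):   # N = C(6, 3) = 20 models
        X, y = df[list(subset)], df["loan"]
        model = LogisticRegression(max_iter=1000).fit(X, y)
        models[subset] = model
        predictions[subset] = model.predict(X)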
0 votes · 1 answer

How do I get SFTP working inside a python Notebook within DSX?

I found that ftplib is available on DSX (IBM Data Science Experience): from ftplib import FTP. But how does an SFTP connection look inside a Python notebook, so that I can import local data automatically? TIA
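ftplib itself only speaks FTP, not SFTP; a common approach in a Python notebook is the third-party paramiko library. A minimal sketch, assuming paramiko can be pip-installed into the notebook, with placeholder host, credentials, and paths:

    # Sketch of an SFTP download with paramiko; install it first if needed:
    # !pip install --user paramiko
    import paramiko

    transport = paramiko.Transport(("sftp.example.com", 22))   # placeholder host
    transport.connect(username="user", password="secret")      # placeholder credentials

    sftp = paramiko.SFTPClient.from_transport(transport)
    sftp.get("/remote/data.csv", "data.csv")   # copy a remote file next to the notebook
    sftp.close()
    transport.close()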
0 votes · 2 answers

Failed to detect version from SPARK_HOME or SPARK_HOME_VERSION

I'm trying to follow a tutorial for using Spark from RStudio on DSX, but I'm running into the following error: > library(sparklyr) > sc <- spark_connect(master = "CS-DSX") Error in spark_version_from_home(spark_home, default = spark_version) : …
Chris Snow
0 votes · 1 answer

Error: could not find function "list_spark_kernels"

I'm following a tutorial to access Spark from RStudio on Data Science Experience. However, a function listed in the tutorial is not available: > list_spark_kernels() Error: could not find function "list_spark_kernels" I have the files config.yml…
Chris Snow
0 votes · 2 answers

How do I access a postgres table from pyspark on IBM's Data Science Experience?

Here is my code: uname = "xxxxx" pword = "xxxxx" dbUrl = "jdbc:postgresql:dbserver" table = "xxxxx" jdbcDF = spark.read.format("jdbc").option("url", dbUrl).option("dbtable",table).option("user", uname).option("password", pword).load() I'm…
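For what it's worth, jdbc:postgresql:dbserver is the placeholder URL from the Spark documentation; a working URL needs host, port, and database name, and the PostgreSQL JDBC driver has to be visible to Spark. A sketch with hypothetical connection details (only the option names come from the Spark JDBC API):

    # Hypothetical host/port/database; the masked values stay placeholders.
    db_url = "jdbc:postgresql://mydbhost.example.com:5432/mydatabase"

    jdbcDF = (spark.read.format("jdbc")
              .option("url", db_url)
              .option("dbtable", table)
              .option("user", uname)
              .option("password", pword)
              .option("driver", "org.postgresql.Driver")  # driver jar must be on the classpath
              .load())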
0 votes · 1 answer

DSX images generated by pixiedust display command are ugly

Any ideas why the display command in DSX gives such ugly images, unlike the ones in Databricks? Also, I don't see how I can add a hue color (even the default would be OK) like in Databricks. (Screenshots: Databricks, Data Science Experience)
Vik M
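One workaround for chart aesthetics in a notebook is to restyle matplotlib itself before rendering. A sketch, assuming the chart is drawn through matplotlib and that df is a DataFrame already in scope; whether this carries through to PixieDust's renderer is an assumption:

    # Sketch: nicer matplotlib defaults before asking pixiedust to render.
    import matplotlib.pyplot as plt

    plt.style.use("ggplot")                    # softer palette and gridlines
    plt.rcParams["figure.figsize"] = (10, 6)   # larger canvas
    plt.rcParams["figure.dpi"] = 100

    import pixiedust                           # display() is injected by this import
    display(df)                                # re-render the chart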
0 votes · 1 answer

Scheduled job does not appear to run and no kernel files are created

I have a scheduled notebook job that has been running without issue for a number of days, however, last night it stopped running. Note that I am able to run the job manually without issue. I raised a previous question on this topic: How to…
Chris Snow
0 votes · 1 answer

How to install Azure module in IBM Data Science Experience

I'm trying to import Azure data into DSX. I get an error when I try to import the module. When I use the command "from azure.storage.blob import BlobService" in DSX, it tells me that there's no module with that name. Do I have to do some further…
choward
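The import in the question comes from the legacy azure-storage package, and the class was later renamed, so both the install step and the import line depend on the version. A sketch for a DSX notebook (the package name and version boundary are assumptions worth double-checking):

    # Sketch: install the storage SDK into the notebook's user environment first.
    # !pip install --user azure-storage

    # Older azure-storage releases (~0.20.x) exposed:
    #   from azure.storage.blob import BlobService
    # Releases from ~0.30 onward renamed it:
    from azure.storage.blob import BlockBlobService

    blob_service = BlockBlobService(account_name="myaccount",   # placeholder account
                                    account_key="mykey")        # placeholder key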
0 votes · 1 answer

Job schedule entry could not be created. Status code: 500

I have received the following error when trying to save a DSX Scheduled Job: Job schedule entry could not be created. Status code: 500 Screenshot of the error message: I've tried about six times over the last few hours and have consistently…
Chris Snow
0 votes · 1 answer

How to install the Brunel package in an R notebook on Spark 2.0

I am trying to install the Brunel viz package in an R notebook on #DSX on a Spark 2.0 cluster. It should be possible to install this package, however I don't know how. Any help is appreciated, thanks.
0 votes · 3 answers

Connecting RStudio to DashDB in the IBM Watson Studio

How do I connect RStudio to DashDB inside IBM Watson Studio?
0 votes · 1 answer

How to log to the kernel-pyspark-*.log from a scheduled notebook?

In my notebook, I have setup a utility for logging so that I can debug DSX scheduled notebooks: # utility method for logging log4jLogger = sc._jvm.org.apache.log4j LOGGER = log4jLogger.LogManager.getLogger("CloudantRecommender") def info(*args): …
Chris Snow
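Anything routed through the driver's log4j logger lands in the kernel-pyspark-*.log, so the excerpt's utility can be completed roughly as follows; the logger name is taken from the excerpt, and the body of info() is a guess at the elided code:

    # Completing the excerpt's utility: messages go through the JVM's log4j,
    # whose output ends up in the driver's kernel-pyspark-*.log.
    log4jLogger = sc._jvm.org.apache.log4j
    LOGGER = log4jLogger.LogManager.getLogger("CloudantRecommender")

    def info(*args):
        # join the arguments into one line and log at INFO level
        LOGGER.info(" ".join(str(a) for a in args))

    info("scheduled notebook run started")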
0 votes · 3 answers

install.packages("tm") -> "dependency 'slam' is not available"

I'm trying to install the tm package on IBM's Data Science Experience (DSX): install.packages("tm") However, I'm hitting this issue: "dependency 'slam' is not available" This post suggests that R version 3.3.1 will resolve the issue, however the R…
Chris Snow