Option 1
First, the default installation of ibmdbpy lives under /usr/local/...., where you cannot add the db2jcc JAR. Uninstalling the preinstalled ibmdbpy and then reinstalling it with --user puts it in the user's (Spark tenant's) .local directory instead:
!pip install --user lazy
!pip install --user jaydebeapi
!pip uninstall --yes ibmdbpy
!pip install ibmdbpy --user --ignore-installed --no-deps
!wget -O $HOME/.local/lib/python2.7/site-packages/ibmdbpy/db2jcc4.jar https://ibm.box.com/shared/static/lmhzyeslp1rqns04ue8dnhz2x7fb6nkc.zip
This worked.
Ref:- https://github.com/ibmdbanalytics/ibmdbpy-notebooks/blob/master/ibmdbPyDemo.ipynb
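Once ibmdbpy and the JAR are in place, a connection is opened by passing a JDBC DSN to IdaDataBase. A minimal sketch of assembling that DSN; the hostname, port, database, and credentials below are placeholders, and the format is the standard DB2 JDBC URL:

```python
# Sketch: build the JDBC DSN that ibmdbpy's IdaDataBase expects.
# Host, port, database, and credentials here are placeholders.
def build_db2_dsn(host, port, database, user, password):
    """Return a DB2 JDBC connection string usable by ibmdbpy/jaydebeapi."""
    return ("jdbc:db2://{host}:{port}/{db}:"
            "user={user};password={pwd};").format(
                host=host, port=port, db=database, user=user, pwd=password)

dsn = build_db2_dsn("hostname", 50000, "BLUDB", "username", "XXXXX")

# With the db2jcc4.jar available, the connection would then be opened as:
#   from ibmdbpy import IdaDataBase, IdaDataFrame
#   idadb = IdaDataBase(dsn=dsn)
#   ida_df = IdaDataFrame(idadb, 'SCHEMA.MYTABLE')
print(dsn)
```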
Option 2
If you are okay with an alternate method, there is a Python connector available on DSX:
https://datascience.ibm.com/docs/content/analyze-data/python_load.html#ibm-dashdb
from ingest.Connectors import Connectors

# Connection options for the dashDB source table
dashDBloadOptions = { Connectors.DASHDB.HOST              : 'hostname',
                      Connectors.DASHDB.DATABASE          : 'BLUDB',
                      Connectors.DASHDB.USERNAME          : 'username',
                      Connectors.DASHDB.PASSWORD          : 'XXXXX',
                      Connectors.DASHDB.SOURCE_TABLE_NAME : 'schema.MYTABLE'}

dashdbDF = sqlContext.read.format("com.ibm.spark.discover").options(**dashDBloadOptions).load()
dashdbDF.printSchema()
dashdbDF.show()
This gives you a Spark DataFrame, if that's what you're after.
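If the DSX ingest connector isn't available, the same table can also be read through Spark's generic JDBC data source. This sketch just assembles the equivalent option dict; the hostname, credentials, and table name are placeholders, and the read itself (commented out) assumes a live Spark session:

```python
# Sketch: equivalent options for Spark's built-in JDBC reader.
# Hostname, credentials, and table name are placeholders.
jdbc_options = {
    "url":      "jdbc:db2://hostname:50000/BLUDB",
    "user":     "username",
    "password": "XXXXX",
    "dbtable":  "schema.MYTABLE",
    "driver":   "com.ibm.db2.jcc.DB2Driver",  # driver class shipped in db2jcc4.jar
}

# With a SparkSession (or the sqlContext above) this would be:
#   dashdbDF = spark.read.format("jdbc").options(**jdbc_options).load()
#   dashdbDF.printSchema()
```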
Thanks,
Charles.