
I'm using spark-shell with Spark 2.1.0 on AWS Elastic MapReduce 5.3.1 to load data from a Postgres database. The first call to loader.load always fails with "No suitable driver", and the exact same call then succeeds on the second attempt. Why would this happen?

[hadoop@[SNIP] ~]$ SPARK_PRINT_LAUNCH_COMMAND=1 spark-shell --driver-class-path ~/postgresql-42.0.0.jar 
Spark Command: /etc/alternatives/jre/bin/java -cp /home/hadoop/postgresql-42.0.0.jar:/usr/lib/spark/conf/:/usr/lib/spark/jars/*:/etc/hadoop/conf/ -Dscala.usejavacp=true -Xmx640M -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:MaxHeapFreeRatio=70 -XX:+CMSClassUnloadingEnabled -XX:OnOutOfMemoryError=kill -9 %p org.apache.spark.deploy.SparkSubmit --conf spark.driver.extraClassPath=/home/hadoop/postgresql-42.0.0.jar --class org.apache.spark.repl.Main --name Spark shell spark-shell
========================================
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
17/02/28 17:17:52 WARN Client: Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME.
17/02/28 17:18:56 WARN ObjectStore: Failed to get database global_temp, returning NoSuchObjectException
Spark context Web UI available at http://[SNIP]
Spark context available as 'sc' (master = yarn, app id = application_1487878172787_0014).
Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.1.0
      /_/

Using Scala version 2.11.8 (OpenJDK 64-Bit Server VM, Java 1.8.0_121)
Type in expressions to have them evaluated.
Type :help for more information.

scala> val loader = spark.read.format("jdbc") // connection options removed
loader: org.apache.spark.sql.DataFrameReader = org.apache.spark.sql.DataFrameReader@46067a74

scala> loader.load
java.sql.SQLException: No suitable driver
  at java.sql.DriverManager.getDriver(DriverManager.java:315)
  at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions$$anonfun$7.apply(JDBCOptions.scala:84)
  at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions$$anonfun$7.apply(JDBCOptions.scala:84)
  at scala.Option.getOrElse(Option.scala:121)
  at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions.<init>(JDBCOptions.scala:83)
  at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions.<init>(JDBCOptions.scala:34)
  at org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider.createRelation(JdbcRelationProvider.scala:32)
  at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:330)
  at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:152)
  at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:125)
  ... 48 elided

scala> loader.load
res1: org.apache.spark.sql.DataFrame = [id: int, fsid: string ... 4 more fields]
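
For reference, the reader setup with the redacted options looks roughly like this; every option value below is a placeholder, not the real connection info:

// Roughly the redacted loader definition; all option values are placeholders.
val loader = spark.read.format("jdbc")
  .option("url", "jdbc:postgresql://<host>:5432/<database>")
  .option("dbtable", "<schema>.<table>")
  .option("user", "<username>")
  .option("password", "<password>")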
rcrogers

2 Answers


I am running into the same problem. I am trying to connect to Vertica from Spark via JDBC, using spark-shell with Spark 2.2.0 and Java 1.8.

External jars for the connection: vertica-8.1.1_spark2.1_scala2.11-20170623.jar and vertica-jdbc-8.1.1-0.jar

Code to connect:

import java.sql.DriverManager
import java.util.Properties  // needed for connectionProperties below
import com.vertica.jdbc.Driver


val jdbcUsername = "<username>"
val jdbcPassword = "<password>"
val jdbcHostname = "<vertica server>"
val jdbcPort = <vertica port>
val jdbcDatabase ="<vertica DB>"
val jdbcUrl = s"jdbc:vertica://${jdbcHostname}:${jdbcPort}/${jdbcDatabase}?user=${jdbcUsername}&password=${jdbcPassword}"

val connectionProperties = new Properties()
connectionProperties.put("user", jdbcUsername)
connectionProperties.put("password", jdbcPassword )

val connection = DriverManager.getConnection(jdbcUrl, connectionProperties)
java.sql.SQLException: No suitable driver found for jdbc:vertica://${jdbcHostname}:${jdbcPort}/${jdbcDatabase}?user=${jdbcUsername}&password=${jdbcPassword}

  at java.sql.DriverManager.getConnection(Unknown Source)
  at java.sql.DriverManager.getConnection(Unknown Source)
  ... 56 elided

If I run the same command a second time, I get the following output and the connection is established:

scala> val connection = DriverManager.getConnection(jdbcUrl, connectionProperties)
connection: java.sql.Connection = com.vertica.jdbc.VerticaJdbc4ConnectionImpl@7d994c
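
One workaround that is sometimes suggested for this kind of first-call "No suitable driver" failure (I have not verified it) is to load the driver class explicitly before the first getConnection, so it registers itself with DriverManager:

// Untested sketch: force the Vertica driver class to load and register
// itself with java.sql.DriverManager before the first connection attempt.
Class.forName("com.vertica.jdbc.Driver")
val connection = DriverManager.getConnection(jdbcUrl, connectionProperties)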
Raje

I ran into this issue today with PySpark and the SQL Server JDBC driver. At first I built a simple workaround: catching the Py4JJavaException and retrying, since the call works the second time around.

The trick is to specify the driver class explicitly in the DataFrameReader.jdbc call.

Using pyspark:

spark.read.jdbc(..., properties={'driver': 'com.microsoft.sqlserver.jdbc.SQLServerDriver'})

Then all that's needed is:

spark-submit --jars s3://somebucket/sqljdbc42.jar script.py

Using Scala and @Raje's example, the equivalent is connectionProperties.put("driver", "..."), with the driver class name as the value.
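
Applied back to the original Postgres question, the same idea would look roughly like this in Scala (the standard Postgres driver class is org.postgresql.Driver; the other option values are placeholders):

// Sketch: naming the driver class explicitly lets Spark load it itself
// instead of relying on DriverManager to find a registered driver.
val df = spark.read.format("jdbc")
  .option("driver", "org.postgresql.Driver")
  .option("url", "jdbc:postgresql://<host>:5432/<database>")
  .option("dbtable", "<schema>.<table>")
  .option("user", "<username>")
  .option("password", "<password>")
  .load()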

kadrach