
I am using PySpark in a Zeppelin notebook and trying to fetch data with a SELECT statement. I am simply trying to query a table, but I get a strange error for the following command:

%pyspark
spark.sql('select * from default.abc').show()

Here is the error I am getting:

Py4JJavaError: An error occurred while calling o92.sql.
: java.lang.NoSuchMethodError: com.facebook.fb303.FacebookService$Client.sendBaseOneway(Ljava/lang/String;Lorg/apache/thrift/TBase;)V
    at com.facebook.fb303.FacebookService$Client.send_shutdown(FacebookService.java:436)
    at com.facebook.fb303.FacebookService$Client.shutdown(FacebookService.java:430)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.close(HiveMetaStoreClient.java:606)
    at sun.reflect.GeneratedMethodAccessor37.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:154)
    at com.sun.proxy.$Proxy39.close(Unknown Source)
    at sun.reflect.GeneratedMethodAccessor37.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient$SynchronizedHandler.invoke(HiveMetaStoreClient.java:2477)
    at com.sun.proxy.$Proxy39.close(Unknown Source)
    at org.apache.hadoop.hive.ql.metadata.Hive.close(Hive.java:414)
    at org.apache.hadoop.hive.ql.metadata.Hive.create(Hive.java:330)
    at org.apache.hadoop.hive.ql.metadata.Hive.getInternal(Hive.java:317)
    at org.apache.hadoop.hive.ql.metadata.Hive.get(Hive.java:293)
    at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$withHiveState$1.apply(HiveClientImpl.scala:278)
    at org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:221)
    at org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:220)
    at org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:266)
    at org.apache.spark.sql.hive.client.HiveClientImpl.databaseExists(HiveClientImpl.scala:356)
    at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:217)
    at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:217)
    at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:217)
    at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:99)
    at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:216)
    at org.apache.spark.sql.catalyst.catalog.ExternalCatalogWithListener.databaseExists(ExternalCatalogWithListener.scala:71)
    at org.apache.spark.sql.catalyst.catalog.SessionCatalog.databaseExists(SessionCatalog.scala:238)
    at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.isRunningDirectlyOnFiles(Analyzer.scala:750)
    at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.resolveRelation(Analyzer.scala:683)
    at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$$anonfun$apply$8.applyOrElse(Analyzer.scala:715)
    at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$$anonfun$apply$8.applyOrElse(Analyzer.scala:708)
    at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsUp$1$$anonfun$apply$1.apply(AnalysisHelper.scala:90)
    at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsUp$1$$anonfun$apply$1.apply(AnalysisHelper.scala:90)
    at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:70)
    at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsUp$1.apply(AnalysisHelper.scala:89)
    at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsUp$1.apply(AnalysisHelper.scala:86)
    at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.allowInvokingTransformsInAnalyzer(AnalysisHelper.scala:194)
    at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$class.resolveOperatorsUp(AnalysisHelper.scala:86)
    at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperatorsUp(LogicalPlan.scala:29)
    at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsUp$1$$anonfun$1.apply(AnalysisHelper.scala:87)
    at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsUp$1$$anonfun$1.apply(AnalysisHelper.scala:87)
    at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:326)
    at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:187)
    at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:324)
    at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsUp$1.apply(AnalysisHelper.scala:87)
    at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsUp$1.apply(AnalysisHelper.scala:86)
    at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.allowInvokingTransformsInAnalyzer(AnalysisHelper.scala:194)
    at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$class.resolveOperatorsUp(AnalysisHelper.scala:86)
    at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperatorsUp(LogicalPlan.scala:29)
    at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.apply(Analyzer.scala:708)
    at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.apply(Analyzer.scala:654)
    at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:87)
    at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:84)
    at scala.collection.LinearSeqOptimized$class.foldLeft(LinearSeqOptimized.scala:124)
    at scala.collection.immutable.List.foldLeft(List.scala:84)
    at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:84)
    at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:76)
    at scala.collection.immutable.List.foreach(List.scala:392)
    at org.apache.spark.sql.catalyst.rules.RuleExecutor.execute(RuleExecutor.scala:76)
    at org.apache.spark.sql.catalyst.analysis.Analyzer.org$apache$spark$sql$catalyst$analysis$Analyzer$$executeSameContext(Analyzer.scala:127)
    at org.apache.spark.sql.catalyst.analysis.Analyzer.execute(Analyzer.scala:121)
    at org.apache.spark.sql.catalyst.analysis.Analyzer$$anonfun$executeAndCheck$1.apply(Analyzer.scala:106)
    at org.apache.spark.sql.catalyst.analysis.Analyzer$$anonfun$executeAndCheck$1.apply(Analyzer.scala:105)
    at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.markInAnalyzer(AnalysisHelper.scala:201)
    at org.apache.spark.sql.catalyst.analysis.Analyzer.executeAndCheck(Analyzer.scala:105)
    at org.apache.spark.sql.execution.QueryExecution.analyzed$lzycompute(QueryExecution.scala:57)
    at org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:55)
    at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:47)
    at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:78)
    at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:642)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:282)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:238)
    at java.lang.Thread.run(Thread.java:748)

(<class 'py4j.protocol.Py4JJavaError'>, Py4JJavaError('An error occurred while calling o92.sql.\n', JavaObject id=o905), <traceback object at 0x7f49fa356b48>)

I have also verified the list of tables:

%spark.pyspark
df = sqlContext.sql('show tables')
df.show()

+--------+----------------+-----------+
|database|       tableName|isTemporary|
+--------+----------------+-----------+
| default|             abc|      false|
| default|          abc321|      false|
| default|          abtest|      false|

Also, here are the parameters I have set in the config file (zeppelin-env.sh):

export MASTER=yarn-client
export HADOOP_CONF_DIR="/etc/hadoop/conf"
export SPARK_HOME=/opt/cloudera/parcels/CDH-6.1.0-1.cdh6.1.0.p0.770702/lib/spark
export JAVA_HOME=/usr/java/jdk1.8.0_202-amd64
export PYSPARK_PYTHON=/opt/anaconda3/bin/python
export PYSPARK_DRIVER_PYTHON=/opt/anaconda3/bin/python
export PYTHONPATH=/opt/anaconda3/bin

UPDATE: I have replaced libthrift-0.9.2.jar with libthrift-0.9.3.jar as per this link, then restarted Zeppelin, but still no luck.
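
A NoSuchMethodError on com.facebook.fb303 usually points to mismatched Thrift jars (libthrift / libfb303) on the driver classpath. To check which jars are actually being picked up, a shell paragraph along these lines can help (a diagnostic sketch; the parcel path is taken from SPARK_HOME above, while the Zeppelin install location is an assumption to adjust):

%sh
# List the Thrift jars shipped with the CDH parcel (path taken from SPARK_HOME above)
find /opt/cloudera/parcels/CDH-6.1.0-1.cdh6.1.0.p0.770702 -name 'libthrift*.jar' -o -name 'libfb303*.jar'
# List the Thrift jars bundled with Zeppelin itself (install path assumed, adjust to your setup)
find /opt/zeppelin -name 'libthrift*.jar' -o -name 'libfb303*.jar'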

Where am I going wrong?

user1584253
  • Your error seems to have nothing to do with Zeppelin itself. What is o92.sql, and what are Facebook libraries doing in your stack trace? This is highly unconventional. – Mehdi LAMRANI Dec 26 '19 at 09:21
  • True, I am also confused. I haven't installed any library which pertains to Facebook. – user1584253 Dec 26 '19 at 09:26
  • Ok, I see what might be going on here. There is actually a lot going on, and things can go wrong in many possible layers (I really mean a LOT). com.facebook.fb303 looks to be a core Thrift library. Py4j is used to invoke core Spark Scala code. As you can see, your query went through the execution process and through the Catalyst optimizer, and it is invoking the Hive Metastore for the logical-plan resolution, and your Hive Metastore is not responding through the Thrift invocation. This may explain why it works from time to time at startup; the connection may get lost for some reason after some time. – Mehdi LAMRANI Dec 26 '19 at 09:49
  • Keep in mind there are many pieces in the machinery, and you have to be aware of each and every one of them to figure out what is really going on and where things can go wrong. I would start by trying the Spark shell directly, away from Zeppelin (as it only adds more layers of possible integration problems), and see how it goes from there. – Mehdi LAMRANI Dec 26 '19 at 09:51
  • I understand; after a restart this query works. Also, I have noticed one other thing: the list of tables I am seeing here is different from what I can see through Hue. Any suggestions? – user1584253 Dec 26 '19 at 10:23
  • Yes. There is a common confusion between the Hive context and the Spark context. They both write to the metastore, but they are kind of split-brain. Using Spark you cannot see all the content of the Hive Metastore, only what was created by the Spark user/context. This is a known issue, but it is by design (and yes, it can be frustrating). – Mehdi LAMRANI Dec 26 '19 at 10:59
  • Also keep in mind that Zeppelin is highly unstable, erratic, and definitely not production ready. – Mehdi LAMRANI Dec 26 '19 at 11:00
  • I can see the database I want to access using "show databases", and even though I switched to that database, the above error occurs when I use a SELECT statement. – user1584253 Dec 26 '19 at 11:19
  • I have also configured LDAP in Zeppelin; maybe the query is sent by the local user, not the LDAP user I am logged in as. – user1584253 Dec 26 '19 at 11:19
  • Why is there a downvote? Please give feedback. – user1584253 Dec 26 '19 at 12:01
  • Let us [continue this discussion in chat](https://chat.stackoverflow.com/rooms/204885/discussion-between-mehdi-lamrani-and-user1584253). – Mehdi LAMRANI Dec 26 '19 at 13:38
  • You're using "spark.sql" in one case and "sqlContext.sql" in the other. They might be different in content/setup. – Mehdi LAMRANI Dec 26 '19 at 13:40
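
Regarding the last comment: in Spark 2.x the sqlContext exposed by Zeppelin wraps the same SparkSession as spark, so both should see the same catalog. A quick way to confirm which catalog the session is actually using (a sketch, not verified against this cluster):

%pyspark
# 'hive' means the session talks to the Hive metastore, 'in-memory' means it does not
print(spark.conf.get("spark.sql.catalogImplementation", "in-memory"))
print(spark.catalog.currentDatabase())
# Compare this listing with the output of sqlContext.sql('show tables')
for t in spark.catalog.listTables("default"):
    print(t.name, t.isTemporary)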

1 Answer


Try adding the path of the libthrift-0.9.3 jar file in the dependency section of the Spark interpreter.
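
For reference, two common ways to do this (a sketch; the jar path below is a placeholder, not a path known from this cluster): add the local jar as an artifact under the Spark interpreter's Dependencies in the Zeppelin interpreter settings, or ship it via spark-submit in zeppelin-env.sh:

# zeppelin-env.sh: have the Spark interpreter ship the newer Thrift jar
# (replace /path/to/libthrift-0.9.3.jar with wherever the downloaded jar lives)
export SPARK_SUBMIT_OPTIONS="--jars /path/to/libthrift-0.9.3.jar"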

  • I downloaded libthrift-0.9.3.jar on my system and set the local path in the dependency artifact (Apache Zeppelin - Spark Interpreter). Then I restarted the interpreter and it worked like a charm. Good catch! – user1584253 Jan 07 '20 at 10:57