
I am using Spark 2.0.0 to query a Hive table.

My SQL is:

select * from app.abtestmsg_v limit 10

Yes, I want to get the first 10 records from the view app.abtestmsg_v.

When I run this SQL in spark-shell, it is very fast, taking about 2 seconds.

But the problem comes when I try to run the same query from Python, in a very simple PySpark program.

Below is my PySpark code:

from pyspark.sql import HiveContext
from pyspark.sql.functions import *
import json

hc = HiveContext(sc)
# Compute ORC splits from the file footers (the "ETL" strategy)
hc.setConf("hive.exec.orc.split.strategy", "ETL")
hc.setConf("hive.security.authorization.enabled", "false")  # setConf takes string values

zj_sql = 'select * from app.abtestmsg_v limit 10'
zj_df = hc.sql(zj_sql)
zj_df.collect()
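
To see how the limit is planned, the standard DataFrame explain API can print the query plans (a debugging sketch; that a CollectLimit operator should appear is my assumption about the Spark 2.0 plan, not verified output):

# Print the parsed, analyzed, optimized and physical plans. If the limit
# is planned as a CollectLimit, Spark can stop scanning once it has 10 rows.
zj_df.explain(True)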

Below is my Scala code:

val hive = new org.apache.spark.sql.hive.HiveContext(sc)
hive.setConf("hive.exec.orc.split.strategy", "ETL")
val df = hive.sql("select * from silver_ep.zj_v limit 10")
df.rdd.collect()

From the INFO log, I find that although I use "limit 10" to tell Spark I only want the first 10 records, Spark still scans and reads every file of the view (in my case, the source data of this view consists of about 100 files, each roughly 1 GB in size). So there are nearly 100 tasks, each task reading one file, and all the tasks are executed serially. It takes nearly 15 minutes to finish these 100 tasks, when all I want is the first 10 records!
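
For reference, here is what I plan to try next. This is only a sketch based on two assumptions: the Hive docs describe "BI" as a value of hive.exec.orc.split.strategy that generates splits without reading the ORC file footers, and take() is the standard DataFrame method that fetches rows by scanning partitions incrementally. I have not confirmed that either actually helps in this case:

# Assumption: with the "BI" strategy, split planning uses file sizes
# instead of opening every ORC footer, so it should be much cheaper
# on a view backed by ~100 large files.
hc.setConf("hive.exec.orc.split.strategy", "BI")

zj_df = hc.sql('select * from app.abtestmsg_v')

# take(10) pulls rows incrementally (a few partitions at a time)
# rather than materializing the full result like collect().
first_10 = zj_df.take(10)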

So, I don't know what to do or what is wrong.

Could anybody give me some suggestions?

wuchang
  • Having the same problem (https://stackoverflow.com/questions/47565131/my-spark-sql-limit-is-very-slow). Have you solved the problem? – no123ff Nov 30 '17 at 05:35

0 Answers