
I am a newbie in the Spark SQL world. I am currently migrating my application's ingestion code, which ingests data into the Stage, Raw and Application layers in HDFS and performs CDC (change data capture). It is currently written in Hive queries and executed via Oozie, and it needs to migrate into a Spark application (current version 1.6). The other sections of the code will migrate later on.

In Spark SQL, I can create DataFrames directly from tables in Hive and simply execute the queries as they are (e.g. sqlContext.sql("my hive hql")). The other way would be to use the DataFrame API and rewrite the HQL that way.
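For concreteness, here is a minimal sketch of the two approaches, assuming a hypothetical Hive table sales(product, amount) and an existing SparkContext sc (the table and column names are made up):

    import org.apache.spark.sql.hive.HiveContext
    import org.apache.spark.sql.functions.sum

    val sqlContext = new HiveContext(sc)

    // Approach 1: run the existing HQL string as-is
    val bySqlString = sqlContext.sql(
      "SELECT product, SUM(amount) AS total FROM sales GROUP BY product")

    // Approach 2: express the same logic with the DataFrame API
    val byDataFrame = sqlContext.table("sales")   // "sales" is a hypothetical table
      .groupBy("product")
      .agg(sum("amount").as("total"))

    // Both go through the same Catalyst optimizer; explain(true) lets you
    // compare the plans they produce
    bySqlString.explain(true)
    byDataFrame.explain(true)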

What is the difference in these two approaches?

Is there any performance gain with using Dataframe APIs?

Some people suggested there is an extra layer of SQL parsing that the Spark core engine has to go through when using "SQL" queries directly, which may impact performance to some extent, but I didn't find any material substantiating that statement. I know the code would be much more compact with the DataFrame API, but when I have my HQL queries all handy, would it really be worth rewriting the complete code into the DataFrame API?

Thank You.

– Trying

  • Please check my answer! Moreover, DataFrames use the Tungsten memory representation; the Catalyst optimizer is used by SQL as well as by DataFrames. – Ram Ghadiyaram Aug 01 '17 at 15:34
  • One more thing to note: with the Dataset API, you have more control over the actual execution plan than with SparkSQL. – Ross Brigoli Nov 21 '19 at 03:53

4 Answers


Question: What is the difference between these two approaches? Is there any performance gain with using the DataFrame API?


Answer:

There is a comparative study done by Hortonworks. source...

The gist is that which one is right depends on the situation/scenario; there is no hard and fast rule to decide this. Please go through the below.

RDDs, DataFrames, and SparkSQL (in fact 3 approaches, not just 2):

At its core, Spark operates on the concept of Resilient Distributed Datasets, or RDDs:

  • Resilient - if data in memory is lost, it can be recreated
  • Distributed - immutable distributed collection of objects in memory partitioned across many data nodes in a cluster
  • Dataset - initial data can come from files, be created programmatically, from data in memory, or from another RDD

The DataFrame API is a data abstraction framework that organizes your data into named columns (a minimal sketch follows this list):

  • Create a schema for the data
  • Conceptually equivalent to a table in a relational database
  • Can be constructed from many sources including structured data files, tables in Hive, external databases, or existing RDDs
  • Provides a relational view of the data for easy SQL like data manipulations and aggregations
  • Under the hood, it is an RDD of Rows
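
A minimal sketch of these points, assuming a hypothetical JSON file at hdfs:///data/people.json that contains an age field:

    import org.apache.spark.sql.{Row, SQLContext}
    import org.apache.spark.rdd.RDD

    val sqlContext = new SQLContext(sc)   // sc: an existing SparkContext

    // Construct a DataFrame from a structured data file (hypothetical path)
    val people = sqlContext.read.json("hdfs:///data/people.json")

    people.printSchema()                      // named columns with inferred types
    people.filter(people("age") > 30).show()  // relational, SQL-like manipulation

    // Under the hood, it is an RDD of Row objects
    val rows: RDD[Row] = people.rdd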

SparkSQL is a Spark module for structured data processing. You can interact with SparkSQL through any of the following (a short Dataset sketch follows the list):

  • SQL
  • DataFrames API
  • Datasets API
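
SQL strings and the DataFrame API are sketched above; the Dataset API (experimental in Spark 1.6) adds a typed view on top. A rough illustration, assuming a hypothetical Hive table orders(id, product):

    case class Order(id: Long, product: String)

    import sqlContext.implicits._   // supplies the encoder for Order

    // A typed view over the same data; field names and types are
    // checked at compile time
    val ds = sqlContext.table("orders").as[Order]
    ds.filter(_.product == "book").show()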

Test results:

  • RDDs outperformed DataFrames and SparkSQL for certain types of data processing
  • DataFrames and SparkSQL performed about the same, although with analysis involving aggregation and sorting SparkSQL had a slight advantage

  • Syntactically speaking, DataFrames and SparkSQL are much more intuitive than using RDDs

  • Took the best out of 3 for each test

  • Times were consistent and not much variation between tests

  • Jobs were run individually with no other jobs running

The benchmarked workloads were:

  • Random lookup against 1 order ID from 9 million unique order IDs
  • GROUP all the different products with their total COUNTS and SORT DESCENDING by product name

[Benchmark results chart from the Hortonworks study]

– Ram Ghadiyaram

  • 2
    This study is relevant for spark 1.6. Spark 2.3 has Tungsten and does lots of optimization via codegen and column based internal storage so results could be many times faster than ones of Spark 1.6 – Den Roman Aug 05 '18 at 09:42
  • can we do caching of data at intermediate level when we have spark sql query?? because we can easily do it by splitting the query into many parts when using dataframe APIs. – akash patel Oct 08 '20 at 06:39
  • sad results above. 3 times slower for DF interface????!!! – Hanan Shteingart Mar 05 '23 at 19:07

In your Spark SQL string queries, you won't know about a syntax error until runtime (which could be costly), whereas with DataFrames syntax errors can be caught at compile time.
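
A small illustration of the difference, using a hypothetical orders table (the failing lines are left commented out):

    // Runtime failure: the typo "SELEC" only surfaces when Spark parses the string
    // sqlContext.sql("SELEC id FROM orders")

    // Compile-time failure: a misspelled DataFrame method never compiles
    // sqlContext.table("orders").selectt("id")

    // Caveat (see the comments below): column names are still plain strings in
    // the DataFrame API, so a misspelled column compiles and fails only at
    // runtime with an AnalysisException
    // sqlContext.table("orders").select("prodcut")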

– Arun Sharma

  • You can use printSchema() to catch syntax errors during lazy evaluation in Spark SQL. If the schema prints, that means there are no syntax errors. – MAC Apr 19 '20 at 02:46
  • Not true; for example, you may mistype a column name or other literal, or code may pass the Scala type system (i.e. it will compile) but fail at runtime – jonathanChap Jul 22 '23 at 09:31

If a query is lengthy, it becomes hard to write and run it efficiently. The DataFrame API, on the other hand, along with the Column API, helps the developer write compact code, which is ideal for ETL applications.
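
Here is a sketch of such a compact ETL step, with a made-up staging table stage_orders(order_id, status, amount):

    import org.apache.spark.sql.functions.{col, trim, upper, when}

    // sqlContext: a HiveContext; stage_orders and its columns are hypothetical
    val cleaned = sqlContext.table("stage_orders")
      .filter(col("order_id").isNotNull)                   // drop bad rows
      .withColumn("status", upper(trim(col("status"))))    // normalize text
      .withColumn("is_large",                              // derive a flag
        when(col("amount") > 1000, true).otherwise(false))

    cleaned.write.mode("overwrite").saveAsTable("raw_orders")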

Also, all operations (e.g. greater than, less than, select, where, etc.) run through the DataFrame API build an "Abstract Syntax Tree" (AST), which is then passed to "Catalyst" for further optimization. (Source: Spark SQL whitepaper, Section 3.3)
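
You can see what Catalyst does with that tree by calling explain(true) on any DataFrame, e.g. on the (hypothetical) cleaned DataFrame from the sketch above:

    // Prints the parsed and analyzed logical plans, the optimized logical
    // plan produced by Catalyst, and the final physical plan
    cleaned.explain(true)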

– G.S.Tomar

  • 1
    what you mean by "efficient writing & running query, shall not be possible". – vikrant rana Sep 06 '19 at 08:04
  • comparatively less chances of syntax/semantics errors, while authoring queries. If you have authored queries in JDBC Vs Hibernate Criteria API , then you can understand the intent very well – G.S.Tomar Sep 09 '19 at 12:35
  • @G.S.Tomar can we do caching of data at intermediate leveL when we have spark sql query?? because we can easily do it by splitting the query into many parts when using dataframe APIs. – akash patel Oct 08 '20 at 06:37

A couple more additions: DataFrames use the Tungsten memory representation, and the Catalyst optimizer is used by SQL as well as by the DataFrame API. With the Dataset API, you have more control over the actual execution plan than with SparkSQL.
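
For instance (a sketch, assuming a hypothetical Hive table orders(id, product, amount)), the typed API lets you decide exactly where to repartition and cache, which is awkward to express inside a single SQL string:

    case class Order(id: Long, product: String, amount: Double)
    import sqlContext.implicits._   // supplies the encoder for Order

    val orders = sqlContext.table("orders").as[Order]
      .filter(_.amount > 0)   // typed lambda, checked at compile time
      .repartition(200)       // explicit control over the physical plan
      .cache()                // keep the intermediate result for reuse

    orders.count()            // materializes the cached result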

– Blue Clouds