Can anyone provide some examples of reading a DataFrame and a Dataset (in Spark 2.0) from Phoenix (both a complete table and via a query), and of writing a DataFrame and a Dataset (in Spark 2.0) to Phoenix, in Apache Spark with Java? There aren't any documented Java examples for these.
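For the write direction, the closest I have pieced together (from the phoenix-spark connector's Scala documentation, untested from Java) is the sketch below; OUTPUT_TABLE and the ZooKeeper quorum are placeholders:

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SaveMode;

public class PhoenixWrite {
    // 'df' must have columns matching the target Phoenix table.
    public static void writeToPhoenix(Dataset<Row> df) {
        df.write()
          .format("org.apache.phoenix.spark")
          .mode(SaveMode.Overwrite)         // the connector reportedly accepts only Overwrite,
                                            // although it performs UPSERTs under the hood
          .option("table", "OUTPUT_TABLE")  // placeholder table name
          .option("zkUrl", "zk-host:2181")  // placeholder ZooKeeper quorum
          .save();
    }
}
```

Is that the right way to do it in Java, and does the same approach work for a typed Dataset?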
Also, please show multiple approaches where possible. For reading from Phoenix, one way is to use PhoenixConfigurationUtil to set an input class and an input query and then read a newAPIHadoopRDD from the SparkContext; another way is to use sqlContext.read().format("jdbc").options(...).load(), passing a map with configuration keys like driver, url, and dbtable. My attempts at both are sketched below.
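For the first way, this is roughly what I have in mind, based on the Phoenix MapReduce integration docs. PersonWritable is a hypothetical DBWritable for an assumed PERSON(ID BIGINT, NAME VARCHAR) table, and I am not certain that setInputQuery on its own is the right way to pass a query:

```java
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Writable;
import org.apache.hadoop.mapreduce.lib.db.DBWritable;
import org.apache.phoenix.mapreduce.PhoenixInputFormat;
import org.apache.phoenix.mapreduce.util.PhoenixConfigurationUtil;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.SparkSession;

public class PhoenixHadoopRddRead {

    // Hypothetical writable for an assumed PERSON(ID BIGINT, NAME VARCHAR) table.
    public static class PersonWritable implements DBWritable, Writable {
        long id;
        String name;

        @Override
        public void readFields(ResultSet rs) throws SQLException {
            id = rs.getLong("ID");
            name = rs.getString("NAME");
        }

        @Override
        public void write(PreparedStatement ps) throws SQLException {
            ps.setLong(1, id);
            ps.setString(2, name);
        }

        @Override
        public void readFields(DataInput in) throws IOException {
            id = in.readLong();
            name = in.readUTF();
        }

        @Override
        public void write(DataOutput out) throws IOException {
            out.writeLong(id);
            out.writeUTF(name);
        }
    }

    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder().appName("phoenix-read").getOrCreate();
        JavaSparkContext jsc = JavaSparkContext.fromSparkContext(spark.sparkContext());

        Configuration conf = new Configuration();
        conf.set("hbase.zookeeper.quorum", "zk-host:2181"); // placeholder quorum
        PhoenixConfigurationUtil.setInputClass(conf, PersonWritable.class);
        PhoenixConfigurationUtil.setInputTableName(conf, "PERSON");
        // For a query instead of the full table; not sure this alone is enough:
        PhoenixConfigurationUtil.setInputQuery(conf, "SELECT ID, NAME FROM PERSON WHERE ID > 100");

        JavaPairRDD<NullWritable, PersonWritable> rdd = jsc.newAPIHadoopRDD(
                conf, PhoenixInputFormat.class, NullWritable.class, PersonWritable.class);

        // From here I assume I would map the values to beans or Rows and call
        // spark.createDataFrame(...), but that step is exactly what I cannot find.
        System.out.println("rows read: " + rdd.count());
        spark.stop();
    }
}
```

For the second (plain Spark JDBC) way, I assume it is just a matter of pointing driver and url at Phoenix. The URL and table name are placeholders, and I am guessing a query can be passed as a parenthesized dbtable subquery, as with other JDBC sources:

```java
import java.util.HashMap;
import java.util.Map;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class PhoenixJdbcRead {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder().appName("phoenix-jdbc").getOrCreate();

        Map<String, String> options = new HashMap<>();
        options.put("driver", "org.apache.phoenix.jdbc.PhoenixDriver");
        options.put("url", "jdbc:phoenix:zk-host:2181"); // placeholder ZooKeeper quorum
        options.put("dbtable", "PERSON");                // full-table read
        // For a query instead, I assume the usual JDBC subquery trick applies:
        // options.put("dbtable", "(SELECT ID, NAME FROM PERSON WHERE ID > 100)");

        Dataset<Row> df = spark.read().format("jdbc").options(options).load();
        df.show();
        spark.stop();
    }
}
```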
There is also one more way to read, using sqlContext.read().format("org.apache.phoenix.spark").options(...).load(), passing a map with configuration keys like url and table (sketched below as well).
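For this third way, translating the connector's documented Scala usage into Java gives me something like this (also untested; the table name and zkUrl are placeholders):

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class PhoenixConnectorRead {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder().appName("phoenix-connector").getOrCreate();

        Dataset<Row> df = spark.read()
                .format("org.apache.phoenix.spark")
                .option("table", "PERSON")        // placeholder table name
                .option("zkUrl", "zk-host:2181")  // placeholder ZooKeeper quorum
                .load();

        // The connector seems to read whole tables; for a query I assume I would
        // filter/select afterwards and rely on predicate push-down.
        df.filter("ID > 100").show();
        spark.stop();
    }
}
```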
While searching, I found these approaches in other questions for Spark 1.6 with DataFrames, but the examples weren't complete; the methods were only present in bits and pieces, so I was not able to work out the complete steps. I couldn't find any examples for Spark 2.0.