Also, CDH 6 is still in beta: does it support Spark 2.3 out of the box, without any extra tweaks? Is it possible to run the same old Spark 2.x versions (2.3 specifically) on Hadoop 3-enabled CDH or plain Hadoop clusters?
I'm also interested in knowing about backwards-compatibility changes in the YARN, HDFS, and MapReduce APIs.
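For context, here is a minimal sketch (hypothetical app name and HDFS paths) of the kind of existing Spark 2.3 job, submitted to YARN and reading/writing HDFS, that I'd want to keep running unchanged on a Hadoop 3 cluster:

```scala
// Minimal sketch of a typical legacy Spark 2.3 job; names and paths are hypothetical.
import org.apache.spark.sql.SparkSession

object LegacyJob {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("legacy-spark-2.3-job") // hypothetical app name
      .getOrCreate()                   // master/deploy mode supplied by spark-submit (--master yarn)

    // HDFS reads/writes: this is the client API compatibility I'm asking about.
    val events = spark.read.parquet("hdfs:///data/events") // hypothetical path

    events.groupBy("event_type").count()
      .write.mode("overwrite")
      .parquet("hdfs:///data/event_counts")                // hypothetical path

    spark.stop()
  }
}
```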
Is anyone using this in production?