For low-latency Spark jobs, Spark Job Server provides a Persistent Context option. However, I'm not sure whether a persistent context holds the metadata, block locations, and other information required for query planning. By default, Spark would read this information from the Hive Metastore (disk I/O / network).
Does Spark have any option for keeping all the information necessary for query planning in memory?
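For context, this is the kind of configuration I have been looking at. It is only a sketch: these are standard Spark SQL properties for caching Hive partition metadata in memory, but I don't know whether they cover everything the planner needs (block locations etc.), which is exactly my question:

```
# spark-defaults.conf (illustrative settings, not a confirmed solution)

# Let Spark track partition metadata in its own catalog instead of
# always asking the Hive Metastore
spark.sql.hive.manageFilesourcePartitions     true

# Size (in bytes) of the in-memory cache for partition file metadata
spark.sql.hive.filesourcePartitionFileCacheSize   262144000

# Push partition predicates down to the metastore so only the needed
# partitions are fetched during planning
spark.sql.hive.metastorePartitionPruning      true
```

With a persistent context, my hope is that these caches would survive across jobs submitted to the same context, but I haven't found documentation confirming that.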