I have a file on the master node that needs to be readable by every node. How can I make this possible? In Hadoop's MapReduce I used
DistributedCache.getLocalCacheFiles(context.getConfiguration())
How does file sharing between nodes work in Spark? Do I have to load the file into RAM and broadcast it as a variable? Or can I just specify the (absolute?) path of the file in the SparkContext configuration so that it becomes instantly available on all nodes?