
I have a 50 GB dataset which doesn't fit into the 8 GB of RAM on my work computer, but it has a 1 TB local hard disk.

The link below, from the official documentation, mentions that Spark can use the local hard disk if the data doesn't fit in memory.

http://spark.apache.org/docs/latest/hardware-provisioning.html

Local Disks

While Spark can perform a lot of its computation in memory, it still uses local disks to store data that doesn’t fit in RAM, as well as to preserve intermediate output between stages.

For me, computation time is not a priority at all; fitting the data onto a single computer's RAM/hard disk for processing is what matters, because I have no alternative options.

Note: I am looking for a solution which does not involve any of the items below:

  1. Increase the RAM
  2. Sample & reduce data size
  3. Use cloud or cluster computers

My end objective is to use Spark MLlib to build machine learning models. I am looking for real-life, practical cases where people have successfully used Spark to operate on data that doesn't fit in RAM in standalone/local mode on a single computer. Has someone done this successfully without major limitations?

Questions

  1. SAS has a similar out-of-core processing capability, with which it can use both RAM and the local hard disk for model building etc. Can Spark be made to work in the same way when the data is larger than the RAM?

  2. SAS persists the complete dataset to the hard disk in the ".sas7bdat" format; can Spark do a similar persist to the hard disk?

  3. If this is possible, how do I install and configure Spark for this purpose?
GeorgeOfTheRF

1 Answer


Look at http://spark.apache.org/docs/latest/programming-guide.html#rdd-persistence. You can use the various persistence (storage) levels as per your need. MEMORY_AND_DISK is what will solve your problem: partitions that don't fit in memory are spilled to the local disk and read back from there when needed. If you want to save memory, use MEMORY_AND_DISK_SER, which stores the data in serialized form; it is more space-efficient, though somewhat more CPU-intensive to read.
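For reference, here is a minimal sketch of what this looks like in standalone/local mode on a single machine (the application name and dataset path are hypothetical, just to illustrate the persist call):

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.storage.StorageLevel

    object DiskBackedPersistence {
      def main(args: Array[String]): Unit = {
        // Run Spark locally, using all cores of the single machine.
        val conf = new SparkConf()
          .setAppName("DiskBackedPersistence")
          .setMaster("local[*]")
        val sc = new SparkContext(conf)

        // Hypothetical path to the large dataset on the local hard disk.
        val data = sc.textFile("/data/big_dataset.csv")

        // Keep partitions in memory while they fit; spill the rest to local disk.
        data.persist(StorageLevel.MEMORY_AND_DISK)

        // Alternative: store partitions in serialized form to save memory.
        // data.persist(StorageLevel.MEMORY_AND_DISK_SER)

        // Subsequent actions reuse the persisted partitions instead of recomputing them.
        println(s"Number of records: ${data.count()}")

        sc.stop()
      }
    }

The same persist call can be applied to the RDDs you feed into MLlib, so the model-building steps also benefit from the disk-backed storage level.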

Preeti Khurana