
Suppose there is the following MapReduce job:

Mapper:

setup() initializes some state

map() adds data to the state, no output

cleanup() outputs the state to the context

Reducer:

aggregates all the states into one output
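
For concreteness, here is a rough sketch of such a mapper (word counting is purely illustrative; the class and field names are made up):

import org.apache.hadoop.io.{LongWritable, Text}
import org.apache.hadoop.mapreduce.Mapper
import scala.collection.mutable

class StatefulMapper extends Mapper[LongWritable, Text, Text, LongWritable] {
  type Ctx = Mapper[LongWritable, Text, Text, LongWritable]#Context

  private val state = mutable.Map.empty[String, Long]

  override def setup(context: Ctx): Unit =
    state.clear()                                  // initialize some state

  override def map(key: LongWritable, value: Text, context: Ctx): Unit =
    value.toString.split("\\s+").foreach { w =>    // add data to the state, no output
      state(w) = state.getOrElse(w, 0L) + 1L
    }

  override def cleanup(context: Ctx): Unit =       // output the state to the context
    state.foreach { case (w, n) => context.write(new Text(w), new LongWritable(n)) }
}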

How could such a job be implemented in Spark?

Additional question: how could such a job be implemented in Scalding? I'm looking for an example which somehow mirrors these method overrides...

Julias

2 Answers


Spark's map doesn't provide an equivalent of Hadoop's setup and cleanup. It assumes that each call is independent and side-effect free.

The closest equivalent is to put the required logic inside mapPartitions or mapPartitionsWithIndex, using a simplified template:

rdd.mapPartitions { iter =>
  ... // initialize state
  val result = ??? // compute result for iter
  ... // perform cleanup
  ... // return results as an Iterator[U]
}
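
For example, if the per-task state were a word count and rdd were an RDD[String] (both assumptions made purely for illustration), a minimal sketch could look like:

val counts = rdd.mapPartitions { iter =>
  // "setup()": per-partition state, created once per task
  val state = scala.collection.mutable.Map.empty[String, Long]
  // "map()": fold every record into the state, emitting nothing yet
  iter.foreach { word =>
    state(word) = state.getOrElse(word, 0L) + 1L
  }
  // "cleanup()": emit the accumulated state once the partition is exhausted
  state.iterator
}
// the "reducer": aggregate all per-partition states into one result
val totals = counts.reduceByKey(_ + _)

Note that the whole partition is consumed before anything is emitted, which mirrors the buffering the original mapper does anyway.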
zero323

A standard approach to setup in Scala would be to use a lazy val:

lazy val someSetupState = { ... }
data.map { x =>
  useState(someSetupState, x)
  ...
}

The above works as long as someSetupState can be instantiated on the tasks (i.e. it does not depend on some local disk of the submitting node). It does not address cleanup, though. For cleanup, Scalding has a method:

    TypedPipe[T]#onComplete(fn: () => Unit): TypedPipe[T]

which is run on each task at the end. As in the mapping example, you can do a shutdown:

    data.map { x =>
      useState(someSetupState, x)
    }
    .onComplete { () =>
      someSetupState.shutdown()
    }
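
Putting the two together inside a Job, a minimal sketch might look like this (the job name, the args keys, and the contents of someSetupState are made up; only the shape matters):

    import com.twitter.scalding._

    class SetupCleanupJob(args: Args) extends Job(args) {
      // "setup()": instantiated lazily on each task the first time it is touched
      lazy val someSetupState: Map[String, Long] = Map("a" -> 1L, "b" -> 2L)

      TypedPipe.from(TextLine(args("input")))
        .map { line => (line, someSetupState.getOrElse(line, 0L)) } // use the per-task state
        .onComplete { () => () /* "cleanup()": release whatever the task holds */ }
        .sumByKey                                                   // aggregate, as a reducer would
        .write(TypedTsv[(String, Long)](args("output")))
    }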

I don't know the equivalent for Spark.

Oscar Boykin
  • Thanks a lot, I'll definitely try it. But how do I output the someSetupState from .onComplete() and not from .map()? – Julias Oct 20 '16 at 10:50