
Is it possible to pass extra arguments to the mapping function in PySpark? Specifically, I have the following code:

import json

raw_data_rdd = sc.textFile("data.json", use_unicode=True)
json_data_rdd = raw_data_rdd.map(lambda line: json.loads(line))
mapped_rdd = json_data_rdd.flatMap(processDataLine)

The function processDataLine takes extra arguments in addition to the JSON object, as:

def processDataLine(dataline, arg1, arg2):
    ...

How can I pass the extra arguments arg1 and arg2 to the flatMap function?

Stan
  • Consider reading [this](http://stackoverflow.com/questions/26959221/pyspark-broadcast-variables-from-local-functions) – Avihoo Mamka Oct 08 '15 at 15:11
  • Thanks @AvihooMamka. As I understand it, I need to use a partial function, but I haven't figured out how to apply it to my case. – Stan Oct 08 '15 at 15:17
  • Why not send to the partial function the processDataLine function and the arguments you want after broadcasting it? – Avihoo Mamka Oct 08 '15 at 15:21

1 Answer

  1. You can use an anonymous function either directly in a flatMap

     json_data_rdd.flatMap(lambda j: processDataLine(j, arg1, arg2))
    

    or to curry processDataLine

     f = lambda j: processDataLine(j, arg1, arg2)
     json_data_rdd.flatMap(f)
    
  2. You can generate processDataLine like this:

     def processDataLine(arg1, arg2):
         def _processDataLine(dataline):
             return ... # Do something with dataline, arg1, arg2
         return _processDataLine
    
     json_data_rdd.flatMap(processDataLine(arg1, arg2))
    
  3. The toolz library provides a useful curry decorator:

     from toolz.functoolz import curry
    
     @curry
     def processDataLine(arg1, arg2, dataline): 
         return ... # Do something with dataline, arg1, arg2
    
     json_data_rdd.flatMap(processDataLine(arg1, arg2))
    

    Note that I've moved the dataline argument to the last position. This is not required, but this way we don't have to use keyword arguments.

  4. Finally, there is functools.partial, already mentioned by Avihoo Mamka in the comments.
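For completeness, a minimal sketch of the functools.partial approach (the function body and argument values here are illustrative, not from the question):

```python
from functools import partial

def processDataLine(dataline, arg1, arg2):
    # Illustrative body: tag the line with both extra arguments.
    return [(arg1, dataline), (arg2, dataline)]

# Bind the extra arguments by keyword so that dataline stays free
# as the single positional argument flatMap will supply:
process = partial(processDataLine, arg1="a", arg2="b")

process("line1")  # [("a", "line1"), ("b", "line1")]

# Then, as in the other approaches:
# json_data_rdd.flatMap(process)
```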

zero323
  • @guilhermecgs You can benchmark this on local collections, but explicit nesting (2.) should be the most efficient, followed by using an anonymous function (1.). Currying / partials could be slightly slower because the mechanism is much more sophisticated than the previous two. Not that I would really worry about it here. – zero323 Apr 12 '17 at 13:30
  • I think there is a mistake in the first example, you mean: f = lambda j: processDataLine(j, arg1, arg2) – Gara Walid Mar 14 '22 at 23:49
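The performance note in the comments can be sanity-checked on plain Python collections, without Spark. A rough sketch (names and bodies are illustrative; absolute timings will vary by machine):

```python
import timeit
from functools import partial

def process(dataline, arg1, arg2):
    return [(arg1, dataline), (arg2, dataline)]

# Approach 2: explicit nesting (closure over the extra arguments)
def make_process(arg1, arg2):
    def _process(dataline):
        return [(arg1, dataline), (arg2, dataline)]
    return _process

nested = make_process("a", "b")
anon = lambda d: process(d, "a", "b")          # approach 1
part = partial(process, arg1="a", arg2="b")    # approach 4

data = list(range(1000))
for name, fn in [("nested", nested), ("lambda", anon), ("partial", part)]:
    t = timeit.timeit(lambda: [fn(d) for d in data], number=100)
    print(f"{name}: {t:.4f}s")
```

All three callables produce identical results; only the call overhead differs.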