
Currently, when I STORE into HDFS, it creates many part files.

Is there any way to store out to a single CSV file?

JasonA

2 Answers

You can do this in a few ways:

  • To set the number of reducers for all Pig operations, you can use the default_parallel property - but this means every single step will use a single reducer, decreasing throughput (see the sketch below):

    set default_parallel 1;

  • Prior to calling STORE, if one of the operations you execute is COGROUP, CROSS, DISTINCT, GROUP, JOIN (inner), JOIN (outer), or ORDER BY, then you can use the PARALLEL 1 keyword to denote the use of a single reducer to complete that command:

    b = GROUP a BY grp PARALLEL 1;

See Pig Cookbook - Parallel Features for more information.
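
For illustration, here is a minimal sketch showing both options in one script; the relation names and paths are hypothetical:

    -- Option 1: force a single reducer for every job in the script
    set default_parallel 1;

    a = LOAD '/data/input' USING PigStorage(',') AS (grp:chararray, val:int);

    -- Option 2: request a single reducer only for this GROUP step
    b = GROUP a BY grp PARALLEL 1;

    -- Either way, this STORE now writes a single part file
    STORE b INTO '/data/output' USING PigStorage(',');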

Chris White

You can also use Hadoop's getmerge command to merge all those part-* files. This is only possible if you run your Pig scripts from the Pig shell (and not from Java).

This has an advantage over the proposed solution: you can still use several reducers to process your data, so your job may run faster, especially if each reducer outputs little data.

    grunt> fs -getmerge <Pig output file> <local file>
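
As a sketch of the full workflow from the grunt shell (the paths here are hypothetical), you can keep the default parallelism, store as usual, and then merge the resulting part files into one local CSV:

    grunt> STORE b INTO '/data/output' USING PigStorage(',');
    grunt> fs -getmerge /data/output /tmp/output.csv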
DoctorBug