
My Hadoop job generates a large number of files on HDFS, and I want to write a separate thread that copies these files from HDFS to S3.

Could anyone point me to a Java API that handles this?

Thanks

RandomQuestion
  • Another approach is to use S3 instead of HDFS with Hadoop; you can find the merits and demerits of this approach here. And if you think it would be suitable to set up S3 for your Hadoop cluster, you can refer here. – user1855490 Dec 07 '12 at 12:17
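
To illustrate the comment above, here is a minimal, hedged sketch of pointing a job's output directly at S3 so no separate HDFS-to-S3 copy is needed. The bucket name, credentials, and job setup are placeholders; the `fs.s3n.*` properties assume the older S3 native (s3n) connector, while current Hadoop releases use `fs.s3a.*` instead.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class S3OutputJobSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Placeholder credentials for the older s3n connector
        // (newer releases use fs.s3a.access.key / fs.s3a.secret.key).
        conf.set("fs.s3n.awsAccessKeyId", "YOUR_ACCESS_KEY");
        conf.set("fs.s3n.awsSecretAccessKey", "YOUR_SECRET_KEY");

        Job job = Job.getInstance(conf, "job-writing-to-s3");
        // Write job output straight to the bucket instead of HDFS.
        FileOutputFormat.setOutputPath(job, new Path("s3n://my-bucket/job-output"));

        // ... set mapper/reducer/input paths as usual, then submit:
        // job.waitForCompletion(true);
    }
}
```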

1 Answer


"Support for the S3 block filesystem was added to the ${HADOOP_HOME}/bin/hadoop distcp tool in Hadoop 0.11.0 (See HADOOP-862). The distcp tool sets up a MapReduce job to run the copy. Using distcp, a cluster of many members can copy lots of data quickly. The number of map tasks is calculated by counting the number of files in the source: i.e. each map task is responsible for the copying one file. Source and target may refer to disparate filesystem types. For example, source might refer to the local filesystem or hdfs with S3 as the target. "

Check out "Running Bulk Copies in and out of S3" here: http://wiki.apache.org/hadoop/AmazonS3
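
If you specifically want to drive the copy from your own Java thread (as the question asks) rather than launching distcp, a minimal sketch using Hadoop's FileSystem/FileUtil API might look like the following. The NameNode address, bucket name, credentials, and paths are placeholders, and the s3n scheme assumes the older S3 native connector (use s3a on current releases).

```java
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FileUtil;
import org.apache.hadoop.fs.Path;

public class HdfsToS3CopyThread extends Thread {
    @Override
    public void run() {
        try {
            Configuration conf = new Configuration();
            // Placeholder credentials for the s3n connector.
            conf.set("fs.s3n.awsAccessKeyId", "YOUR_ACCESS_KEY");
            conf.set("fs.s3n.awsSecretAccessKey", "YOUR_SECRET_KEY");

            FileSystem hdfs = FileSystem.get(URI.create("hdfs://namenode:8020/"), conf);
            FileSystem s3   = FileSystem.get(URI.create("s3n://my-bucket/"), conf);

            Path srcDir = new Path("/user/hadoop/job-output");    // HDFS directory produced by the job
            Path dstDir = new Path("s3n://my-bucket/job-output"); // target S3 "directory"

            for (FileStatus status : hdfs.listStatus(srcDir)) {
                if (status.isDir()) {
                    continue; // skip subdirectories in this simple sketch
                }
                // Copy one file at a time; 'false' means the HDFS source is not deleted.
                FileUtil.copy(hdfs, status.getPath(),
                              s3, new Path(dstDir, status.getPath().getName()),
                              false, conf);
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

// Usage: new HdfsToS3CopyThread().start();
```

For anything beyond a modest number of files, though, the distcp approach above is preferable, since it parallelizes the copy across the cluster instead of pushing everything through a single JVM.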

Joe Stein
  • The `distcp` tool works great to copy files between hdfs and s3, until you hit the 5GB PUT limit on S3. [Hadoop 2.4 fixes this](https://issues.apache.org/jira/browse/HADOOP-9454) but if you have an earlier version, be aware. – Steve Armstrong Apr 08 '15 at 19:10