
I am trying an HBase bulk load through a Java MapReduce program. I am running the program in Eclipse.

But I am getting the following error:

12/06/14 20:04:28 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
12/06/14 20:04:28 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
12/06/14 20:04:29 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
12/06/14 20:04:29 WARN mapred.JobClient: No job jar file set.  User classes may not be found. See JobConf(Class) or JobConf#setJar(String).
12/06/14 20:04:29 INFO input.FileInputFormat: Total input paths to process : 1
12/06/14 20:04:29 WARN snappy.LoadSnappy: Snappy native library not loaded
12/06/14 20:04:29 INFO mapred.JobClient: Running job: job_local_0001
12/06/14 20:04:29 INFO mapred.MapTask: io.sort.mb = 100
12/06/14 20:04:29 INFO mapred.MapTask: data buffer = 79691776/99614720
12/06/14 20:04:29 INFO mapred.MapTask: record buffer = 262144/327680
12/06/14 20:04:29 WARN mapred.LocalJobRunner: job_local_0001
java.lang.IllegalArgumentException: Can't read partitions file
    at org.apache.hadoop.hbase.mapreduce.hadoopbackport.TotalOrderPartitioner.setConf(TotalOrderPartitioner.java:111)
    at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:62)
    at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
    at org.apache.hadoop.mapred.MapTask$NewOutputCollector.<init>(MapTask.java:560)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:639)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:323)
    at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:210)
Caused by: java.io.FileNotFoundException: File _partition.lst does not exist.
    at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:383)
    at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:251)
    at org.apache.hadoop.fs.FileSystem.getLength(FileSystem.java:776)
    at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1424)
    at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1419)
    at org.apache.hadoop.hbase.mapreduce.hadoopbackport.TotalOrderPartitioner.readPartitions(TotalOrderPartitioner.java:296)
    at org.apache.hadoop.hbase.mapreduce.hadoopbackport.TotalOrderPartitioner.setConf(TotalOrderPartitioner.java:82)
    ... 6 more
12/06/14 20:04:30 INFO mapred.JobClient:  map 0% reduce 0%
12/06/14 20:04:30 INFO mapred.JobClient: Job complete: job_local_0001
12/06/14 20:04:30 INFO mapred.JobClient: Counters: 0

I googled a lot but didn't find any solution.

I tried to run the same program from the console and got the following error:

 hadoop jar /home/user/hbase-0.90.4-cdh3u2/lib/zookeeper-3.3.3-cdh3u2.jar /home/user/hadoop-0.20.2-cdh3u2/Test.jar BulkLoadHBase_1 /bulkLoad.txt /out
Exception in thread "main" java.lang.NoSuchMethodException: org.apache.zookeeper.server.quorum.QuorumPeer.main([Ljava.lang.String;)
    at java.lang.Class.getMethod(Class.java:1605)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:180)

My Code:

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.HFileOutputFormat;
import org.apache.hadoop.hbase.mapreduce.PutSortReducer;
import org.apache.hadoop.hbase.mapreduce.hadoopbackport.TotalOrderPartitioner;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.KeyValueTextInputFormat;

public class BulkLoadHBase_1 {

    public static class BulkLoadHBase_1Mapper 
            extends Mapper<Text, Text, ImmutableBytesWritable, Put>{

        public void map(Text key, Text value, Context context
                        ) throws IOException, InterruptedException {

            System.out.println("KEY  "+key.toString());
            System.out.println("VALUES : "+value);
            System.out.println("Context : "+context);

            ImmutableBytesWritable ibw =
                    new ImmutableBytesWritable(Bytes.toBytes(key.toString()));

            String val = value.toString();
            Put p = new Put(Bytes.toBytes(key.toString()));

            // Single column family "cf", qualifier "c".
            p.add(Bytes.toBytes("cf"), Bytes.toBytes("c"), Bytes.toBytes(val));

            context.write(ibw, p);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        Job job = new Job(conf, "bulk-load");

        job.setJarByClass(BulkLoadHBase_1.class);
        job.setMapperClass(BulkLoadHBase_1Mapper.class);

        job.setReducerClass(PutSortReducer.class);
        job.setOutputKeyClass(ImmutableBytesWritable.class);
        job.setOutputValueClass(Put.class);
        job.setPartitionerClass(TotalOrderPartitioner.class);
        job.setInputFormatClass(KeyValueTextInputFormat.class);

        FileInputFormat.addInputPath(job,
                     new Path("/home/user/Desktop/bulkLoad.txt"));
        HFileOutputFormat.setOutputPath(job,
                     new Path("/home/user/Desktop/HBASE_BulkOutput/"));     

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

2 Answers


Did you start HBase in distributed mode? If so, this line:

org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:210)

in your stack trace shows that your MapReduce job is running in local mode instead of distributed mode.
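
One quick check is whether the job is actually pointed at your cluster. A minimal sketch, assuming hypothetical host names for your NameNode and JobTracker (the keys below are the Hadoop 0.20/CDH3 ones):

Configuration conf = HBaseConfiguration.create();
// Placeholder host names -- replace with your own cluster.
// If fs.default.name resolves to file:/// and mapred.job.tracker to
// "local", Hadoop falls back to the LocalJobRunner seen in the trace.
conf.set("fs.default.name", "hdfs://namenode:8020");
conf.set("mapred.job.tracker", "jobtracker:8021");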

Also note that if you want to run the command from the console, your input files must reside on the Hadoop file system (HDFS), not on your regular (e.g. NTFS or EXT3) file system.
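
For example, to copy the input file from the local desktop into HDFS before submitting the job (a sketch using the standard FileSystem API, with the paths taken from the question):

Configuration conf = new Configuration();
FileSystem fs = FileSystem.get(conf);
// Copy the local file into HDFS so the MapReduce job can read it.
fs.copyFromLocalFile(new Path("/home/user/Desktop/bulkLoad.txt"),
                     new Path("/bulkLoad.txt"));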



The problem was that this program needs to be run in distributed mode, and the required JARs have to be shipped with the job.
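
For reference, here is a sketch of a driver that ships the job JAR and generates the missing partitions file. It assumes the target table already exists ("mytable" is a placeholder name); HFileOutputFormat.configureIncrementalLoad wires up HFileOutputFormat, PutSortReducer, and TotalOrderPartitioner, and writes the partitions file in one call:

Configuration conf = HBaseConfiguration.create();
Job job = new Job(conf, "bulk-load");

// setJarByClass tells Hadoop which JAR to ship to the cluster,
// which also fixes the "No job jar file set" warning from the log.
job.setJarByClass(BulkLoadHBase_1.class);
job.setMapperClass(BulkLoadHBase_1Mapper.class);
job.setInputFormatClass(KeyValueTextInputFormat.class);

// Map output types must be set before configureIncrementalLoad,
// which picks PutSortReducer based on the Put value class.
job.setMapOutputKeyClass(ImmutableBytesWritable.class);
job.setMapOutputValueClass(Put.class);

FileInputFormat.addInputPath(job, new Path("/bulkLoad.txt"));
HFileOutputFormat.setOutputPath(job, new Path("/out"));

// Assumes the table already exists in HBase.
HTable table = new HTable(conf, "mytable");
HFileOutputFormat.configureIncrementalLoad(job, table);

System.exit(job.waitForCompletion(true) ? 0 : 1);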
