
I'm currently trying to find a way to deal with unexpected HBase failures in my application. More specifically, the case I'm trying to solve is one where my application inserts data into HBase, and then HBase fails and restarts.

To check how my application reacts to that scenario, I wrote a test that uses the HBase async client in a tight loop, saving the results in HBase. When I start the test I can see rows being saved into the table. If, during this time, I intentionally kill my HBase server and restart it, the client seems to reconnect, but new insertions are not saved into the table.

The code looks like this:

HConnection connection = HConnectionManager.createConnection();
HBaseClient hbaseClient = new HBaseClient(connection);

IntStream.range(0, 10000).forEach(value -> {
    try {
        System.out.println("in value: " + value);
        Thread.sleep(2000);
        Get get = new Get(Bytes.toBytes("key"));
        hbaseClient.get(TableName.valueOf("testTable"), get, new ResponseHandler<Result>() {
            @Override
            public void onSuccess(Result response) {
                System.out.println("SUCCESS");
            }

            @Override
            public void onFailure(IOException e) {
                System.out.println("FAILURE");
            }
        });
        hbaseClient.save("valuekey", "w" + value, new FailureHandler<IOException>() {
            @Override
            public void onFailure(IOException failure) {
                System.out.println("FAILURE");
            }
        });
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
});

This is obviously just a simple test, but what I want is for the async client to successfully save new rows after I restart my HBase server. If I actually print the stack trace in the onFailure method, the asynchronous HBase client gives me:

org.apache.hadoop.hbase.ipc.RpcClient$CallTimeoutException: Call id=303, waitTime=60096, rpcTimeout=60000
    at org.apache.hadoop.hbase.ipc.AsyncRpcChannel.cleanupCalls(AsyncRpcChannel.java:612)
    at org.apache.hadoop.hbase.ipc.AsyncRpcChannel$1.run(AsyncRpcChannel.java:119)
    at io.netty.util.HashedWheelTimer$HashedWheelTimeout.expire(HashedWheelTimer.java:581)
    at io.netty.util.HashedWheelTimer$HashedWheelBucket.expireTimeouts(HashedWheelTimer.java:655)
    at io.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:367)
    at java.lang.Thread.run(Thread.java:745)
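As a stopgap I also considered simply retrying timed-out operations myself, so that a call which fails during the restart gets re-attempted once the server is back up. This is only a sketch in plain Java; the Retry class and all its names are mine, not part of any HBase client API:

```java
import java.util.concurrent.Callable;

// Sketch of a generic retry wrapper (my own code, not an HBase API):
// re-run an operation with exponential backoff so that calls which
// time out while the server restarts get another chance afterwards.
final class Retry {
    static <T> T withBackoff(Callable<T> op, int maxAttempts, long baseDelayMs)
            throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return op.call();
            } catch (Exception e) {
                last = e;
                if (attempt < maxAttempts) {
                    // back off: baseDelayMs, 2x, 4x, ...
                    Thread.sleep(baseDelayMs << (attempt - 1));
                }
            }
        }
        throw last;
    }
}
```

The idea would be to wrap each get/save in `Retry.withBackoff(...)`, but it felt like I was reimplementing what the client's own retry configuration should be doing, which is why I'm asking.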

And so my questions are:

  • How should one deal with a situation like the one I described using this async client?
  • If this async client is no longer relevant, could someone suggest a different client that can perform asynchronous puts? I tried the BufferedMutator, but it doesn't seem to actually flush anything; it just fails with java.lang.IllegalAccessError: tried to access method com.google.common.base.Stopwatch.<init>()V from class org.apache.hadoop.hbase.zookeeper.MetaTableLocator (but this is getting a little off topic, so I won't expand on it)

Thanks

Gideon
    I'm not 100% sure my response should be the answer, but basically I discovered that the way to go in my case was to use HBase high availability. This made my async client not fail when one of the masters failed. Hopefully this will help someone – Gideon Jun 30 '16 at 12:33

1 Answer


It's been quite a long time since I asked this question, but I ended up using HBase high availability instead of finding a way to solve it in code.
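For anyone trying to reproduce this: HBase high availability here just means running one or more backup HMasters, so that clients fail over through ZooKeeper when the active master dies. Roughly, the setup looks like this (the hostname is an example; check the HBase reference guide for your version):

```shell
# conf/backup-masters lists one backup-master hostname per line
# (master2.example.com is a made-up example host):
echo "master2.example.com" >> conf/backup-masters

# On that host, start an HMaster; it registers with ZooKeeper as a
# standby and takes over automatically if the active master fails:
bin/hbase-daemon.sh start master
```

With this in place, my async client rode out a master failure without the timeouts described in the question.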

Gideon