Questions tagged [hbase]

HBase is the Hadoop database (columnar). Use it when you need random, real time read/write access to your Big Data. This project's goal is the hosting of very large tables -- billions of rows X millions of columns -- atop clusters of commodity hardware.

HBase is an open-source, non-relational, distributed, versioned, column-oriented database modeled after Google's Bigtable ("Bigtable: A Distributed Storage System for Structured Data" by Chang et al.) and is written in Java. It is developed as part of the Apache Software Foundation's Apache Hadoop project. Just as Bigtable leverages the distributed data storage provided by the Google File System, HBase provides Bigtable-like capabilities on top of the Hadoop Distributed File System (HDFS). HBase includes:

  • Convenient base classes for backing Hadoop MapReduce jobs with HBase tables, including Cascading, Hive, and Pig source and sink modules
  • Query predicate push down via server side scan and get filters
  • Optimizations for real time queries
  • A Thrift gateway and a REST-ful Web service that supports XML, Protobuf, and binary data encoding options
  • Extensible JRuby-based (JIRB) shell
  • Support for exporting metrics via the Hadoop metrics subsystem to files or Ganglia; or via JMX
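
The "random, real time read/write access" described above is exposed through the client API. As a minimal sketch (not part of the tag wiki), here is a single put and get via the Java client, assuming a pre-created table named "test" with a column family "cf" (both names are hypothetical):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.*;
    import org.apache.hadoop.hbase.util.Bytes;

    public class HBaseHelloWorld {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();   // reads hbase-site.xml from the classpath
            try (Connection connection = ConnectionFactory.createConnection(conf);
                 Table table = connection.getTable(TableName.valueOf("test"))) {
                // Write one cell: row "row1", column cf:greeting
                Put put = new Put(Bytes.toBytes("row1"));
                put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("greeting"), Bytes.toBytes("hello"));
                table.put(put);
                // Read it back
                Get get = new Get(Bytes.toBytes("row1"));
                Result result = table.get(get);
                byte[] value = result.getValue(Bytes.toBytes("cf"), Bytes.toBytes("greeting"));
                System.out.println(Bytes.toString(value));       // prints "hello"
            }
        }
    }
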
6961 questions
2 votes, 1 answer

HBase: truncating a table via the Java API re-enables the truncated table

I am experiencing unexpected behaviour when using the Java API to truncate an HBase table. In detail, I am performing the following operations: disable the table, truncate the table, enable the table. The code corresponding to these operations is the…
riccardo.cardin
  • 7,971
  • 5
  • 57
  • 106
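
For reference, a minimal sketch of truncation with the modern Admin API (HBase 1.0+); note that Admin.truncateTable re-enables the table itself, so an explicit enable afterwards is not needed. The table name below is hypothetical:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class TruncateSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            try (Connection connection = ConnectionFactory.createConnection(conf);
                 Admin admin = connection.getAdmin()) {
                TableName table = TableName.valueOf("my_table");   // hypothetical table name
                if (admin.isTableEnabled(table)) {
                    admin.disableTable(table);                     // truncation requires a disabled table
                }
                admin.truncateTable(table, true);                  // preserveSplits = true; table comes back enabled
                // No explicit enableTable(table) call here: truncateTable already leaves it enabled.
            }
        }
    }
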
2 votes, 2 answers

SparkSQL+Hive+Hbase+HbaseIntegration doesn't work

I am getting an error when trying to connect to a Hive table (which was created through HBaseIntegration) in Spark. Steps I followed: Hive table creation code: CREATE TABLE test.sample(id string,name string) STORED BY…
user6608138
  • 381
  • 1
  • 4
  • 20
2 votes, 1 answer

How to get all the versions of an HBase row

I am trying to run the following command in HBase: scan 'testLastVersion' {VERSIONS=>8} and it returns only the last version of the row. Do you know how I can get all the versions of a row through the command shell and through Java code? Thanks!
MosheCh
  • 99
  • 3
  • 12
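
As a hedged sketch (not an accepted answer): the shell syntax normally takes a comma before the options hash, e.g. scan 'testLastVersion', {VERSIONS => 8}, and the Java client can request multiple versions on a Get, provided the column family itself retains them. Row, family, and qualifier names below are placeholders:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.Cell;
    import org.apache.hadoop.hbase.CellUtil;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.*;
    import org.apache.hadoop.hbase.util.Bytes;

    public class AllVersionsSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            try (Connection connection = ConnectionFactory.createConnection(conf);
                 Table table = connection.getTable(TableName.valueOf("testLastVersion"))) {
                Get get = new Get(Bytes.toBytes("someRow"));       // placeholder row key
                get.setMaxVersions(8);                             // ask for up to 8 versions (family VERSIONS must allow it)
                Result result = table.get(get);
                for (Cell cell : result.rawCells()) {
                    System.out.println(cell.getTimestamp() + " -> "
                            + Bytes.toString(CellUtil.cloneValue(cell)));
                }
            }
        }
    }
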
2 votes, 0 answers

How to export source code from a DB to CSV?

In my database I have a text field named source code where I save source code from any programming language, including all characters, especially in comments. I want to preserve the exact code. How can I export to CSV and show the whole code in one cell? When I…
reihaneh
  • 225
  • 4
  • 18
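
This is not HBase-specific, but the usual trick is RFC 4180-style quoting: wrap the field in double quotes and double any embedded quotes, so newlines and commas stay inside one cell. A small Java sketch under that assumption (the field content is invented for illustration):

    // Minimal RFC 4180-style escaping so a multi-line code field stays in one CSV cell.
    public class CsvEscape {
        static String escape(String field) {
            // Wrap in quotes and double any embedded double quotes.
            return "\"" + field.replace("\"", "\"\"") + "\"";
        }

        public static void main(String[] args) {
            String sourceCode = "int main() {\n  // \"entry point\", as they say\n  return 0;\n}";
            System.out.println("id,source_code");
            System.out.println("1," + escape(sourceCode));   // the whole snippet lands in one cell
        }
    }
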
2 votes, 1 answer

Happybase filtering using rows function

I would like to perform a rows query with Happybase for some known row keys and add a value filter so that only rows matching the filter are returned. In HBase shell you can supply a filter to a get command, like so: get 'meta', 'someuser', {FILTER…
dsimmie
  • 189
  • 1
  • 2
  • 14
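
Happybase itself is Python, but as a rough Java-API analogue of "fetch known row keys and keep only rows whose value matches", one could attach a SingleColumnValueFilter to each Get. This is only a sketch of the idea; the table, family, qualifier, and value below are placeholders:

    import java.util.ArrayList;
    import java.util.List;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.*;
    import org.apache.hadoop.hbase.filter.CompareFilter;
    import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
    import org.apache.hadoop.hbase.util.Bytes;

    public class FilteredGetsSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            try (Connection connection = ConnectionFactory.createConnection(conf);
                 Table table = connection.getTable(TableName.valueOf("meta"))) {
                List<Get> gets = new ArrayList<>();
                for (String rowKey : new String[] {"someuser", "otheruser"}) {   // known row keys
                    Get get = new Get(Bytes.toBytes(rowKey));
                    // Keep the row only if cf:status equals "active" (placeholder column and value)
                    SingleColumnValueFilter filter = new SingleColumnValueFilter(
                            Bytes.toBytes("cf"), Bytes.toBytes("status"),
                            CompareFilter.CompareOp.EQUAL, Bytes.toBytes("active"));
                    filter.setFilterIfMissing(true);   // also drop rows that lack cf:status entirely
                    get.setFilter(filter);
                    gets.add(get);
                }
                for (Result result : table.get(gets)) {
                    if (!result.isEmpty()) {
                        System.out.println(Bytes.toString(result.getRow()));
                    }
                }
            }
        }
    }
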
2 votes, 1 answer

HBase Cell Tags returned in Scan but not in Get

I am reading HBase Cell tags via HBase client. I write the tags via Put.addImmutable(cf, col, version, value, tags). I can verify these tags have been written correctly by scanning HBase: Scan s = new Scan(); s.setFilter(new PageFilter(100)); …
user1310957
2 votes, 1 answer

Can I delete a single data point in OpenTSDB?

I am using OpenTSDB to store my time series data, but when I want to delete a single data point I cannot find the right solution. If I follow their documentation, the data for that whole hour also gets deleted, which is not exactly serving…
Dark Lord
  • 415
  • 5
  • 16
2 votes, 0 answers

Difference between ImmutableBytesWritable class and BytesWritable class in Hadoop

org.apache.hadoop.hbase.io has the class ImmutableBytesWritable and org.apache.hadoop.io has the class BytesWritable, both used to treat a byte sequence as a key or value. I'm not able to understand the difference between the two in relation to the backing buffer as…
insanely_sin
  • 986
  • 1
  • 14
  • 22
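
A quick sketch of the behavioural difference as I understand it: ImmutableBytesWritable wraps the byte array you pass it (plus an offset/length window) without copying, while BytesWritable.set() copies the data into its own growable backing buffer, so getBytes() may return an array longer than getLength():

    import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
    import org.apache.hadoop.hbase.util.Bytes;
    import org.apache.hadoop.io.BytesWritable;

    public class WritableBufferSketch {
        public static void main(String[] args) {
            byte[] data = Bytes.toBytes("hello");

            // ImmutableBytesWritable keeps a reference to 'data' plus an offset/length window.
            ImmutableBytesWritable ibw = new ImmutableBytesWritable(data, 0, data.length);
            System.out.println(ibw.get() == data);          // true: same backing array, no copy
            System.out.println(ibw.getOffset() + "/" + ibw.getLength());

            // BytesWritable copies the data into its own resizable buffer.
            BytesWritable bw = new BytesWritable();
            bw.set(data, 0, data.length);
            System.out.println(bw.getBytes() == data);      // false: a defensive copy was made
            // The backing buffer may be larger than the logical length, so always respect getLength().
            System.out.println(bw.getBytes().length + " >= " + bw.getLength());
        }
    }
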
2 votes, 2 answers

HBase Java API TableNotDisabledException

I have configured Apache HBase 0.94.14 on my local system. I have to communicate with HBase via the Java API. I have written simple code to add a new column family to an existing HBase table. Code for the Java class: // Instantiating configuration…
Hafiz Muhammad Shafiq
  • 8,168
  • 12
  • 63
  • 121
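
In the old 0.94-era HBaseAdmin API, altering a table generally requires disabling it first, which is the usual trigger for TableNotDisabledException. A hedged sketch of the disable/alter/enable pattern, with table and family names as placeholders:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HColumnDescriptor;
    import org.apache.hadoop.hbase.client.HBaseAdmin;

    public class AddColumnFamilySketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            HBaseAdmin admin = new HBaseAdmin(conf);       // 0.94-era admin API
            admin.disableTable("my_table");                // alter operations expect a disabled table here
            admin.addColumn("my_table", new HColumnDescriptor("new_cf"));
            admin.enableTable("my_table");                 // bring the table back online
            admin.close();
        }
    }
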
2 votes, 1 answer

HBase Scan TimeRange Does not Work in Scala

I wrote Scala code to retrieve data based on its time range. Here's my code: object Hbase_Scan_TimeRange { def main(args: Array[String]): Unit = { //===Basic Hbase (Non Deprecated)===Start Logger.getLogger(this.getClass) …
questionasker
  • 2,536
  • 12
  • 55
  • 119
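
The usual API call is Scan.setTimeRange(min, max), where the upper bound is exclusive and both values are epoch milliseconds when cells were written with default timestamps. The question is in Scala, but the call is the same; a small Java sketch with placeholder names and timestamps:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.*;
    import org.apache.hadoop.hbase.util.Bytes;

    public class TimeRangeScanSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            try (Connection connection = ConnectionFactory.createConnection(conf);
                 Table table = connection.getTable(TableName.valueOf("events"))) {   // hypothetical table
                Scan scan = new Scan();
                long min = 1468900000000L;                 // placeholder epoch millis
                long max = 1468990000000L;                 // upper bound is exclusive
                scan.setTimeRange(min, max);               // only cells whose timestamps fall in [min, max)
                try (ResultScanner scanner = table.getScanner(scan)) {
                    for (Result result : scanner) {
                        System.out.println(Bytes.toString(result.getRow()));
                    }
                }
            }
        }
    }
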
2 votes, 0 answers

SparkSQL/DataFrames does not work in spark-shell and application

It does not work when I try to follow the example from the HBase Guide. It throws the following error: java.lang.NullPointerException; the whole info is as follows. I guess an object is null and a method is then called on that null object, but I don't…
StrongYoung
  • 762
  • 1
  • 7
  • 17
2 votes, 0 answers

Syntax error: (hbase):37: unknown regex options - lb

I have fired the below query to load an HDFS file into HBase. The HDFS input path is 'sales_data/ex1data.csv'. Query: hadoop jar /usr/lib/hbase/hbase.jar importtsv -libjars /usr/lib/hbase/lib/guava-11.0.2.jar '-Dimporttsv.separator=,'…
Hema
  • 29
  • 2
2 votes, 2 answers

A connection error in remote mode of Titan-1.0.0+Hbase-0.98.20 using java

I am learning the Titan database. I have run it successfully in local mode. Now I am trying to use Titan in the "Remote Server Mode" described in the Titan documentation. My Titan version is Titan-1.0.0-hadoop1. I have clusters in my LAN including…
Andrew Lee
  • 75
  • 6
2 votes, 2 answers

Saving Huge data to HBase has been very slow

I am saving 14.5 million records to HBase. Each row has 20+ columns. I first tried inserting 0.7 million records, which went very smoothly and finished in 1.7 minutes. Then I tried to insert the actual, full data set of 14.5 million. If I tried to…
Srini
  • 3,334
  • 6
  • 29
  • 64
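
One common approach for write-heavy loads like this is to batch the Puts through a BufferedMutator instead of calling Table.put row by row, so mutations are buffered client-side and flushed in batches. A hedged sketch, with table and column names as placeholders:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.BufferedMutator;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.util.Bytes;

    public class BulkPutSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            try (Connection connection = ConnectionFactory.createConnection(conf);
                 BufferedMutator mutator = connection.getBufferedMutator(TableName.valueOf("big_table"))) {
                for (int i = 0; i < 1_000_000; i++) {       // placeholder volume
                    Put put = new Put(Bytes.toBytes("row-" + i));
                    put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("col"), Bytes.toBytes("value-" + i));
                    mutator.mutate(put);                    // buffered client-side, flushed in batches
                }
                mutator.flush();                            // push any remaining buffered mutations
            }
        }
    }
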
2 votes, 1 answer

Does HBase really scale linearly?

I started to learn HBase and I don't understand how it scales linearly. The problem is that before you install HBase you have to have an HDFS cluster. The HDFS cluster has a master node, of which there can be only one in the whole cluster, so it is a…
Oleksandr
  • 3,574
  • 8
  • 41
  • 78