Questions tagged [apache-phoenix]

For questions about Apache Phoenix. For the Elixir web framework, use phoenix-framework.

702 questions
3
votes
1 answer

Can't get the location for replica 0 in Phoenix HDP2.6

I enabled Phoenix for HBase on the HDP server. But when I try to start sqlline with the command ./sqlline.py localhost:2181:/hbase_unsecure it fails with: Error: org.apache.hadoop.hbase.client.RetriesExhaustedException: Can't…
ForeverLearner
  • 1,901
  • 2
  • 28
  • 51
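
As a sanity check for errors like this, a plain JDBC connection against the same quorum is worth trying; if the sketch below (hostname and znode are the same placeholders used in the question) also hangs on "Can't get the location for replica 0", the ZooKeeper quorum or znode is usually wrong, not sqlline itself:

    import java.sql.DriverManager

    object PhoenixSmokeTest {
      def main(args: Array[String]): Unit = {
        // Same host:port:znode triple that sqlline.py accepts; /hbase_unsecure
        // is the default znode on unsecured HDP clusters.
        val conn = DriverManager.getConnection("jdbc:phoenix:localhost:2181:/hbase_unsecure")
        // SYSTEM.CATALOG always exists once Phoenix is initialized.
        val rs = conn.createStatement().executeQuery("SELECT TABLE_NAME FROM SYSTEM.CATALOG LIMIT 1")
        while (rs.next()) println(rs.getString(1))
        conn.close()
      }
    }
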
3
votes
1 answer

HBase - locality always zero

We have a cluster (with Apache Phoenix as a co-processor) of 6 data nodes co-located with HBase region servers. We have set all the options to enable it: dfs.client.read.shortcircuit true
3
votes
0 answers

How to pass a Pig variable as a parameter in a LOAD statement?

I'm trying to load data from Phoenix into a Pig script for processing. I have a Pig script like so - grain1 = LOAD 'cache' USING PigStorage(',') AS (partitionNumber: chararray, Id: chararray); DUMP grain1; // grain1 dumps Ids correctly.…
seeker
  • 6,841
  • 24
  • 64
  • 100
3
votes
0 answers

PHOENIX - Could not find or load main class $PHOENIX_OPTS

I am trying to execute a sqlline-thin.py file in a terminal, but I get an exception in the Windows command prompt: Error: Could not find or load main class $PHOENIX_OPTS. On Linux it executes properly. But there is no value assigned to $PHOENIX_OPTS. Let…
BASS KARAN
  • 181
  • 6
3
votes
2 answers

Save CSV file to HBase table using Spark and Phoenix

Can someone point me to a working example of saving a CSV file to an HBase table using Spark 2.2? Options that I tried and failed (note: all of them work with Spark 1.6 for me): phoenix-spark hbase-spark it.nerdammer.bigdata :…
abstractKarshit
  • 1,355
  • 2
  • 16
  • 34
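
For questions like this one, the phoenix-spark write path for Spark 2.x is usually sketched as below (table name, zkUrl, and CSV path are placeholders; the Phoenix table must already exist and its columns must match the DataFrame's):

    import org.apache.spark.sql.SparkSession

    object CsvToPhoenix {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder().appName("csv-to-phoenix").getOrCreate()
        // The header row supplies column names, which must match the Phoenix table.
        val df = spark.read.option("header", "true").csv("/data/input.csv")
        df.write
          .format("org.apache.phoenix.spark")
          .mode("overwrite") // phoenix-spark accepts only Overwrite; rows are upserted, not truncated
          .option("table", "MY_TABLE")
          .option("zkUrl", "zkhost:2181:/hbase-unsecure")
          .save()
        spark.stop()
      }
    }
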
3
votes
0 answers

Pass custom object to reducer, get only nulls

I'm new to MapReduce. My mapper outputs a DBWritable object, but in the reducer I can't get any value from the passed object; maybe it wasn't passed at all? Here is my DBWritable code: public class StWritable implements DBWritable,Writable { …
iwish
  • 31
  • 5
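
The usual cause of nulls in the reducer for questions like this is a custom Writable whose readFields() does not mirror write(): Hadoop deserializes a fresh instance between map and reduce, so both methods must handle the same fields in the same order. A minimal sketch (field names are hypothetical):

    import java.io.{DataInput, DataOutput}
    import org.apache.hadoop.io.Writable

    class StWritable(var id: String = "", var count: Int = 0) extends Writable {
      // Serialized on the map side during the shuffle.
      def write(out: DataOutput): Unit = {
        out.writeUTF(id)
        out.writeInt(count)
      }
      // Must read back exactly what write() wrote, in the same order,
      // otherwise the reducer sees empty or null fields.
      def readFields(in: DataInput): Unit = {
        id = in.readUTF()
        count = in.readInt()
      }
    }
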
3
votes
0 answers

Spark DataFrame current_timestamp() issue with Apache Phoenix

I am saving the Spark 1.6 DF below into a Phoenix table. The problem I am facing is that withColumn("create_ts", current_timestamp()) inserts the same timestamp for the entire DF. Please see the example below. I want a unique timestamp in milliseconds for…
nilesh1212
  • 1,561
  • 2
  • 26
  • 60
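
The behaviour in this question follows from current_timestamp() being evaluated once per query, so every row receives the same value. A common workaround, sketched here for Spark 1.6 under the assumption that a Scala UDF is acceptable, is a per-row timestamp:

    import java.sql.Timestamp
    import org.apache.spark.sql.functions.udf

    // Unlike current_timestamp(), a UDF runs once per row.
    val rowTs = udf(() => new Timestamp(System.currentTimeMillis()))
    val withTs = df.withColumn("create_ts", rowTs())

Rows processed within the same millisecond will still collide, so if strict uniqueness is needed the timestamp is usually combined with something like monotonically_increasing_id().
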
3
votes
2 answers

How to change the TTL of an HBase table from Phoenix

I am able to set the TTL of an HBase table at creation time. How do I change the TTL of the table after creation? Is it possible to change the TTL at runtime without disabling the table? Thanks in advance :) Using Hortonworks HDP 2.6 Phoenix…
Rahul
  • 459
  • 2
  • 13
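
Phoenix forwards HBase table options such as TTL through to the underlying column family, so the TTL can be changed after creation with a plain ALTER TABLE, without disabling the table. A sketch over JDBC (connection string and table name are placeholders):

    import java.sql.DriverManager

    val conn = DriverManager.getConnection("jdbc:phoenix:zkhost:2181:/hbase-unsecure")
    // TTL is an HBase column-family property, in seconds; 86400 = one day.
    conn.createStatement().execute("ALTER TABLE MY_TABLE SET TTL = 86400")
    conn.close()
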
3
votes
2 answers

Implementing change in ThreadPoolSize on client side - JDBC driver Apache Phoenix

I have recently set up a JDBC driver to connect to a Hadoop DB using Apache Phoenix. Basic queries in SQuirreL have worked well (for example, "select * from datafile"), but as soon as I ask a slightly more complicated query (e.g., "select column1 from…
mmcclarty
  • 31
  • 3
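
For thread-pool questions like this, phoenix.query.threadPoolSize is a client-side setting read when the connection services start, so it normally lives in the hbase-site.xml on the client's classpath. Passing it as connection properties is a possible alternative, sketched here with the caveat that the pool is created once per JVM, so it only takes effect for the first connection:

    import java.sql.DriverManager
    import java.util.Properties

    val props = new Properties()
    props.setProperty("phoenix.query.threadPoolSize", "256") // default is 128
    props.setProperty("phoenix.query.queueSize", "10000")    // queue backing the pool
    val conn = DriverManager.getConnection("jdbc:phoenix:zkhost:2181:/hbase-unsecure", props)
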
3
votes
0 answers

Phoenix Upsert is delayed

I use the following line to upsert data into HBase using Phoenix: upsert into fraud_xxx_y.detection_details (call_date_hour, msisdn, insert_time, call_time, source, shutdown_file, imsi, imei, cell_id, lac, vendor, model, calls, cdt_duration) values…
sparkDabbler
  • 518
  • 2
  • 7
  • 20
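
A "delayed" upsert in Phoenix is usually an uncommitted one: the client buffers mutations and only flushes them to HBase on commit. A minimal sketch (the table and columns are abbreviated from the question):

    import java.sql.DriverManager

    val conn = DriverManager.getConnection("jdbc:phoenix:zkhost:2181:/hbase-unsecure")
    conn.setAutoCommit(true) // flush every UPSERT immediately
    val stmt = conn.prepareStatement("UPSERT INTO DETECTION_DETAILS (id, msisdn) VALUES (?, ?)")
    stmt.setLong(1, 1L)
    stmt.setString(2, "123456789")
    stmt.executeUpdate()
    // With autocommit off, nothing reaches HBase until conn.commit() is called.
    conn.close()
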
3
votes
0 answers

Can Spark DataFrames be stored as values in HashMap and accessed later?

I am new to Spark/Scala. I am trying to use Spark in the following scenario - there is an input transaction table and several reference or lookup tables. All tables are stored in HBase and accessed in Spark via the Phoenix JDBC driver. The lookup tables can be…
mbaxi
  • 1,301
  • 1
  • 8
  • 28
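
A DataFrame is only a query plan plus a schema, so holding references in a Map and reusing them later works; caching makes the reuse cheap. A sketch, assuming a SQLContext is in scope and that the lookup table names and connection URL are placeholders:

    import org.apache.spark.sql.DataFrame
    import scala.collection.mutable

    val lookups = mutable.Map[String, DataFrame]()
    for (table <- Seq("LOOKUP_A", "LOOKUP_B")) {
      val df = sqlContext.read
        .format("jdbc")
        .option("driver", "org.apache.phoenix.jdbc.PhoenixDriver")
        .option("url", "jdbc:phoenix:zkhost:2181:/hbase-unsecure")
        .option("dbtable", table)
        .load()
      lookups(table) = df.cache() // materialized on first action, reused afterwards
    }
    // Later, e.g.: lookups("LOOKUP_A").join(transactions, "ID")
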
3
votes
1 answer

Filtering from Phoenix when loading a table

I would like to know how exactly this works: df = sqlContext.read \ .format("org.apache.phoenix.spark") \ .option("table", "TABLE") \ .option("zkUrl", "10.0.0.11:2181:/hbase-unsecure") \ .load() if this is…
Pablo Castilla
  • 2,723
  • 2
  • 28
  • 33
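
The short version of the answer here: load() is lazy, and the phoenix-spark relation implements filter pushdown, so a filter chained onto the load becomes part of the Phoenix scan rather than a full-table read followed by client-side filtering. A sketch (the filter column is hypothetical):

    val df = sqlContext.read
      .format("org.apache.phoenix.spark")
      .option("table", "TABLE")
      .option("zkUrl", "10.0.0.11:2181:/hbase-unsecure")
      .load()
      .filter("TENANT_ID = 'abc'") // pushed down into the Phoenix query
    df.explain() // PushedFilters in the physical plan confirms the pushdown
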
3
votes
0 answers

Spark phoenixTableAsRDD not fetching complete record values

I'm having an odd problem: it seems that when I'm fetching the data from HBase using the Spark Phoenix integration val rdd = sc.phoenixTableAsRDD(tableName, allColumns, zkUrl = Some(hostPort).map(tupleToObject) I'm getting an RDD with all the records, but few…
Felix
  • 140
  • 10
3
votes
3 answers

Apache Spark ways to Read and Write From Apache Phoenix in Java

Can anyone provide me with some examples to read a DataFrame and Dataset (in Spark 2.0) from Phoenix (a complete table and also using a query) and write a DataFrame and Dataset (in Spark 2.0) to Phoenix, in Apache Spark in Java? There aren't any…
Kiba
  • 399
  • 1
  • 4
  • 16
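
A hedged sketch of the usual phoenix-spark read/write pair for Spark 2.0; it is written in Scala here, but the same DataFrameReader/DataFrameWriter chain compiles unchanged from Java (spark.read().format(...)...). Table names and the zkUrl are placeholders:

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().appName("phoenix-rw").getOrCreate()
    // Read a complete table.
    val df = spark.read
      .format("org.apache.phoenix.spark")
      .option("table", "SOURCE_TABLE")
      .option("zkUrl", "zkhost:2181:/hbase-unsecure")
      .load()
    // Write back; phoenix-spark upserts rows and requires Overwrite mode.
    df.write
      .format("org.apache.phoenix.spark")
      .mode("overwrite")
      .option("table", "TARGET_TABLE")
      .option("zkUrl", "zkhost:2181:/hbase-unsecure")
      .save()

Reading with an arbitrary query rather than a whole table usually goes through Spark's plain JDBC source with the Phoenix driver instead, since the phoenix-spark DataSource takes only a table name.
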
3
votes
0 answers

Trigger-like mechanism in Apache Phoenix - HBase

I have to ingest loads of data every day into HBase. On average I load 102*(10^6) records into HBase. However, I can't just load this data into HBase, since I have to compare each record with data from 1 month earlier and check for duplicates. In case there…
Viglia
  • 71
  • 6
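
Phoenix has no triggers, but since 4.9 UPSERT supports an atomic ON DUPLICATE KEY clause, which covers "insert only if this key is not already present" without a separate read, provided the de-duplication criterion is encoded in the row key. A sketch with a hypothetical table and columns:

    import java.sql.DriverManager

    val conn = DriverManager.getConnection("jdbc:phoenix:zkhost:2181:/hbase-unsecure")
    conn.setAutoCommit(true)
    // Checked and applied atomically on the region server.
    conn.createStatement().executeUpdate(
      "UPSERT INTO EVENTS (event_key, payload) VALUES ('k1', 'v1') " +
      "ON DUPLICATE KEY IGNORE")
    conn.close()
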