Questions tagged [apache-calcite]

Not to be confused with the Apache HTTP Server.

Apache Calcite is a data management framework. It has an implementation of relational algebra, an extensible cost-based optimizer, and an optional SQL parser and JDBC driver.

Calcite is used by Apache Hive and Apache Drill as their query optimizers, and you can use it as a framework to build your own data engine.

Calcite was previously known as Optiq.

255 questions
0 votes, 1 answer

Calcite SqlParser fails on "int(11)" data-type specification in CREATE TABLE statement using MYSQL_5 conformance

I have a project that requires using Calcite's SQL parser to parse a large number of DDL statements written in a heavily MySQL-flavored dialect. I have isolated an issue that can be illustrated with this specific example: create table `t1` (x int(11)) The…
Alex R
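For this kind of setup, the parser configuration usually looks like the sketch below: CREATE TABLE is not in Calcite's core grammar, so the DDL parser factory from the calcite-server module is needed, and conformance is set on the parser config. Note that conformance governs SQL semantics rather than the type grammar, so a display width like int(11) may still be rejected; stripping the "(11)" before parsing is only a suggested workaround, not a documented fix.

```java
import org.apache.calcite.sql.SqlNode;
import org.apache.calcite.sql.parser.SqlParser;
import org.apache.calcite.sql.parser.ddl.SqlDdlParserImpl;
import org.apache.calcite.sql.validate.SqlConformanceEnum;

public class DdlParseSketch {
  public static void main(String[] args) throws Exception {
    // SqlDdlParserImpl (from calcite-server) extends the grammar with DDL.
    SqlParser.Config config = SqlParser.config()
        .withParserFactory(SqlDdlParserImpl.FACTORY)
        .withConformance(SqlConformanceEnum.MYSQL_5);
    // "int" parses; "int(11)" may not, since conformance does not
    // change the type grammar. Pre-stripping the display width is one
    // pragmatic (assumed, not documented) workaround.
    String ddl = "create table t1 (x int)";
    SqlNode node = SqlParser.create(ddl, config).parseStmt();
    System.out.println(node.getKind());
  }
}
```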
0 votes, 1 answer

What syntax can I use for quoted identifiers in Flink Table SQL?

I'm trying to use quoted identifiers in Flink (mainly because I have some column names that conflict with keywords like year), but I can't get them to parse. I boiled it down to a minimal failing example: EnvironmentSettings settings =…
RubenLaguna
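Flink's default SQL dialect quotes identifiers with backticks rather than double quotes, which is the usual answer to this question. A minimal sketch; the datagen connector and the table/column names here are assumptions for illustration:

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class QuotedIdSketch {
  public static void main(String[] args) {
    TableEnvironment tEnv = TableEnvironment.create(
        EnvironmentSettings.newInstance().inStreamingMode().build());
    // In Flink's default dialect a column named "year" is written `year`
    // (backticks), not "year" (double quotes).
    tEnv.executeSql(
        "CREATE TABLE src (`year` INT, name STRING) "
            + "WITH ('connector' = 'datagen')");
    tEnv.sqlQuery("SELECT `year` FROM src").printSchema();
  }
}
```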
0 votes, 1 answer

Connecting to JDBC database via Calcite driver using sqlline

I would like to connect to a JDBC database, e.g. Postgres, via the Calcite driver using the sqlline shell-script wrapper included in the Calcite git repo. The problem I'm facing is how to specify the target JDBC Postgres driver. Initially I tried…
David Kubecka
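The usual way to point sqlline at Postgres through Calcite is a model file referenced on the connect URL (jdbc:calcite:model=model.json), with the Postgres driver jar on the classpath. A sketch of such a model; the host, database name, and credentials are placeholders:

```json
{
  "version": "1.0",
  "defaultSchema": "pg",
  "schemas": [
    {
      "name": "pg",
      "type": "jdbc",
      "jdbcDriver": "org.postgresql.Driver",
      "jdbcUrl": "jdbc:postgresql://localhost:5432/mydb",
      "jdbcUser": "postgres",
      "jdbcPassword": "secret"
    }
  ]
}
```

Then inside sqlline: `!connect jdbc:calcite:model=model.json admin admin` — the user/password on this line are for the Calcite connection itself, not for Postgres, which is configured in the model.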
0 votes, 2 answers

How to read Druid data using the JDBC driver with Spark?

How can I read data from Druid using Spark and the Avatica JDBC driver? This is the Avatica JDBC documentation. Reading data from Druid using Python and the jaydebeapi module, I succeeded with code like the following: $ python import jaydebeapi conn =…
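A Java sketch of the Spark side of this: org.apache.calcite.avatica.remote.Driver is Avatica's remote driver class and /druid/v2/sql/avatica/ is Druid's Avatica endpoint, but the host, port, and the wikipedia datasource are placeholders. Be aware that Spark's JDBC source issues dialect-specific probe queries that Avatica/Druid can reject, which is a likely cause of the failures asked about here:

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class DruidJdbcSketch {
  public static void main(String[] args) {
    SparkSession spark = SparkSession.builder()
        .appName("druid-avatica")
        .master("local[*]")
        .getOrCreate();
    Dataset<Row> df = spark.read()
        .format("jdbc")
        .option("driver", "org.apache.calcite.avatica.remote.Driver")
        // host/port and datasource name are placeholders
        .option("url",
            "jdbc:avatica:remote:url=http://localhost:8082/druid/v2/sql/avatica/")
        .option("query", "SELECT * FROM wikipedia")
        .load();
    df.show();
  }
}
```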
0 votes, 1 answer

Using Spark JDBC and Avatica to read records from a table in Apache Druid

I am trying to create a DataFrame in Spark that would contain all records from a table in Apache Druid, and I am doing this using JDBC. Druid seems to be using the Calcite-Avatica JDBC driver (mentioned here). df =…
thisisshantzz
0 votes, 2 answers

RelNode of a query whose FROM clause itself contains a query

I want to fetch results from a table where I ORDER BY the column id, but I don't want id to be present in the result. I can achieve this using the following query: SELECT COALESCE (col1, '**') FROM (select col1, id FROM myDataSet.myTable WHERE col4 =…
Abhishek Dasgupta
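With RelBuilder, ordering by a column and then dropping it is a sort followed by a project that omits the key. A self-contained sketch: the table and column names come from the question, but the toy in-memory schema and the 'v' filter literal are assumptions added so the example runs on its own:

```java
import org.apache.calcite.plan.RelOptUtil;
import org.apache.calcite.rel.RelNode;
import org.apache.calcite.rel.type.RelDataType;
import org.apache.calcite.rel.type.RelDataTypeFactory;
import org.apache.calcite.schema.SchemaPlus;
import org.apache.calcite.schema.impl.AbstractTable;
import org.apache.calcite.sql.fun.SqlStdOperatorTable;
import org.apache.calcite.sql.type.SqlTypeName;
import org.apache.calcite.tools.Frameworks;
import org.apache.calcite.tools.RelBuilder;

public class SortThenDropSketch {
  static RelNode buildPlan() {
    SchemaPlus root = Frameworks.createRootSchema(true);
    // Toy table standing in for myDataSet.myTable from the question.
    root.add("myTable", new AbstractTable() {
      @Override public RelDataType getRowType(RelDataTypeFactory tf) {
        return tf.builder()
            .add("col1", SqlTypeName.VARCHAR)
            .add("id", SqlTypeName.INTEGER)
            .add("col4", SqlTypeName.VARCHAR)
            .build();
      }
    });
    RelBuilder builder = RelBuilder.create(
        Frameworks.newConfigBuilder().defaultSchema(root).build());
    return builder
        .scan("myTable")
        .filter(builder.equals(builder.field("col4"), builder.literal("v")))
        .sort(builder.field("id"))
        // the final projection keeps COALESCE(col1, '**') and drops id
        .project(builder.call(SqlStdOperatorTable.COALESCE,
            builder.field("col1"), builder.literal("**")))
        .build();
  }

  public static void main(String[] args) {
    System.out.println(RelOptUtil.toString(buildPlan()));
  }
}
```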
0 votes, 1 answer

How to achieve STRING_AGG in a RelNode?

I have a query where I want to concatenate all the rows with the delimiter ,. I can easily achieve this in SQL using STRING_AGG. How do I create a RelNode for the following query? SELECT STRING_AGG(CONCAT(col1, col2, col3), ',') FROM table; Is there a…
Abhishek Dasgupta
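One hedged sketch of an answer: LISTAGG is Calcite's standard-SQL spelling of STRING_AGG (newer versions also expose a BigQuery/PostgreSQL-style STRING_AGG in SqlLibraryOperators, but I treat that as version-dependent). The fragment below assumes a RelBuilder already configured with a schema containing "table" with columns col1..col3; the three-way CONCAT is built from Calcite's binary concatenation operator:

```java
import org.apache.calcite.rel.RelNode;
import org.apache.calcite.sql.fun.SqlStdOperatorTable;
import org.apache.calcite.tools.RelBuilder;

public class StringAggSketch {
  // Assumes "table" is registered in the builder's schema.
  static RelNode stringAgg(RelBuilder builder) {
    return builder
        .scan("table")
        // CONCAT(col1, col2, col3) via nested binary concatenation
        .project(builder.call(SqlStdOperatorTable.CONCAT,
            builder.call(SqlStdOperatorTable.CONCAT,
                builder.field("col1"), builder.field("col2")),
            builder.field("col3")))
        // LISTAGG over the whole input (empty group key), ',' separator
        .aggregate(builder.groupKey(),
            builder.aggregateCall(SqlStdOperatorTable.LISTAGG,
                builder.field(0), builder.literal(",")))
        .build();
  }
}
```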
0 votes, 1 answer

Create RelNode of a select query with concat

I went through the documentation of Apache Calcite. Is the RelNode correct for the following query in BigQuery? SELECT CONCAT('a or b',' ', '\n', first_name) FROM foo.schema.employee WHERE first_name = 'name'; relNode = builder …
Abhishek Dasgupta
0 votes, 1 answer

Where do I set the Calcite Elasticsearch username/password?

I am trying to use Apache Calcite to connect to Elasticsearch, and am running into problems setting the username and password. I have tried to configure the username/password with an operand (based on JSON), and with Properties(DriverManager.getConnection(String url,…
0 votes, 1 answer

There are not enough rules to produce a node with desired properties

I would like to use the Calcite volcano planner to optimise a query. It doesn't work and returns the exception: There are not enough rules to produce a node with desired properties: convention=NONE, sort=[]. All the inputs have relevant nodes, however…
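In my experience this error usually means no implementation rules are registered for the target convention, or the root was left at convention=NONE. A hedged sketch of the typical fix, assuming the Enumerable convention is the goal; the exact rule set depends on which operators appear in the plan, and the surrounding planner setup is elided:

```java
import org.apache.calcite.adapter.enumerable.EnumerableConvention;
import org.apache.calcite.adapter.enumerable.EnumerableRules;
import org.apache.calcite.plan.RelTraitSet;
import org.apache.calcite.plan.volcano.VolcanoPlanner;
import org.apache.calcite.rel.RelNode;

public class VolcanoSketch {
  // Register implementation rules for the desired convention and
  // convert the root's traits away from NONE before findBestExp().
  static RelNode optimize(VolcanoPlanner planner, RelNode logicalRoot) {
    planner.addRule(EnumerableRules.ENUMERABLE_TABLE_SCAN_RULE);
    planner.addRule(EnumerableRules.ENUMERABLE_PROJECT_RULE);
    planner.addRule(EnumerableRules.ENUMERABLE_FILTER_RULE);
    planner.addRule(EnumerableRules.ENUMERABLE_SORT_RULE);
    RelTraitSet desired = logicalRoot.getTraitSet()
        .replace(EnumerableConvention.INSTANCE)
        .simplify();
    planner.setRoot(planner.changeTraits(logicalRoot, desired));
    return planner.findBestExp();
  }
}
```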
0 votes, 1 answer

In NiFi, with the QueryRecord processor, can we add a new column that is a regex of another column?

In NiFi, with the QueryRecord processor, can we add a new column that is a regex extraction of another column in a query? For example: SELECT info, SUBSTRING(info, "([^\s]+)") as f_name FROM FLOWFILE I don't want to split my flowfile, ExtractText, UpdateAttributes,…
mongotop
0 votes, 0 answers

How do I add an Apache Calcite test to SqlToRelConverterTest.java?

In Apache Calcite, how do I add a test to SqlToRelConverterTest.java? I've cloned and renamed a simple existing test case as a proof of concept, but when I run the new test I get: plan ==> expected: <${plan}> but was: < LogicalSort(sort0=[$0],…
seanb
0 votes, 1 answer

Is Calcite SqlParser thread-safe?

While running a test like this, I get different exceptions on each run. private static void testInParallelCaliciteParser() { SqlParser parser = SqlParser.create("select * from test", …
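A SqlParser instance holds mutable parse state, so sharing one across threads races; creating a fresh parser per parse call is the usual pattern. That SqlParser is not thread-safe is my reading of its design rather than a documented guarantee, so treat the sketch below as hedged:

```java
import java.util.Collections;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

import org.apache.calcite.sql.SqlNode;
import org.apache.calcite.sql.parser.SqlParser;

public class ParallelParseSketch {
  public static void main(String[] args) throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(8);
    // One parser per parse call: SqlParser.create(...) is cheap,
    // and nothing mutable is shared between threads.
    Callable<SqlNode> task =
        () -> SqlParser.create("select * from test").parseQuery();
    List<Future<SqlNode>> results =
        pool.invokeAll(Collections.nCopies(100, task));
    for (Future<SqlNode> f : results) {
      f.get();  // propagates any parse failure
    }
    pool.shutdown();
    System.out.println("all parses succeeded");
  }
}
```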
0 votes, 2 answers

How to select a set of fields from input data as an array of repeated fields in beam SQL

Problem statement: I have an input PCollection with the following fields: { firstname_1, lastname_1, dob, firstname_2, lastname_2, firstname_3, lastname_3, } Then I execute a Beam SQL operation such that the output of…
0 votes, 0 answers

Adding a Calcite UDF with one argument as a SQL type (varchar, int, etc.), like CONVERT() in SQL

I am trying to work with Apache Calcite's SqlParser and I need to implement a UDF with syntax similar to the CONVERT() function, i.e., FUNC_NAME(col_name,SQL_TYPE). How can I achieve the same behaviour? Edit: with the return type the same as the second argument of…
Faiz