
I'm trying to create a table with two columns, as below:

CREATE TABLE test1 (col1 INT, col2 ARRAY<DECIMAL>) USING column OPTIONS (BUCKETS '5');

The table is created successfully, but when I try to insert data into it, it does not accept any format of array. I've tried the following queries:

insert into test1 values(1,Array(Decimal("1"), Decimal("2")));

insert into test1 values(1,Array(1,2));

insert into test1 values(1,[1,2,1]);

insert into test1 values(1,"1,2,1");

insert into test1 values(1,<1,2,1>);

etc.

Please help!

techie95

1 Answer


There is an open ticket for this: https://jira.snappydata.io/browse/SNAP-1284. It will be addressed in the next release for VALUES strings (JSON strings and Spark-compatible strings).

The Spark Catalyst-compatible format will work:

insert into test1 select 1, array(1, 2);
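
If your build rejects assigning an integer array to the ARRAY<DECIMAL> column, casting each element explicitly is a possible workaround. A minimal pyspark sketch, assuming spark is a session already connected to the cluster (not verified against SnappyData):

# Hedged workaround: cast each element to DECIMAL explicitly in case
# the implicit int-to-decimal conversion is rejected on this build.
spark.sql("insert into test1 select 1, array(cast(1 as decimal), cast(2 as decimal))")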

When selecting, the data is shipped in serialized form by default and shown as binary. For now you have to use the "complexTypeAsJson" query hint to show it as JSON:

select * from test1 --+complexTypeAsJson(true);

Support for displaying these in a simpler string format by default will be added in the next release.

One other thing to note in your example is the prime value for BUCKETS. A prime was documented as preferred in previous releases, but as of the 1.0 release it is recommended to use a power of two or some other even number (e.g. the total number of cores in your cluster can be a good choice); perhaps some examples still show the older recommendation.
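
Applied to the table above, the DDL would look something like this (a sketch only; BUCKETS '8' is an arbitrary even value, not a tuned choice):

# Illustrative only: the same table with an even bucket count, per the
# 1.0 recommendation. '8' is an example value, not a tuned one.
spark.sql("CREATE TABLE test1 (col1 INT, col2 ARRAY<DECIMAL>) USING column OPTIONS (BUCKETS '8')")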

Sumedh
  • Thanks. The insert works, but when I try to retrieve the data it is not displaying anything. – techie95 Nov 02 '17 at 04:55
  • Sorry, the JSON display was broken in the 1.0 release: https://jira.snappydata.io/browse/SNAP-2056. You will need a build off master, or wait for 1.0.1, for the fix. If you remove the hint, it will show in binary hex format. As of now you will not be able to see it, but you can get the original objects either using the Spark APIs, or using ComplexTypeSerializer on the client side (https://github.com/SnappyDataInc/snappydata/blob/master/examples/src/main/scala/org/apache/spark/examples/snappydata/JDBCWithComplexTypes.scala). This will be cleaned up in 1.0.1. – Sumedh Nov 02 '17 at 05:12
  • Is there any clear documentation about these complex data types? We are using Python for the CRUD operations, so we need the complex data to satisfy our schema. – techie95 Nov 02 '17 at 05:14
  • If you use pyspark for operations with the Spark API, then these should just work as they do in Spark (see the sketch after this list). For JDBC, the example linked above shows the usage. JDBC support for these types is still being cleaned up (the main problem being how to efficiently reconstruct the objects on the client side, where the required classes may not be present). – Sumedh Nov 02 '17 at 10:21
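
To make the Spark-API route concrete for Python, here is a minimal pyspark sketch of round-tripping the ARRAY<DECIMAL> column. It assumes the session is already configured against the SnappyData cluster (setup omitted); the app name and the DecimalType(10, 0) precision are illustrative assumptions, and test1 is the column table created above.

from decimal import Decimal
from pyspark.sql import SparkSession
from pyspark.sql.types import (StructType, StructField, IntegerType,
                               ArrayType, DecimalType)

# Assumes the session is configured to talk to the SnappyData cluster
# (setup omitted); test1 is the column table created above.
spark = SparkSession.builder.appName("array-decimal-demo").getOrCreate()

schema = StructType([
    StructField("col1", IntegerType()),
    StructField("col2", ArrayType(DecimalType(10, 0))),
])

# Build a one-row DataFrame with an array-of-decimal column and append it.
df = spark.createDataFrame([(1, [Decimal(1), Decimal(2)])], schema)
df.write.insertInto("test1")

# Reading back through the Spark API yields real array values, so no
# JSON hint is needed on this path.
spark.table("test1").show()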