
I have a table that uses a JSON object and an array as the data types for two of its fields. My table schema in Scala is:

    snSession.sql("CREATE TABLE subscriber_new11 (ID int, skills Map<STRING,INTEGER>) USING column OPTIONS (PARTITION_BY 'ID', OVERFLOW 'true', EVICTION_BY 'LRUHEAPPERCENT')");

My Java code is:

    PreparedStatement s2 = snappy.prepareStatement(
            "insert into APP.SUBSCRIBER_NEW11 (ID, SKILLS) values (?, ?)");
    JSONObject obj = new JSONObject();
    obj.put(1, 1);
    obj.put(2, 2);
    String str = obj.toString(); // serialize only after the object is populated
    s2.setObject(26, obj);
    s2.addBatch();
    l1 = s2.executeBatch();

I get this error when executing it:

    SEVERE: null
java.sql.SQLException: (SQLState=XCL12 Severity=20000) An attempt was made to put a data value of type 'org.json.simple.JSONObject' into a data value of type 'Blob' for column '26'.
    at com.pivotal.gemfirexd.internal.shared.common.error.DefaultExceptionFactory30.getSQLException(DefaultExceptionFactory30.java:44)
    at com.pivotal.gemfirexd.internal.shared.common.error.DefaultExceptionFactory30.getSQLException(DefaultExceptionFactory30.java:63)
    at com.pivotal.gemfirexd.internal.shared.common.error.ExceptionUtil.newSQLException(ExceptionUtil.java:158)
    at io.snappydata.thrift.common.Converters.newTypeSetConversionException(Converters.java:3014)
    at io.snappydata.thrift.common.Converters.newTypeSetConversionException(Converters.java:3021)
    at io.snappydata.thrift.common.Converters$14.setObject(Converters.java:2126)
    at io.snappydata.thrift.common.Converters$21.setObject(Converters.java:2874)
    at io.snappydata.thrift.internal.ClientPreparedStatement.setObject(ClientPreparedStatement.java:611)
    at snappy.SnappyOps.upsert(SnappyOps.java:117)
    at snappy.Mailthread.DataPush(Mailthread.java:55)
    at snappy.Mailthread.run(Mailthread.java:36)
    at java.lang.Thread.run(Thread.java:748)

So I converted the JSON object to a Blob by adding this:

    Blob blob = snappy.createBlob();
    blob.setBytes(1, str.getBytes());
    s2.setBlob(26, blob); // pass the Blob instead of the JSONObject

But when I retrieve the data from SnappyData with

    select skills from subscriber_new11 limit 10;

the SnappyData server goes down with this error:

    ERROR 38000: (SQLState=38000 Severity=20000) (Server=host1/103.18.248.32[1529] Thread=ThriftProcessor-0) The exception 'Job aborted due to stage failure: Task 0 in stage 18.0 failed 4 times, most recent failure: Lost task 0.3 in stage 18.0 (TID 29, host1, executor 103.18.248.32(332515):52609): java.lang.AssertionError: assertion failed
    at scala.Predef$.assert(Predef.scala:156)
    at org.apache.spark.sql.catalyst.util.SerializedMap.pointTo(SerializedMap.scala:78)
    at org.apache.spark.sql.execution.row.ResultSetDecoder.readMap(ResultSetDecoder.scala:134)
    at org.apache.spark.sql.execution.row.ResultSetDecoder.readMap(ResultSetDecoder.scala:32)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(generated.java:180)
    at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
    at org.apache.spark.sql.execution.WholeStageCodegenRDD$$anon$2.hasNext(WholeStageCodegenExec.scala:571)
    at org.apache.spark.sql.execution.WholeStageCodegenRDD$$anon$1.hasNext(WholeStageCodegenExec.scala:508)
    at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:389)
    at org.apache.spark.sql.CachedDataFrame$.apply(CachedDataFrame.scala:451)
    at org.apache.spark.sql.CachedDataFrame$.apply(CachedDataFrame.scala:409)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:95)
    at org.apache.spark.scheduler.Task.run(Task.scala:126)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:326)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at org.apache.spark.executor.SnappyExecutor$$anon$2$$anon$3.run(SnappyExecutor.scala:57)
    at java.lang.Thread.run(Thread.java:748)

    Driver stacktrace:' was thrown while evaluating an expression.

1 Answer


You can refer to the JDBCWithComplexTypes.scala class from the SnappyData examples, which explains how to deal with complex data types over a JDBC client connection. You should use ComplexTypeSerializer to serialize the map (or array) object before setting the value on the PreparedStatement.
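
For reference, here is a minimal sketch of that approach adapted to the question's table. It follows the pattern in JDBCWithComplexTypes.scala; the ComplexTypeSerializer class (com.pivotal.gemfirexd.snappy.ComplexTypeSerializer in recent releases) and its create/serialize signatures should be verified against the example shipped with your SnappyData version:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.util.HashMap;
    import java.util.Map;

    import com.pivotal.gemfirexd.snappy.ComplexTypeSerializer;

    public class MapColumnInsert {
        public static void main(String[] args) throws Exception {
            // Thrift JDBC connection to the SnappyData locator (default client port 1527)
            Connection conn = DriverManager.getConnection("jdbc:snappydata://localhost:1527/");

            // One serializer per complex column; it picks up the
            // Map<STRING,INTEGER> schema of SKILLS from the catalog
            ComplexTypeSerializer serializer =
                    ComplexTypeSerializer.create("APP.SUBSCRIBER_NEW11", "SKILLS", conn);

            PreparedStatement ps = conn.prepareStatement(
                    "insert into APP.SUBSCRIBER_NEW11 (ID, SKILLS) values (?, ?)");

            // Keys are strings to match the Map<STRING,INTEGER> column type
            Map<String, Integer> skills = new HashMap<>();
            skills.put("1", 1);
            skills.put("2", 2);

            ps.setInt(1, 1);
            // Serialize the map into the internal format the column expects,
            // instead of writing raw JSON bytes into the Blob
            ps.setBytes(2, serializer.serialize(skills));
            ps.executeUpdate();
            conn.close();
        }
    }

The key point is that the bytes stored in a Map column are SnappyData's internal serialized-map format, not JSON text, which is why inserting raw JSON bytes through a Blob made the later SELECT fail in SerializedMap.pointTo.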
