I have created a table in Cassandra with CQL3 and I want to store some data from Pig into it. I created the table with:
CREATE TABLE test
(firstname VARCHAR,
surname VARCHAR,
averagekm DOUBLE,
firstyear INT,
lastyear INT,
km MAP<INT,INT>,
PRIMARY KEY (firstname, surname)
);
My dataset in Pig is:
grunt> describe cqlFormat;
cqlFormat: {((chararray,Firstname: chararray), (chararray,Surname: chararray)), (Averagekm: double,FirstYear: int,LastYear: int,Km: (chararray, (Year: int,Km: int)))}
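To illustrate that shape, a stripped-down way to build such a relation would look roughly like this (the input file, field names and the 'map' marker are placeholders for illustration, not my real script):

-- placeholder input: one (year, km) map entry per row
raw = LOAD 'drivers.csv' USING PigStorage(',')
      AS (firstname:chararray, surname:chararray, averagekm:double,
          firstyear:int, lastyear:int, year:int, km:int);

-- first field: tuple of (name, value) pairs for the primary key columns,
-- second field: tuple of values for the placeholders in the output_query;
-- the map column is wrapped in a tuple with a collection-type marker
cqlFormat = FOREACH raw GENERATE
    TOTUPLE(TOTUPLE('firstname', firstname), TOTUPLE('surname', surname)),
    TOTUPLE(averagekm, firstyear, lastyear, TOTUPLE('map', TOTUPLE(year, km)));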
And I want to store it with:
grunt> store cqlFormat into 'cql://test/test?output_query=UPDATE+test.test+set+averagekm+%3D+%3F%2C+firstyear+%3D+%3F%2C+lastyear+%3D+%3F%2C+km+%3D+%3F' using CqlStorage();
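(URL-decoded, the output_query is: UPDATE test.test set averagekm = ?, firstyear = ?, lastyear = ?, km = ?)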
I get this error in my log file:
Backend error message
---------------------
java.io.IOException: java.io.IOException: InvalidRequestException(why:Not enough bytes to read a map)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapReduce$Reduce.runPipeline(PigGenericMapReduce.java:465)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapReduce$Reduce.processOnePackageOutput(PigGenericMapReduce.java:428)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapReduce$Reduce.reduce(PigGenericMapReduce.java:408)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapReduce$Reduce.reduce(PigGenericMapReduce.java:262)
at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:176)
at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:652)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:420)
at org.apache.hadoop.mapred.Child$4.run(Child.java:266)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
at org.apache.hadoop.mapred.Child.main(Child.java:260)
Caused by: java.io.IOException: InvalidRequestException(why:Not enough bytes to read a map)
at org.apache.cassandra.hadoop.cql3.CqlRecordWriter$RangeClient.run(CqlRecordWriter.java:248)
Caused by: InvalidRequestException(why:Not enough bytes to read a map)
at org.apache.cassandra.thrift.Cassandra$execute_prepared_cql3_query_result.read(Cassandra.java:41868)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
at org.apache.cassandra.thrift.Cassandra$Client.recv_execute_prepared_cql3_query(Cassandra.java:1689)
at org.apache.cassandra.thrift.Cassandra$Client.execute_prepared_cql3_query(Cassandra.java:1674)
at org.apache.cassandra.hadoop.cql3.CqlRecordWriter$RangeClient.run(CqlRecordWriter.java:232)
Storing the data works fine as long as I leave out the map of kilometers, as sketched below.
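For comparison, this is roughly the working variant without the map (cqlNoMap is just a placeholder name; the output_query is the same as above minus the km column):

-- same key tuple, value tuple without the Km field
cqlNoMap = FOREACH cqlFormat GENERATE $0, TOTUPLE($1.$0, $1.$1, $1.$2);
-- output_query decodes to: UPDATE test.test set averagekm = ?, firstyear = ?, lastyear = ?
store cqlNoMap into 'cql://test/test?output_query=UPDATE+test.test+set+averagekm+%3D+%3F%2C+firstyear+%3D+%3F%2C+lastyear+%3D+%3F' using CqlStorage();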
I am using DataStax DSE 3.1.2 with Pig 0.9.2 (r1234438), cqlsh 3.1.2, Cassandra 1.2.6.5, CQL spec 3.0.0. Additionally, I created the tuples by following this documentation:
http://www.datastax.com/docs/datastax_enterprise3.1/solutions/about_pig#pig-read-write
Thank you!