
I have created two tables in Hive:

CREATE EXTERNAL TABLE avro1(id INT, name VARCHAR(64), dept VARCHAR(64)) PARTITIONED BY (yoj VARCHAR(64)) STORED AS AVRO;

CREATE EXTERNAL TABLE avro2(id INT, name VARCHAR(64), dept VARCHAR(64)) PARTITIONED BY (yoj VARCHAR(64)) STORED AS AVRO;

I entered data into table avro1 from the Hive console:

INSERT INTO TABLE avro1 PARTITION (yoj = 2015) (id, name, dept) VALUES (1, 'Mohan', 'CS');
INSERT INTO TABLE avro1 PARTITION (yoj = 2015) (id, name, dept) VALUES (2, 'Rahul', 'HR');
INSERT INTO TABLE avro1 PARTITION (yoj = 2016) (id, name, dept) VALUES (3, 'Kuldeep', 'EE');

I then ran a Spark Structured Streaming application to insert data into table avro2; a sketch of roughly what that job does is shown below.
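This is only a minimal sketch of the streaming writer, assuming a socket source and the default warehouse path; the source, the paths, and the Avro2Writer name are placeholders, not my exact application. What it illustrates is that Spark's built-in Avro writer records name and dept as plain Avro "string" in the files it produces.

import org.apache.spark.sql.SparkSession

object Avro2Writer {
  def main(args: Array[String]): Unit = {
    // Needs the spark-avro module on the classpath
    // (e.g. --packages org.apache.spark:spark-avro_2.12:<spark version>).
    val spark = SparkSession.builder()
      .appName("avro2-writer")
      .enableHiveSupport()
      .getOrCreate()
    import spark.implicits._

    // Placeholder source: comma-separated "id,name,dept,yoj" lines from a socket.
    val rows = spark.readStream
      .format("socket")
      .option("host", "localhost")
      .option("port", 9999)
      .load()
      .as[String]
      .map(_.split(","))
      .map(f => (f(0).trim.toInt, f(1).trim, f(2).trim, f(3).trim))
      .toDF("id", "name", "dept", "yoj")

    // Write Avro files, partitioned by yoj, under the table's directory;
    // the file sink requires a checkpoint location.
    rows.writeStream
      .format("avro")
      .partitionBy("yoj")
      .option("path", "/user/hive/warehouse/avro2")      // assumed table location
      .option("checkpointLocation", "/tmp/avro2-checkpoint")
      .start()
      .awaitTermination()
  }
}

Now, when I read from table avro2, either from the Hive console or through Spark, I get this error: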

Failed with exception java.io.IOException:org.apache.hadoop.hive.serde2.avro.AvroSerdeException: Failed to obtain maxLength value for varchar field from file schema: "string


1 Answer


Could you please try the following commands to insert the data into the Hive table from spark-shell:

spark.sql("INSERT INTO TABLE avro1 PARTITION (yoj = 2015) (id,name,dept) VALUES (1,'Mohan','CS')"); spark.sql("INSERT INTO TABLE avro1 PARTITION (yoj = 2015) (id,name,dept) VALUES (2,'Rahul','HR')"); spark.sql("INSERT INTO TABLE avro1 PARTITION (yoj = 2016) (id,name,dept) VALUES (3,'Kuldeep','EE')");