
I was trying to execute the Spark SQL code below in Databricks. It does an INSERT OVERWRITE into another table; both tables have the same number of columns with the same names.

INSERT OVERWRITE TABLE cs_br_prov
  SELECT NAMED_STRUCT('IND_ID',stg.IND_ID,'CUST_NBR',stg.CUST_NBR,'SRC_ID',stg.SRC_ID,
  'SRC_SYS_CD',stg.SRC_SYS_CD,'OUTBOUND_ID',stg.OUTBOUND_ID,'OPP_ID',stg.OPP_ID, 
  'CAMPAIGN_CD',stg.CAMPAIGN_CD,'TREAT_KEY',stg.TREAT_KEY,'PROV_KEY',stg.PROV_KEY, 
  'INSERTDATE',stg.INSERTDATE,'UPDATEDATE',stg.UPDATEDATE,'CONTACT_KEY',stg.CONTACT_KEY) AS key,
  stg.MEM_KEY,
  stg.INDV_ID,
  stg.MBR_ID,
  stg.OPP_DT,
  stg.SEG_ID,
  stg.MODA,
  stg.E_KEY, 
  stg.TREAT_RUNDATETIME
  FROM cs_br_prov_stg stg

The error I am getting is:

AnalysisException: Cannot write to 'delta.`path`', not enough data columns;
target table has 20 column(s) but the inserted data has 9 column(s)

1 Answer

The reason is exactly what the exception says: the SELECT subquery creates a logical plan with just 9 columns, not the 20 that the cs_br_prov table expects. Note that the NAMED_STRUCT counts as a single column (key), plus the 8 scalar columns after it.
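To see the same failure and fix in miniature, here is a sketch with a pair of hypothetical three-column Delta tables (the table and column names are illustrative, not taken from the question):

  -- Hypothetical tables that reproduce the error on a small scale
  CREATE TABLE tgt (a INT, b INT, c INT) USING DELTA;
  CREATE TABLE src (a INT, b INT, c INT) USING DELTA;

  -- Fails the same way: the target has 3 columns but the query supplies only 2
  -- AnalysisException: ... not enough data columns
  INSERT OVERWRITE TABLE tgt SELECT a, b FROM src;

  -- Works: the SELECT yields every target column, in the target's order
  INSERT OVERWRITE TABLE tgt SELECT a, b, c FROM src;

In your case, run DESCRIBE TABLE cs_br_prov to list all 20 target columns, then extend the SELECT with the 11 columns it is currently missing.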

Unless the table defines generated columns, the exception is perfectly fine: every one of the 20 target columns has to be supplied by the query.
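For contrast, a sketch of the generated-column case, where Delta computes a column the query does not supply (the names are illustrative; GENERATED ALWAYS AS is Delta Lake syntax):

  -- Hypothetical Delta table with one generated column
  CREATE TABLE events (
    event_ts   TIMESTAMP,
    event_date DATE GENERATED ALWAYS AS (CAST(event_ts AS DATE))
  ) USING DELTA;

  -- Supplying only event_ts is accepted; Delta computes event_date on write
  INSERT INTO events (event_ts) VALUES (current_timestamp());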
