I was executing the simple Spark SQL code below on Azure Databricks.
val df2 = spark.sql(
  s"""
    select
      mbrgm.mbrgm_id as case_id,
      case
        when mbr_hist.meck is not null
          and mbr_hist.efdt is not null
          and mbr_hist.efdt <= mbrgm.credttm
          and (
            mbr_hist.exp_dt is null
            or mbr_hist.exp_dt > mbrgm.creat_dttm
          ) then mbr_hist.meck
        else mbrgm.facmbid
      end as mb_fid,
      .....
    from
      tempview1 mbrgm
      left outer join tempview2 mbr_hist on (
        mbrgm.mrid = mbr_hist.mrid
        and mbr_hist.efdt <= mbrgm.credttm
        and mbr_hist.exdt > mbrgm.credttm
      )
  """)
Every time I execute it, the mb_fid field gets the else value, i.e. mbrgm.facmbid. I have checked my data against the logic, and it should fall into the then branch. I suspect the comparison mbr_hist.efdt <= mbrgm.credttm is never true.
mbr_hist.efdt is a string, e.g. 2017-07-22 21:58:46, while mbrgm.credttm is a timestamp, e.g. 2011-08-13T11:00:00.910+0000. Is my comparison failing because the two values have different formats and lengths? What can I use to compare them correctly?
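
For reference, here is a minimal one-row repro of what I think is happening. The column names and the timestamp format pattern are my assumptions for illustration, not taken from the real tables:

import org.apache.spark.sql.functions._
import spark.implicits._

// One row mimicking the two values from my data: efdt stays a plain string,
// credttm is parsed into a real timestamp (the pattern below is my guess
// at the format behind "2011-08-13T11:00:00.910+0000").
val probe = Seq(("2017-07-22 21:58:46", "2011-08-13T11:00:00.910+0000"))
  .toDF("efdt", "credttm_raw")
  .withColumn("credttm", to_timestamp($"credttm_raw", "yyyy-MM-dd'T'HH:mm:ss.SSSZ"))

probe.select(
  ($"efdt" <= $"credttm").as("raw_compare"),                // what my query does today
  (to_timestamp($"efdt") <= $"credttm").as("cast_compare")  // explicit cast first
).show(false)

If explicitly casting first, e.g. with to_timestamp(efdt) or cast(efdt as timestamp), is the right fix, I can rewrite the CASE condition that way, but I would like to understand why the raw string-to-timestamp comparison misbehaves.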