My data is partitioned as year/month/day in an S3 bucket. Every day I need to read the last six months of data. I am using the code below, but the month arithmetic produces zero or negative month values (e.g. when the current month is earlier than July). Is there a way to read the correct data for the last six months?
from datetime import datetime

today = datetime.now()
d, m, y = today.day, today.month, today.year

df2 = (spark.read.format("parquet")
       .option("header", "true")
       .option("inferSchema", "true")
       # m - 6 is zero or negative whenever the current month is June or earlier
       .load(f"rawdata/data/year={{2021,2022}}/month={{{m - 6},{m}}}/*"))
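Not an authoritative answer, but one stdlib-only sketch of the usual fix: instead of subtracting 6 from the month number, walk back one month at a time (rolling the year over when the month hits zero) and build an explicit list of partition paths, then pass them all to the reader. This assumes the partitions were written without zero-padding (`month=2`, not `month=02`); adjust the format string if yours are padded. `last_n_month_paths` is a hypothetical helper name, and the `spark.read` line is commented out since it needs a live Spark session.

```python
from datetime import date


def last_n_month_paths(base, n=6, today=None):
    """Build partition paths for the last n months, newest first,
    rolling the year back correctly across January."""
    today = today or date.today()
    y, m = today.year, today.month
    paths = []
    for _ in range(n):
        paths.append(f"{base}/year={y}/month={m}")
        m -= 1
        if m == 0:          # stepped back past January: previous year, December
            y, m = y - 1, 12
    return paths


paths = last_n_month_paths("rawdata/data")
# Spark's DataFrameReader accepts multiple paths:
# df2 = spark.read.option("header", "true").option("inferSchema", "true").parquet(*paths)
```

An alternative, if a proper date column exists in the data, is to load with `.option("basePath", "rawdata/data")` and filter on that column, letting Spark prune partitions; the explicit path list above avoids scanning any listing outside the six-month window.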