I've got a pandas DataFrame with an area in the first column and 8 years of quarterly data in the remaining columns. There are about 4,400 rows. Here is a sample:
idx Q12000 Q22000 Q32000 Q42000 Q12001 Q22001 Q32001 Q42001 Q12002 Q22002 Q32002 Q42002
0 4085280.0 4114911.0 4108089.0 4111713.0 4055699.0 4076430.0 4043219.0 4039370.0 4201158.0 4243119.0 4231823.0 4254681.0
1 21226.0 21566.0 21804.0 22072.0 21924.0 23232.0 22748.0 22258.0 22614.0 22204.0 22500.0 22660.0
2 96400.0 102000.0 98604.0 97086.0 96354.0 103054.0 97824.0 95958.0 115938.0 123064.0 120406.0 120648.0
3 23820.0 24116.0 24186.0 23726.0 23504.0 23574.0 23162.0 23078.0 22306.0 22334.0 22152.0 22080.0
4 7838.0 7906.0 7714.0 7676.0 7480.0 7520.0 7102.0 6722.0 8324.0 8166.0 8208.0 8326.0
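In case it helps to reproduce this, the sample above can be rebuilt like so (just the quarterly columns; the area column and the remaining years/rows are omitted, so with this reduced frame the quarterly data starts at column 0 and the iloc slices below would need adjusting):

import pandas as pd

cols = ['Q12000', 'Q22000', 'Q32000', 'Q42000', 'Q12001', 'Q22001',
        'Q32001', 'Q42001', 'Q12002', 'Q22002', 'Q32002', 'Q42002']
data = [
    [4085280.0, 4114911.0, 4108089.0, 4111713.0, 4055699.0, 4076430.0,
     4043219.0, 4039370.0, 4201158.0, 4243119.0, 4231823.0, 4254681.0],
    [21226.0, 21566.0, 21804.0, 22072.0, 21924.0, 23232.0,
     22748.0, 22258.0, 22614.0, 22204.0, 22500.0, 22660.0],
    [96400.0, 102000.0, 98604.0, 97086.0, 96354.0, 103054.0,
     97824.0, 95958.0, 115938.0, 123064.0, 120406.0, 120648.0],
    [23820.0, 24116.0, 24186.0, 23726.0, 23504.0, 23574.0,
     23162.0, 23078.0, 22306.0, 22334.0, 22152.0, 22080.0],
    [7838.0, 7906.0, 7714.0, 7676.0, 7480.0, 7520.0,
     7102.0, 6722.0, 8324.0, 8166.0, 8208.0, 8326.0],
]
df = pd.DataFrame(data, columns=cols)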
Here is what I'm trying to calculate for each row (illustrated by a timeline image; a single-row sketch of the logic follows the list):
- nadir: the lowest point (min)
- nadir_qtr: the quarter at which the nadir happens
- pre-peak: the highest point before the nadir
- pre-peak_qtr: the quarter at which the pre-peak happens
- post-peak: the highest point after the nadir
- post-peak_qtr: the quarter at which the post-peak happens
- recover: the quarter after the nadir where the numbers surpass those of the pre-peak
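To make the definitions concrete, this is the logic I have in mind for a single row, written with plain NumPy on row 0 of the sample. It's only a sketch and assumes the nadir is neither the first nor the last quarter:

import numpy as np

row = np.array([4085280.0, 4114911.0, 4108089.0, 4111713.0, 4055699.0, 4076430.0,
                4043219.0, 4039370.0, 4201158.0, 4243119.0, 4231823.0, 4254681.0])

nadir_pos = row.argmin()                                        # 7  -> Q42001
nadir = row[nadir_pos]                                          # 4039370.0
pre_peak = row[:nadir_pos].max()                                # 4114911.0 (highest point before the nadir)
pre_peak_pos = row[:nadir_pos].argmax()                         # 1  -> Q22000
post_peak = row[nadir_pos + 1:].max()                           # 4254681.0 (highest point after the nadir)
post_peak_pos = nadir_pos + 1 + row[nadir_pos + 1:].argmax()    # 11 -> Q42002
after = row[nadir_pos + 1:]
recover_pos = nadir_pos + 1 + (after > pre_peak).argmax()       # 8  -> Q12002 (only meaningful if it ever recovers)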
I'm able to calculate the nadir pretty easily.
# row-wise min across the quarterly columns, plus the column label where it occurs
df['nadir'] = df.iloc[:, 2:].min(axis=1)
df['nadir_qtr'] = df.iloc[:, 2:].idxmin(axis=1)
idx Q12000 Q22000 Q32000 Q42000 Q12001 Q22001 Q32001 Q42001 Q12002 Q22002 Q32002 Q42002 nadir nadir_qtr
0 4085280.0 4114911.0 4108089.0 4111713.0 4055699.0 4076430.0 4043219.0 4039370.0 4201158.0 4243119.0 4231823.0 4254681.0 4039370.0 Q42001
1 21226.0 21566.0 21804.0 22072.0 21924.0 23232.0 22748.0 22258.0 22614.0 22204.0 22500.0 22660.0 21226.0 Q12000
2 96400.0 102000.0 98604.0 97086.0 96354.0 103054.0 97824.0 95958.0 115938.0 123064.0 120406.0 120648.0 95958.0 Q42001
3 23820.0 24116.0 24186.0 23726.0 23504.0 23574.0 23162.0 23078.0 22306.0 22334.0 22152.0 22080.0 22080.0 Q42002
4 7838.0 7906.0 7714.0 7676.0 7480.0 7520.0 7102.0 6722.0 8324.0 8166.0 8208.0 8326.0 6722.0 Q42001
But when it comes to getting the pre-peak or post-peak values or quarters, I get stuck hard. The closest I've come is something like this:
df['pre-peak'] = df.loc[:, :df['nadir_qtr']].max(axis=1)
df['pre-peak_qtr'] = df.loc[:,:df['nadir_qtr']].idxmax(axis=1)
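I think part of the problem is that df['nadir_qtr'] is a whole Series rather than a single column label, so it can't be used as the endpoint of a .loc column slice. Doing it one row at a time with a scalar label does seem to give what I want, e.g. for row 0:

# per-row version of the same idea: slice row 0's values up to its own nadir quarter
row0_pre = df.loc[0, :df.loc[0, 'nadir_qtr']]    # Q12000 .. Q42001 for row 0
row0_pre_peak = row0_pre.iloc[:-1].max()         # 4114911.0 (drop the nadir quarter itself)
row0_pre_peak_qtr = row0_pre.iloc[:-1].idxmax()  # 'Q22000'

But I can't see how to do that for every row at once without looping.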
Expected output:
idx Q12000 Q22000 Q32000 Q42000 Q12001 Q22001 Q32001 Q42001 Q12002 Q22002 Q32002 Q42002 nadir nadir_qtr pre-peak pre-peak_qtr
0 4085280.0 4114911.0 4108089.0 4111713.0 4055699.0 4076430.0 4043219.0 4039370.0 4201158.0 4243119.0 4231823.0 4254681.0 4039370.0 Q42001 4114911.0 Q22000
1 21226.0 21566.0 21804.0 22072.0 21924.0 23232.0 22748.0 22258.0 22614.0 22204.0 22500.0 22660.0 21226.0 Q12000 NaN NaN
2 96400.0 102000.0 98604.0 97086.0 96354.0 103054.0 97824.0 95958.0 115938.0 123064.0 120406.0 120648.0 95958.0 Q42001 103054.0 Q22001
3 23820.0 24116.0 24186.0 23726.0 23504.0 23574.0 23162.0 23078.0 22306.0 22334.0 22152.0 22080.0 22080.0 Q42002 24186.0 Q32000
4 7838.0 7906.0 7714.0 7676.0 7480.0 7520.0 7102.0 6722.0 8324.0 8166.0 8208.0 8326.0 6722.0 Q42001 7906.0 Q22000
But every variation of this gives me either wrong data or an error, the most common being:
TypeError: reduction operation 'argmax' not allowed for this dtype
I've tried lots of strategies: brute-force iterating through each row as a NumPy array, splitting each row apart, and so on. I'm really stuck.
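For reference, this is roughly the shape of the row-by-row NumPy attempt I mean. It's a sketch rather than my exact code (the peaks helper name is just for illustration), and I haven't convinced myself it handles every edge case:

import numpy as np
import pandas as pd

quarters = [c for c in df.columns if c.startswith('Q')]   # just the quarterly columns

def peaks(row):
    vals = row[quarters].to_numpy(dtype=float)
    n = int(vals.argmin())                                 # nadir position
    out = {'nadir': vals[n], 'nadir_qtr': quarters[n]}
    if n > 0:                                              # pre-peak exists only if the nadir isn't the first quarter
        p = int(vals[:n].argmax())
        out['pre-peak'], out['pre-peak_qtr'] = vals[p], quarters[p]
    else:
        out['pre-peak'] = out['pre-peak_qtr'] = np.nan
    if n < len(vals) - 1:                                  # post-peak exists only if the nadir isn't the last quarter
        q = n + 1 + int(vals[n + 1:].argmax())
        out['post-peak'], out['post-peak_qtr'] = vals[q], quarters[q]
    else:
        out['post-peak'] = out['post-peak_qtr'] = np.nan
    # recover: first quarter after the nadir that surpasses the pre-peak, if any
    if n > 0 and (vals[n + 1:] > out['pre-peak']).any():
        r = n + 1 + int((vals[n + 1:] > out['pre-peak']).argmax())
        out['recover_qtr'] = quarters[r]
    else:
        out['recover_qtr'] = np.nan
    return pd.Series(out)

stats = df.apply(peaks, axis=1)    # then e.g. pd.concat([df[quarters], stats], axis=1)

As far as I can tell it produces the expected numbers on the sample above, but looping row by row over ~4,400 rows feels clunky, and I'd like to know the vectorized pandas way to do this.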