
I have a big table of data that I read from Excel into Python, where I perform some calculations. My dataframe looks like this (my real table is bigger and more complex, but the logic stays the same): (screenshot of the dataframe)

with: My_cal_Spread = set1 + set2 and Errors = abs(My_cal_Spread - Spread)

My goal is to use SciPy's minimize to find the single combination of (set1, set2) that can be used in every row, so that My_cal_Spread is as close as possible to Spread, by minimizing the sum of the errors.

This is the solution I get when I use Excel Solver; I'm looking to implement the same thing with SciPy. Thanks. (screenshot of the Excel Solver result)

My code looks like this:

# Vectorized: the original while loop used chained .iloc assignment,
# which raises SettingWithCopyWarning and fails when the 'errors'
# column does not exist yet.
df['my_cal_Spread'] = df['set1'] + df['set2']
df['errors'] = (df['my_cal_Spread'] - df['Spread']).abs()

errors_sum = df['errors'].sum()
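A minimal sketch of what the SciPy version could look like, assuming set1 and set2 are two scalar decision variables shared by every row (the toy Spread values below stand in for the real spreadsheet data). Since abs() makes the objective non-differentiable, the derivative-free Nelder-Mead method is a reasonable choice:

```python
import numpy as np
import pandas as pd
from scipy.optimize import minimize

# Toy data standing in for the Excel table.
df = pd.DataFrame({'Spread': [1.2, 1.5, 1.1, 1.4]})

def total_error(params, spread):
    """Sum of absolute errors for one (set1, set2) pair applied to all rows."""
    set1, set2 = params
    return np.abs((set1 + set2) - spread).sum()

# Nelder-Mead handles the non-smooth abs() objective without gradients.
result = minimize(total_error, x0=[0.0, 0.0],
                  args=(df['Spread'].to_numpy(),),
                  method='Nelder-Mead')

set1_opt, set2_opt = result.x
df['my_cal_Spread'] = set1_opt + set2_opt
df['errors'] = (df['my_cal_Spread'] - df['Spread']).abs()
errors_sum = df['errors'].sum()
```

Note that with only one equation per row, only the sum set1 + set2 is actually pinned down by this objective; the real table presumably has extra structure (e.g. different coefficients per row) that distinguishes the two variables.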


Gogo78

Comments:
  • Please add the raw data instead of images, and also the Excel Solver settings you are using in the sheet. – bharatk Oct 11 '19 at 06:13
  • Please provide a specific bit of code that you are struggling with. We can help you solve problems; we're not here to write your code for you. – Kraay89 Oct 11 '19 at 06:55
  • @Kraay89 I've added my code, please check. If you have any questions I can answer them. – Gogo78 Oct 11 '19 at 08:53

0 Answers