
During the least squares computation, the correction is:

 x = N^(-1) * A^T * Q^(-1) * (y_o - b)

(N = A^T * Q^(-1) * A is the normal matrix, A the design matrix, Q the covariance matrix of the observations, y_o the observed values, b the values computed from the current coordinates, and x the correction to the unknowns X, Y.)

Instead of doing:

 Xnew = Xold + x[0] 

 Ynew = Yold + x[1]

In order to obtain convergence I have to change the '+' into a '-':

 Xnew = Xold - x[0] 

 Ynew = Yold - x[1]
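
One thing I already checked with made-up numbers: flipping the sign of the residual flips the sign of the whole correction, so subtracting x computed from (y_o - b) is exactly the same as adding x computed from (b - y_o). (All numbers below are invented, just to confirm the algebra.)

import numpy as np

A = np.array([[1.0, 0.0], [0.0, 2.0], [1.0, 1.0]])   # toy design matrix
Q = np.eye(3)                                         # toy observation covariance
y_o = np.array([1.0, 4.0, 3.1])                       # toy observed values
b = np.array([0.9, 3.8, 3.0])                         # toy computed values

N = A.T @ np.linalg.pinv(Q) @ A
x_plus = np.linalg.pinv(N) @ A.T @ np.linalg.pinv(Q) @ (y_o - b)    # with (y_o - b)
x_minus = np.linalg.pinv(N) @ A.T @ np.linalg.pinv(Q) @ (b - y_o)   # with (b - y_o)
print(np.allclose(x_plus, -x_minus))   # True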

Does anyone know the reason? Am I doing something wrong?

Here is my Python code:

import numpy as np

# x = N^(-1) * A^T * Q^(-1) * delta_y, with delta_y = y_o - b
corr_m = np.dot(np.linalg.pinv(N_matrix), A_matrix.T)     # N^(-1) * A^T
corr_mat = np.dot(corr_m, np.linalg.pinv(Q_matrix))       # N^(-1) * A^T * Q^(-1)
x_matrix = np.dot(corr_mat, delta_y)                      # correction x
# only converges when I subtract the correction instead of adding it
xnew.iloc[0, 0] = xold.iloc[0, 0] - x_matrix[0]
xnew.iloc[0, 1] = xold.iloc[0, 1] - x_matrix[1]
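
For context, here is a minimal, self-contained toy version of the iteration (a made-up 2D distance problem; every name and number below is invented purely for illustration and is not my real data). With delta_y = y_o - b and A taken as the Jacobian of the computed observations b, the '+' update converges in this toy case:

import numpy as np

# Toy stand-in: estimate a 2D position (X, Y) from distances to known
# stations with weighted least squares (Gauss-Newton), made-up data.
stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true_pos = np.array([3.0, 4.0])
y_o = np.linalg.norm(stations - true_pos, axis=1)    # observed distances
Q_matrix = np.eye(len(y_o))                          # observation covariance

x_old = np.array([8.0, 8.0])                         # initial guess for (X, Y)
for _ in range(10):
    diff = x_old - stations
    b = np.linalg.norm(diff, axis=1)                 # computed distances
    A_matrix = diff / b[:, None]                     # Jacobian of b w.r.t. (X, Y)
    delta_y = y_o - b                                # observed minus computed
    N_matrix = A_matrix.T @ np.linalg.pinv(Q_matrix) @ A_matrix
    x_corr = np.linalg.pinv(N_matrix) @ A_matrix.T @ np.linalg.pinv(Q_matrix) @ delta_y
    x_old = x_old + x_corr                           # '+' converges here

print(x_old)   # approximately [3. 4.]

In my real code the same structure only converges with '-', which is what I don't understand.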
    You should include the original equation that you are trying to solve and explain your notation. Also, there are a number of things in your code that we don't know what they are. Without this context, how is one supposed to know what you have in mind? – ATony Apr 26 '22 at 16:28
