During the least squares computation:
x = N^(-1) * A^T * Q^(-1) * (yo - b)
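In numpy terms I read this as roughly the following (just a sketch; here I am assuming N is the usual normal matrix A^T * Q^(-1) * A, which may not be exactly how mine is built):

import numpy as np

def correction(A, Q, y_obs, b):
    # weight matrix P = Q^(-1)
    Qinv = np.linalg.inv(Q)
    # normal matrix N = A^T * Q^(-1) * A (an assumption on my side)
    N = A.T @ Qinv @ A
    # correction vector x = N^(-1) * A^T * Q^(-1) * (yo - b)
    return np.linalg.solve(N, A.T @ Qinv @ (y_obs - b))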
Instead of doing:
Xnew = Xold + x[0]
Ynew = Yold + x[1]
In order to obtain convergence I have to change the '+' into a '-':
Xnew = Xold - x[0]
Ynew = Yold - x[1]
Does anyone know the reason? Am I doing something wrong?
Here is my Python code:
import numpy as np

# x = N^(-1) * A^T * Q^(-1) * (yo - b), built up in three steps
x_m = np.dot(np.linalg.pinv(N_matrix), A_matrix.T)   # N^(-1) * A^T
x_mat = np.dot(x_m, np.linalg.pinv(Q_matrix))        # ... * Q^(-1)
x_matrix = np.dot(x_mat, delta_y)                     # ... * (yo - b)

# update step: it only converges when I use '-' instead of '+'
xnew.iloc[0, 0] = xold.iloc[0, 0] - x_matrix[0]
xnew.iloc[0, 1] = xold.iloc[0, 1] - x_matrix[1]
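For reference, here is a minimal runnable version of the same steps with dummy matrices (the shapes and values are made up only so the snippet executes; my real N_matrix, A_matrix, Q_matrix and delta_y come from the adjustment itself):

import numpy as np
import pandas as pd

# made-up stand-ins: 3 observations, 2 unknowns (X, Y)
A_matrix = np.array([[1.0, 0.5],
                     [0.3, 1.0],
                     [0.7, 0.2]])
Q_matrix = np.eye(3)                                   # cofactor matrix of the observations
N_matrix = A_matrix.T @ np.linalg.pinv(Q_matrix) @ A_matrix
delta_y = np.array([0.12, -0.05, 0.08])                # misclosure vector (yo - b)

xold = pd.DataFrame([[10.0, 20.0]])                    # current X, Y estimate
xnew = xold.copy()

x_m = np.dot(np.linalg.pinv(N_matrix), A_matrix.T)
x_mat = np.dot(x_m, np.linalg.pinv(Q_matrix))
x_matrix = np.dot(x_mat, delta_y)

xnew.iloc[0, 0] = xold.iloc[0, 0] - x_matrix[0]
xnew.iloc[0, 1] = xold.iloc[0, 1] - x_matrix[1]
print(xnew)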