I just want to solve for the pH of the ammonia solution mentioned here.

The full equations are described in detail here, and here is my numerical solution:

import numpy as np
import scipy.optimize as optimize

# species order: [H+, OH-, NH3, NH4+]; initial concentrations (0.1 M NH3)
C = np.array([0, 0, 0.1, 0])
# stoichiometric matrix: row 1 = water autoionization, row 2 = NH3 + H2O <=> NH4+ + OH-
N = np.array([
    [1, 1, 0, 0],
    [0, 1, -1, 1]
])
# equilibrium constants: Kw and Kb of ammonia
K = np.array([1E-14, 1.77E-5])

def eq1(X):
    # species concentrations as a function of the two reaction extents X
    return N.transpose() @ np.array(X) + C

def eq2(X):
    # mass-action law in log form: N @ log(concentrations) - log(K) = 0
    return N @ np.log(eq1(X)) - np.log(K)

X = optimize.fsolve(eq2, [1E-14, 1E-14])
print("X=",X)
print("Y=",eq1(X))

But I got these warnings:

RuntimeWarning: invalid value encountered in log
  return N @ np.log(eq1(X)) - np.log(K)
/usr/local/lib64/python3.6/site-packages/scipy/optimize/minpack.py:175: RuntimeWarning: The iteration is not making good progress, as measured by the
  improvement from the last ten iterations.
  warnings.warn(msg, RuntimeWarning)
X= [2.39338797e-06 2.27841240e-05]
Y= [2.39338797e-06 2.51775120e-05 9.99772159e-02 2.27841240e-05]
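A quick way to see that this X is not really a root, reusing eq2 from above, is to check the residuals, which come out far from zero here:

# residuals of the log mass-action equations at the returned X;
# at a true solution these would be ~0
print("residuals =", eq2(X))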

The problem is tricky for fsolve because, in the correct solution [7.57E-12 1.32E-3], X1 is extremely close to zero (about nine orders of magnitude smaller than X2).

I tried other initial values, such as [1E-7, 1E-7] and [1E-1, 1E-1], and none of them avoided the problem until I tried a pair close to the correct solution, e.g. [1E-12, 1E-3] (see the check just below).
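For concreteness, this is the check I mean, reusing eq1 and eq2 exactly as defined above: with a guess in the right decades, fsolve does return roughly the [7.57E-12 1.32E-3] solution quoted above.

# same system as above, only the initial guess changes
X_good = optimize.fsolve(eq2, [1E-12, 1E-3])
print("X=", X_good)
print("Y=", eq1(X_good))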

Are there any parameters for fsolve that avoid this problem? Or is there an approach for choosing a proper initial value?
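For reference, the two fsolve parameters that look relevant are diag (per-variable scale factors) and full_output (which reports whether the solver thinks it converged). The sketch below only shows how they are passed; the scale values are an illustration, not a known fix for this system.

# sketch only: diag rescales the variables, full_output exposes the convergence flag
X_try, info, ier, msg = optimize.fsolve(eq2, [1E-7, 1E-7],
                                        diag=[1E-12, 1E-3], full_output=True)
print("converged:", ier == 1, "-", msg)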

auntyellow

1 Answer

Try this arrangement, which treats the concentrations as additional unknowns and minimizes the combined residual of the mass-balance and mass-action equations under box bounds:

import numpy as np
import scipy.optimize as optimize

C = np.array([0, 0, 0.1, 0])
N = np.array([
    [1, 1, 0, 0],
    [0, 1, -1, 1]
])

K = np.array([1e-14, 1.77e-5])

# strictly positive starting guess for the extents x1, x2 and concentrations y1..y4
x1 = 1e-2
x2 = 2e-2
y1 = 3e-2
y2 = 4e-2
y3 = 5e-2
y4 = 6e-2
# box bounds shared by all unknowns
xmin = 1e-10
xmax = 1e0
XY = np.array([x1,x2,y1,y2,y3,y4])

def eq1(XY):
    # mass-balance residual: the concentrations Y must equal N^T X + C
    X = np.array([XY[0],XY[1]])
    Y = np.array([XY[2],XY[3],XY[4],XY[5]])
    return N.transpose() @ X + C - Y

def eq2(XY):
    # mass-action residual on Y (abs keeps the log defined during iteration)
    Y = np.array([XY[2],XY[3],XY[4],XY[5]])
    return N @ np.log(abs(Y)) - np.log(K)

def obj(XY):
    # scalar objective: sum of the 2-norms of both residual vectors
    return np.linalg.norm(eq1(XY),2) + np.linalg.norm(eq2(XY),2)

bounds = ((xmin,xmax),(xmin,xmax),(xmin,xmax),(xmin,xmax),(xmin,xmax),(xmin,xmax))

result = optimize.minimize(obj, XY, bounds = bounds, method = 'trust-constr')

print(result.x)

print(obj(result.x))

with the residual verification:

print(eq1(result.x))
print(eq2(result.x))
Cesareo
  • I got `success: False` :-( – auntyellow Jul 10 '21 at 15:27
  • The `success: False` is due to ill conditioning in the quasi-Newton approximations of the `gradient` and `hessian`. What matters is the residual verification, which seems acceptable; here we are trying to solve an ill-conditioned nonlinear system of equations. – Cesareo Jul 10 '21 at 16:36
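A minimal way to look at both things discussed in these comments side by side, assuming the result object and the eq1/eq2/obj helpers from the answer above:

# trust-constr's own verdict vs. the actual residuals
print("success:", result.success, "status:", result.status)
print("mass-balance residual:", eq1(result.x))
print("mass-action residual:", eq2(result.x))
print("objective:", obj(result.x))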