
I am trying to iteratively find n in the equation f = n^2 such that f is close to 3, to a precision of three decimal places, approaching from below. For illustration, here is code that successfully does what I want. The problem is that in my real application, instead of f = n^2 I start an external script, and with this approach the whole iteration becomes extremely slow and unusable: it needs far too many cycles to converge to the given precision. I am therefore looking for a faster way to achieve convergence, meaning convergence in fewer cycles. Someone recommended a binary search algorithm to me; could you help me build one for this little example f = n^2?

Here is my code which demonstrates what I want.

n = 1; % initial value
f=n^2; % function to calculate
tol = 3;   % tolerance
accuracy = 0.0000001; % accuracy

while f <= tol
    n = n+accuracy;
    f=n^2;
end
n = n-accuracy;
f=n^2;
disp(['n = ' num2str(n,10)])
disp(['f = ' num2str(f,10)])

Update: here is my attempt at setting up the function and its derivative for the Newton-Raphson algorithm. Unfortunately, it is probably incorrect, so I would like to ask for help with it.

n = 7.76; % initial guess
f_error = [7.73 9 12.404 7.659 8.66];
df_error = diff(f_error)
xNew = n - f_error./df_error
Aaa A
  • Your code doesn't even do a grid search; it just steps linearly with your given step (`accuracy`) until you either reach the desired value (`tol`) or run forever. This doesn't "converge" in any way. Binary search isn't going to help you; try [bisection search](https://en.wikipedia.org/wiki/Bisection_method). – Adriaan May 18 '22 at 13:31
  • You are skipping 5 centuries of numerical methods. Please don't just guess how to do this; find a search method such as bisection, Newton-Raphson, etc. – Ander Biguri May 18 '22 at 13:48
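As the comments suggest, bisection halves a bracketing interval each step, so it needs only about log2((b-a)/tol) function evaluations instead of the (b-a)/step evaluations of a linear scan. A minimal sketch (in Python for brevity; the bracket [1, 2] is chosen by hand for the f = n^2, target 3 example):

```python
def bisect_from_below(f, target, lo, hi, tol=1e-7):
    # assumes f is increasing on [lo, hi] and f(lo) <= target <= f(hi)
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if f(mid) < target:
            lo = mid  # the crossing point is in the upper half
        else:
            hi = mid  # the crossing point is in the lower half
    return lo  # lower end, i.e. f(lo) < target: "precision from bottom"

n = bisect_from_below(lambda x: x * x, 3.0, 1.0, 2.0)
print(n)  # close to sqrt(3) = 1.7320508..., after about 24 halvings
```

With an interval of width 1 and a tolerance of 1e-7 this takes about log2(1e7) ≈ 24 halvings, versus millions of steps for the linear scan in the question.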

1 Answer


Newton-Raphson Method

f_error(x) = x^(1/2) - N
f'_error(x) = (1/2)*x^(-1/2)
repeat:
    xNew = x - f_error(x)/f'_error(x)
         = x - (x^(1/2) - N) / ((1/2)*x^(-1/2))

This approaches the target value far more quickly than scanning every floating-point-representable value in between:

n = 3; % initial guess
nNew = n;
val = 3; % value
f = val^2; % function to calculate

accuracy = 0.0001; % accuracy
err = 10000; % initial error (avoid shadowing the built-in error() function)
count = 0;

while abs(err) > accuracy
    nNew = n - (n^(1/2) - val)/((1/2)*n^(-1/2));
    err = nNew - n;
    n = nNew;
    count = count + 1;
end

disp(['c = ' num2str(count,10)])
disp(['n = ' num2str(n,10)])
disp(['f = ' num2str(f,10)])

output:

c = 5 <--- only 5 iterations
n = 9
f = 9

Your method fails to complete the calculation within the time limit of the https://octave-online.net server:

n = 1; % initial value
f=n^2; % function to calculate
tol = 3;   % tolerance
accuracy = 0.0000001; % accuracy
count = 0;

while f <= tol
    n = n+accuracy;
    f=n^2;
    count = count+1;
end
n = n-accuracy;
f=n^2;
disp(['c = ' num2str(count,10)])
disp(['n = ' num2str(n,10)])
disp(['f = ' num2str(f,10)])

output:

!!! OUT OF TIME !!!
c = 847309
n = 1.0847309
f = 1.176641126

It moved only 0.1766 in 5 seconds, so it needs at least 1-2 minutes to complete, while the Newton-Raphson method took only 5 iterations. The n = n + accuracy method also breaks down for large n because of rounding error: if the next binary-representable floating-point value above n is further away than accuracy, then n + accuracy rounds back to n and the loop never terminates.
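The stagnation for large n is easy to demonstrate: once the step falls below the spacing of representable doubles around n, the addition has no effect at all. A quick check (in Python for brevity; the value 1e16 is just an illustration of a large n):

```python
import math

accuracy = 0.0000001  # the step size from the question
small = 1.0
big = 1.0e16          # doubles near 1e16 are spaced 2.0 apart

print(small + accuracy > small)  # True: the step still registers
print(big + accuracy > big)      # False: big + accuracy rounds back to big
print(math.ulp(big))             # 2.0, the gap between neighboring doubles here
```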


The Newton-Raphson method above uses a square root. Many x86 CPUs have a dedicated sqrt instruction, so it is very likely hardware-accelerated, but you can also compute the square root itself with another Newton-Raphson iteration:

f_error(x) = x^2 - N
f'_error(x) = 2*x
xNew = x - f_error(x)/f'_error(x)
xNew = x - (1/2)*x + (1/2)*(N/x) = (1/2)*(x + N/x)

but it still needs a division for N/x. If the CPU has no divide instruction, you can use yet another Newton-Raphson iteration for the reciprocal:

f_error(x) = 1/x - N
f'_error(x) = -1/(x*x)
xNew = x - f_error(x)/f'_error(x)
xNew = x*(2-N*x)

Applying this inside both of the other methods would give a fully multiplication-based version, but it would probably be slower, since the iteration counts multiply (each outer iteration needs its own inner iterations), so it could need hundreds of iterations in total.
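A sketch of the reciprocal iteration xNew = x*(2 - N*x) (in Python for brevity); note that it only converges when the initial guess is close enough, roughly 0 < x0 < 2/N:

```python
def reciprocal(N, x0, iterations=20):
    # Newton-Raphson for f(x) = 1/x - N: only multiplications and subtractions
    x = x0
    for _ in range(iterations):
        x = x * (2.0 - N * x)
    return x

approx = reciprocal(3.0, 0.5)  # approximates 1/3 with no division at all
print(approx)
```

Convergence is quadratic, so 20 iterations is far more than needed for this starting point.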


Edit: for an unknown f that runs on a remote server, you can use a simple two-point derivative. This is the simplest of the discrete derivatives and has a higher error than the 3-point, 5-point, etc. variants:

% remote server function that has unknown algorithm
% so only numerical derivative will be used
function result = blackBoxFunction(x)
    result = x^(1/2); 
end

% numerical (discrete two-point 1-dimensional) derivative
% x and y are the function values at (point - h) and (point + h)
% beware: the error is O(h^2), so keep h small
% add more points (with coefficients) for O(h^4) or better error
function result = twoPointDerivative(x,y,h)
    result = (y-x)/(2*h); 
end

n = 3; % initial guess
nNew = n;
val = 3; % value
f=val^2; % function to calculate

accuracy = 0.0001; % accuracy
err = 10000; % initial error (avoid shadowing the built-in error() function)
count = 0;

% Newton-Raphson with a blackbox function
% f_error(n) = blackBoxFunction(n) - val
% f'_error(n) = derivative_of_blackbox(n)
% nNew = n - f_error(n)/f'_error(n)
% repeat:
% nNew = n - (blackBoxFunction(n)-val)/derivative_of_blackbox(n)

while abs(err) > accuracy
    point1 = blackBoxFunction(n-accuracy);
    point2 = blackBoxFunction(n+accuracy);
    nNew = n - (blackBoxFunction(n)-val)/twoPointDerivative(point1,point2,accuracy);
    err = nNew - n;
    n = nNew;
    count = count + 1;
end

disp(['c = ' num2str(count,10)])
disp(['n = ' num2str(n,10)])
disp(['f = ' num2str(f,10)])

output:

c = 5
n = 9
f = 9

If you need even fewer than 5 iterations, you need a better initial guess, perhaps from a second-order polynomial approximation to x^2 (which is trivially c1 + c2*x + c3*x^2 with c1 = 0, c2 = 0, c3 = 1 for the x-squared problem here, but you can always use series expansions, genetic algorithms, and other heuristics to find a set of coefficients that minimizes the Newton-Raphson iterations over the whole input range, not just val = 3).
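The effect of the initial guess can be measured directly by counting iterations. A small sketch (in Python for brevity) of the same sqrt-based Newton-Raphson iteration as above, started once far from the root x = val^2 = 9 and once close to it; the starting values 1.0 and 8.5 are arbitrary illustrations:

```python
import math

def newton_iterations(x0, val, tol=1e-4):
    # Newton-Raphson for f(x) = sqrt(x) - val, counting steps until converged
    x, count = x0, 0
    while True:
        x_new = x - (math.sqrt(x) - val) / (0.5 / math.sqrt(x))
        count += 1
        if abs(x_new - x) <= tol:
            return count
        x = x_new

far = newton_iterations(1.0, 3.0)   # poor guess, far from the root 9
near = newton_iterations(8.5, 3.0)  # guess already close to the root 9
print(far, near)  # the closer guess converges in fewer iterations
```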

For example, suppose you have cos(x) as a blackbox function and you want to compute acos(val); then you can use a polynomial initial guess effectively:

% remote server function that has unknown algorithm
% so only numerical derivative will be used
function result = blackBoxFunction(x)
    result = cos(x); 
end

% numerical (discrete two-point 1-dimensional) derivative
% x and y are the function values at (point - h) and (point + h)
% beware: the error is O(h^2), so keep h small
% add more points (with coefficients) for O(h^4) or better error
function result = twoPointDerivative(x,y,h)
    result = (y-x)/(2*h); 
end

val = 0.04; % value that is the input of this problem: what is acos(0.04) ??
n = 1 - (val^2)/2 + (val^4)/24; % initial guess by polynomial
nNew = n;
f=acos(val); % function to calculate

accuracy = 0.0001; % accuracy
err = 10000; % initial error (avoid shadowing the built-in error() function)
count = 0;

% Newton-Raphson with a blackbox function
% f_error(n) = blackBoxFunction(n) - val
% f'_error(n) = derivative_of_blackbox(n)
% nNew = n - f_error(n)/f'_error(n)
% repeat:
% nNew = n - (blackBoxFunction(n)-val)/derivative_of_blackbox(n)

while abs(err) > accuracy
    point1 = blackBoxFunction(n-accuracy);
    point2 = blackBoxFunction(n+accuracy);
    nNew = n - (blackBoxFunction(n)-val)/twoPointDerivative(point1,point2,accuracy);
    err = nNew - n;
    n = nNew;
    count = count + 1;
end

disp(['c = ' num2str(count,10)])
disp(['n = ' num2str(n,10)])
disp(['f = ' num2str(f,10)])

output:

c = 3 <-- 2 fewer iterations !!
n = 1.530785652
f = 1.530785652

With a five-point derivative:

% remote server function that has unknown algorithm
% so only numerical derivative will be used
function result = blackBoxFunction(x)
    result = cos(x); 
end

% numerical (discrete five-point-stencil 1-dimensional) derivative
% beware: the error is O(h^4), so keep h small
% add more points (with coefficients) for O(h^6) or better error
function result = fivePointDerivative(x,h,f)
    result = (f(x-2*h)-8*f(x-h)+8*f(x+h)-f(x+2*h))/(12*h); 
end

val = -1; % value that is the input of this problem: what is acos(-1) ?? (it is PI)
n = 1 - (val^2)/2 + (val^4)/24; % initial guess by polynomial
nNew = n;
f=acos(val); % function to calculate

accuracy = 0.00000000001; % accuracy
err = 1; % initial error (avoid shadowing the built-in error() function)
count = 0;

% Newton-Raphson with a blackbox function
% f_error(n) = blackBoxFunction(n) - val
% f'_error(n) = derivative_of_blackbox(n)
% nNew = n - f_error(n)/f'_error(n)
% repeat:
% nNew = n - (blackBoxFunction(n)-val)/derivative_of_blackbox(n)
h_diff = 0.0000001; % step for the numerical derivative

while abs(err) > accuracy
    nNew = n - (blackBoxFunction(n)-val)/fivePointDerivative(n,h_diff,@blackBoxFunction);
    err = nNew - n;
    n = nNew;
    count = count + 1;
end

disp(['c = ' num2str(count,15)])
disp(['n = ' num2str(n,15)])
disp(['f = ' num2str(f,15)])

output:

c = 29
n = 3.14159265683019
f = 3.14159265358979
              ^ 3e-9 error

Also, if the blackbox function is very expensive, then even this simple derivative will be slow (it adds extra blackbox calls per iteration), but you can look into non-uniform discrete-derivative methods that reuse blackbox outputs from earlier iterations to minimize run-time.
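One standard way to reuse earlier outputs is the secant method: it replaces the derivative by the slope through the two most recent iterates, so each step costs a single new blackbox call instead of the extra stencil evaluations. A sketch (in Python for brevity; black_box is a stand-in for the expensive remote function):

```python
def black_box(x):
    # stand-in for the expensive remote function (sqrt, as in the examples above)
    return x ** 0.5

def secant_solve(f, target, x0, x1, tol=1e-4, max_iter=50):
    # secant method for f(x) = target: the slope comes from the last two
    # iterates, so f is evaluated only once per iteration
    f0, f1 = f(x0) - target, f(x1) - target
    calls = 2
    for _ in range(max_iter):
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) <= tol:
            return x2, calls
        x0, f0 = x1, f1
        x1, f1 = x2, f(x2) - target
        calls += 1
    return x1, calls

root, calls = secant_solve(black_box, 3.0, 1.0, 2.0)
print(root, calls)  # root near 9 (since sqrt(9) = 3), with only a handful of calls
```

The trade-off versus Newton-Raphson is slightly slower convergence per iteration in exchange for far fewer function evaluations, which wins when each blackbox call is the bottleneck.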

huseyin tugrul buyukisik
  • Hello Huseyin, your answer is more than excellent! Thank you very much. As I mentioned in the first message, I will run an external solver in my script. The solver gives me a value Q. The input for the solver is a value p, which can be in the range 0.9e5 to 1e5. Let's say that Q has to be 7.76 with accuracy +- 0.001. How would the script for approaching the final value Q then look? I see that at the beginning we start with the function f_error(x) and take its derivative f'_error(x). But in the case with the external solver I have no such function at the beginning. Could you help with it? – Aaa A May 18 '22 at 18:39
  • Thank you! I tried to modify it for using with external solver. Probably it is wrong, but I can't do it better without help. What will be in my case basic function to calculate instead of f=val^2? – Aaa A May 18 '22 at 19:50
  • Yes, it is continuous, even though it looks almost linear. I calculated a few values: pQ=[99.924e+3 7.579e+0; 99.925e+3 7.479e+0; 99.926e+3 7.38e+0]. From the given values I calculated a compact stencil: stencilf = (7.38-7.579)/(2*1) = -99.5000e-003 – Aaa A May 18 '22 at 21:43
  • OK, thank you. I have now more points: pQ = [ 0.99471e+5 20.014; 0.99871e+5 4.89; 99795.887596 7.735; 99795.941823 7.733; 99795.964903 7.732; 99795.976825 7.731; 99796.01 7.73 ]. How would final code look like? n = 7.76; % initial guess f_error = ?; df_error = compact stencil? xNew = n - f_error./df_error – Aaa A May 19 '22 at 09:28
  • Thank you. And what will be my f and df if I have these values? pQ = [ 0.99471e+5 20.014; 0.99871e+5 4.89; 99795.887596 7.735; 99795.941823 7.733; 99795.964903 7.732; 99795.976825 7.731; 99796.01 7.73 ] p = pQ(:,1) and Q = pQ(:,2). p is input and Q is output of external solver. I want to search for such p that Q is as close as possible to 7.76 with given accuracy. – Aaa A May 19 '22 at 11:05
  • I edited question (ending part) that shows the derivative. – huseyin tugrul buyukisik May 19 '22 at 12:09
  • You can also ask on other sites like https://physics.stackexchange.com/ or https://scicomp.stackexchange.com/ I'm deleting unrelated comments here. – huseyin tugrul buyukisik May 20 '22 at 08:46
  • Hello Huseyin, your help is first class. Thank you very much for it! I need some time to digest it and test it. – Aaa A May 20 '22 at 11:21