
In Newton's method, to solve a nonlinear system of equations we need to find the Jacobian matrix and the inverse of the Jacobian matrix (which in the 2x2 case can be computed via the determinant).

Here are my component functions,

real function f1(x,y)
    parameter (pi = 3.141592653589793)
    f1 = log(abs(x-y**2)) - sin(x*y) - sin(pi)  
end function f1

real function f2(x,y)
    f2 = exp(x*y) + cos(x-y) - 2 
end function f2

For the 2x2 case I am computing the Jacobian matrix and its inverse like this,

    x = [2,2]
    h = 0.00001
    .
    .
        ! calculate approximate partial derivative
        ! you can make it more accurate by reducing the 
        ! value of h
        j11 = (f1(x(1)+h,x(2))-f1(x(1),x(2)))/h
        j12 = (f1(x(1),x(2)+h)-f1(x(1),x(2)))/h
        j21 = (f2(x(1)+h,x(2))-f2(x(1),x(2)))/h
        j22 = (f2(x(1),x(2)+h)-f2(x(1),x(2)))/h
        ! calculate the Jacobian
        J(1,:) = [j11,j12]
        J(2,:) = [j21,j22]
        ! calculate inverse Jacobian 
        inv_J(1,:) = [J(2,2),-J(1,2)]
        inv_J(2,:) = [-J(2,1),J(1,1)] 
        DET=J(1,1)*J(2,2) - J(1,2)*J(2,1)
        inv_J = inv_J/DET
        .
        .

How do I in Fortran extend this to evaluate a Jacobian for m functions evaluated at n points?

  • You have multiple questions, please ask a single question. Regarding the derivatives, that is a very broad topic. There is a large amount of literature about numerical derivatives and about using symbolic manipulation to derive derivatives. Consider https://scicomp.stackexchange.com – Vladimir F Героям слава Mar 21 '22 at 07:25
  • As Vladimir says, scicomp is probably the place for this (the way you are evaluating those derivatives is, for instance, not likely to be very accurate), but could you just explain what you mean by "not so efficient and scalable"? – Ian Bush Mar 21 '22 at 07:28
  • Like, I'm manually doing this thing for a system of two equations. I was thinking that if the system has more equations then this process will be a nightmare. @IanBush – WhyMeasureTheory Mar 21 '22 at 07:29
  • Loops, arrays and LAPACK are your friends. Loops and arrays are designed for repetitive actions on unknown scale, and if you must invert a matrix (solving the equation is almost certainly better) LAPACK is the way (as it is for all linear algebra, including equation solving) – Ian Bush Mar 21 '22 at 07:45
  • If the question is how do I in Fortran extend this to evaluate a Jacobian for m functions evaluated at n points I think this is quite a good question. But I would delete *by editing the question* the last part and clarify what you want in the main part, otherwise it is likely to be closed for lack of focus. Ask the second part in scicomp. If you do this I'll answer - but I am very busy over the next two days. – Ian Bush Mar 21 '22 at 08:20
  • Actually, I can't use any external package to solve my problem. @IanBush. I will definitely wait to see your answer. Thank you for your response. – WhyMeasureTheory Mar 21 '22 at 08:29
  • Why can you not use any external package? That's stupid, LAPACK is open source and universally available. – Ian Bush Mar 21 '22 at 08:31
  • My course teacher doesn't allow external packages. That's the main issue. @IanBush – WhyMeasureTheory Mar 21 '22 at 09:05
  • Is this a mathematical (numerical analysis) course or a programming course? – Vladimir F Героям слава Mar 21 '22 at 09:45
  • A lab course where we create programs based on what we learnt in the numerical analysis course @VladimirFГероямслава – WhyMeasureTheory Mar 21 '22 at 09:49
  • Numerical analysis is really the topic of Scicomp https://scicomp.stackexchange.com/, Stack Overflow is really about programming. It is important to understand that when asking about numerical methods, people are likely to suggest stuff that is out of the scope of an introductory NA course. But the extension to *m* functions in *n* points is likely on-topic here. I would expect more of your own attempt though, if it is a homework. – Vladimir F Героям слава Mar 21 '22 at 09:53
  • I have already done the homework part. I was just curious how to extend my solution to the m functions case, because in my version of the code I do a lot of stuff manually. @VladimirFГероямслава – WhyMeasureTheory Mar 21 '22 at 09:58
  • In real life, you can try an auto-diff functionality to evaluate the jacobian from the expressions directly (http://www.autodiff.org/?module=Tools&language=Fortran95). – John Alexiou Mar 21 '22 at 10:08

3 Answers

1

Here is a more flexible Jacobian calculator.

The results with the 2×2 test case are what you expect

arguments (x)
   2.00000000000000
   2.00000000000000

 values (y)
   1.44994967586787
   53.5981500331442

 Jacobian
  0.807287239448229        3.30728724371454
   109.196300248300        109.196300248300

I check the results against a symbolic calculation for the given inputs (figure of the symbolic Jacobian omitted).

Console.f90

program Console1
use ISO_FORTRAN_ENV
implicit none

! Variables
integer, parameter :: wp = real64
real(wp), parameter :: pi = 3.141592653589793d0

! Interfaces
interface 
    function fun(x,n,m) result(y)
    import
        integer, intent(in) :: n,m
        real(wp), intent(in) :: x(m)
        real(wp) :: y(n)
    end function
end interface

real(wp) :: h
real(wp), allocatable :: x(:), y(:), J(:,:)
! Body of Console1

x = [2d0, 2d0]
h = 0.0001d0

print *, "arguments"
print *, x(1)
print *, x(2)

y = test(x,2,2)
print *, "values"
print *, y(1)
print *, y(2)

J = jacobian(test,x,2,h)

print *, "Jacobian"
print *, J(1,:)
print *, J(2,:)

contains

function test(x,n,m) result(y)
! Test case per original question
    integer, intent(in) :: n,m
    real(wp), intent(in) :: x(m)
    real(wp) :: y(n)
    
    y(1) = log(abs(x(1)-x(2)**2)) - sin(x(1)*x(2)) - sin(pi)  
    y(2) = exp(x(1)*x(2)) + cos(x(1)-x(2)) - 2 
    
end function
   
function jacobian(f,x,n,h) result(u)
    procedure(fun) :: f
    real(wp), intent(in) :: x(:)
    integer, intent(in) :: n
    real(wp), intent(in) :: h
    real(wp), allocatable :: u(:,:)
    integer :: j, m
    real(wp), allocatable :: y1(:), y2(:), e(:)
    
    m = size(x)
    allocate(u(n,m))
    
    do j=1, m
        e = element(j, m)    ! Get kronecker delta for j-th value
        y1 = f(x-e*h/2,n,m)
        y2 = f(x+e*h/2,n,m)
        u(:,j) = (y2-y1)/h   ! Finite difference for each column
    end do        
    
end function

function element(i,n) result(e)
! Kronecker delta vector. All zeros, except the i-th value.
integer, intent(in) :: i, n
real(wp) :: e(n)
    e(:) = 0d0
    e(i) = 1d0        
end function

end program Console1
John Alexiou
  • Could you explain the role of the interfaces here? @JohnAlexiou And what does `wp` stand for? Also, I get an error when compiling the program with the latest gfortran; you can check the error log [here](https://rextester.com/ZSSD43187) – WhyMeasureTheory Mar 21 '22 at 18:10
  • I was unable to debug your code, as it was too hard to trace the error because of the allocatable parts. But what I like about the code is the element function and how the finite difference uses it. I would be even happier if you could fix the error. – WhyMeasureTheory Mar 21 '22 at 18:20
  • @WhyMeasureTheory - there might be differences between gfortran and intel fortran I am using. `wp` is an integer designating the working precision. So `real(wp)` defines a 64bit floating-point number. As for why the function pointer isn't working, I am not sure. The interface `fun` defines a general function, and `procedure(fun), pointer` means a pointer to a function. Here I am pointing to `test()` as the first argument of the `jacobian()` function. – John Alexiou Mar 22 '22 at 00:57
0

I will answer about evaluation at different points. This is quite simple. You just need an array of points, and if the points are on some regular grid, you may not even need that.

You may have an array of xs and array of ys or you can have an array of derived datatype with x and y components.

For the former:

real, allocatable :: x(:), y(:)

x = [...  !probably read from some data file
y = [...

do i = 1, size(x)
  J(i) = Jacobian(f, x(i), y(i))
end do

If you want to have many functions at the same time, the problem is always in representing functions. Even if you have an array of function pointers, you need to code them manually. A different approach is to have a full algebra module, where you enter some string representing a function and you can evaluate such function and even compute derivatives symbolically. That requires a parser, an evaluator, it is a large task. There are libraries for this. Evaluation of such a derivative will be slow unless further optimizing steps (compiling to machine code) are undertaken.
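For what it's worth, an array of function pointers held in a derived type could be sketched like this (the type name and the two test functions are illustrative, not from the question):

```fortran
program funarray
    implicit none
    abstract interface
        real function fun_t(x, y)
            real, intent(in) :: x, y
        end function
    end interface
    ! Derived type wrapping a procedure pointer, so we can make an array of them
    type :: fptr
        procedure(fun_t), pointer, nopass :: p => null()
    end type
    type(fptr) :: fs(2)
    integer :: i

    fs(1)%p => f1
    fs(2)%p => f2
    do i = 1, 2
        print *, fs(i)%p(2.0, 2.0)   ! evaluate each function at the same point
    end do
contains
    real function f1(x, y)
        real, intent(in) :: x, y
        f1 = x + y
    end function
    real function f2(x, y)
        real, intent(in) :: x, y
        f2 = x * y
    end function
end program
```

The indirection through a derived type is needed because Fortran does not allow arrays of procedure pointers directly.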

Numerical evaluation of the derivative is certainly possible. It will slow the convergence somewhat, depending on the order of the approximation of the derivative. You do a difference of two points for the numerical derivative. You can make an interpolating polynomial from values in multiple points to get a higher-order approximation (finite difference approximations), but that costs machine cycles.
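As a minimal sketch of the accuracy trade-off (the function g and the step size are illustrative): the forward difference used in the question has O(h) error, while a central difference is O(h**2) for one extra function evaluation per point.

```fortran
! Forward:  (f(x+h) - f(x))   / h      -> error O(h)
! Central:  (f(x+h) - f(x-h)) / (2*h)  -> error O(h**2)
program diff_demo
    implicit none
    real :: x, h
    x = 2.0
    h = 1e-3
    print *, (g(x+h) - g(x))/h           ! forward difference
    print *, (g(x+h) - g(x-h))/(2.0*h)   ! central difference, closer to cos(2)
contains
    real function g(x)
        real, intent(in) :: x
        g = sin(x)
    end function
end program
```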

  • I think the question is really how to write the function `Jacobian(f,i,j)` that returns the _numeric difference_ of value `f(i)` with respect to variable `x(j)`. – John Alexiou Mar 21 '22 at 12:49
  • 1
    @JohnAlexiou Yes, I forgot the indexing in `i` (your `j`) but I hopefully explained why I won't even bother to show the `f(i)` bit. I surely could show how to index an array of function pointers (through a derived type), but is it really worth it? I do not think it is. – Vladimir F Героям слава Mar 21 '22 at 15:02
0

Normally we can use auto-difference tools as @John Alexiou mentioned. However, in practice I prefer using MATLAB to derive the Jacobian analytically and then use its built-in function fortran() to convert the result to an f90 file. Take your functions as an example; just type this into MATLAB

syms x y
Fval=sym(zeros(2,1));
Fval(1)=log(abs(x-y^2)) - sin(x*y) - sin(pi);
Fval(2)=exp(x*y) + cos(x-y) - 2;
X=[x;y];
Fjac=jacobian(Fval,X);
fortran(Fjac) 

which will yield

  Fjac(1,1) = -y*cos(x*y)-((-(x-y**2)/abs(-x+y**2)))/abs(-x+y**2)
  Fjac(1,2) = -x*cos(x*y)+(y*((-(x-y**2)/abs(-x+y**2)))*2.0D0)/abs(-
      &x+y**2)
  Fjac(2,1) = -sin(x-y)+y*exp(x*y)
  Fjac(2,2) = sin(x-y)+x*exp(x*y)

to you. This gives you an analytical Jacobian as Fortran code.
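As a sketch, the generated expressions can be pasted into a Fortran function like this (the wrapper and program names are illustrative):

```fortran
program check_jac
    implicit none
    real(8) :: J(2,2)
    J = fjac(2d0, 2d0)   ! evaluate the analytical Jacobian at x = y = 2
    print *, J(1,:)
    print *, J(2,:)
contains
    function fjac(x, y) result(Fjac)
        real(8), intent(in) :: x, y
        real(8) :: Fjac(2,2)
        ! Expressions as emitted by MATLAB's fortran(), continuation lines joined
        Fjac(1,1) = -y*cos(x*y)-((-(x-y**2)/abs(-x+y**2)))/abs(-x+y**2)
        Fjac(1,2) = -x*cos(x*y)+(y*((-(x-y**2)/abs(-x+y**2)))*2.0d0)/abs(-x+y**2)
        Fjac(2,1) = -sin(x-y)+y*exp(x*y)
        Fjac(2,2) = sin(x-y)+x*exp(x*y)
    end function
end program
```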

Meanwhile, it is impossible to invert an mxn matrix when m and n differ, since only square matrices have inverses. You should reduce the system of equations to get an nxn Jacobian.

Additionally, when we use the Newton-Raphson method we do not compute the inverse of the Jacobian, which is time-consuming and inaccurate for a large system. An easier way is to use dgesv from LAPACK for a dense Jacobian, as we only need to solve the vector x from the system of linear equations

Jx=-F

dgesv uses LU decomposition with Gaussian elimination to solve the above system of equations, which is much faster than computing the inverse matrix.
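A minimal sketch of that call for the Newton step (the example Jacobian and residual values are made up; link with -llapack):

```fortran
! Solve J * dx = -F for the Newton update dx
program newton_step
    implicit none
    integer, parameter :: n = 2
    real(8) :: J(n,n), F(n), dx(n)
    integer :: ipiv(n), info

    J = reshape([2d0, 1d0, 1d0, 3d0], [n,n])   ! example Jacobian
    F = [1d0, 2d0]                             ! example residual vector
    dx = -F                       ! dgesv overwrites the RHS with the solution
    call dgesv(n, 1, J, n, ipiv, dx, n, info)  ! LU factorize J, then solve
    if (info /= 0) stop "dgesv failed"
    print *, dx                   ! Newton update: x_new = x_old + dx
end program
```

Note that dgesv destroys J (it is replaced by its LU factors), so keep a copy if you need the Jacobian afterwards.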

If the system of equations is large, you can use UMFPACK and its Fortran interface module mUMFPACK to solve the system of equations where J is a sparse matrix, or use the subroutines ILUD and LUSOL from the widespread sparse matrix library SPARSKIT2.

In addition to these, there are tons of other methods that try to solve Jx=-F faster and more accurately, such as the Generalized Minimal Residual method (GMRES) and the Stabilized Bi-Conjugate Gradient method (BiCGSTAB), on which there is a large body of literature.