I am trying to compare different iterative sub-solvers for the Newton equation system arising from a Newton–Fischer reformulation of the LCP (linear complementarity problem). So far I have implemented an exact solver, Gauss–Seidel, and a modified Newton method that uses MATLAB's bicg as a sub-solver for the equation J*h = -p (where J is the Jacobian, p is the value of the Fischer function, and h is my update step).

Here is the part of the code where I implemented the bicg sub-solver and the exact solver:

if itt
    % use iterative solver for newton eq
    while ~all(fischer(x, A*x+b) == 0) && its < max_it
        % compute the Jacobian
        J = eval_jacobian(A, b, x);
        % compute the value of the Fischer function
        p = fischer(x, A*x + b);
        % the natural merit function for convergence measure
        residual(its) = .5*(p'*p);
        % the newton eq, solve J*h = -p
        h = bicg(J, -p, eps, s_its);
        % update the solution vector 
        x = x + h;
        % increment the iteration counter
        its = its + 1;
    end
else
    % the exact solver for newton equation
    while ~all(fischer(x, A*x+b) == 0) && its < max_it
        % compute the Jacobian
        J = eval_jacobian(A, b, x);
        % compute the value of the Fischer function
        p = fischer(x, A*x + b);
        % the natural merit function for convergence measure
        residual(its) = .5*(p'*p);
        % the newton eq, solve J*h = -p
        h = -J\p;
        % update the solution vector 
        x = x + h;
        % increment the iteration counter
        its = its + 1;
    end
end

So, my question is: what other iterative sub-solvers would you use? I don't mind implementing them myself if MATLAB doesn't provide a built-in function. I realize this is more of a theoretical question. Thank you.

Nanami

1 Answer

In addition to bicg there are a number of other iterative solvers for general unsymmetric systems that can sometimes offer better convergence, such as bicgstab and gmres. Both methods are similar in spirit to bicg, but use different update strategies within the Krylov subspace. MATLAB has implementations of both, so you could test them on your problem.
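As a minimal sketch, either solver can be dropped into your loop in place of the bicg call (the tolerance and iteration values here are illustrative, not taken from your code):

```matlab
% Solving the Newton step J*h = -p with alternative Krylov solvers.
tol   = 1e-10;   % convergence tolerance (illustrative value)
s_its = 100;     % maximum sub-solver iterations

% BiCGSTAB: often gives smoother convergence than plain BiCG
[h, flag] = bicgstab(J, -p, tol, s_its);

% GMRES: restarted variant; the restart length trades memory for robustness
restart = 20;
[h, flag] = gmres(J, -p, restart, tol, s_its);

% flag == 0 indicates the solver converged to the requested tolerance
```

Capturing the flag output also suppresses the convergence messages bicg-family solvers print when called with a single output.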

In general, when using Krylov iterative solvers, you can often achieve better performance by incorporating preconditioning. This is a pretty involved subject - if you're not already familiar, here is as good a place as any to start. I'd recommend starting with a simple Jacobi preconditioner diag(J) and going from there.
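A Jacobi preconditioner is a one-liner with MATLAB's solvers, since they all accept a preconditioner matrix as an extra argument. A sketch, assuming the J and p from your code (tolerance and iteration count are illustrative):

```matlab
% Jacobi (diagonal) preconditioner: M approximates J but is trivial to invert.
M = diag(diag(J));                    % extract the diagonal of J
[h, flag] = bicg(J, -p, 1e-10, 100, M);
```

For sparse J you would build M with spdiags instead, to avoid forming a dense matrix.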

Iterative solvers can also sometimes gain additional efficiency through the so-called "matrix-free" approach - the solver only needs to compute matrix-vector products J*v for arbitrary vectors v; it never needs the full matrix J. It may be possible to write an efficient sub-routine that computes J*v without forming J explicitly, but this will depend on the details of your particular problem.
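MATLAB's Krylov solvers support this directly: you pass a function handle in place of the matrix. A sketch, where jacobian_times is a hypothetical helper that applies the Jacobian of the Fischer function to a vector without assembling J:

```matlab
% Matrix-free gmres: afun(v) must return J*v for the current iterate x.
afun = @(v) jacobian_times(A, b, x, v);   % hypothetical matrix-free product
restart = 20;
[h, flag] = gmres(afun, -p, restart, 1e-10, 100);
```

Note that bicg specifically would also require the handle to support transposed products (it is called as afun(v, 'transp') as well as afun(v, 'notransp')), so gmres or bicgstab are easier starting points for a matrix-free implementation.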

A last consideration is the programming language itself - MATLAB uses highly optimised library code for its direct solver J\p, while the Krylov methods are implemented as m-code, which generally incurs a performance penalty.

Hope this helps.

Darren Engwirda