
I am a newbie to the Julia programming language, so I don't know much about how to optimize code. I have heard that Julia should be faster than Python, but I've written a simple Julia code for solving the FitzHugh–Nagumo model, and it doesn't seem to be faster than Python.

The FitzHugh–Nagumo model equations are:

function FHN_equation(u,v,a0,a1,d,eps,dx)
  u_t = u - u.^3 - v + laplacian(u,dx)
  v_t = eps.*(u - a1 * v - a0) + d*laplacian(v,dx)
  return u_t, v_t
end

where u and v are the variables, which are 2D fields (that is, 2-dimensional arrays), and a0, a1, d, eps are the model's parameters. Both the parameters and the variables are floating-point. dx is the parameter that controls the separation between grid points, for use in the laplacian function, which is an implementation of finite differences with periodic boundary conditions.
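
The laplacian function itself isn't shown in the question. For reference, a minimal vectorised sketch of such a function (a five-point stencil with periodic boundaries built from circshift; this is an assumption about the linked code, not a copy of it) might look like this:

function laplacian(a, dx)
    # Five-point finite-difference Laplacian with periodic boundary
    # conditions, written with circshift so every neighbour wraps around.
    (circshift(a, (1, 0)) + circshift(a, (-1, 0)) +
     circshift(a, (0, 1)) + circshift(a, (0, -1)) - 4*a) / dx^2
end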

If any of you expert Julia coders can give me a hint on how to do things better in Julia, I will be happy to hear it.

The Runge–Kutta step function is:

function uv_rk4_step(Vs,Ps, dt)
  u = Vs.u
  v = Vs.v
  a0=Ps.a0
  a1=Ps.a1
  d=Ps.d
  eps=Ps.eps
  dx=Ps.dx
  du_k1, dv_k1 =    FHN_equation(u,v,a0,a1,d,eps,dx)
  u_k1 = dt*du_k1
  v_k1 = dt*dv_k1
  du_k2, dv_k2 =    FHN_equation((u+(1/2)*u_k1),(v+(1/2)*v_k1),a0,a1,d,eps,dx)
  u_k2 = dt*du_k2
  v_k2 = dt*dv_k2
  du_k3, dv_k3 =    FHN_equation((u+(1/2)*u_k2),(v+(1/2)*v_k2),a0,a1,d,eps,dx)
  u_k3 = dt*du_k3
  v_k3 = dt*dv_k3
  du_k4, dv_k4 =    FHN_equation((u+u_k3),(v+v_k3),a0,a1,d,eps,dx)
  u_k4 = dt*du_k4
  v_k4 = dt*dv_k4
  u_next    =   u+(1/6)*u_k1+(1/3)*u_k2+(1/3)*u_k3+(1/6)*u_k4
  v_next    =   v+(1/6)*v_k1+(1/3)*v_k2+(1/3)*v_k3+(1/6)*v_k4
  return u_next, v_next
end

I've used imshow() from the PyPlot package to plot the u field.
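
The containers Vs and Ps are not defined in the question either. A minimal driving sketch, assuming plain structs with the field names used in uv_rk4_step, purely illustrative parameter values, and the plot call from the question at the end, could look like this (on Julia 0.4/0.5 the keyword would be immutable or type rather than struct):

using PyPlot

# Hypothetical containers matching the field names used in uv_rk4_step;
# the question does not show how Vs and Ps are actually defined.
struct FHNVars
    u::Matrix{Float64}
    v::Matrix{Float64}
end

struct FHNParams
    a0::Float64
    a1::Float64
    d::Float64
    eps::Float64
    dx::Float64
end

# Advance the fields nsteps times with the RK4 stepper from the question.
function run_fhn(Vs, Ps, dt, nsteps)
    for _ = 1:nsteps
        u_next, v_next = uv_rk4_step(Vs, Ps, dt)
        Vs = FHNVars(u_next, v_next)
    end
    return Vs
end

n  = 100
Ps = FHNParams(-0.1, 2.0, 1.0, 0.05, 0.5)      # illustrative values only
Vs = FHNVars(0.1*randn(n, n), 0.1*randn(n, n)) # small random initial fields
Vs = run_fhn(Vs, Ps, 0.01, 1000)

imshow(Vs.u)                                   # plot the u field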

  • Please give a minimal working example that can be run. You need to give the laplacian function and an example call. – David P. Sanders Jun 30 '16 at 11:26
  • At first glance you should remove the vectorised operations and write them as explicit loops. – David P. Sanders Jun 30 '16 at 11:26
  • @DavidP.Sanders the code can be downloaded from the link I've given at GitHub, and in the README I wrote the lines to run the code. – Ohm Jun 30 '16 at 11:51
  • @DavidP.Sanders do you mean that in the FHN_equation function I should write all of the equations as for loops? – Ohm Jun 30 '16 at 11:52
  • Sorry, I hadn't realised that you had a link to the code. Yes, I mean you should try rewriting everything in laplacian and fhn as for loops. – David P. Sanders Jun 30 '16 at 12:03
  • You should also avoid creating a new matrix for the laplacian at every step. – David P. Sanders Jun 30 '16 at 12:04
  • Although it isn't necessary for speed purposes (assuming well-formed code), for questions like this it is often useful to include the type information in your function arguments. This makes it *much* easier for people not familiar with FitzHugh–Nagumo to offer suggestions on your code. – Colin T Bowers Jul 01 '16 at 00:48
  • Even better would be to provide a stand-alone subroutine that simulates inputs and times your function that we can copy and paste into a REPL and then try to improve upon... these are the tricks that usually result in getting the best possible answer out of StackOverflow. – Colin T Bowers Jul 01 '16 at 00:50
  • Supporting @ColinTBowers' comments, it is almost always worth it to add type info regarding the dimensions (not element types) of vectors and matrices, since the functions would not really "work" for types with other interfaces. This also gives a better interface to human readers of the functions. The types can be made as abstract as possible to keep the functions as general as possible going forward. – Dan Getz Jul 01 '16 at 01:42
  • A final comment that might save you time. I'm willing to make a large wager that you've come from a Matlab background (same as me). Your linked GitHub appears to have one function per file (or one type per file). Classic Matlab. You don't need to worry about that with Julia (or most other languages). You can put multiple functions and types in one file and make it a module. It is *way* more convenient once you get used to it. – Colin T Bowers Jul 01 '16 at 02:32
  • @ColinTBowers actually I usually use Python, but I thought that Julia is supposed to be more like Matlab than Python in the sense of how to organize the program. – Ohm Jul 02 '16 at 19:58
  • @Ohm Ha. So I lost my wager :-) No, the standard way to organise code in Julia is in modules. Often an entire module will be one file, although for larger modules the code will be split into several different files that all get collected together using `include`. I've always thought the [Distances](https://github.com/JuliaStats/Distances.jl) package is a great example of how to structure Julia code. – Colin T Bowers Jul 04 '16 at 06:32
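
Following the suggestions in the comments above (rewrite the vectorised operations as explicit loops and avoid allocating a new matrix for the Laplacian at every step), an in-place version of the right-hand side might be sketched as follows. The FHN_equation! name, the preallocated output arrays and the inlined periodic stencil are assumptions for illustration, not code from the question:

# Sketch of a devectorised FHN right-hand side that writes into
# preallocated arrays u_t and v_t and computes the periodic five-point
# Laplacian inline, so no temporary matrices are allocated per call.
function FHN_equation!(u_t, v_t, u, v, a0, a1, d, eps, dx)
    nx, ny = size(u)
    scale = inv(dx*dx)
    @inbounds for j = 1:ny, i = 1:nx
        ip = mod1(i+1, nx); im = mod1(i-1, nx)   # periodic neighbours
        jp = mod1(j+1, ny); jm = mod1(j-1, ny)
        lap_u = (u[im,j] + u[ip,j] + u[i,jm] + u[i,jp] - 4*u[i,j])*scale
        lap_v = (v[im,j] + v[ip,j] + v[i,jm] + v[i,jp] - 4*v[i,j])*scale
        u_t[i,j] = u[i,j] - u[i,j]^3 - v[i,j] + lap_u
        v_t[i,j] = eps*(u[i,j] - a1*v[i,j] - a0) + d*lap_v
    end
    return u_t, v_t
end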

1 Answer


This is not a complete answer, but a taste of an optimization attempt on the laplacian function. The original laplacian on a 10x10 matrix gave me this @time:

0.000038 seconds (51 allocations: 12.531 KB)

While this version:

function laplacian2(a,dx)
    # Computes the Laplacian of a square matrix with periodic boundaries
    # Usage: al = laplacian2(a,dx)
    # where dx is the grid interval
    ns = size(a,1)
    ns != size(a,2) && error("Input matrix must be square")
    aa = zeros(ns+2,ns+2)

    # copy the opposite edges into the padding to enforce periodicity
    for i = 1:ns
        aa[i+1,1]   = a[i,end]
        aa[i+1,end] = a[i,1]
        aa[1,i+1]   = a[end,i]
        aa[end,i+1] = a[1,i]
    end
    # copy the interior of a into the padded array
    for i = 1:ns, j = 1:ns
        aa[i+1,j+1] = a[i,j]
    end
    lap = Array{eltype(a),2}(ns,ns)   # preallocated output (Julia 0.4/0.5 syntax)
    scale = inv(dx*dx)
    # five-point stencil on the padded array
    for i = 1:ns, j = 1:ns
        lap[i,j] = (aa[i,j+1]+aa[i+2,j+1]+aa[i+1,j]+aa[i+1,j+2]-4*aa[i+1,j+1])*scale
    end
    return lap
end

gives this @time:

0.000010 seconds (6 allocations: 2.250 KB)

Notice the reduction in allocations. Extra allocations usually indicate the potential for optimization.
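
To reproduce a comparison along these lines, one could time both versions on the same random 10x10 matrix, for example as below (this assumes the original laplacian is also defined in the session; dx is an illustrative value, and the warm-up calls keep compilation out of the measurement):

a  = rand(10, 10)
dx = 0.1

laplacian(a, dx);  laplacian2(a, dx)   # warm-up: compile both methods first

@time laplacian(a, dx)
@time laplacian2(a, dx)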
