
My current code is the following:

stock_values = np.zeros([path, steps+1])
stock_values[:, 0] = s                     # every path starts at the spot price s
for y in range(0, steps):
    # next column = previous column times the exponential of that step's log-return
    stock_values[:, y+1] = stock_values[:, y] * np.exp(change[:, y])

with:

change = ((r_d - 0.5 * (sigma_d ** 2)) * deltat
          + sigma_d * np.sqrt(deltat) * np.random.normal(0, 1, size=(path, steps))
          + np.random.poisson(lambda_j * deltat, size=(path, steps))
          * np.random.normal(r_j, sigma_j, size=(path, steps)))

stock_values and change are both arrays with 1 000 000 x 1015 elements, so I am running a Monte Carlo simulation with GBM and jump diffusion, 1 000 000 paths and 1045 steps. Like this, the computing time is pretty slow, especially since I would actually like to use 100 000 000 paths. Unfortunately, Python only uses one core for the loop and leaves the other seven unused; for the "change" matrix it is able to use all cores. (Sorry, I do not have good technical/hardware skills and knowledge...)
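
For reference, here is a small self-contained version of the above with placeholder parameter values (my real inputs differ), just so the snippet runs end to end:

import numpy as np

# placeholder parameters, chosen only so the snippet runs -- not my real inputs
path, steps = 1000, 50                     # real case: 1 000 000 paths, ~1 000 steps
s = 100.0                                  # initial stock price
r_d, sigma_d = 0.02, 0.2                   # drift and diffusion volatility
lambda_j, r_j, sigma_j = 0.1, -0.05, 0.1   # jump intensity, jump mean, jump volatility
deltat = 1.0 / 252                         # time step

change = ((r_d - 0.5 * (sigma_d ** 2)) * deltat
          + sigma_d * np.sqrt(deltat) * np.random.normal(0, 1, size=(path, steps))
          + np.random.poisson(lambda_j * deltat, size=(path, steps))
          * np.random.normal(r_j, sigma_j, size=(path, steps)))

stock_values = np.zeros([path, steps + 1])
stock_values[:, 0] = s
for y in range(0, steps):
    stock_values[:, y + 1] = stock_values[:, y] * np.exp(change[:, y])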

I am looking for a function to replace the "for" loop, which calculates column y+1 based on the values in column y, column y+2 based on y+1, and so on up to y+1044.

Any ideas? Many thanks!

1 Answer


A first easy improvement, if you have enough memory, is to move np.exp out of the loop:

stock_values = np.zeros([path, steps+1])
stock_values[:, 0] = s
e = np.exp(change)     # evaluate the exponential once on the whole matrix, outside the loop
for y in range(0, steps):
    stock_values[:, y+1] = stock_values[:, y] * e[:, y]
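
As a quick sanity check on toy-sized, arbitrary inputs, both versions give the same result, since only the place where np.exp is evaluated changes:

import numpy as np

rng = np.random.default_rng(0)
path, steps, s = 4, 6, 100.0                  # toy sizes, just for the check
change = rng.normal(0.0, 0.01, size=(path, steps))

# original version: np.exp evaluated inside the loop
a = np.zeros([path, steps + 1])
a[:, 0] = s
for y in range(steps):
    a[:, y + 1] = a[:, y] * np.exp(change[:, y])

# improved version: np.exp evaluated once on the whole matrix
e = np.exp(change)
b = np.zeros([path, steps + 1])
b[:, 0] = s
for y in range(steps):
    b[:, y + 1] = b[:, y] * e[:, y]

print(np.allclose(a, b))   # True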