
I am trying to vectorise this equation to make it faster (i.e. not use a loop); that was the idea of `sslow` as opposed to `sfast`.

import numpy as np
import scipy as sp
import scipy.integrate
from time import time

mu2 = [1.0, 0.11264281499520618, 0.012799179048180226]
alpha = np.array([   52.64173932, -1016.96156872,  4514.08903276])
def sslow(alpha):
    t0 = time()
    u = lambda x: np.exp(-(1+np.poly1d(list(reversed(alpha)))(x)))
    k = sp.integrate.quad(lambda x: u(x), 1e-16, 1)[0]+np.dot(mu2,alpha),(time()-t0)
    return k
def sfast(alpha):
    t0 = time()
    def int1(b):
        j = 1
        for q in range(0,len(alpha)):
            j = j + alpha[q]*(b**q)
        return np.exp(-j)
    ans, err = sp.integrate.quad(int1, 1e-16, 1)
    u = ans + np.dot(mu2, alpha)
    return u,(time()-t0)
t = []
r = int(1e3)
for d in range(0, r):
    t = np.append(t, sslow(alpha)[1])
print(sum(t)/r)
t = []
for d in range(0, r):
    t = np.append(t, sfast(alpha)[1])
print(sum(t)/r)

Both functions return a `(result, time taken)` tuple; the loops above print the average time per call for each method.

Am I missing something completely? Is there a better way to take the dot product between a vector and a polynomial basis and then integrate?
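An equivalent timing harness using the standard-library `timeit`, for anyone who wants to reproduce the measurement (a sketch, with `sslow` simplified to return only the result):

```python
import timeit
import numpy as np
import scipy.integrate

mu2 = np.array([1.0, 0.11264281499520618, 0.012799179048180226])
alpha = np.array([52.64173932, -1016.96156872, 4514.08903276])

def sslow(alpha):
    # integrand exp(-(1 + alpha[0] + alpha[1]*x + alpha[2]*x**2))
    u = lambda x: np.exp(-(1 + np.poly1d(list(reversed(alpha)))(x)))
    return scipy.integrate.quad(u, 1e-16, 1)[0] + np.dot(mu2, alpha)

# timeit runs the call many times, so per-call overhead averages out
per_call = timeit.timeit(lambda: sslow(alpha), number=1000) / 1000
print(per_call)
```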

EntropicFox
    To really get a meaningful speed test, you must not run it only once. Try to do it 1000 times and then divide the time by that. It might be that what you see is the overhead to invoke the scipy function and that indeed your function with the loop is faster for very small problems. – Joe Jan 08 '18 at 17:18
  • @Joe I have done exactly that and the factor of roughly 5 is still there. – EntropicFox Jan 08 '18 at 23:36
  • Could you please edit your code to a minimum working example, e.g. add the variables `mu2`, `alpha`, etc. that you used and add the imports. You can also add the timing, so people can just copy-paste and run it on their machines easily. – Joe Jan 09 '18 at 13:44
  • @Joe done. Can you think of a better way to create a dot product of polynomials and floats and then integrating? – EntropicFox Jan 09 '18 at 17:10
  • Will take a look. Your example is missing the imports. Does this answer help you? https://stackoverflow.com/a/24066766/7919597 – Joe Jan 10 '18 at 14:21
  • Please try using numpy's `polyval` in combination with a `lambda` instead of poly1d, it might be faster. There are some other things you need to fix: replace `list(reversed(alpha))` with `alpha[::-1]`, the former will slow down your function. You don't need a list and the `[::-1]` will reverse the order. And you don't need to call `quad(lambda x: u(x), ...` in sslow. The lambda is not needed, just use `quad(u, ...`. This will already speed up things a bit. – Joe Jan 10 '18 at 14:28
  • For such small polynomials the for-loops might be fastest, also mentioned here https://stackoverflow.com/a/24067326/7919597. I also saw another speedup when using the newer polynomial class `np.polynomial.polynomial.Polynomial(alpha)` – Joe Jan 10 '18 at 14:55
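A sketch of the changes suggested in the comments: `np.polyval` with `alpha[::-1]`, passing `u` directly to `quad`, and the newer `Polynomial` class (variable names here are illustrative):

```python
import numpy as np
import scipy.integrate

mu2 = np.array([1.0, 0.11264281499520618, 0.012799179048180226])
alpha = np.array([52.64173932, -1016.96156872, 4514.08903276])

# np.polyval expects coefficients highest-power first, so reverse alpha once
coeffs = alpha[::-1]
u = lambda x: np.exp(-(1 + np.polyval(coeffs, x)))

# pass u directly to quad; no wrapping lambda needed
ans = scipy.integrate.quad(u, 1e-16, 1)[0] + np.dot(mu2, alpha)

# the newer polynomial class takes lowest-power-first coefficients directly
p = np.polynomial.polynomial.Polynomial(alpha)
u2 = lambda x: np.exp(-(1 + p(x)))
ans2 = scipy.integrate.quad(u2, 1e-16, 1)[0] + np.dot(mu2, alpha)
```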

1 Answer


The dot product in your function is fine. If `mu2` and `alpha` only have three entries, it might be faster to calculate it explicitly, as `np.dot` has a slight overhead: e.g. use `np.sum(mu2 * alpha)` or even `alpha[0]*mu2[0] + alpha[1]*mu2[1] + alpha[2]*mu2[2]`. Not beautiful, but I found that for very small calculations this outperforms the numpy functions.

I modified your sslow function and it is two times faster than sfast on my machine. Feel free to add the explicit dot product.

import time
import numpy as np
from scipy.integrate import quad

def sslow5(alpha):
    t0 = time.time()

    # Horner evaluation of alpha[0] + alpha[1]*x + alpha[2]*x**2
    u = lambda x: np.exp(-(1 + (alpha[2]*x + alpha[1])*x + alpha[0]))

    k = quad(u, 1e-16, 1)[0] + np.dot(mu2, alpha), (time.time()-t0)
    return k

Well, this only works for polynomials of very low degree, where you can write out the expression explicitly (see https://stackoverflow.com/a/24067326/7919597).
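With the explicit dot product added, it could look like this (a sketch; `sslow6` is an illustrative name, and the three-term form assumes `alpha` has exactly three entries):

```python
import numpy as np
from scipy.integrate import quad

mu2 = [1.0, 0.11264281499520618, 0.012799179048180226]
alpha = np.array([52.64173932, -1016.96156872, 4514.08903276])

def sslow6(alpha):
    # Horner evaluation of alpha[0] + alpha[1]*x + alpha[2]*x**2
    u = lambda x: np.exp(-(1 + (alpha[2]*x + alpha[1])*x + alpha[0]))
    # unrolled dot product avoids np.dot's overhead for tiny vectors
    dot = alpha[0]*mu2[0] + alpha[1]*mu2[1] + alpha[2]*mu2[2]
    return quad(u, 1e-16, 1)[0] + dot
```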

Joe