
I have two time series A and B, where A has length m, B has length n, and m << n. Both have dimension d.

I calculate the distance between A and all subsequences of B by sliding A over B. In Python the code looks like this:

import numpy as np

def sliding_dist(A, B):
    m = len(A)
    n = len(B)
    dist = np.zeros(n)
    for i in range(n - m):
        subrange = B[i:i+m, :]
        distance = np.linalg.norm(A - subrange)
        dist[i] = distance
    return dist

Now this code takes a long time to execute, and I have a great many of these calculations to do, so I need to speed them up. My guess is that I could do this using convolutions and multiplication in the frequency domain (FFT). However, I have been unable to implement it.

Any ideas? :) Thanks

Carl Rynegardh
  • This question is about optimizing code. Those questions belong here: https://codereview.stackexchange.com/ – Anton vBR Dec 23 '17 at 20:27
  • You can indeed use FFTs to speed up convolutions via the frequency domain. However, you don't have a convolution here. – Oliver Charlesworth Dec 23 '17 at 20:30
  • Well, not at the current implementation. Wouldn't it be possible to rewrite the problem to an equivalent convolution though? – Carl Rynegardh Dec 23 '17 at 20:35
  • Have you tried creating a super-A that is just a concatenation of copies of A of length `n`, so that you only have to do a single subtraction and a single call to `norm()`? Not sure what exactly is the bottleneck. Can you specify the order of magnitude for `n` and `m`? – Pavel Dec 23 '17 at 20:36
  • m ~ 10, n ~ 500 – Carl Rynegardh Dec 23 '17 at 20:53
  • It is always a good optimization to represent the algorithm in matrix form, so as to make as few numpy calls as possible. I think this can be done with 2*m+1 numpy calls: one that sets d = numpy.square(A[0] - B[:-m]), then m-1 calls that add d += numpy.square(A[i] - B[i:-m+i]), and finally numpy.sqrt(d, out=d). – Ante Dec 25 '17 at 14:41

2 Answers


norm(A - subrange) isn't a convolution in itself, but it may be expressed as:

sqrt(dot(A, A) + dot(subrange, subrange) - 2 * dot(A, subrange))
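A quick numeric check of this identity (a sketch with random data):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random(10)
subrange = rng.random(10)

# norm(A - s)^2 expands to A.A + s.s - 2*A.s
lhs = np.linalg.norm(A - subrange)
rhs = np.sqrt(np.dot(A, A) + np.dot(subrange, subrange) - 2 * np.dot(A, subrange))
print(np.isclose(lhs, rhs))  # True
```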

How to calculate each term fast:

  • dot(A, A) - this is just a constant.

  • dot(subrange, subrange) - this can be calculated in O(1) per position with a running (incrementally updated) sum of squares.

  • dot(A, subrange) - evaluated at every offset, this is a cross-correlation, i.e. a convolution of B with A reversed. So it can be calculated in the frequency domain via the convolution theorem.1

Note, however, that you're unlikely to see a performance improvement if the subrange size is only 10.


1. AKA fast convolution.
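Putting the three terms together for the 1-D case (d = 1), a sketch might look like the following (assuming SciPy is available for fftconvolve; the name sliding_dist_fft is mine):

```python
import numpy as np
from scipy.signal import fftconvolve

def sliding_dist_fft(A, B):
    m, n = len(A), len(B)
    aa = np.dot(A, A)                                  # constant term
    # dot(subrange, subrange) for every window: running sum of squares
    bb = np.convolve(B * B, np.ones(m), mode='valid')  # length n - m + 1
    # dot(A, subrange) for every window: cross-correlation, i.e. a
    # convolution of B with A reversed, computed via FFT
    ab = fftconvolve(B, A[::-1], mode='valid')         # length n - m + 1
    # clip tiny negatives from floating-point error before the sqrt
    return np.sqrt(np.maximum(aa + bb - 2 * ab, 0))[:n - m]
```

The trailing slice keeps the same n - m values the question's loop produces. As noted, for m around 10 the FFT route may well not beat the direct methods.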

Oliver Charlesworth

Implementation with matrix operations, like I mentioned in the comments. The idea is to evaluate the norm step by step. In your case the i-th value is:

d[i] = sqrt((A[0] - B[i])^2 + (A[1] - B[i+1])^2 + ... + (A[m-1] - B[i+m-1])^2)

The first lines of sd_2 accumulate the sum of squares term by term, and the last line applies the square root.

Speed-up is ~60x.

import numpy
import time

def sliding_dist(A, B):
    m = len(A)
    n = len(B)
    dist = numpy.zeros(n-m)
    for i in range(n-m):
        subrange = B[i:i+m]
        distance = numpy.linalg.norm(A-subrange)
        dist[i] = distance
    return dist

def sd_2(A, B):
    m = len(A)
    dist = numpy.square(A[0] - B[:-m])
    for i in range(1, m):
        dist += numpy.square(A[i] - B[i:-m+i])
    return numpy.sqrt(dist, out=dist)

A = numpy.random.rand(10)
B = numpy.random.rand(500)
x = 1000
t = time.time()
for _ in range(x):
    d1 = sliding_dist(A, B)
t1 = time.time()
for _ in range(x):
    d2 = sd_2(A, B)
t2 = time.time()

print(numpy.allclose(d1, d2))
print('Orig %0.3f ms, second approach %0.3f ms' % ((t1 - t) * 1000., (t2 - t1) * 1000.))
print('Speedup ', (t1 - t) / (t2 - t1))

Update

This is a 're-implementation' of the norm you need, written in matrix operations. It is not flexible if you want some other norm that numpy offers. A different approach is possible: create a matrix of sliding windows of B and take the norm over that whole array, since norm() accepts an axis parameter. Here is an implementation of that approach, but its speed-up is ~40x, slower than the previous one.

def sd_3(A, B):
    m = len(A)
    bb = numpy.empty((len(B) - m, m))
    for i in range(m):
        bb[:, i] = B[i:-m+i]
    return numpy.linalg.norm(A - bb, axis=1)
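As a later comment points out, on NumPy 1.20+ the window matrix can be built without the Python loop using sliding_window_view (a sketch; the name sd_4 is mine):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def sd_4(A, B):
    m = len(A)
    # all length-m windows of B as a (len(B) - m + 1, m) strided view, no copy
    windows = sliding_window_view(B, m)
    # drop the last window so the output matches the n - m values of the loop version
    return np.linalg.norm(A - windows[:len(B) - m], axis=1)
```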
Ante
  • This is great! Could you give an explanation to why this work? :) I'm not completely following. – Carl Rynegardh Dec 28 '17 at 12:34
  • @CarlRynegardh I added some comments. I hope it is enough :-) – Ante Dec 28 '17 at 14:05
  • sd_3 can be implemented using [numpy.lib.stride_tricks.sliding_window_view](https://numpy.org/devdocs/reference/generated/numpy.lib.stride_tricks.sliding_window_view.html). – YouJiacheng May 11 '23 at 22:18