
I have two distributions whose probability density is 0 on [-0.05, 0) and is defined by linear interpolation on [0, 1].

import matplotlib.pyplot as plt
import numpy as np
from scipy import signal

step = 1e-3
x = np.arange(-0.05, 1, step)

x_0minus = x[x<0]
x_0plus = x[x>0]

pdf1 = np.concatenate([np.repeat(0, len(x_0minus)), np.interp(x_0plus, [0, 0.08, 0.28], [0, 80, 0])])
pdf2 = np.concatenate([np.repeat(0, len(x_0minus)), np.interp(x_0plus, [0, 0.1, 0.3, 0.31], [60, 60, 60, 0])])
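For reference, np.interp interpolates linearly between the (xp, fp) knots and clamps to the endpoint values outside them, so pdf1 is a triangle on [0, 0.28] and pdf2 is a plateau at 60 that falls to 0 between 0.3 and 0.31. A quick sketch of that behaviour (the probe points are my own, not from the code above):

```python
import numpy as np

# np.interp clamps outside the knot range; note that for pdf2 it would
# return 60 left of 0, which is why the explicit zero-padding above is
# needed to make the density 0 on [-0.05, 0).
print(np.interp(0.04, [0, 0.08, 0.28], [0, 80, 0]))           # halfway up the rising edge (~40)
print(np.interp(0.5,  [0, 0.08, 0.28], [0, 80, 0]))           # past the last knot -> clamped to 0
print(np.interp(0.2,  [0, 0.1, 0.3, 0.31], [60, 60, 60, 0]))  # on the plateau (60)
```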

We plot these for reference:

plt.scatter(x = x, y = pdf1)

[Plot of pdf1]

plt.scatter(x = x, y = pdf2)

[Plot of pdf2]

I don't understand why, when I convolve these using fftconvolve, the resulting distribution has a non-zero probability density for values below 0. I'd like to remedy this without extending the range to [-1, 1], because that would be a computational waste in my actual use case.

res = signal.fftconvolve(pdf1 / pdf1.sum(), pdf2 / pdf2.sum(), 'same')
plt.scatter(x = x, y = res)

[Plot of the convolution result]
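The picture above is consistent with how mode='same' slices the output: the full discrete convolution has len(pdf1) + len(pdf2) - 1 samples supported on [x[0] + x[0], x[-1] + x[-1]] ≈ [-0.1, 2], and 'same' returns a centred slice of that while the scatter plot still labels the slice with the original x. A minimal sketch of one way to check this, using mode='full' and building the matching axis by hand (the x_full grid is my own construction, not part of the original code):

```python
import numpy as np
from scipy import signal

step = 1e-3
x = np.arange(-0.05, 1, step)
x_neg = x[x < 0]
x_pos = x[x > 0]

pdf1 = np.concatenate([np.zeros(len(x_neg)),
                       np.interp(x_pos, [0, 0.08, 0.28], [0, 80, 0])])
pdf2 = np.concatenate([np.zeros(len(x_neg)),
                       np.interp(x_pos, [0, 0.1, 0.3, 0.31], [60, 60, 60, 0])])

# 'full' keeps every output sample; sample k of the result sits at
# x[0] + x[0] + k*step, so the support starts at -0.1, not -0.05.
res_full = signal.fftconvolve(pdf1 / pdf1.sum(), pdf2 / pdf2.sum(), mode='full')
x_full = 2 * x[0] + step * np.arange(len(res_full))

# On the correct axis, everything below 0 is zero up to FFT round-off.
print(len(res_full) == len(pdf1) + len(pdf2) - 1)
print(np.abs(res_full[x_full < 0]).max())
```

The extra samples beyond x = 1 can simply be discarded afterwards, which avoids padding the inputs out to [-1, 1].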

matsuo_basho
  • Looks to me like the x-axis isn't right. After a convolution, the first value corresponds to the sum of the left ends of the ranges of the arguments, and the last value corresponds to the sum of the right ends of the ranges of the arguments. In this case, the range of the convolution is therefore going to be -0.10 to 2, and the number of elements in that range is going to be n1 + n2 - 1 if I recall correctly. That's assuming fftconvolve is computing exactly the discrete convolution -- you'll want to verify that. My advice is to reduce the number of steps until you get it working. – Robert Dodier May 04 '23 at 01:04
  • The number of elements in the result is equal to 1050, the length of x. Also, by 'reducing the number of steps' - do you mean reducing the step size from 0.001 to something even lower, or do you mean something else? – matsuo_basho May 04 '23 at 03:03
  • 1
    Looking at the documentation for fftconvolve (https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.fftconvolve.html), I think you want `full` (or maybe `valid` -- I'm not sure I understand the description of that) instead of `same`. Something to try, you could work with Octave or Matlab and use the `conv` function -- that's the discrete convolution (probably implemented via FFT) with no surprises. Maybe get the calculation working with `conv` and then port the solution back to Python if you need to. By "reduce the steps" I mean e.g. 20 (larger) steps instead of 1000. – Robert Dodier May 04 '23 at 04:29
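The support rule quoted in the comments can be checked on a toy example (tiny arrays of my own, using np.convolve for the plain discrete convolution):

```python
import numpy as np

# Toy check of the rule above: convolving n1 samples on [a1, b1] with
# n2 samples on [a2, b2] yields n1 + n2 - 1 samples on [a1 + a2, b1 + b2].
f = np.array([1.0, 2.0, 1.0])   # n1 = 3 samples, say on x = 0.0, 0.1, 0.2
g = np.array([1.0, 1.0])        # n2 = 2 samples, say on x = 0.5, 0.6
h = np.convolve(f, g)           # plain discrete convolution, mode='full'

print(len(h))  # 3 + 2 - 1 = 4
print(h)       # samples sit on x = 0.5, 0.6, 0.7, 0.8 (left ends add, right ends add)
```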

0 Answers