
I have run into a problem with deconvolution of pseudo-real data. What I want to do is create a pulse profile, convolve it with a Gaussian, add a little noise, and deconvolve. However, even this slight discrepancy (as with any finite number of measurements) destroys the deconvolution result.

Images of the original pulse, the pulse convolved with scipy.signal.convolve, and the pulse convolved by the function described later:

As you can see in this figure, the profiles of both convolved pulses are 'almost' identical. However...

Here is the data after deconvolution:

You can see I got two completely different results :/ Any ideas on how to handle this problem?

The code I used is here:

import numpy
import matplotlib.pyplot as plt
import scipy.signal

def create_pulse(length, peak):
    # Creates a pulse shape: linear rise up to `peak`, then a slower decay.
    pulse_zeros = numpy.zeros(length)

    for i in range(len(pulse_zeros)):
        if i <= peak:
            pulse_zeros[i] = float(i)/float(peak)
        if i > peak:
            pulse_zeros[i] = float((len(pulse_zeros)-i)*peak)/(float(i)*(len(pulse_zeros)-peak))
    return pulse_zeros


plt.plot(create_pulse(200,10))
plt.show()


def normalize(data):
    data = data/max(data)
    return data


def mesh_up(pulse, region_l, sigma, repetitions):
    # Adds multiple pulses at random start points, normally distributed
    # around region_l/2 with the given sigma.
    pulse_len = len(pulse)
    region = numpy.zeros(region_l)
    starts = numpy.random.normal(loc=region_l/2, scale=sigma, size=repetitions).round().astype(int)

    for pos in starts:
        region[pos: pos + pulse_len] += pulse

    return region



def one_up(pulse, region_l, sigma, repetitions):
    # Places a single pulse at region_l/2 for use with scipy.signal.convolve.
    pulse_len = len(pulse)
    region = numpy.zeros(region_l)
    starts = [int(region_l/2)]

    for pos in starts:
        region[pos: pos + pulse_len] += pulse

    return region



gauss = numpy.exp(-((numpy.linspace(0, 100, 101) - 50.)/25.)**2)

plt.plot(gauss)
plt.show()


result_a = mesh_up(create_pulse(200, 10), 1000, 25/1.42, 10000)

signal = one_up(create_pulse(200, 10), 1000, 25, 1)
result_b = scipy.signal.convolve(signal, gauss, mode='same')

plt.plot(signal, label='original')
plt.plot(normalize(result_a), label='rand_generated')
plt.plot(normalize(result_b), label='convolve')
plt.legend()
plt.show()


deconv_a, _ = scipy.signal.deconvolve(normalize(result_a), gauss)
deconv_b, _ = scipy.signal.deconvolve(normalize(result_b), gauss)

plt.plot(normalize(deconv_a), label = 'rand_generated')
plt.plot(normalize(deconv_b), label = 'convolve')
plt.legend()
plt.show()
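Edit: following the discussion of the Gibbs phenomenon in the comments, one noise-tolerant alternative to scipy.signal.deconvolve (which does exact polynomial division and so amplifies noise at frequencies where the Gaussian kernel is nearly zero) is a regularized inverse filter in the frequency domain. This is only a sketch; the function name and the eps constant are my own choices to be tuned, not anything from the measurements:

```python
import numpy

def wiener_deconvolve(data, kernel, eps=1e-6):
    # Zero-pad the kernel to the data length so both FFTs have equal size.
    kernel_padded = numpy.zeros(len(data))
    kernel_padded[:len(kernel)] = kernel

    D = numpy.fft.fft(data)
    K = numpy.fft.fft(kernel_padded)

    # Plain division D/K explodes wherever |K| is tiny (the high
    # frequencies of a Gaussian); eps damps exactly those frequencies.
    recovered = D * numpy.conj(K) / (numpy.abs(K)**2 + eps)
    return numpy.real(numpy.fft.ifft(recovered))
```

A larger eps suppresses more noise at the cost of extra smoothing of the recovered pulse, so the discontinuity-driven oscillations can be traded against resolution.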
Petr Synek
  • Could you explain the correspondence between your two plots? I can't see the randomly generated signal on your first plot. On the second one, the function called "random" clearly demonstrates a Gibbs phenomenon near the right side - probably due to a point of discontinuity there, where the Fourier series starts its crazy oscillations. You can disregard it; otherwise "random" is just a constant. – Boris Burkov Mar 15 '17 at 10:17
  • I am sorry, I will make a new plot, but basically both plots overlap so perfectly that it is barely noticeable :) I don't understand the part about disregarding it and "random" being just a constant – Petr Synek Mar 15 '17 at 17:36
  • OK, I have looked up the Gibbs phenomenon. I understand the basics but I have no time now to dig into it. Is there some simple solution (some piece of already written code) that would allow me to perform deconvolution with the simplicity of deconvolve while handling discontinuities? – Petr Synek Mar 15 '17 at 20:21
  • I'm really puzzled by the fact that your pulse wasn't reproduced on the `rand_generated` plot. I suspect that the domain of your `result_b` is too short and there are lots of zeros at the right side of result_b. They probably hide your initial pulse totally. Maybe you could try enlarging its domain and everything will work then? – Boris Burkov Mar 16 '17 at 09:35
  • Thanks for the answer; unfortunately I can't say I understand what you mean. result_b is created by convolving a single pulse profile using scipy.convolve, and as the Gaussian kernel is smaller than the sample, the ends should be zeros by definition. – Petr Synek Mar 16 '17 at 22:00
  • I disagree that the ends should be zeros, even if the Gaussian kernel is smaller. I understand convolution with a Gaussian kernel as a Gaussian blur in Photoshop. Imagine an old image with a dot pattern: https://upload.wikimedia.org/wikipedia/commons/7/75/Dot_pattern_on_old_photograph.jpg. You want to get rid of those dots. So, you take each horizontal line of this image (each coordinate being a single dot's intensity from 0 to 256) and replace the color of each pixel with a weighted sum of its neighbors, where the weights are Gaussian - the closest pixels have the largest impact. You won't get zeros at the ends. – Boris Burkov Mar 17 '17 at 09:29
  • 1
    Well, yes... but i have data list where i have 1000 points from which only 500-600 are non zeroes. So if i use 100 point broad gaussian kernel points up to 400 and over 700 must from definition be zeros. – Petr Synek Mar 18 '17 at 17:57

0 Answers