I am in the process of making my own system monitoring tool. I'm looking to run a filter (like a Gaussian filter or similar) on a continuous stream of raw data that I'm receiving from a device (my CPU % in this case).
The collection of data values is n elements long. Every time this piece of code runs, it appends the new CPU value and removes the oldest, keeping the collection at a length of n; essentially a deque([float('nan')] * n, maxlen=n), where n is the length of the graph I'm plotting to.
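To be concrete, the setup is roughly this (a minimal sketch; the value 200 is just a placeholder for however many points my graph shows):

from collections import deque

n = 200  # number of points visible on the graph (placeholder value)
values = deque([float('nan')] * n, maxlen=n)  # starts out as all-NaN padding

values.append(get_new_val())  # with maxlen set, appending also drops the oldest element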
Then it runs the whole collection through a Gaussian filter to create the smoothed data points and plots them, giving an animated graph similar to the CPU % graphs found in most system monitors.
This works just fine... however, there has to be a more efficient way to filter the incoming data than running the filter over the whole data set every time a new value is added (in my case the graph updates every 0.2 s).
I can think of ways to do it without filtering the whole list (see the sketch after my example code below), but I'm not sure they are very efficient. Is there anything out there in the signal processing world that will work for me? Apologies if my explanation is a bit confusing; I'm very new to this.
from scipy.ndimage import gaussian_filter1d

# Not my actual code, but hopefully it describes what I'm doing
def animate():  # called every couple hundred milliseconds to update the graph
    # ... other stuff
    values.append(get_new_val())  # values = deque of CPU data points
    line.set_ydata(gaussian_filter1d(values, sigma=4))  # line = the line object used for graphing
    # ... other stuff
    graph_line(line)  # function that draws the line
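For what it's worth, this is the kind of thing I had in mind, just a rough sketch: since gaussian_filter1d with sigma=4 (and the default truncate=4.0) has a kernel radius of about 16 samples, a new value can only change the smoothed curve near the right-hand end, so in principle I could re-filter just the tail and patch it into a saved smoothed array. The names update_smoothed, SIGMA and TAIL are mine, not from any library:

import numpy as np
from scipy.ndimage import gaussian_filter1d

SIGMA = 4
RADIUS = int(4.0 * SIGMA + 0.5)  # kernel radius gaussian_filter1d uses with the default truncate=4.0
TAIL = 4 * RADIUS                # re-filter a bit more than the region a new sample can affect

def update_smoothed(smoothed, raw):
    # Shift the smoothed history left by one and re-filter only the newest samples.
    smoothed = np.roll(smoothed, -1)
    window = np.asarray(raw, dtype=float)[-2 * TAIL:]  # extra left context to reduce edge effects
    smoothed[-TAIL:] = gaussian_filter1d(window, sigma=SIGMA)[-TAIL:]
    return smoothed

# set up once: smoothed = np.full(len(values), np.nan)
# then inside animate(), instead of filtering everything:
#     smoothed = update_smoothed(smoothed, values)
#     line.set_ydata(smoothed)

I think that gives essentially the same numbers as filtering everything (apart from tiny differences near the oldest end of the window when a value falls off), but I don't know if this is the "right" way to do it, hence the question.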
tl;dr: looking for an optimized way to smooth raw streaming data instead of filtering the whole data set every pass.