Let's assume I have local measurements of temperature, wind speed, air pressure, humidity and so on, in the form of time series, and that's all I know about the world. From time to time, a tornado passes over my probe.
Because a tornado is not just random noise, there is a pattern that a trained eye can recognize in the time series... changes in temperature, wind speed, etc. that are correlated in some fashion, with unpredictable fluctuations on top.
I'd like to do this automatically: recognize the intervals in the time series that correspond to periods when a tornado was "seen" by my detector.
Which machine learning method would be most appropriate for recognizing them, and for giving me some corresponding "reliability coefficient"?
Note that, because a tornado is an inherently unsteady object, which furthermore moves in an erratic way, the detector does not always see the same variations of temperature, wind speed, etc., since the tornado can move back and forth over the detector, locally change its shape, and so on. I guess what I want to say is that the time series measurements do not correspond to the actual spatial profiles of these quantities one could plot in the "rest frame" of the tornado. However, the detector always sees "kind of" the same features, with some randomness on top, that my eye alone would recognize, and this makes me think it is an appropriate task for ML.
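To make the setup concrete, here is a minimal sketch of how I imagine framing this as a supervised learning problem: cut the multivariate time series into sliding windows, compute simple summary features per window, and label each window from my hand-labelled tornado intervals. The array shapes, window length, and feature choices below are just placeholders for illustration, not my actual data.

    import numpy as np

    # Measurements stacked into one array:
    # X_raw has shape (n_timesteps, n_channels), e.g. columns =
    # [temperature, wind_speed, pressure, humidity]
    X_raw = np.random.randn(100000, 4)            # fake data for illustration
    tornado_mask = np.zeros(100000, dtype=bool)   # True where I labelled a tornado
    tornado_mask[40000:40500] = True              # one hand-labelled interval

    window = 200   # samples per window (arbitrary choice)
    step = 50      # hop between consecutive windows

    windows, labels = [], []
    for start in range(0, len(X_raw) - window, step):
        seg = X_raw[start:start + window]
        # Simple per-channel summary features: mean, std, min, max
        feats = np.concatenate([seg.mean(0), seg.std(0), seg.min(0), seg.max(0)])
        windows.append(feats)
        # Call a window "tornado" if most of its samples fall in a labelled interval
        labels.append(tornado_mask[start:start + window].mean() > 0.5)

    X = np.vstack(windows)
    y = np.array(labels)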
Another question: is there a Python ML library that implements the recommended method? (PyBrain, scikit-learn, ...?)
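For example, with scikit-learn I imagine something like the sketch below (using the `X`, `y` windows built above). A random forest is only one possible choice here, and I am assuming its `predict_proba` output could serve as the per-window "reliability coefficient" I mentioned:

    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    # Keep time order (no shuffling) to avoid leaking nearby windows
    # between train and test
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, shuffle=False)

    clf = RandomForestClassifier(n_estimators=200, class_weight="balanced")
    clf.fit(X_train, y_train)

    proba = clf.predict_proba(X_test)[:, 1]   # P(tornado) for each test window
    print("held-out accuracy:", clf.score(X_test, y_test))

Is this roughly the right way to frame the problem, or is there a method better suited to detecting such events in multivariate time series?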