
I want to improve, step by step, as unevenly sampled data arrive, the estimate of the first derivative at t = 0 s. For example, suppose you want to find the initial velocity of a projectile, but you do not know its final position and velocity; instead, you are (slowly) receiving measurements of the projectile's current position and time.


Update - 26 Aug 2018

I would like to give you more details:

"Unevenly sampled data" means the time intervals are not regular (irregular times between successive measurements). However, the data have roughly the same sampling period, about 15 min. Thus, some measurements show no change, because of the nature of the phenomenon (heat transfer). The data follow an exponential tendency and I can fit them to a known model, but that requires a substantial amount of data. For practical purposes, I only need the value of the very first slope of the whole process.

I tried a progressive Weighted Least Squares (WLS) fitting procedure, with a weight matrix such as

W = diag((0.5).^(1:kk)); % where kk is the last measurement id
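In Python/NumPy terms, the same weighting scheme could look like this (a minimal sketch; the quadratic model form and the function name are illustrative assumptions, not my actual code):

```python
import numpy as np

def wls_initial_slope(t, y, decay=0.5):
    """Weighted LS fit of y ~ a + b*t + c*t^2, with weight decay^k on the
    k-th sample (so later samples count less), returning b = dy/dt at t = 0."""
    t = np.asarray(t, dtype=float)
    y = np.asarray(y, dtype=float)
    kk = len(t)
    w = decay ** np.arange(1, kk + 1)            # W = diag(0.5.^(1:kk))
    X = np.column_stack([np.ones(kk), t, t**2])  # quadratic design matrix
    # Weighted normal equations solved via sqrt-weighted ordinary LS
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
    return coef[1]  # coefficient of t, i.e. the slope at t = 0

# Example: y = 3 - 2*t + 0.1*t^2 sampled at uneven times
t = np.array([0.0, 0.9, 2.1, 3.0, 4.2])
y = 3 - 2 * t + 0.1 * t**2
print(wls_initial_slope(t, y))  # ≈ -2, the true slope at t = 0
```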

But it was using preprocessed data (i.e., jitter removal, smoothing, and fitting with the theoretical functional form). It gave me the following result:

This is a real example of the problem and its "current solution"

It is good enough for me, but I would like to know whether there is an optimal way of doing this using the raw data (or smoothed data).

Jorge Crvz
  • Do you mean that you want to extrapolate the speed? –  Aug 25 '18 at 18:13
  • Dear @RoryDaulton, thanks for your welcoming words. I'm sorry, I wasn't crystal-clear. I updated the question with more details. – Jorge Crvz Aug 27 '18 at 00:30
  • Your question is much better now, so I have removed my downvote and vote to close. However, the improved question is outside my areas of expertise so I cannot help. – Rory Daulton Aug 27 '18 at 00:34
  • @YvesDaoust It is more like a reverse problem. I want to estimate the initial velocity from the position and the measurement time. The more information arrives, the estimation will improve (and stabilise)... – Jorge Crvz Aug 27 '18 at 00:42
  • @RoryDaulton, I appreciate your comments and time. – Jorge Crvz Aug 27 '18 at 00:44
  • @JorgeCruz: beware that stabilization does not mean that you are converging to the true value: you achieve variance reduction, most probably with a bias on the mean. –  Aug 27 '18 at 12:25
  • This problem in treated in Cornelius Lanczos' Applied Analysis, and Corless' book "A Graduate Introduction to Numerical Analysis", problem 11.19. – user14717 Feb 09 '19 at 23:40

1 Answer


IMO, additional data are not relevant to improving the estimate at zero, because perturbations come into play and the correlation between the first and last samples decreases.

Also, the asymptotic behavior of the phenomenon is probably not known rigorously (is it truly a first-order linear model?), and this can bias the estimate.

I would stick to the first points (say, up to t = 20) and fit a simple model, say a quadratic.
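A minimal sketch of that suggestion (the function name and the t ≤ 20 cutoff are illustrative): keep only the early samples, fit a quadratic, and read off the linear coefficient, which is the derivative at t = 0.

```python
import numpy as np

def initial_slope_early_fit(t, y, t_max=20.0):
    """Fit y ~ a + b*t + c*t^2 on the samples with t <= t_max;
    return b, the estimated derivative at t = 0."""
    t = np.asarray(t, dtype=float)
    y = np.asarray(y, dtype=float)
    mask = t <= t_max
    c, b, a = np.polyfit(t[mask], y[mask], deg=2)  # highest power first
    return b

# Uneven samples of an exponential decay with y'(0) = -10/40 = -0.25
t = np.array([0.0, 1.5, 3.2, 7.0, 12.0, 18.0, 30.0, 55.0])
y = 10.0 * np.exp(-t / 40.0)
print(initial_slope_early_fit(t, y))  # close to the true y'(0) = -0.25
```

Restricting the fit window keeps the quadratic a good local approximation, at the cost of using fewer points.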


If in fact what you are trying to do is fit a first-order linear model to the data, then least-squares fitting on the raw data is fine. If there are significant outliers, robust fitting is preferable.
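For the robust variant, one option is SciPy's `least_squares` with a robust loss; this sketch assumes the exponential relaxation model y(t) = y∞ + (y₀ − y∞)·exp(−t/τ), whose initial slope is −(y₀ − y∞)/τ (the model form and starting values are assumptions, not prescribed by the question):

```python
import numpy as np
from scipy.optimize import least_squares

def robust_initial_slope(t, y):
    """Robustly fit y ~ y_inf + amp*exp(-t/tau) and return the
    initial slope y'(0) = -amp/tau."""
    t = np.asarray(t, dtype=float)
    y = np.asarray(y, dtype=float)

    def residuals(p):
        y_inf, amp, tau = p
        return y_inf + amp * np.exp(-t / tau) - y

    # Crude but serviceable starting guesses from the data
    p0 = [y[-1], y[0] - y[-1], (t[-1] - t[0]) / 3.0]
    fit = least_squares(residuals, p0, loss="soft_l1", f_scale=0.1)
    y_inf, amp, tau = fit.x
    return -amp / tau

t = np.linspace(0.0, 60.0, 25)
y = 2.0 + 8.0 * np.exp(-t / 20.0)   # true y'(0) = -8/20 = -0.4
y[5] += 3.0                          # inject an outlier
print(robust_initial_slope(t, y))   # close to the true y'(0) = -0.4
```

The `soft_l1` loss down-weights the outlier's residual, so the slope estimate stays near the true value where plain least squares would be pulled off.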