
I'm working on a camera scanning application, and I've noticed that there is a degree of vibration after movement (some axes are worse than others). I've attached an I2C accelerometer/gyroscope (MPU 6050) as close to the image sensor as I can get, and I'd like to determine the start and stop of the vibration periods after moving an axis so that I can delay image capture until movement has returned to a baseline.

My current approach is to capture a baseline, normalize the data coming from each axis of the accelerometer (I'm not really using the gyroscope data at the moment), and then use the normalized baseline and the min and max values of the baseline window to set a threshold one order of magnitude above the baseline min/max spread. I sample every 50 ms and wait for a return to baseline before capturing.
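For reference, here is a minimal sketch of that approach as I understand it; the sample tuples, the `k` multiplier, and the function names are my own placeholders, not actual driver code:

```python
import statistics

def calibrate(samples):
    """samples: list of (ax, ay, az) tuples collected while the rig is still.
    Returns a per-axis baseline mean and the min/max spread of that window."""
    axes = list(zip(*samples))
    baseline = tuple(statistics.mean(a) for a in axes)
    spread = tuple(max(a) - min(a) for a in axes)
    return baseline, spread

def is_settled(sample, baseline, spread, k=1.0):
    """Settled when every axis is within k * its calibration spread of the
    baseline mean. k would be tuned per axis since some vibrate worse."""
    return all(abs(s - b) <= k * w
               for s, b, w in zip(sample, baseline, spread))
```

In the real loop I poll the sensor every 50 ms and only trigger capture after `is_settled` holds for several consecutive samples, so a single quiet reading mid-oscillation doesn't fire early.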

Is there a better/preferred way to do this, or a way to improve on my approach?

flimsy

0 Answers