
I'm implementing Monte Carlo localization for a robot that is given a map of the environment and its starting location and orientation. My approach is as follows:

  1. Uniformly create 500 particles around the given position
  2. Then at each step:
    • motion-update all the particles with odometry (my current approach is newX = oldX + odometryX * (1 + standardGaussianRandom), etc.)
    • assign a weight to each particle using the sonar data (for each sensor, probability *= gaussianPDF(realReading), where the Gaussian has mean predictedReading)
    • return the particle with the highest probability as the location estimate at this step
    • then resample 9/10 of the new particles from the old ones according to their weights, and sample the remaining 1/10 uniformly around the predicted position
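The motion and weighting steps above could be sketched roughly like this (Python; `predict_reading` is a hypothetical function that ray-casts a particle's pose into the map, and the noise level and sonar sigma are placeholder values, not anything from the question):

```python
import math
import random

def motion_update(particles, odo_dx, odo_dy, odo_dtheta, noise=0.1):
    """Propagate each (x, y, theta) particle by the odometry reading,
    perturbed multiplicatively as described in the question."""
    moved = []
    for (x, y, theta) in particles:
        dx = odo_dx * (1.0 + random.gauss(0.0, noise))
        dy = odo_dy * (1.0 + random.gauss(0.0, noise))
        dtheta = odo_dtheta * (1.0 + random.gauss(0.0, noise))
        moved.append((x + dx, y + dy, theta + dtheta))
    return moved

def gaussian_pdf(x, mean, sigma):
    return math.exp(-0.5 * ((x - mean) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def weight_particles(particles, sonar_readings, predict_reading, sigma=0.5):
    """Weight each particle by the product of per-sensor Gaussian likelihoods,
    then normalize so the weights sum to one."""
    weights = []
    for p in particles:
        w = 1.0
        for i, real in enumerate(sonar_readings):
            w *= gaussian_pdf(real, predict_reading(p, i), sigma)
        weights.append(w)
    total = sum(weights)
    if total > 0.0:
        return [w / total for w in weights]
    return [1.0 / len(particles)] * len(particles)  # all weights underflowed
```

Note that the multiplicative noise model shrinks to zero when the robot stands still; a common alternative is additive noise whose standard deviation scales with the motion magnitude.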

Now, I wrote a simulator for the robot's environment, and here is how this localization behaves: http://www.youtube.com/watch?v=q7q3cqktwZI

I'm worried that over a longer period of time the robot may get lost. If I spread the particles over a wider area, the robot gets lost even more easily.

I expected better performance. Any advice?

Andrei Ivanov
  • Please don't ask the same question on [multiple stack exchange sites](http://robotics.stackexchange.com/q/2337/37). If you accidentally ask on the wrong site, it can be migrated to the correct one. – Mark Booth Jan 22 '14 at 02:13
  • migrate it to robotics – Neo Feb 04 '14 at 06:43

1 Answer


The biggest mistake is that you take the particle with the highest weight as your posterior state. This contradicts the main idea of the particle filter.

The set of particles you updated with the odometry readings is your proposal distribution. By taking only the particle with the highest weight into account, you completely ignore this distribution. It would be the same as if you just randomly spread particles over the whole state space and then took the one particle that best explains the sonar data. You would be relying only on the sonar readings, and since sonar data is very noisy, your estimate would be very poor. A better approach is to assign a weight to each particle, normalize the weights, multiply each particle's state by its weight, and sum them up to obtain your posterior state.
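As a minimal sketch of that weighted-mean estimate (assuming `(x, y, theta)` particles and already-normalized weights; the orientation is averaged via sine and cosine so that angles near ±pi don't cancel out):

```python
import math

def weighted_mean_estimate(particles, weights):
    """Posterior estimate as the weighted average of the particle states.

    Assumes weights are normalized (sum to 1). The heading is averaged on
    the unit circle via atan2 to handle angle wrap-around correctly.
    """
    x = sum(w * p[0] for p, w in zip(particles, weights))
    y = sum(w * p[1] for p, w in zip(particles, weights))
    s = sum(w * math.sin(p[2]) for p, w in zip(particles, weights))
    c = sum(w * math.cos(p[2]) for p, w in zip(particles, weights))
    return (x, y, math.atan2(s, c))
```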

For your resampling step, I would recommend removing the random samples around the predicted state, as they corrupt your proposal distribution. It is legitimate to generate random samples in order to recover from failures, but those should be spread over the whole state space, and explicitly not around your current prediction.
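One common way to do the resampling is low-variance (systematic) resampling, shown below together with a helper that draws recovery particles over the whole map. This is a sketch under assumed `(x, y, theta)` tuples and map bounds, not the questioner's actual code:

```python
import math
import random

def systematic_resample(particles, weights):
    """Low-variance resampling: one random offset, then N evenly spaced
    pointers into the cumulative weight distribution."""
    n = len(particles)
    positions = [(random.random() + i) / n for i in range(n)]
    cumulative, acc = [], 0.0
    for w in weights:
        acc += w
        cumulative.append(acc)
    resampled, i = [], 0
    for pos in positions:
        while i < n - 1 and cumulative[i] < pos:
            i += 1
        resampled.append(particles[i])
    return resampled

def recovery_samples(n, x_range, y_range):
    """Random particles spread over the whole state space (not around the
    current estimate), used only to recover from localization failure."""
    return [(random.uniform(*x_range),
             random.uniform(*y_range),
             random.uniform(-math.pi, math.pi)) for _ in range(n)]
```

Systematic resampling has lower variance than drawing N independent samples from the weight distribution, which helps slow down particle depletion.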

  • Hello! Can you please take a look at my question and answer? http://robotics.stackexchange.com/questions/11685/markov-localization-using-control-as-an-input – Aidos Feb 23 '17 at 13:49