Void’s Vault

Knowledge source for efficiency.

Filtering With Monte Carlo Localization Algorithm

Here’s how I implemented the MCL (Monte Carlo Localization) algorithm, with the help of the book Probabilistic Robotics by Thrun, Burgard and Fox.

I am using the velocity motion model, with range-bearing measurements of known landmarks. The state, control, and measurements are:

x = [x y theta]
u = [v w]
z(i) = [range bearing signature]
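For reference, each particle is one column [x; y; theta], so the particle set is a 3×M matrix. A Python sketch of initializing M particles uniformly over a map (the map dimensions here are made up for illustration), mirroring the layout used by the MATLAB code below:

```python
import numpy as np

# Hypothetical map dimensions in metres; any values work.
mapdim = (10.0, 8.0)
M = 500  # number of particles

rng = np.random.default_rng(0)
# Row 0: x, row 1: y, row 2: theta -- one particle per column,
# matching the 3xM layout of the MATLAB code.
X0 = np.vstack([
    rng.uniform(0, mapdim[0], M),    # x in [0, mapdim(1))
    rng.uniform(0, mapdim[1], M),    # y in [0, mapdim(2))
    rng.uniform(-np.pi, np.pi, M),   # theta in [-pi, pi)
])
print(X0.shape)  # (3, 500)
```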
function [ X1, wslow, wfast ] = Augmented_MCL( X0, u1, z1, c1, map, sigmaZ, wslow, wfast, mapdim)
M = numel(X0(1,:));   %Number of particles
aslow = 0.05;         %Slow average decay rate (aslow << afast)
afast = 0.75;         %Fast average decay rate
X1 = zeros(3,M);
x = zeros(3,M);
w = zeros(M,1);
wavg = 0;

%Prediction and weighting
for m=1:1:M
    x(:,m) = sample_motion_model_velocity(u1, X0(:,m));
    w(m) = landmark_model_known_correspondence(x(:,m),z1,c1,map, sigmaZ);
    wavg = wavg + (1/M)*w(m);
end

wslow = wslow + aslow*(wavg - wslow);
wfast = wfast + afast*(wavg - wfast);
totalWeight = sum(w(:));
%Inject random particles when the fast weight average drops below the slow one
probRandomInjection = max([0 (1-(wfast/wslow))]);

%Resampling
for m=1:1:M
    if rand < probRandomInjection
        X1(1,m) = rand*mapdim(1);
        X1(2,m) = rand*mapdim(2);
        X1(3,m) = rand*2*pi - pi;
    else
        n = rand*totalWeight; %Uniformly drawn pseudo-random number
        i = 1;
        n = n - w(i);
        while n > 0
            i = i + 1;
            n = n - w(i);
        end
        X1(:,m) = x(:,i);
    end
end
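The inner while loop above runs a linear search through the weights for every particle, which is O(M²) overall. A common drop-in alternative is low-variance (systematic) resampling, which resamples all M particles from a single random draw in O(M). Here is a Python/NumPy sketch (the function name and signature are my own; particles are 3×M columns as in the MATLAB code):

```python
import numpy as np

def low_variance_resample(particles, w, rng=None):
    """Systematic resampling: one uniform draw instead of M.

    `particles` is a 3xM array (one pose per column), `w` a length-M
    weight vector. Evenly spaced pointers r, r+1/M, ..., r+(M-1)/M are
    looked up in the weight CDF, so the whole pass is O(M).
    """
    rng = np.random.default_rng() if rng is None else rng
    M = w.size
    w = w / w.sum()                 # normalize the weights
    r = rng.uniform(0, 1.0 / M)     # single random offset
    c = np.cumsum(w)                # cumulative distribution of weights
    idx = np.searchsorted(c, r + np.arange(M) / M)
    idx = np.minimum(idx, M - 1)    # guard against floating-point edge
    return particles[:, idx]
```

Besides speed, systematic resampling also reduces resampling variance: particles with weight near k/M are duplicated close to k times deterministically rather than at the mercy of M independent draws.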

The function named sample_motion_model_velocity could be virtually anything, but I used the one from chapter 5 (Table 5.4). Watch out, though: be sure that the measurement error sigmaZ does not underestimate the spread that the values you choose for a(1) to a(6) put on the sampled poses! Otherwise, the weighting step could decimate almost all of your samples (because the sampled v and w could stray too far from the commanded ones).
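One way to see this degeneracy numerically is the effective sample size, Neff = 1/Σw² (after normalizing the weights): it is close to M when the weights are balanced and collapses toward 1 when a few particles dominate. A Python sketch with made-up noise levels for illustration:

```python
import numpy as np

def neff(w):
    """Effective sample size: ~M for uniform weights, ~1 after collapse."""
    w = w / w.sum()
    return 1.0 / np.sum(w ** 2)

rng = np.random.default_rng(1)
M = 1000
# Pretend the motion model leaves each particle with a range error
# ~N(0, 0.5); then weight the particles with measurement std sigmaZ.
err = rng.normal(0.0, 0.5, M)

for sigmaZ in (0.5, 0.01):   # matched vs. badly underestimated
    w = np.exp(-0.5 * (err / sigmaZ) ** 2)
    print(sigmaZ, neff(w))
```

With the matched sigmaZ most particles keep meaningful weight; with the underestimated one, nearly the whole population is wiped out by a handful of lucky samples.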

function [ x1 ] = sample_motion_model_velocity(u1, x0)
dt = 1; %Timestep between two updates
a(1) = 0.01;
a(2) = 0.01;
a(3) = 0.01;
a(4) = 0.01;
a(5) = 0.001;
a(6) = 0.001;
v = u1(1);
w = u1(2);
%Perturb the commanded velocities with zero-mean noise
v_ = v + randn*sqrt(a(1)*(v.^2) + a(2)*(w.^2));
w_ = w + randn*sqrt(a(3)*(v.^2) + a(4)*(w.^2));
g_ = randn*sqrt(a(5)*(v.^2) + a(6)*(w.^2)); %Extra final-rotation noise
%Exact motion along a circular arc (beware: v_/w_ blows up when w_ is near zero)
x1 = zeros(3,1);
x1(1) = x0(1) - (v_/w_)*sin(x0(3)) + (v_/w_)*sin(x0(3) + w_*dt);
x1(2) = x0(2) + (v_/w_)*cos(x0(3)) - (v_/w_)*cos(x0(3) + w_*dt);
x1(3) = x0(3) + w_*dt + g_*dt;
x1(3) = mod(x1(3) + pi,2*pi)-pi; %Wrap to [-pi,pi]
end
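One caveat with the update above: it divides by w_, so an angular velocity sampled near zero makes v_/w_ blow up. A Python sketch of the same arc update with a straight-line fallback for small angular velocities (the eps threshold is my own choice):

```python
import numpy as np

def motion_update(x0, v, w, dt=1.0, eps=1e-6):
    """Exact-arc pose update with a straight-line fallback.

    x0 is [x, y, theta]. Below |w| < eps we use the w -> 0 limit of the
    arc equations, which is plain straight-line motion, instead of
    dividing by a near-zero angular velocity.
    """
    x, y, th = x0
    if abs(w) < eps:
        # Limit of the arc equations as w -> 0
        x1 = x + v * dt * np.cos(th)
        y1 = y + v * dt * np.sin(th)
    else:
        # Exact motion along a circular arc of radius v/w
        x1 = x - (v / w) * np.sin(th) + (v / w) * np.sin(th + w * dt)
        y1 = y + (v / w) * np.cos(th) - (v / w) * np.cos(th + w * dt)
    th1 = (th + w * dt + np.pi) % (2 * np.pi) - np.pi  # wrap to [-pi, pi)
    return np.array([x1, y1, th1])
```

For angular velocities just above eps the arc branch agrees with the straight-line limit to high precision, so the switch introduces no visible discontinuity.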

Likewise, the function called landmark_model_known_correspondence can be anything; I used the algorithm from Table 6.4:

function [ p ] = landmark_model_known_correspondence(x, z, c, m, sigmaZ)
    q = zeros(1,numel(c));

    for i = 1:1:numel(c)
        j = c(i); %Known correspondence: measurement i comes from landmark j
        r_ = sqrt((m(1,j) - x(1)).^2 + (m(2,j) - x(2)).^2); %Expected range
        b_ = wrapToPi(atan2(m(2,j) - x(2),m(1,j) - x(1)) - x(3)); %Expected bearing
        q(i) = prob(z(1,i)-r_,sigmaZ(1)) * prob(wrapToPi(z(2,i)-b_),sigmaZ(2)); %Near-zero if sigmaZ is underestimated (see parameters a(1) to a(6) of the sampling model)
    end

    p = prod(q);
end

function [p] = prob(x,sigma)
    p = normpdf(x, 0, sigma)/normpdf(0,0,sigma); %Normalize by the peak so p stays between 0 and 1, because small sigmas could put the raw density way over 1
end
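Since normpdf(x,0,sigma)/normpdf(0,0,sigma) algebraically reduces to exp(-x²/(2σ²)), this peak-normalized prob needs no toolbox call at all. A Python equivalent for illustration:

```python
import math

def prob(x, sigma):
    """Gaussian density divided by its peak value, so prob(0, s) == 1.

    Equivalent to normpdf(x, 0, sigma) / normpdf(0, 0, sigma): the
    1/(sigma*sqrt(2*pi)) factors cancel, leaving only the exponential.
    """
    return math.exp(-0.5 * (x / sigma) ** 2)
```

Note that because the resampling step only uses relative weights (n is drawn against totalWeight), this rescaling changes nothing about which particles survive; it just keeps each factor in (0, 1].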

Of course, many parameters can be tuned to optimize the algorithm. Enjoy!