I am trying to implement quantile regression as a linear program with a simple setup in MATLAB. This page contains a description of quantile regression as a linear program and displays the appropriate matrices and vectors. I've tried to implement it in MATLAB, but I do not get the correct last element of the bhat vector (the slope estimate): it should be around 1, but I get a very low value (<1e-10), whereas another algorithm I have gives 1.0675. Where did I go wrong? I'm guessing A, b or f is wrong.
I have tried playing with optimset, but I don't think that is the problem. I think I've made a conversion mistake going from math to code; I just can't see where.
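For reference, the linear program I am trying to set up (as I understand it from the page) is the standard quantile regression LP over the stacked decision vector [u; v; beta]:

$$\min_{u,\,v,\,\beta}\;\tau\,\mathbf{1}'u+(1-\tau)\,\mathbf{1}'v\quad\text{s.t.}\quad y=X\beta+u-v,\quad u\ge 0,\quad v\ge 0.$$

Since I only pass inequality arguments to linprog, I split the equality into two opposing inequalities and write u >= 0, v >= 0 as -u <= 0 and -v <= 0; the four block rows of A below are meant to express exactly that.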
% set seed
rng(1);
% set parameters
n=30;
tau=0.5;
% create regressor and regressand
x=rand(n,1);
y=x+rand(n,1)/10;
% number of regressors (1)
m=size(x,2);
% vectors and matrices for linprog; decision vector is [u;v;beta]
f=[tau*ones(n,1);(1-tau)*ones(n,1);zeros(m,1)]; % objective: tau*sum(u)+(1-tau)*sum(v)
A=[eye(n),-eye(n),x;            % u - v + x*beta
-eye(n),eye(n),-x;              % -(u - v + x*beta)
-eye(n),zeros(n),zeros(n,m);    % -u, to enforce u >= 0
zeros(n),-eye(n),zeros(n,m)];   % -v, to enforce v >= 0
b=[y;
y;
zeros(n,1);
zeros(n,1)];
% get solution bhat=[u;v;beta] and exit flag (1 = success)
[bhat,~,exflag]=linprog(f',A,b);
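For comparison, here is a minimal sketch of what I believe is an equivalent encoding that uses linprog's equality-constraint and bound arguments instead of the inequality conversion (it reuses f, x, y, n and m from above; Aeq, beq, lb and betahat are just my names):

% same LP via an equality constraint: y = x*beta + u - v with u,v >= 0
Aeq=[eye(n),-eye(n),x];          % u - v + x*beta = y
beq=y;
lb=[zeros(2*n,1);-inf(m,1)];     % u >= 0, v >= 0, beta unconstrained
[sol,~,exflag2]=linprog(f,[],[],Aeq,beq,lb,[]);
betahat=sol(end-m+1:end);        % last m entries are the slope estimate

If my inequality conversion were right, both calls should return essentially the same beta, so the difference must come from A, b or f.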