I have developed a fairly simple multivariate regression econometrics model. I am now attempting to run robust regressions (EViews calls them Robust Least Squares). I can run a Robust Regression M-estimation without any trouble. But every time I run a Robust Regression MM-estimation, I hit the same error: "Maximum number of singular subsamples reached." I have played around with the MM-estimation specifications, increasing and decreasing the number of iterations, the convergence level, and so on. Invariably, I run into the same error.
On an EViews forum, another user ran into the exact same problem with both MM-estimation and S-estimation. The forum moderator indicated that if a model includes dummy variables with relatively few nonzero observations, these estimations may fail to converge and produce the error mentioned above. My model does have dummy variables, and some of them cover only a handful of observations (8 consecutive ones in a time series of 217 observations). However, I am unclear whether this is a limitation of EViews or a genuine limitation of the algorithm. I may try rerunning the MM-estimation in R to see whether it is feasible there.
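As a quick sanity check on that explanation, you can count how many nonzero observations each dummy actually contains before estimating anything. This is only a sketch: the data frame df and the d-prefixed dummy names are placeholders for my actual series.

    # Placeholder data frame 'df'; dummies assumed to be named d1, d2, ...
    dummy_cols <- grep("^d", names(df), value = TRUE)  # assumed naming convention
    colSums(df[dummy_cols])  # number of 1's per dummy (one of mine has only 8 of 217)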
Following up on the above, I did just that and ran a robust regression in R with the rlm() function from the MASS package. Just as in EViews, I had no problem running an M-estimation. Likewise, I first ran into trouble when attempting an MM-estimation: just as in EViews, I got a message stating that the regression did not reach convergence after 20 iterations. So I reran my MM-estimation after first eliminating all my dummy variables. As predicted, it worked. Next, I added the dummy variables back one at a time, rerunning the MM-estimation after each addition to see when the model would break down. To my surprise, it never did, and I was eventually able to run the MM-estimation with all the dummy variables included. I don't know why I could not run it at first with all the dummy variables in at once (maybe I made a coding error).
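For reference, here is roughly what I ran. Treat it as a sketch: the variable names (y, x1, x2, d1, d2) and the data frame df stand in for my actual series, and maxit is simply the rlm() argument that controls the default 20-iteration limit, in case anyone wants to raise it.

    library(MASS)

    # M-estimation (default Huber psi) -- ran with no problems
    fit_m  <- rlm(y ~ x1 + x2 + d1 + d2, data = df)

    # MM-estimation -- this is the fit that at first did not converge
    # within the default 20 iterations when all the dummies were included
    fit_mm <- rlm(y ~ x1 + x2 + d1 + d2, data = df, method = "MM")

    # If convergence is the only issue, the iteration limit can be raised
    fit_mm <- rlm(y ~ x1 + x2 + d1 + d2, data = df, method = "MM", maxit = 100)

    summary(fit_mm)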
This leads me to conclude that R is somewhat more flexible than EViews on this count. On closer inspection, I noticed that the EViews M-estimation I ran used the bisquare weighting function (rather than the regular Huber one). This makes a big difference. When I ran a bisquare-type M-estimation in R, I got almost exactly the same results as in EViews. The small differences between the two are to be expected given that the solution is found iteratively.
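For anyone wanting to reproduce the comparison, rlm() lets you set the weighting function explicitly through its psi argument: psi.huber is the default and psi.bisquare is the type EViews used in my case. Variable names are again placeholders for my actual series.

    library(MASS)

    # M-estimation with the default Huber weights
    fit_huber    <- rlm(y ~ x1 + x2 + d1 + d2, data = df, psi = psi.huber)

    # M-estimation with Tukey bisquare weights -- the specification that
    # nearly reproduced my EViews output
    fit_bisquare <- rlm(y ~ x1 + x2 + d1 + d2, data = df, psi = psi.bisquare)

    # Compare the coefficient estimates side by side
    cbind(huber = coef(fit_huber), bisquare = coef(fit_bisquare))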