
I'm wondering how one goes about treating outliers at scale. In my experience, I usually need to understand why the outliers are there in the first place: what causes them, whether there are patterns, or whether they just happen randomly. I know that, theoretically, we usually define outliers as data points more than 3 standard deviations from the mean. But when the data is so big that you can't treat each feature one by one, and you don't know whether the 3-standard-deviation rule is still applicable because of sparsity, how do we treat outliers most effectively?
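
Just to make concrete what I mean by the 3-standard-deviation rule, here is a toy sketch (made-up data, column-wise z-scores), not how I'd actually do it at scale:

```python
import numpy as np
import pandas as pd

# Synthetic data purely for illustration
rng = np.random.default_rng(0)
df = pd.DataFrame({"feature_a": rng.normal(0, 1, 10_000),
                   "feature_b": rng.exponential(2.0, 10_000)})

# Column-wise z-scores; |z| > 3 is the classic "3 sigma" flag
z = (df - df.mean()) / df.std()
outlier_mask = (z.abs() > 3).any(axis=1)
print(outlier_mask.sum(), "rows flagged as outliers")
```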

My intuition about high-dimensional data is that because the data is sparse, the definition of an "outlier" is harder to pin down. Do you think we can get away with using ML algorithms that are more robust to outliers (tree-based models, robust SVM, etc.) instead of trying to treat outliers in the preprocessing step? And if we really do want to treat them, what is the best way to do it?

Lita
  • Way too broad, plus not a *programming* question, hence arguably off-topic here; perhaps suited for [Cross Validated](https://stats.stackexchange.com/help/on-topic). – desertnaut Jul 25 '19 at 08:08

1 Answer


I would first propose a framework for understanding the data. Imagine you are handed a dataset with no explanation of what it is. Analytics can actually be used to build that understanding. Usually rows are observations and columns are parameters of some sort describing the observations. You first want a framework for what you are trying to achieve. No matter what is going on, all data centers around the interests of people... that is why we decided to record it in some format. Given that, we are at most interested in:

1.) The object 2.) Attributes of the object 3.) Behaviors of the object 4.) Preferences of the object 5.) Behaviors and preferences of the object over time 6.) Relationships of the object to other objects 7.) Effects of attributes, behaviors, preferences, and other objects on the object

So you want to identify these items. You open a dataset and maybe you instantly recognize a time stamp. You then see some categorical variables and start doing relationship analysis: what is one-to-one, one-to-many, many-to-many. You then identify continuous variables. These all come together to give a foundation for deciding what counts as an outlier.
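
A rough sketch of that first pass might look like the following (the table here is made up purely for illustration):

```python
import pandas as pd

# Tiny made-up table standing in for a real dataset
df = pd.DataFrame({
    "logged_at": pd.to_datetime(["2019-07-01", "2019-07-02", "2019-07-02"]),
    "machine_id": ["A", "A", "B"],
    "site": ["north", "north", "south"],
    "temp_c": [71.2, 69.8, 112.4],
})

# Column types: the first hint of what is a time stamp, category, or measurement
print(df.dtypes)

# Cardinality of the categorical columns
for col in df.select_dtypes(include="object").columns:
    print(col, "->", df[col].nunique(), "distinct values")

# Relationship check: does each machine_id map to exactly one site (one-to-one)?
sites_per_machine = df.groupby("machine_id")["site"].nunique()
print("one-to-one" if (sites_per_machine == 1).all() else "one-to-many")
```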

If we are evaluating objects over time... is the rare event indicative of something that happens rarely but that we want to know about? Forest fires are outlier events, but they are events of great concern. If I am analyzing machine data and seeing rare events, and those rare events are tied to machine failure, then they matter. Basically... does the rare event or parameter show evidence that it correlates with something you care about?
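
One cheap way to check that, sketched here with hypothetical column names and synthetic data:

```python
import numpy as np
import pandas as pd

# Made-up machine data: one sensor reading and a failure flag
rng = np.random.default_rng(1)
df = pd.DataFrame({"vibration": rng.normal(0, 1, 5_000),
                   "failed": rng.integers(0, 2, 5_000)})

# Flag the rare readings (here, beyond 3 standard deviations)
df["rare_event"] = (df["vibration"] - df["vibration"].mean()).abs() > 3 * df["vibration"].std()

# Does the rare event line up with the outcome we care about?
print(pd.crosstab(df["rare_event"], df["failed"], normalize="index"))
print("correlation:", df["rare_event"].astype(int).corr(df["failed"]))
```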

Now if you have so many dimensions that the above approach is not feasible in your judgement, then you are looking for dimension-reduction alternatives. I am currently employing Singular Value Decomposition as a technique. I am already seeing situations where I achieve the same level of predictive ability with 25% of the data. Which segues into my final thought: find a benchmark for deciding whether the outliers matter or not.
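
A rough sketch of that kind of reduction using scikit-learn's TruncatedSVD (the matrix and component count here are illustrative, not my actual data):

```python
import numpy as np
from sklearn.decomposition import TruncatedSVD

# Stand-in for a wide feature matrix
rng = np.random.default_rng(2)
X = rng.normal(size=(1_000, 200))

# Keep a fraction of the original dimensions and see how much variance survives
svd = TruncatedSVD(n_components=50, random_state=0)
X_reduced = svd.fit_transform(X)

print(X_reduced.shape)                       # (1000, 50)
print(svd.explained_variance_ratio_.sum())   # share of variance retained
```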

Begin by leaving them in and running your analysis, then run the work again with them removed. What were the effects? I believe that when you are in doubt, simply do both and see how different the results are. If there is little difference, then maybe you are good to go. If there is a significant difference of concern, then you want to take an evidence-based approach to the outlier occurring. Simply because it is rare in your data does not mean it is rare in reality. Think of certain types of crime that are under-reported (via arrest records). A lack of data showing politicians being arrested for insider trading does not mean that politicians are not doing insider trading en masse.
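
As a sketch of that with-and-without comparison (synthetic data, a tree-based model as mentioned in the question, and a simple 3-sigma filter on the target; none of this is prescriptive):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

# Synthetic regression data with a handful of injected extreme rows
rng = np.random.default_rng(3)
X = rng.normal(size=(2_000, 10))
y = X @ rng.normal(size=10) + rng.normal(scale=0.5, size=2_000)
y[:20] += 50  # the "outliers"

# Keep rows whose target is within 3 standard deviations of the mean
keep = np.abs(y - y.mean()) <= 3 * y.std()

model = RandomForestRegressor(n_estimators=100, random_state=0)
score_with = cross_val_score(model, X, y, cv=5).mean()
score_without = cross_val_score(model, X[keep], y[keep], cv=5).mean()

print("with outliers:   ", round(score_with, 3))
print("without outliers:", round(score_without, 3))
```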

rayphaistos1