I have been working on a machine learning project and I am trying to scale the features before feeding them to my model. I know Min-Max Normalization and Quantile Transformation scale features into the range [0, 1]. I was wondering if there is any other scaling method that maps features into the range [0, 1].
Please see the intro and NOTE in https://stackoverflow.com/tags/machine-learning/info – desertnaut Aug 16 '23 at 18:39
1 Answer
- You choose how to scale the features based on their measure of central tendency and variability.
- E.g. if there are outliers, you can use `RobustScaler()`, which centers on the median and scales by the interquartile range so that outliers have less influence (it does not remove them), and then apply either `StandardScaler` or `MinMaxScaler` for preprocessing the data.
- For scaling features into a particular range, you have normalization techniques; beyond that, you have to write your own method to scale the features into a particular range without losing information.
- Look into this notebook for feature scaling techniques: https://www.kaggle.com/code/aimack/complete-guide-to-feature-scaling
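A minimal sketch of the pipeline described above, assuming scikit-learn is available: `RobustScaler` first to dampen the effect of an outlier, then `MinMaxScaler` to land in [0, 1], plus a hand-rolled rescaling into an arbitrary range `[a, b]` (the data values here are made up for illustration).

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler, RobustScaler

# Toy data with an outlier (100.0) in the second column
X = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [3.0, 6.0],
              [4.0, 100.0]])

# RobustScaler centers on the median and scales by the IQR,
# reducing the outlier's influence (it does not remove the row).
robust = RobustScaler().fit_transform(X)

# MinMaxScaler then maps each feature into [0, 1].
scaled = MinMaxScaler().fit_transform(robust)
print(scaled.min(), scaled.max())  # 0.0 1.0

# Writing your own scaling into an arbitrary range [a, b]:
a, b = 0.0, 1.0
col_min = X.min(axis=0)
col_max = X.max(axis=0)
manual = a + (X - col_min) * (b - a) / (col_max - col_min)
```

With `a, b = 0.0, 1.0` the manual formula reproduces `MinMaxScaler`'s output exactly; changing `a` and `b` rescales into any other interval.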

Sauron