
I want to use KNN for imputing categorical features in a sklearn pipeline (multiple categorical features are missing).

I have done quite a bit of research on existing KNN solutions (fancyimpute, sklearn's KNeighborsRegressor). None of them seem to:

  • work in a sklearn pipeline
  • impute categorical features

Some of my questions are (any advice is highly appreciated):

  1. Is there any existing approach that allows using KNN (or any other regressor) to impute missing values (categorical in this case) and works with a sklearn pipeline?
  2. fancyimpute's KNN implementation does not seem to use Hamming distance for imputing missing values (which would be ideal for categorical features).
  3. Is there any fast KNN implementation available, considering that KNN is time consuming when imputing missing values (i.e., it runs predictions for missing values against the whole dataset)?
tudou

3 Answers

  1. The default KNeighborsRegressor can be used to regress missing values, but only with numeric features. Therefore, for categorical values, you most likely need to encode them first and then impute the missing values (see the sketch after this list).

  2. KNNImpute most likely uses the mean/mode of the neighbors, etc.

  3. IterativeImputer from sklearn can run the imputation against the whole dataset.
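
A minimal sketch of point 1 (encode first, then impute inside a Pipeline), assuming scikit-learn >= 1.1 so that OrdinalEncoder keeps missing categories as NaN via encoded_missing_value; the column names and data are made up for illustration:

```python
import numpy as np
import pandas as pd
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OrdinalEncoder
from sklearn.impute import KNNImputer

X = pd.DataFrame({
    "color": ["red", "blue", np.nan, "red", "blue"],
    "size":  ["S", np.nan, "M", "L", "M"],
})

pipe = Pipeline([
    # integer-encode the categories, leaving missing entries as NaN
    ("encode", OrdinalEncoder(encoded_missing_value=np.nan)),
    # fill each NaN from the 2 nearest rows (mean of their codes)
    ("impute", KNNImputer(n_neighbors=2)),
])

print(pipe.fit_transform(X))
```

Note that KNNImputer averages the neighbors' integer codes, so an imputed value can land between two codes; you may need to round (or use a most-frequent strategy) before decoding back with the encoder's inverse_transform.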

Enxi
  1. KNNImputer is new as of sklearn version 0.22.0

  2. KNNImputer uses a Euclidean distance metric by default, but you can pass in your own custom distance metric (a sketch follows this list).

  3. I can't speak to the speed of KNNImputer, but I'd imagine there have been some optimizations done on it if it's made it into sklearn.
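
For point 2, the docs say the metric argument can also be a callable that takes two rows (plus a missing_values keyword) and returns a scalar distance. A rough sketch of a Hamming-style metric that skips missing entries (nan_hamming is a made-up helper, not part of sklearn, and the categories must already be numerically encoded):

```python
import numpy as np
from sklearn.impute import KNNImputer

def nan_hamming(x, y, *, missing_values=np.nan, **kwds):
    """Fraction of mismatches over the positions observed in both rows."""
    mask = ~(np.isnan(x) | np.isnan(y))
    if not mask.any():
        return 1.0  # arbitrary fallback: the rows share no observed features
    return float(np.mean(x[mask] != y[mask]))

imputer = KNNImputer(n_neighbors=3, metric=nan_hamming)
```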

skeller88

KNeighborsRegressor and KNNImputer do not behave the same, as explained here: https://scikit-learn.org/stable/auto_examples/impute/plot_iterative_imputer_variants_comparison.html#sphx-glr-auto-examples-impute-plot-iterative-imputer-variants-comparison-py

With KNeighborsRegressor, you have to use sklearn's IterativeImputer class. Missing values are initialized with the column mean. For each missing cell, you then perform iterative imputations using the K nearest neighbours. The algorithm stops after convergence. This can be stochastic (i.e. it may produce a different imputation each run, depending on the random state).
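
A minimal sketch of that route (IterativeImputer is still experimental, hence the extra enable_iterative_imputer import; the data is made up):

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.neighbors import KNeighborsRegressor

X = np.array([
    [1.0, 2.0, np.nan],
    [3.0, np.nan, 6.0],
    [5.0, 6.0, 9.0],
    [7.0, 8.0, 12.0],
])

imputer = IterativeImputer(
    estimator=KNeighborsRegressor(n_neighbors=2),
    initial_strategy="mean",  # missing cells start at the column mean
    max_iter=10,
    random_state=0,
)
print(imputer.fit_transform(X))
```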

With KNNImputer, a distance measure that can handle missing values is computed (the default is nan_euclidean). Empty cells are filled with the mean of the K nearest neighbors that have a value for the corresponding variable, as illustrated below. Doc: https://scikit-learn.org/stable/modules/impute.html#nearest-neighbors-imputation
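
A tiny illustration of that default distance, sklearn.metrics.pairwise.nan_euclidean_distances, which ignores coordinates where either row is missing and rescales by the proportion of coordinates actually compared:

```python
import numpy as np
from sklearn.metrics.pairwise import nan_euclidean_distances

X = np.array([
    [1.0, np.nan, 3.0],
    [2.0, 4.0, 5.0],
])
print(nan_euclidean_distances(X, X))
```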

Florian Lalande