What is the difference between the two? It seems that both create new columns, whose number is equal to the number of unique categories in the feature. Then they assign 0s and 1s to data points depending on which category they belong to.
- [When to use One Hot Encoding vs LabelEncoder vs DictVectorizor?](https://datascience.stackexchange.com/questions/9443/when-to-use-one-hot-encoding-vs-labelencoder-vs-dictvectorizor) – pault May 22 '18 at 17:30
- Does it have something to do with one-vs-all instead of one-vs-k encoding? When encoding labels, every class must be present. When encoding variables, the last one(?) should not be encoded, because it has a dependency on the others and most models want independent variables. With a large number of dimensions, though, this may not matter much. – Andrew Lavers May 22 '18 at 18:47
- @AndrewLavers Even when encoding variables, if you are expecting new categorical values for this variable to be present in the validation set / test set / production environment, you should encode all values. Otherwise, there would be no difference between the "last value" and a new out-of-vocabulary value. – Elisio Quintino Oct 03 '19 at 09:56
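A minimal sketch of the one-vs-all vs one-vs-(k-1) distinction raised in these comments, using OneHotEncoder's drop parameter (available since scikit-learn 0.21; the data is made up for illustration):

from sklearn.preprocessing import OneHotEncoder
X = [["red"], ["green"], ["blue"]]
# one-vs-all: one column per category (k columns)
OneHotEncoder().fit_transform(X).toarray()
# array([[0., 0., 1.],
#        [0., 1., 0.],
#        [1., 0., 0.]])
# one-vs-(k-1): the first category's column is dropped; it is implied
# whenever all remaining columns are 0
OneHotEncoder(drop='first').fit_transform(X).toarray()
# array([[0., 1.],
#        [1., 0.],
#        [0., 0.]])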
5 Answers
A simple example which encodes an array using LabelEncoder, OneHotEncoder, and LabelBinarizer is shown below.
I see that OneHotEncoder first needs the data in integer-encoded form to convert it into its respective encoding, which is not required in the case of LabelBinarizer.
from numpy import array
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import OneHotEncoder
from sklearn.preprocessing import LabelBinarizer

# define example
data = ['cold', 'cold', 'warm', 'cold', 'hot', 'hot', 'warm', 'cold',
        'warm', 'hot']
values = array(data)
print("Data:", values)

# integer encode
label_encoder = LabelEncoder()
integer_encoded = label_encoder.fit_transform(values)
print("Label Encoder:", integer_encoded)

# one-hot encode (reshape the integer codes to a 2-D column first)
onehot_encoder = OneHotEncoder(sparse=False)  # `sparse` was renamed `sparse_output` in scikit-learn 1.2
integer_encoded = integer_encoded.reshape(len(integer_encoded), 1)
onehot_encoded = onehot_encoder.fit_transform(integer_encoded)
print("OneHot Encoder:", onehot_encoded)

# binary encode
lb = LabelBinarizer()
print("Label Binarizer:", lb.fit_transform(values))
Another good link which explains the OneHotEncoder is: Explain onehotencoder using python
There may be other valid differences between the two which experts can probably explain.

- Minor error in your remarks: [According to the docs](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.OneHotEncoder.html#sklearn.preprocessing.OneHotEncoder), the OneHotEncoder does **not** need integer-encoded data to produce its sparse matrix. On further study, it seems the difference is that OneHotEncoder produces a SciPy sparse matrix by default, while LabelBinarizer produces a dense NumPy array by default. – stephenjfox Dec 03 '18 at 03:01
- @stevethecoder is `dense NumPy array` basically the out-of-the-box array type? – F.S. Feb 07 '19 at 22:30
- In which situation shall we use `LabelBinarizer` then, if at all? – Supratim Haldar Apr 20 '19 at 04:56
- I think `LabelBinarizer` is supposed to be used to encode one-dimensional label vectors, rather than multi-column (two-dimensional) data, for which you would use `OneHotEncoder`. – Soerendip May 23 '20 at 21:12
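To illustrate stephenjfox's correction above: since scikit-learn 0.20, OneHotEncoder accepts string data directly, so the intermediate integer-encoding step in the answer is no longer needed:

import numpy as np
from sklearn.preprocessing import OneHotEncoder

values = np.array(['cold', 'hot', 'warm']).reshape(-1, 1)  # 2-D input: one feature column
OneHotEncoder().fit_transform(values).toarray()
# array([[1., 0., 0.],
#        [0., 1., 0.],
#        [0., 0., 1.]])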
A difference is that you can use OneHotEncoder for multi-column data, while LabelBinarizer and LabelEncoder only accept one-dimensional input.
from sklearn.preprocessing import LabelBinarizer, LabelEncoder, OneHotEncoder
X = [["US", "M"], ["UK", "M"], ["FR", "F"]]
OneHotEncoder().fit_transform(X).toarray()
# array([[0., 0., 1., 0., 1.],
# [0., 1., 0., 0., 1.],
# [1., 0., 0., 1., 0.]])
LabelBinarizer().fit_transform(X)
# ValueError: Multioutput target data is not supported with label binarization
LabelEncoder().fit_transform(X)
# ValueError: bad input shape (3, 2)
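If you do want integer codes for multi-column feature data, the feature-side counterpart of LabelEncoder is OrdinalEncoder (a small sketch reusing the X above):

from sklearn.preprocessing import OrdinalEncoder
X = [["US", "M"], ["UK", "M"], ["FR", "F"]]
OrdinalEncoder().fit_transform(X)
# array([[2., 1.],
#        [1., 1.],
#        [0., 0.]])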

Scikit-learn suggests using OneHotEncoder for the X matrix, i.e. the features you feed into a model, and LabelBinarizer for the y labels.
They are quite similar, except that OneHotEncoder can return a sparse matrix, which saves a lot of memory but which you won't really need for y labels.
Even if you have a multi-label, multi-class problem, you can use MultiLabelBinarizer for your y labels rather than switching to OneHotEncoder for multi-hot encoding.
https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.OneHotEncoder.html
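For example, a minimal sketch (with made-up genre labels) of MultiLabelBinarizer multi-hot-encoding a list of label sets:

from sklearn.preprocessing import MultiLabelBinarizer

y = [["action", "comedy"], ["drama"], ["action", "drama"]]
mlb = MultiLabelBinarizer()
mlb.fit_transform(y)
# array([[1, 1, 0],
#        [0, 0, 1],
#        [1, 0, 1]])
mlb.classes_
# array(['action', 'comedy', 'drama'], dtype=object)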

The results of OneHotEncoder() and LabelBinarizer() are almost identical (there may be differences in the default output type).
However, to the best of my understanding, LabelBinarizer() should ideally be used for response variables, while OneHotEncoder() should be used for feature variables.
At present, though, I am not sure why we need different encoders for such similar tasks. Any pointer in this direction would be appreciated.
A quick summary:

- LabelEncoder – for the labels (response variable); codes them 0, 1, 2, … [implies an order]
- OrdinalEncoder – for features; codes them 0, 1, 2, … [implies an order]
- LabelBinarizer – for the response variable; codes 0 and 1 [creates multiple dummy columns]
- OneHotEncoder – for feature variables; codes 0 and 1 [creates multiple dummy columns]
A quick example can be found here.
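Since the link above may not resolve here, a minimal sketch of all four encoders on the same toy data:

from sklearn.preprocessing import (LabelEncoder, OrdinalEncoder,
                                   LabelBinarizer, OneHotEncoder)

y = ['high', 'low', 'mid']        # response variable (1-D)
X = [['high'], ['low'], ['mid']]  # one feature column (2-D)

LabelEncoder().fit_transform(y)        # array([0, 1, 2])
OrdinalEncoder().fit_transform(X)      # array([[0.], [1.], [2.]])
LabelBinarizer().fit_transform(y)      # 3x3 dummy matrix of ints
OneHotEncoder().fit_transform(X).toarray()  # 3x3 dummy matrix of floats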

One more note on this...
If the input data has just 2 categories, the output of the LabelBinarizer has just one column, as this is enough for a binary representation, while OneHotEncoder still produces one column per category:

> data = np.array([['b'], ['a']])
> OneHotEncoder().fit_transform(data).toarray()
array([[0., 1.],
       [1., 0.]])
> LabelBinarizer().fit_transform(data)
array([[1],
       [0]])

compare with the 3-category case, where both produce one column per category:

> data = np.array([['b'], ['a'], ['c']])
> OneHotEncoder().fit_transform(data).toarray()
array([[0., 1., 0.],
       [1., 0., 0.],
       [0., 0., 1.]])
> LabelBinarizer().fit_transform(data)
array([[0, 1, 0],
       [1, 0, 0],
       [0, 0, 1]])
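A related note: since scikit-learn 0.23, OneHotEncoder can reproduce this single-column behavior for binary features via drop='if_binary':

import numpy as np
from sklearn.preprocessing import OneHotEncoder

data = np.array([['b'], ['a']])
OneHotEncoder(drop='if_binary').fit_transform(data).toarray()
# array([[1.],
#        [0.]])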
