
I'm trying to create a simple classifier for the CIFAR-10 data, but when I try to execute this Python code:

import cPickle
import numpy as np
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import LinearSVC


def unpickle(file):
    with open(file, 'rb') as fo:
        dict = cPickle.load(fo)
    return dict


def main():
    s = "data_batch_"
    dicts = [None] * 5
    for i in xrange(1, 6):
        dicts[i-1] = unpickle(s + str(i))

    X, y = dicts[0]['data'], dicts[0]['labels']
    for i in xrange(1, 5):
        X = np.concatenate((X, dicts[i]['data']))
        y = np.concatenate((y, dicts[i]['labels']))
    classifier = OneVsRestClassifier(LinearSVC(random_state=0)).fit(X, y)

As long as X and y are not too big (around 10000 samples, a little more or less), it works fine. But when I tried 20000 samples from 2 batches (or 50000 samples from all 5 batches), I got a pop-up window saying "Python.exe has stopped working". Is something wrong with the code itself, or did the memory run out?

If the memory did run out, what should I do? Is it possible to call fit(X, y) 5 times, once for each batch?

hohihohi

2 Answers


Your classifier LinearSVC has no support for (mini-)batches.

You will need to select one of the estimators that support incremental learning through the partial_fit API (the ones listed in scikit-learn's out-of-core learning documentation).

From these, SGDClassifier can be parameterized to work as a linear SVM (hinge loss, which is its default).

So you can either try to use it directly on all your data, or abstract your input-data generation and call partial_fit manually, one batch at a time. In either case, use preprocessing/normalization and also check the hyper-parameters (the learning rate and the learning-rate schedule).
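A minimal sketch of that second route, assuming the same data_batch_1 .. data_batch_5 files as in the question and keeping its Python 2 style (cPickle/xrange); the two-pass scaling with StandardScaler is just one possible way to do the normalization mentioned above, not something prescribed by the original answer:

import cPickle
import numpy as np
from sklearn.linear_model import SGDClassifier
from sklearn.preprocessing import StandardScaler


def unpickle(file):
    # same loader as in the question: one pickled CIFAR-10 batch -> dict
    with open(file, 'rb') as fo:
        return cPickle.load(fo)


batches = ["data_batch_%d" % i for i in xrange(1, 6)]
all_classes = np.arange(10)  # CIFAR-10 labels are 0..9

# first pass over the files: learn the scaling incrementally
scaler = StandardScaler()
for name in batches:
    d = unpickle(name)
    scaler.partial_fit(d['data'].astype(np.float64))

# second pass: train the linear SVM one batch at a time
clf = SGDClassifier(loss='hinge', random_state=0)  # hinge loss = linear SVM
for name in batches:
    d = unpickle(name)
    X = scaler.transform(d['data'].astype(np.float64))
    y = np.asarray(d['labels'])
    clf.partial_fit(X, y, classes=all_classes)

Only one batch file is ever held in memory at a time, which is the whole point of using partial_fit here.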

sascha

For some applications, the number of examples, the number of features (or both), and/or the speed at which they need to be processed can be challenging for traditional approaches. In these cases scikit-learn has a number of options you can consider to make your system scale.

Out-of-core (or "external memory") learning is a technique used to learn from data that cannot fit in a computer's main memory (RAM). Here is a sketch of a system designed to achieve this goal:

1. a way to stream instances
2. a way to extract features from instances
3. an incremental algorithm

Streaming instances

Basically, 1. may be a reader that yields instances from files on a hard drive, a database, a network stream, etc. However, details on how to achieve this are beyond the scope of this documentation.
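For example (a small illustrative sketch, not part of the quoted documentation; iter_minibatches and batch_size are just names invented here), a streamer over the batch files from the question could be a plain generator that only ever holds one file in memory:

import cPickle
import numpy as np


def iter_minibatches(paths, batch_size=1000):
    # yield (X, y) mini-batches, reading one pickled CIFAR-10 batch file at a time
    for path in paths:
        with open(path, 'rb') as fo:
            d = cPickle.load(fo)
        X = np.asarray(d['data'])
        y = np.asarray(d['labels'])
        for start in xrange(0, X.shape[0], batch_size):
            yield X[start:start + batch_size], y[start:start + batch_size]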

Extracting features

2. could be any relevant way to extract features among the different feature extraction methods supported by scikit-learn. However, when working with data that needs vectorization, and where the set of features or values is not known in advance, one should take explicit care. A good example is text classification, where unknown terms are likely to be found during training. It is possible to use a stateful vectorizer if making multiple passes over the data is reasonable from an application point of view. Otherwise, one can turn up the difficulty by using a stateless feature extractor. Currently the preferred way to do this is to use the so-called hashing trick, as implemented by sklearn.feature_extraction.FeatureHasher for datasets with categorical variables represented as lists of Python dicts, or sklearn.feature_extraction.text.HashingVectorizer for text documents.
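A small illustration of the stateless hashing approach (text data here, unlike the image data in the question; the two documents are made up):

from sklearn.feature_extraction.text import HashingVectorizer

# stateless: no fit() and no stored vocabulary, so it can be applied batch by batch
vectorizer = HashingVectorizer(n_features=2 ** 18)

docs = ["the quick brown fox", "jumped over the lazy dog"]  # made-up documents
X = vectorizer.transform(docs)
print(X.shape)  # sparse matrix of shape (2, 262144)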

Incremental learning

Finally, for 3. we have a number of options inside scikit-learn. Although not all algorithms can learn incrementally (i.e. without seeing all the instances at once), all estimators implementing the partial_fit API are candidates. Actually, the ability to learn incrementally from a mini-batch of instances (sometimes called "online learning") is key to out-of-core learning, as it guarantees that at any given time there will be only a small amount of instances in the main memory. Choosing a good size for the mini-batch that balances relevancy and memory footprint could involve some tuning [1].

For classification, a somewhat important thing to note is that although a stateless feature extraction routine may be able to cope with new/unseen attributes, the incremental learner itself may be unable to cope with new/unseen target classes. In this case you have to pass all the possible classes to the first partial_fit call using the classes= parameter.
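A minimal, self-contained illustration of the classes= requirement (random data, nothing to do with CIFAR-10 beyond the 10 labels):

import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.RandomState(0)
all_classes = np.arange(10)                # every label that will ever appear

clf = SGDClassifier(loss='hinge', random_state=0)

# first mini-batch: only labels 0..4 are present, but classes= declares them all
X0, y0 = rng.rand(100, 20), rng.randint(0, 5, size=100)
clf.partial_fit(X0, y0, classes=all_classes)

# later mini-batches: labels 5..9 appear for the first time, no classes= needed
X1, y1 = rng.rand(100, 20), rng.randint(0, 10, size=100)
clf.partial_fit(X1, y1)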

Another aspect to consider when choosing a proper algorithm is that not all of them put the same importance on each example over time. Namely, the Perceptron is still sensitive to badly labeled examples even after many examples, whereas the SGD* and PassiveAggressive* families are more robust to this kind of artifact. Conversely, the latter also tend to give less importance to remarkably different, yet properly labeled, examples when they arrive late in the stream, as their learning rate decreases over time.

Good Luck!