I am using scikit-learn's multilayer perceptron classifier and I want to evaluate some pruning techniques for neural networks, such as Optimal Brain Damage. This method iteratively removes weights from the network, i.e. manually sets them to 0 and retrains, and repeats the process until some criterion is satisfied.
So I would like to know if there is a simple way to set one or more weights to zero and keep them at zero throughout the training of the net. I want to point out that while it is easy to access the weights of the MLP once it has been trained (they are exposed as the coefs_ attribute of the estimator), I don't know how to preset them before training.
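For illustration, here is roughly what I mean (a minimal sketch; the dataset, the network size, and the magnitude-based threshold are just placeholders for the actual OBD saliency ranking):

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)

# Train an initial network
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300, random_state=0)
clf.fit(X, y)

# After fitting, the weights are available as a list of matrices (one per layer)
print([w.shape for w in clf.coefs_])

# I can zero out, say, the 20% smallest-magnitude weights of the first layer
# (magnitude pruning here is only a stand-in for the OBD saliency computation)...
w = clf.coefs_[0]
threshold = np.quantile(np.abs(w), 0.2)
w[np.abs(w) < threshold] = 0.0

# ...but as far as I can tell, retraining (e.g. with warm_start) updates those
# entries again, so the pruning mask is not preserved during training.
clf.set_params(warm_start=True, max_iter=50)
clf.fit(X, y)  # the zeroed weights generally become nonzero again
```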
PS: if you know of another, more automatic way of evaluating pruning methods in sklearn, that would also be helpful.