Questions tagged [regularized]

Regularization involves introducing additional information in order to solve an ill-posed problem or to prevent over-fitting by shrinking the parameters of the model so that they stay close to zero.

In mathematics and statistics, particularly in the fields of machine learning and inverse problems, regularization involves introducing additional information in order to solve an ill-posed problem or to prevent overfitting. This information usually takes the form of a penalty for complexity, such as restrictions on smoothness or bounds on the vector space norm.

From http://en.wikipedia.org/wiki/Regularization_%28mathematics%29


195 questions
0 votes, 1 answer

How to induce "uniform" sparsity/sparse coding in a machine learning model?

I have a machine learning model (namely, an autoencoder) that attempts to learn a sparse representation of an input signal via a simple l1 penalty term added to the objective function. This indeed works to promote a sparse vector representation in…
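For context, one common way to attach such a penalty is Keras's built-in activity regularizer; a minimal sketch (not the asker's code; layer sizes and the penalty weight are assumptions):

```python
# Sparse autoencoder sketch: an L1 penalty on the hidden activations
# (lambda * sum|h|) is added to the loss, pushing the code toward zero.
import tensorflow as tf
from tensorflow.keras import layers, regularizers

inputs = tf.keras.Input(shape=(784,))
encoded = layers.Dense(64, activation="relu",
                       activity_regularizer=regularizers.l1(1e-5))(inputs)
decoded = layers.Dense(784, activation="sigmoid")(encoded)
autoencoder = tf.keras.Model(inputs, decoded)
autoencoder.compile(optimizer="adam", loss="mse")
```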
0 votes, 0 answers

About adding regularization to the cost function of a neural network for regression in tensorflow

I am trying to build a neural network for linear regression. I want to add a regularization term to the cost function, but the cost does not change after each iteration. The code is as follows: X = tf.placeholder(tf.float32,[n_x, None], name = "x") Y =…
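A frequent cause of this symptom is computing the penalty but never adding it to the tensor the optimizer actually minimizes. A hedged TF1-style sketch (shapes and names are assumptions matching the question's placeholder style):

```python
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

n_x, n_y = 10, 1
X = tf.placeholder(tf.float32, [n_x, None], name="x")
Y = tf.placeholder(tf.float32, [n_y, None], name="y")
W = tf.Variable(tf.random_normal([n_y, n_x]))
b = tf.Variable(tf.zeros([n_y, 1]))

pred = tf.matmul(W, X) + b
lambd = 0.01                                  # assumed regularization strength
mse = tf.reduce_mean(tf.square(pred - Y))
cost = mse + lambd * tf.nn.l2_loss(W)         # penalty must be inside `cost`
train_op = tf.train.GradientDescentOptimizer(0.01).minimize(cost)
```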
0 votes, 1 answer

Add Custom Regularization to Tensorflow

I am using tensorflow to optimize a simple least squares objective function like the following: Here, Y is the target vector, X is the input matrix, and the vector w represents the weights to be learned. Example scenario: … If I wanted to augment…
Nikhil • 545 • 1 • 7 • 18
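In current TensorFlow, an arbitrary differentiable penalty can simply be added to the loss inside a GradientTape; a minimal sketch (the data and the penalty function are illustrative assumptions, not the asker's):

```python
import tensorflow as tf

X = tf.constant([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
Y = tf.constant([[1.0], [2.0], [3.0]])
w = tf.Variable(tf.zeros([2, 1]))

def custom_penalty(w):
    # any differentiable function of w works; here a smooth L1 surrogate
    return tf.reduce_sum(tf.sqrt(tf.square(w) + 1e-8))

opt = tf.keras.optimizers.SGD(learning_rate=0.01)
for _ in range(200):
    with tf.GradientTape() as tape:
        residual = tf.matmul(X, w) - Y
        loss = tf.reduce_mean(tf.square(residual)) + 0.1 * custom_penalty(w)
    grads = tape.gradient(loss, [w])
    opt.apply_gradients(zip(grads, [w]))
```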
0 votes, 2 answers

Dropout when using Keras with TensorFlow backend

I read about the Keras implementation of dropout, and it seems to be using the inverted-dropout version even though it says dropout. Here is what I have understood from reading the Keras and TensorFlow documentation: When I specify Dropout(0.4)…
Tanmay Bhatnagar • 2,330 • 4 • 30 • 50
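The inverted-dropout behavior is easy to verify directly: Dropout(0.4) drops 40% of the units during training and rescales the survivors by 1/(1-0.4), so inference needs no extra scaling. A small check (hedged, not the asker's code):

```python
import numpy as np
import tensorflow as tf

x = np.ones((1, 10), dtype="float32")
layer = tf.keras.layers.Dropout(0.4)
print(layer(x, training=True))   # surviving entries become 1/0.6 ≈ 1.667
print(layer(x, training=False))  # identity at inference: all ones
```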
0 votes, 0 answers

Can't seem to implement L2 regularization correctly in Python — low accuracy scores

I'm trying to add regularization to my MNIST digits NN classifier, which I've created using numpy and vanilla Python. I'm currently using sigmoid activations with a cross-entropy cost function. Without using the regularizer, I get 97% accuracy.…
Moondra • 4,399 • 9 • 46 • 104
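For reference, with a cross-entropy cost the L2 penalty usually enters the update as a weight-decay term scaled by the training-set size; a hedged numpy sketch (names are illustrative):

```python
import numpy as np

def sgd_step_l2(W, grad_W, lr, lambd, n):
    # gradient of (lambd / (2n)) * ||W||^2 is (lambd / n) * W;
    # the penalty applies to weights only, never to biases
    return W - lr * (grad_W + (lambd / n) * W)
```

Forgetting the 1/n scaling, or regularizing the biases as well, are two common reasons accuracy drops sharply once the penalty is switched on.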
0 votes, 0 answers

Spatial Regularization

Is there a way to include a spatial regularization penalty in the cost functions scikit-learn uses for clustering? More specifically, I am working with neuroscience brain data, where every voxel has a spatially inherited dependency based on its…
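scikit-learn has no spatial penalty term as such, but agglomerative clustering accepts a connectivity graph, which restricts merges to spatially adjacent samples; a hedged sketch with made-up data:

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.neighbors import kneighbors_graph

coords = np.random.rand(100, 3)     # hypothetical voxel coordinates
features = np.random.rand(100, 5)   # hypothetical per-voxel features

# connectivity is built from spatial coordinates, clustering from features
connectivity = kneighbors_graph(coords, n_neighbors=6, include_self=False)
labels = AgglomerativeClustering(
    n_clusters=8, connectivity=connectivity).fit_predict(features)
```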
0 votes, 1 answer

How to implement regularization / weight decay in R

I'm surprised at the number of R neural network packages that don't appear to have a parameter for regularization/lambda/weight decay. I'm assuming I'm missing something obvious. When I use a package like MLR and look at the integrated learners, I…
Kyle Ward • 889 • 1 • 8 • 18
0 votes, 1 answer

Adding Regularization Gives Slower and Worse Performance

When I added stronger regularization (e.g. raising the L2 regularization parameter from 1 to 10, or changing the dropout parameter from 0.75 to 0.5), I got slower and worse performance (e.g. from 97-98% test accuracy in 3000-4000 iterations down to only 94-95% test accuracy in…
0 votes, 0 answers

consistent forward / backward pass with tensorflow dropout

In reinforcement learning one usually applies a forward pass of the neural network at each step of the episode in order to compute the policy. Afterwards one can calculate parameter gradients using backpropagation. A simplified implementation of…
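One hedged approach: sample the dropout mask once per step, store it, and reuse it so the backward pass sees exactly the pattern the rollout used; a minimal sketch (shapes and keep probability are assumptions):

```python
import tensorflow as tf

def sample_mask(shape, keep_prob):
    # inverted-dropout mask: zero with prob (1 - keep_prob), else 1/keep_prob
    return tf.cast(tf.random.uniform(shape) < keep_prob, tf.float32) / keep_prob

h = tf.random.normal([1, 128])
mask = sample_mask(tf.shape(h), keep_prob=0.8)  # store alongside the step
h_dropped = h * mask                            # identical in both passes
```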
0 votes, 1 answer

Performing L1 regularization on a mini batch update

I am currently reading Neural Networks and Deep Learning and I am stuck on a problem. The exercise is to update the code the author gives so that it uses L1 regularization instead of L2 regularization. The original piece of code that uses L2 regularization…
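For orientation, the book's L2 rule shrinks each weight multiplicatively, while L1 subtracts a constant-magnitude sign(w) term; a hedged sketch of the mini-batch update in the book's notation:

```python
import numpy as np

def l1_update(w, nabla_w, eta, lmbda, n, m):
    # n = training-set size, m = mini-batch size
    # the L2 rule would be: w * (1 - eta * lmbda / n) - (eta / m) * nabla_w
    return w - eta * (lmbda / n) * np.sign(w) - (eta / m) * nabla_w
```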
0 votes, 1 answer

How to tune Sklearn's RandomForest? max_depth Vs min_samples_leaf

The parameters max_depth and min_samples_leaf are confusing me the most during multiple attempts at using GridSearchCV. To my understanding, both of these parameters are a way of controlling the depth of the trees,…
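Searching both parameters jointly makes their interaction visible: max_depth caps tree height globally, while min_samples_leaf stops splits locally once leaves get small. A minimal, hedged sketch on synthetic data:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=500, random_state=0)
grid = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"max_depth": [None, 5, 10],
                "min_samples_leaf": [1, 5, 20]},
    cv=3,
)
grid.fit(X, y)
print(grid.best_params_)  # often only one of the two ends up binding
```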
0 votes, 1 answer

Regularization on Sample vs Full Dataset for Machine Learning

I have recently watched a video explaining that for Deep Learning, if you add more data, you don't need as much regularization, which sort of makes sense. This being said, does this statement hold for "normal" Machine Learning algorithms like Random…
0 votes, 1 answer

Does caffe apply the regularization parameter to the biases?

I have a bunch of questions about the way regularization and biases work in caffe. First, biases exist in the network by default, is that right? Or do I need to ask caffe to add them? Second, when it obtains the loss value, it does not consider…
Afshin Oroojlooy • 1,326 • 3 • 21 • 43
0 votes, 1 answer

Custom link function in h2o.glm

I looked for a generalized linear model implementation with regularization. I found that glmnet does not allow custom link functions. However, h2o takes the link function type as a parameter. Is it possible to define and use a custom link function under…
Naveen Mathew • 362 • 2 • 14
0 votes, 1 answer

What is the R equivalent of the "C" parameter in sklearn's Logistic Regression?

In sklearn in Python there is a C parameter (the inverse of the regularization strength) for LogisticRegression. Now I'm wondering: what is the equivalent in the R language? When I do logistic regression in R, I do it like this: glm( ~ ,…
makansij • 9,303 • 37 • 105 • 183
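Base glm() has no exact equivalent since it is unregularized, but for glmnet the two strengths relate (approximately; conventions differ across versions) by lambda ≈ 1 / (C · n_samples), because sklearn multiplies the data term by C while glmnet averages it and multiplies the penalty by lambda. A hedged sketch of the conversion:

```python
from sklearn.linear_model import LogisticRegression

n_samples, C = 1000, 1.0
equivalent_lambda = 1.0 / (C * n_samples)  # value to try as glmnet's lambda
clf = LogisticRegression(C=C, penalty="l2")
```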