Questions tagged [regularized]

Regularization involves introducing additional information in order to solve an ill-posed problem or to prevent over-fitting by shrinking the parameters of the model toward zero.

In mathematics and statistics, particularly in the fields of machine learning and inverse problems, regularization involves introducing additional information in order to solve an ill-posed problem or to prevent overfitting. This information usually takes the form of a penalty for complexity, such as restrictions on smoothness or bounds on the vector space norm.

From http://en.wikipedia.org/wiki/Regularization_%28mathematics%29


195 questions
0 votes, 0 answers

clogitL1 - extract regression coefficients

I'm an R newbie. I'm using "clogitL1" to run a regularized conditional logistic regression for a matched case-control study with 1021 independent variables (metabolites). I'm not able to extract the regression coefficients. I've tried summary(x),…
0 votes, 1 answer

How to perform elastic-net for a classification problem?

I am a noob and I have previously tackled a linear regression problem using regularised methods. That was all pretty straightforward, but I now want to use elastic net on a classification problem. I have run a baseline logistic regression model and…
DSouthy • 169 • 1 • 3 • 12
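For the elastic-net classification question above: scikit-learn's LogisticRegression supports an elastic-net penalty directly. A minimal sketch on synthetic data (in recent scikit-learn versions, the saga solver is the one that accepts penalty='elasticnet'):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic binary classification data as a stand-in for the asker's dataset.
X, y = make_classification(n_samples=200, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Elastic-net logistic regression: l1_ratio mixes the L1 and L2 penalties
# (0 = pure ridge, 1 = pure lasso); C is the inverse regularization strength.
clf = LogisticRegression(penalty="elasticnet", solver="saga",
                         l1_ratio=0.5, C=1.0, max_iter=5000)
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
```

In practice both l1_ratio and C would be tuned by cross-validation rather than fixed as above.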
0 votes, 0 answers

[Theano] TypeError: cost must be a scalar

I am working on a research project that requires me to write a regularizer for a DNN. import lasagne from lasagne.nonlinearities import leaky_rectify, softmax import theano, theano.tensor as T import numpy as np import sklearn.datasets,…
0 votes, 1 answer

A more imbalanced approach to compute_class_weight

I have a large multi-label array with numbers between 0 and 65. I'm using the following code to generate class weights: class_weights = class_weight.compute_class_weight('balanced', np.unique(labels), labels), where the labels array is the array…
Bar • 83 • 9
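As context for the compute_class_weight snippet above: recent scikit-learn versions require keyword arguments rather than the positional form shown in the question. A minimal sketch with toy imbalanced labels:

```python
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

labels = np.array([0, 0, 0, 0, 1, 1, 2])  # deliberately imbalanced toy labels
classes = np.unique(labels)

# 'balanced' assigns each class c the weight n_samples / (n_classes * count(c)),
# so rarer classes get proportionally larger weights.
weights = compute_class_weight(class_weight="balanced", classes=classes, y=labels)
```

Here class 2 (one sample) receives a larger weight than class 0 (four samples), which is the usual starting point before hand-tuning weights for a more aggressively imbalanced scheme.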
0 votes, 1 answer

Regularized regression using glmnet: No difference between groups?

I am using a regularized regression to select several proteins that best discriminate health condition (binary: either disease or no disease). The purpose of using it is to reduce dimension (variable selection) so that we can have a smaller set of…
KLee • 105 • 1 • 9
0 votes, 1 answer

Neural net: no dropout gives the best test score. Is that bad?

I took over some code from someone, and my task was to reproduce the same model and performance in PyTorch. I was given the best hyper-parameters for that model as well. After playing around with it for quite some time, I see that if I set the dropout rate…
0 votes, 1 answer

Which is the correct implementation of regularization in octave?

I'm currently taking Andrew Ng's machine learning course, and I try implementing the material as I learn it so as not to forget it; I just finished regularization (chapter 7). I know that theta 0 is updated normally, separate from the other parameters,…
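The rule the asker mentions — the intercept theta 0 is updated without the penalty — can be sketched as a single gradient-descent step. The course uses Octave; this is only a NumPy port of the idea, with illustrative names:

```python
import numpy as np

def regularized_gradient_step(theta, X, y, alpha, lam):
    """One gradient-descent step for regularized linear regression.

    theta[0] (the intercept) is updated without the regularization term;
    only theta[1:] are shrunk toward zero.
    """
    m = len(y)
    error = X @ theta - y
    grad = (X.T @ error) / m          # unregularized gradient for every theta
    reg = (lam / m) * theta           # L2 penalty gradient
    reg[0] = 0.0                      # do NOT regularize the intercept
    return theta - alpha * (grad + reg)

# Tiny sanity check: one step on y = 2*x with a column of ones for the intercept.
X = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
y = np.array([2.0, 4.0, 6.0])
theta = regularized_gradient_step(np.zeros(2), X, y, alpha=0.1, lam=1.0)
```

Zeroing out `reg[0]` before combining the two gradient terms is the common way to express "theta 0 is updated normally" in vectorized code.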
0 votes, 0 answers

Regularized Custom Loss Function in Keras with TensorFlow as Backend

Instead of the layer-level regularization that can be implemented in Keras directly, I want to implement the following custom loss function in Keras without adding a regularizer to the layers. Custom Loss = ‖I−P‖_2 + γ‖∂I‖_2, where I is the true and P is the predicted…
Rayyan Khan • 117 • 1 • 2 • 10
0 votes, 1 answer

How to make a function that outputs "Clap" if the number contains the digits 2, 4 or 8?

Each call with a number that contains one of the digits (2, 4, 8) should output "Clap", otherwise "No clap". Call: def clapping(number); clapping(772) → output: "Clap". I made the program, but it seems something is wrong. Can I ask for help to check which is…
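One way to write the clapping function described above (a minimal Python version; the asker's own code isn't shown, so this is a sketch of the spec rather than a fix of their program):

```python
def clapping(number):
    """Return 'Clap' if any decimal digit of number is 2, 4 or 8, else 'No clap'."""
    digits = set(str(abs(number)))          # digits of the number as characters
    return "Clap" if digits & {"2", "4", "8"} else "No clap"
```

For example, `clapping(772)` returns "Clap" because the last digit is 2, while `clapping(35)` returns "No clap".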
0 votes, 0 answers

Custom activity regularizer to smooth the output using a Laplacian operator

I am trying to build a machine learning model to denoise straight lines with random noise. Here are some examples. The red dots are the labels and blue dots are the data for training. The length of each time series data is 100. I fit the data using…
0 votes, 1 answer

Should I find regularization parameter or degree first for Support Vector Regression algorithm?

I am working on a problem to predict the revenue generated by a film. I am using sklearn's support vector regression algorithm with a polynomial kernel. I tried to find the degree that gives the best accuracy using the default value of the regularization parameter.…
V K • 1,645 • 3 • 26 • 57
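For the SVR question above: tuning the degree with C fixed, and then C with that degree fixed, can miss the best pair because the two interact. A joint grid search over both avoids that — a sketch with scikit-learn's GridSearchCV on synthetic data:

```python
from sklearn.datasets import make_regression
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVR

# Synthetic regression data as a stand-in for the film-revenue dataset.
X, y = make_regression(n_samples=120, n_features=5, noise=5.0, random_state=0)

# Search C and degree jointly, so every (C, degree) pair is scored,
# instead of optimizing one with the other held at its default.
param_grid = {"C": [0.1, 1.0, 10.0], "degree": [2, 3, 4]}
search = GridSearchCV(SVR(kernel="poly"), param_grid, cv=3)
search.fit(X, y)
best = search.best_params_
```

With only a handful of values per parameter the full grid stays cheap; for larger grids, RandomizedSearchCV is the usual drop-in replacement.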
0 votes, 1 answer

Why do we need to preserve the "expected output" during dropout?

I am very confused as to why we need to preserve the value of the expected output when performing dropout regularisation. Why does it matter if the mean of the outputs of layer l is different in the training and testing phases? Weights that are…
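The expected-output question above is usually answered with inverted dropout: dividing the surviving activations by the keep probability makes the layer's expected output the same at training and test time, so the network can be used unscaled for inference. A NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.ones(100_000)       # activations of layer l (all ones, for clarity)
keep_prob = 0.8            # each unit survives with probability 0.8

# Inverted dropout: zero out units, then scale the survivors by 1/keep_prob.
# E[out] = keep_prob * (x / keep_prob) = x, so the mean activation seen by
# the next layer matches what it will see at test time with dropout off.
mask = rng.random(x.shape) < keep_prob
out = x * mask / keep_prob
```

Without the 1/keep_prob scaling, the next layer's inputs would be systematically smaller during training than at test time, and the learned weights would be miscalibrated for inference.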
0 votes, 1 answer

Is there a way to add keras 'custom layer' based/specific penalty to the overall loss function?

I have a Keras sequential model with some custom layers in it. Now, in one of the layers, based on the input of that specific layer, I want to calculate a penalty, and I want the penalty to be added to the loss function which the optimizer tries to…
0 votes, 1 answer

Can I create a custom regularization term (Keras) using not only the weight matrix as a parameter?

As stated in the title, I would like to be able to penalize my model weights by creating a custom regularization term. ex: def customized_regularizer(weight_matrix, parameterA, parameterB): return(K.sum(K.dot(parameterA, weight_matrix) -…
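A regularizer that needs parameters beyond the weight matrix can be written as a closure (or a callable class): the framework only ever calls reg(weight_matrix), but the extra parameters are captured from the enclosing scope. A framework-agnostic NumPy sketch — the names and the particular penalty are illustrative, not the asker's exact formula:

```python
import numpy as np

def make_regularizer(parameterA, parameterB):
    """Build a one-argument regularizer with parameterA/parameterB baked in."""
    def reg(weight_matrix):
        # Illustrative penalty: parameterA scales an L1 term,
        # parameterB scales an L2 term.
        l1 = np.abs(weight_matrix).sum()
        l2 = (weight_matrix ** 2).sum()
        return parameterA * l1 + parameterB * l2
    return reg

reg = make_regularizer(0.01, 0.001)
penalty = reg(np.ones((2, 3)))   # 6 ones: 0.01*6 + 0.001*6
```

The same closure pattern works with a Keras backend in place of NumPy, since Keras accepts any one-argument callable as a kernel_regularizer.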
0 votes, 1 answer

Different penalty functions for lasso, elastic net, and ridge regression in R

Is there a package that allows me to change the penalty function to, say, Huber or absolute value, instead of the quadratic L2 norm, when using regularization such as lasso, ridge, or elastic net in R?
Frank • 952 • 1 • 9 • 23