Questions tagged [learning-rate]

83 questions
1
vote
1 answer

Learning rate is not affecting my artificial neural network in R

I have the following model to predict the price of houses in a particular neighborhood: set.seed(100) index_1<-sample(1:nrow(data),round(nrow(data)*0.9)) train<-data[index_1,] #578 obs. test<-data[-index_1,] #62 obs. NModel <- neuralnet(price ~ …
1
vote
1 answer

How to change the learning rate in TensorFlow depending on the number of batches and epochs?

Is it possible to implement the following scenario with TensorFlow: in the first N batches, the learning rate should be increased from 0 to 0.001. After this number of batches has been reached, the learning rate should slowly decrease from…
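One way to get this behaviour in TensorFlow 2.x is a custom LearningRateSchedule. The sketch below ramps the rate linearly from 0 to 0.001 over the first N batches and then decays it slowly; the warm-up length and decay constants are placeholder assumptions, not values from the question.

    import tensorflow as tf

    class WarmupThenDecay(tf.keras.optimizers.schedules.LearningRateSchedule):
        def __init__(self, peak_lr=1e-3, warmup_steps=1000, decay_rate=0.96, decay_steps=10000):
            self.peak_lr = peak_lr            # rate reached at the end of warm-up
            self.warmup_steps = warmup_steps
            self.decay_rate = decay_rate
            self.decay_steps = decay_steps

        def __call__(self, step):
            step = tf.cast(step, tf.float32)
            # Linear ramp from 0 to peak_lr over the first warmup_steps batches.
            warmup = self.peak_lr * step / self.warmup_steps
            # Slow exponential decay once the warm-up phase is over.
            decay = self.peak_lr * self.decay_rate ** ((step - self.warmup_steps) / self.decay_steps)
            return tf.where(step < self.warmup_steps, warmup, decay)

    optimizer = tf.keras.optimizers.Adam(learning_rate=WarmupThenDecay())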
1
vote
1 answer

Forward Pass calculation on current batch in "get_updates" method of Keras SGD Optimizer

I am trying to implement a stochastic Armijo rule in the get_gradient method of the Keras SGD optimizer. Therefore, I need to calculate another forward pass to check whether the learning_rate chosen was good. I don't want another calculation of the…
1
vote
1 answer

PyTorch: looking for a function that lets me manually set learning rates for specific epoch intervals

For example, set lr = 0.01 for the first 100 epochs, lr = 0.001 from epoch 101 to epoch 1000, and lr = 0.0005 for epochs 1001-4000. Basically, my learning rate plan is not to let it decay exponentially with a fixed number of steps. I know it can be…
Yilin He
  • 107
  • 1
  • 7
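For an interval plan like the one in this question, one option is torch.optim.lr_scheduler.LambdaLR, which takes a function mapping the epoch to a multiplier on the base learning rate. The sketch below uses a placeholder linear model and the 0.01 / 0.001 / 0.0005 values from the question.

    import torch

    model = torch.nn.Linear(10, 1)                              # placeholder model
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)    # base lr = 0.01

    def lr_multiplier(epoch):
        # LambdaLR expects a factor applied to the base lr, not an absolute rate.
        if epoch < 100:
            return 1.0      # 0.01 for epochs 1-100
        elif epoch < 1000:
            return 0.1      # 0.001 for epochs 101-1000
        else:
            return 0.05     # 0.0005 for epochs 1001-4000

    scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=lr_multiplier)

    for epoch in range(4000):
        # ... run one training epoch here ...
        scheduler.step()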
1
vote
1 answer

TensorFlow 2.0 - Learning Rate Scheduler

I am using Python 3.7 and TensorFlow 2.0. I have to train a neural network for 160 epochs with the following learning rate schedule: decrease the learning rate by a factor of 10 at epochs 80 and 120, with an initial learning rate of 0.01. How…
Arun
  • 2,222
  • 7
  • 43
  • 78
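A minimal sketch of that schedule with tf.keras.callbacks.LearningRateScheduler, assuming a standard compiled Keras model; the fit call is shown commented out because the model and data are not part of the excerpt.

    import tensorflow as tf

    def step_decay(epoch, lr):
        # Initial lr = 0.01, divided by 10 at epochs 80 and 120.
        if epoch < 80:
            return 0.01
        elif epoch < 120:
            return 0.001
        return 0.0001

    lr_callback = tf.keras.callbacks.LearningRateScheduler(step_decay, verbose=1)
    # model.fit(x_train, y_train, epochs=160, callbacks=[lr_callback])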
1
vote
1 answer

I want to add more than one argument in callbacks. How can I solve this error?

I get the error "AttributeError: 'list' object has no attribute 'set_model'" when I add the learning rate schedule callback callbacks_list in model.fit_generator. How can I solve this error? lrate =…
Eda
  • 565
  • 1
  • 7
  • 18
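That particular AttributeError usually means a list was nested inside the callbacks list, so Keras tries to call set_model on the inner list instead of on a callback. A sketch of the distinction, with a hypothetical checkpoint path:

    from tensorflow.keras.callbacks import LearningRateScheduler, ModelCheckpoint

    lrate = LearningRateScheduler(lambda epoch: 0.01 * (0.1 ** (epoch // 30)))
    checkpoint = ModelCheckpoint("weights.h5", save_best_only=True)   # hypothetical path

    callbacks_list = [lrate, checkpoint]     # a flat list of callback objects

    # Wrong:  model.fit_generator(gen, callbacks=[callbacks_list])  -> 'list' has no 'set_model'
    # Right:  model.fit_generator(gen, callbacks=callbacks_list)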
1
vote
1 answer

Learning rate, loss and batch size

Is loss dependent on the learning rate and batch size? For example, if I keep the batch size at 4 and a learning rate of, say, 0.002, the loss does not converge, but if I change the batch size to 32 while keeping the learning rate the same, I get a converging loss curve. Is…
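The loss itself is a function of the model and data, but how well it converges does depend on how batch size and learning rate interact. A common heuristic (an assumption, not a guarantee) is the linear scaling rule sketched below.

    # When the batch size grows by a factor k, scale the learning rate by roughly
    # the same factor so the average update magnitude stays comparable.
    base_lr, base_batch = 0.002, 4
    new_batch = 32
    scaled_lr = base_lr * new_batch / base_batch    # 0.016
    print(scaled_lr)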
0
votes
0 answers

PyTorch Lightning learning rate tuner giving unexpected results

I'm trying to find an optimal learning rate using pl.tuner.Tuner, but the results aren't as expected. The model I am running is a linear classifier on top of a BertForSequenceClassification AutoModel. I want to find the optimum learning rate when…
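For reference, a self-contained sketch of the Tuner.lr_find workflow in PyTorch Lightning 2.x, using a tiny stand-in module instead of the BERT classifier; the tuner looks for a learning-rate attribute on the module (here self.lr) and sweeps it.

    import torch
    import pytorch_lightning as pl
    from pytorch_lightning.tuner import Tuner
    from torch.utils.data import DataLoader, TensorDataset

    class TinyClassifier(pl.LightningModule):
        def __init__(self, lr=1e-3):
            super().__init__()
            self.lr = lr                      # the attribute the tuner sweeps
            self.net = torch.nn.Linear(16, 2)

        def training_step(self, batch, batch_idx):
            x, y = batch
            return torch.nn.functional.cross_entropy(self.net(x), y)

        def configure_optimizers(self):
            return torch.optim.Adam(self.parameters(), lr=self.lr)

    loader = DataLoader(TensorDataset(torch.randn(3200, 16),
                                      torch.randint(0, 2, (3200,))), batch_size=32)

    trainer = pl.Trainer(max_epochs=1)
    lr_finder = Tuner(trainer).lr_find(TinyClassifier(), train_dataloaders=loader)
    print(lr_finder.suggestion())             # suggested starting learning rate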
0
votes
0 answers

Update TensorFlow optimizer to allow resuming interrupted training

My model is implemented in TensorFlow, and since it might take a long time to train, I am trying to implement a way to resume it when training has been interrupted, but I haven't been able to make sure I get the same results when training was…
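One common way to make training resumable in TensorFlow 2.x is to checkpoint the optimizer together with the model, since the optimizer's iteration count and slot variables (e.g. Adam's moment estimates) are what schedules and updates depend on. The model and paths below are placeholders.

    import tensorflow as tf

    model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
    optimizer = tf.keras.optimizers.Adam(1e-3)

    ckpt = tf.train.Checkpoint(model=model, optimizer=optimizer)
    manager = tf.train.CheckpointManager(ckpt, "./ckpts", max_to_keep=3)

    # On startup, restore the latest checkpoint if one exists, then keep training.
    if manager.latest_checkpoint:
        ckpt.restore(manager.latest_checkpoint)

    # ... inside the training loop, call manager.save() every few steps/epochs ...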
0
votes
0 answers

TensorBoard is only showing the end learning rate for a polynomial decay learning rate with the Adam optimizer?

I recently started using TensorBoard to monitor my machine learning project. I use the Adam optimizer with a decaying learning rate: early_stop = keras.callbacks.EarlyStopping(monitor='val_loss', patience=4) lr_schedule =…
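When the optimizer's learning_rate is a schedule object (as with PolynomialDecay), one way to see the effective rate in TensorBoard is to evaluate the schedule at the current step yourself. A sketch of such a callback, with the log directory left as a placeholder:

    import tensorflow as tf

    class LRTensorBoard(tf.keras.callbacks.Callback):
        def __init__(self, log_dir):
            super().__init__()
            self.writer = tf.summary.create_file_writer(log_dir)

        def on_epoch_end(self, epoch, logs=None):
            lr = self.model.optimizer.learning_rate
            if isinstance(lr, tf.keras.optimizers.schedules.LearningRateSchedule):
                lr = lr(self.model.optimizer.iterations)   # evaluate at the current step
            with self.writer.as_default():
                tf.summary.scalar("learning_rate", tf.cast(lr, tf.float32), step=epoch)

    # model.fit(..., callbacks=[LRTensorBoard("logs/lr"), early_stop, ...])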
0
votes
0 answers

Role that learning_rate plays in the reproducibility of PyTorch models

I have a Bayesian neural network which is implemented in PyTorch and is trained via an ELBO loss. I have faced some reproducibility issues even when I use the same seed and set the following code: # python seed =…
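The learning rate itself shouldn't change determinism; what usually does is an unseeded source of randomness (data loader workers, non-deterministic cuDNN kernels, Monte Carlo sampling in the Bayesian layers). For comparison with whatever the question's code sets, a sketch of the usual PyTorch seeding recipe:

    import os, random
    import numpy as np
    import torch

    seed = 42
    os.environ["PYTHONHASHSEED"] = str(seed)
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    torch.backends.cudnn.deterministic = True   # pick deterministic cuDNN kernels
    torch.backends.cudnn.benchmark = False      # disable kernel autotuning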
0
votes
1 answer

Using different learning rates for different variables in TensorFlow

Is it possible to set different learning rates for different variables in the same layer in TensorFlow? For example, in a dense layer, how can you set a learning rate of 0.001 for the kernel while setting the learning rate for the bias to be…
mehini
  • 21
  • 2
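Keras optimizers take a single rate, so one workaround is to compute the gradients once and let two optimizers apply them to disjoint variable lists. A sketch with a standalone Dense layer; the 0.001 kernel rate is from the question, while the bias rate and other values are placeholder assumptions.

    import tensorflow as tf

    layer = tf.keras.layers.Dense(4)
    layer.build((None, 8))

    opt_kernel = tf.keras.optimizers.SGD(learning_rate=0.001)   # rate for the kernel
    opt_bias = tf.keras.optimizers.SGD(learning_rate=0.01)      # assumed rate for the bias

    x = tf.random.normal((16, 8))
    y = tf.random.normal((16, 4))

    with tf.GradientTape() as tape:
        loss = tf.reduce_mean(tf.square(layer(x) - y))

    grad_kernel, grad_bias = tape.gradient(loss, [layer.kernel, layer.bias])
    opt_kernel.apply_gradients([(grad_kernel, layer.kernel)])
    opt_bias.apply_gradients([(grad_bias, layer.bias)])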
0
votes
0 answers

ReduceLROnPlateau keeps decreasing LR across multiple models

I'm using ReduceLROnPlateau for multiple experiments, but I'm getting a lower and lower initial learning rate for each consecutive model run. from tensorflow.keras.callbacks import ReduceLROnPlateau for model in models: reduce_lr =…
Mateusz Dorobek
  • 702
  • 1
  • 6
  • 22
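ReduceLROnPlateau lowers the optimizer's learning rate in place, so an optimizer (or a model compiled once and reused) that is carried over starts the next run at the already-reduced rate. A sketch of building a fresh model, optimizer and callback inside the loop; the tiny model and hyperparameters are placeholders.

    from tensorflow.keras import Sequential, layers
    from tensorflow.keras.callbacks import ReduceLROnPlateau
    from tensorflow.keras.optimizers import Adam

    def build_model():
        return Sequential([layers.Dense(8, activation="relu"), layers.Dense(1)])

    for run in range(3):
        model = build_model()
        # Fresh optimizer per run, so the previous run's reductions don't carry over.
        model.compile(optimizer=Adam(learning_rate=1e-3), loss="mse")
        reduce_lr = ReduceLROnPlateau(monitor="val_loss", factor=0.5, patience=3)
        # model.fit(x, y, validation_split=0.2, epochs=50, callbacks=[reduce_lr])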
0
votes
0 answers

Using Strong Wolfe Condition to find learning rate

I'm trying to optimize a function with gradient descent code that I wrote, but I want to use another function file related to the strong Wolfe condition to find a good alpha. Both of my scripts work with an equation, and at the same time I get -inf for…
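For reference, the two strong Wolfe conditions can be checked directly for a candidate step size. The sketch below is only a backtracking check on a toy quadratic, not a full bracketing/zoom line search.

    import numpy as np

    def strong_wolfe_ok(f, grad, x, d, alpha, c1=1e-4, c2=0.9):
        """True if step `alpha` along direction `d` satisfies sufficient decrease
        and the strong curvature condition."""
        g0_d = grad(x) @ d
        sufficient_decrease = f(x + alpha * d) <= f(x) + c1 * alpha * g0_d
        curvature = abs(grad(x + alpha * d) @ d) <= c2 * abs(g0_d)
        return sufficient_decrease and curvature

    # Toy example: f(x) = ||x||^2, search direction = negative gradient.
    f = lambda x: float(x @ x)
    grad = lambda x: 2 * x
    x = np.array([3.0, -2.0])
    d = -grad(x)

    alpha = 1.0
    while not strong_wolfe_ok(f, grad, x, d, alpha):
        alpha *= 0.5        # simple backtracking; a real search also widens alpha
    print(alpha)            # step size accepted by the strong Wolfe conditions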
0
votes
0 answers

What does this learning rate test curve mean?

I encountered an oscillating training loss curve problem, so I followed some tutorials and tried to solve it. I did a learning rate test with the LR ranging from 1e-7 to 0.1. The network I used is ResNet-50, and the optimizer is Adam. Below is the…
Kitty
  • 29
  • 5