Questions tagged [log-likelihood]

Use this tag only for questions related to the implementation and usage of the log-likelihood function.

Given a sample and a parametric family of distributions (i.e., a set of distributions indexed by a parameter) that could have generated the sample, the likelihood is the function that maps each parameter value to the probability (or probability density) of observing the given sample under that parameter.

The log-likelihood is the natural logarithm of the likelihood function.

For many applications, the log-likelihood is more convenient to work with than the likelihood. This is because we are generally interested in where the likelihood reaches its maximum value. Since the logarithm is a strictly increasing function, the logarithm of a function achieves its maximum at the same points as the function itself, so the log-likelihood can be used in place of the likelihood for maximum likelihood estimation and related techniques.

Finding the maximum of a function often involves differentiating it with respect to the parameter and solving for that parameter, and this is often easier when the function being maximized is a log-likelihood rather than the original likelihood. Because the probability of several independent observations is the product of their individual probabilities, taking the logarithm turns that product into a sum, and differentiating and solving a sum of terms is usually easier than working with a product.
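The points above can be sketched with a small, hypothetical example (not taken from any question on this page): estimating the bias of a coin from simulated flips. The raw likelihood, a product of many probabilities, underflows in double precision, while the log-likelihood remains a well-behaved sum whose maximum gives the familiar closed-form estimate.

```python
import numpy as np

# Hypothetical example: simulated coin flips with true bias 0.7 (1 = heads).
rng = np.random.default_rng(0)
x = rng.binomial(1, 0.7, size=2000)

# The raw likelihood is a product of 2000 probabilities, each <= 0.7,
# and underflows to 0.0 in double precision:
raw_likelihood = np.prod(np.where(x == 1, 0.7, 0.3))

def log_likelihood(p, x):
    # Independence turns the product into a sum:
    # sum over observations of x*log(p) + (1 - x)*log(1 - p).
    return np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))

# Setting the derivative with respect to p to zero gives the closed-form
# MLE p_hat = mean(x); a coarse grid search over p agrees with it.
p_hat = x.mean()
grid = np.linspace(0.01, 0.99, 99)
p_grid = grid[np.argmax([log_likelihood(p, x) for p in grid])]

print(raw_likelihood, p_hat, p_grid)
```

Both estimates land close to the true bias of 0.7, while the un-logged likelihood is numerically useless, which is exactly why optimizers are given the (negative) log-likelihood in practice.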

232 questions
1
vote
1 answer

LogLikelihood and MultinomialDistribution in Mathematica

Can someone explain to me why the following code LogLikelihood[ MultinomialDistribution[ countstot, {dt1/ttot, dt2/ttot, dt3/ttot, dt4/ttot, dt5/ttot}], {CR1, CR2, CR3, CR4, CR5}] does not produce a number as output, but instead…
J. Dowe
  • 99
  • 1
  • 5
1
vote
0 answers

Given a regressor built using Keras, using negative log likelihood loss, how can I get both the mean and the std as separate outputs?

I'm having a hard time getting a regressor to work correctly, using a custom loss function. I'm currently using several datasets which contain data for transprecision computing benchmark experiments, here's a snippet from one of them: | var_0 |…
1
vote
0 answers

R: Differences in MLE estimation of t distribution

I am currently working on a group project estimating VaR and ES for a series of returns. One of the tasks is to estimate the degrees of freedom of a t distribution. I am using the following approach: make_loglik <- function(x){ Vectorize(…
VDAA
  • 11
  • 2
1
vote
3 answers

Implementing negative log-likelihood function in python

I'm having some difficulty implementing a negative log-likelihood function in Python. My negative log-likelihood function is given as: This is my implementation, but I keep getting the error: ValueError: shapes (31,1) and (2458,1) not aligned: 1…
1
vote
0 answers

calculating log likelihood for multivariate linear regression using R

I want to calculate the log-likelihood for multivariate linear regression. I'm not sure whether this code is correct. I've calculated the log-likelihood using the dmvnorm function in the mvtnorm R package. sdmvn_mle <- function(obj){ sdmvn_mle_1…
1
vote
0 answers

How to use integral2 to evaluate integral of (apparently) non-vectorized functions?

I've noticed some weird facts about integral2. These are probably due to my limitations in understanding how it works. I have some difficulties in integrating out variables when I have particular functions. For instance, look at the following…
1
vote
1 answer

Is there a way so that an already defined argument can appear as missing when using optim() function in R?

I am trying to get the maximum likelihood estimators of the log-likelihood of a Gumbel distribution for survival analysis (I say that so that you don't get confused by the log-likelihood function; I think it's correct). In order to do that I have to…
nnisgia
  • 37
  • 5
1
vote
0 answers

How to implement mini-batch gradient descent for maximum likelihood estimation python?

Currently, I have some code written that finds the combination of parameters that maximizes the log-likelihood function with some field data. The model now randomly selects the parameters out of a uniform distribution and then selects the best…
mj1496
  • 61
  • 1
  • 9
1
vote
1 answer

Difference between probability and likelihood

I had to think through what it meant when I read that likelihood is not probability, but the following case occurred to me. What is the likelihood that a coin is fair, given that we see four heads in a row? We can't really say anything about…
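The coin example in the excerpt above can be made concrete with a minimal sketch (a hypothetical illustration, not the asker's code): the likelihood of a bias p after observing four heads is L(p) = p⁴, a function of the parameter rather than a probability distribution over it.

```python
# Hypothetical illustration: likelihood of a coin bias p after seeing HHHH.
# L(p) = P(HHHH | p) = p**4 -- this is a function of the parameter p, not a
# probability distribution over p (it does not integrate to 1 in p).
def likelihood(p):
    return p**4

# A fair coin (p = 0.5) has likelihood 0.0625; a two-headed coin (p = 1)
# has likelihood 1, so p = 1 is the maximum-likelihood estimate here.
print(likelihood(0.5), likelihood(1.0))  # 0.0625 1.0
```

This is the usual resolution of the question: likelihood compares parameter values against fixed data, so asking for "the probability that the coin is fair" requires a prior over p, which likelihood alone does not supply.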
1
vote
1 answer

Curve fitting using Maximum Likelihood Estimator implementations not working

I'm implementing a Maximum Likelihood Estimator for discrete count data for the purpose of curve fitting, using the result of curve_fit as the starting point for minimize. I defined and tried these methods for multiple distributions, but will…
1
vote
1 answer

Evaluating log-likelihood of unseen data in rstan

I understand I can calculate the log likelihood of each sample during sampling, e.g. ... model { for (i in 1:N) { (y[i] - 1) ~ bernoulli(p[i, 2]); } } generated quantities { vector[N] log_lik; for (i in 1:N){ log_lik[i] =…
Jack Brookes
  • 3,720
  • 2
  • 11
  • 22
1
vote
0 answers

Approximator of Log likelihood of tanh(mean + std*z)

I have been trying to understand a blog on soft actor critic where we have a neural network representing a policy that outputs mean and std of gaussian distribution of action for a given state. Since direct back-propagation through stochastic node…
1
vote
0 answers

KERAS: Custom Loss Function from log-likelihood

I'm trying to implement a custom loss function as per this paper, equation 3.8 page 19. I arrived at this implementation: import numpy as np import keras.backend as T from keras import models def costume_loss(censored): '''costume loss…
Juan Castaño
  • 67
  • 3
  • 11
1
vote
1 answer

How to add constraint into loglikelihood function?

I have a time series model (INGARCH): lambda_t = alpha0 + alpha1*(x_(t-1)) + beta1*(lambda_(t-1)) X_t ~ poisson (lambda_t) where t is the length of observation or data, alpha0, alpha1 and beta1 are the parameters. X_t is the series of data,…
Miyazaki
  • 99
  • 8
1
vote
1 answer

Efficiently evaluate Multivariate Normal

I want to evaluate datapoints that arise from multivariate normal densities. I have to evaluate each datapoint with respect to different means and covariance matrices. I have two means for each observation to evaluate the likelihood. Also, I have…
yrx1702
  • 1,619
  • 15
  • 27