Questions tagged [log-likelihood]

Only questions related to the implementation and usage of the log-likelihood function should use this tag.

Given a sample and a parametric family of distributions (i.e., a set of distributions indexed by a parameter) that could have generated the sample, the likelihood is the function that maps each parameter to the probability (or probability density) of observing the given sample.

The log-likelihood is the natural logarithm of the likelihood function.

For many applications, the log-likelihood is more convenient to work with than the likelihood. This is because we are generally interested in where the likelihood reaches its maximum value, and since the logarithm is a strictly increasing function, the logarithm of a function achieves its maximum at the same points as the function itself. The log-likelihood can therefore be used in place of the likelihood for maximum likelihood estimation and related techniques.

Finding the maximum of a function often involves taking its derivative and solving for the parameter being maximized. This is usually easier with the log-likelihood than with the original likelihood: the probability of a conjunction of several independent variables is the product of their individual probabilities, the logarithm turns that product into a sum, and a sum is generally easier to differentiate and solve than a product.
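The point above can be sketched numerically: for a Bernoulli sample, the log-likelihood is a sum of logs, and its maximizer agrees with the maximizer of the likelihood itself. A minimal NumPy sketch (all names illustrative, not library code):

```python
import numpy as np

# Sample of 0/1 outcomes; the Bernoulli(p) likelihood is the product
# prod(p**x * (1 - p)**(1 - x)); the log-likelihood is the sum below.
sample = np.array([1, 0, 1, 1, 0, 1, 1, 0, 1, 1])

def log_likelihood(p, x):
    # sum of x*log(p) + (1 - x)*log(1 - p) over the sample
    return np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))

# Because log is strictly increasing, the argmax of the log-likelihood
# coincides with the argmax of the likelihood.
grid = np.linspace(0.01, 0.99, 99)
p_hat = grid[np.argmax([log_likelihood(p, sample) for p in grid])]
# p_hat agrees with the analytic MLE, the sample mean 0.7
```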

232 questions
0 votes, 2 answers

Tensorflow log-likelihood for two probability vectors which might contain zeros

Suppose I have two tensors, p1 and p2, in TensorFlow of the same shape which contain probabilities, some of which might be zero or one. Is there an elegant way of calculating the log-likelihood pointwise: p1*log(p2) + (1-p1)*log(1-p2)? Implementing…
patapouf_ai
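A common way to handle zeros and ones here is to clip the probabilities away from the endpoints before taking logs. A minimal sketch in NumPy rather than TensorFlow (in TensorFlow, `tf.clip_by_value` plays the same role; function names are illustrative):

```python
import numpy as np

def pointwise_log_likelihood(p1, p2, eps=1e-7):
    # Clip p2 away from exactly 0 and 1 so log never sees 0.
    p2 = np.clip(p2, eps, 1 - eps)
    # log1p(-p2) computes log(1 - p2) with better precision near 0.
    return p1 * np.log(p2) + (1 - p1) * np.log1p(-p2)

p1 = np.array([0.0, 1.0, 0.5])
p2 = np.array([0.0, 1.0, 0.5])
out = pointwise_log_likelihood(p1, p2)
# Every entry is finite even where p2 is exactly 0 or 1.
```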
-1 votes, 2 answers

Finding the log likelihood using numpy

I am trying to use numpy to get the log-likelihood for naive Bayes. The following is the probability of getting 1 in each dimension when the label is +1 and -1 respectively: positive = [0.07973422 0.02657807] negative = [0.04651163 0.02491694] #both of…
puru
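For context, under the naive Bayes independence assumption the per-class log-likelihood of a binary feature vector is a sum of per-dimension log-probabilities. A minimal NumPy sketch using the probability vectors quoted in the question (function and variable names are illustrative):

```python
import numpy as np

# P(x_i = 1 | class) per dimension, as given in the question.
positive = np.array([0.07973422, 0.02657807])
negative = np.array([0.04651163, 0.02491694])

def class_log_likelihood(x, theta):
    # Independence assumption: sum log P(x_i | class) over dimensions.
    return np.sum(x * np.log(theta) + (1 - x) * np.log(1 - theta))

x = np.array([1, 0])  # an example binary observation
ll_pos = class_log_likelihood(x, positive)
ll_neg = class_log_likelihood(x, negative)
# Comparing ll_pos and ll_neg (plus class priors) gives the prediction.
```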
-1 votes, 1 answer

How do you compute the BIC in R with high dimensional data

I have a high-dimensional data set with 200 parameters and 50 observations. I am attempting to compute the BIC in R. I am aware that BIC = log(n)*df - 2*log(L), where L is the likelihood. I am just wondering how one computes L. I believe I need to…
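For a Gaussian linear model, the maximized log-likelihood can be written in terms of the residual sum of squares, which makes BIC = log(n)*df - 2*log(L) computable directly from a least-squares fit. A minimal sketch in NumPy rather than R, on toy low-dimensional data (with 200 parameters and only 50 observations, the unregularized likelihood is not well defined; all names illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 50, 3  # toy setting: more observations than parameters
X = rng.normal(size=(n, k))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=n)

# Least-squares fit; lstsq returns the residual sum of squares directly.
beta, rss, *_ = np.linalg.lstsq(X, y, rcond=None)
rss = float(rss[0])

# Maximized Gaussian log-likelihood in terms of RSS.
log_l = -0.5 * n * (np.log(2 * np.pi) + np.log(rss / n) + 1)
df = k + 1  # coefficients plus the error variance
bic = np.log(n) * df - 2 * log_l
```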
-3 votes, 1 answer

How do I make a loss function for a Weibull Distribution model?

I want to make a model using TensorFlow which will return the two characteristics of a Weibull distribution. In order to make it I need to create a loss function which fits the Weibull distribution. I found online how to make a negative log-likelihood…
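The Weibull negative log-likelihood follows directly from the density (k/λ)(x/λ)^(k-1) exp(-(x/λ)^k). A minimal NumPy sketch (in an actual TensorFlow loss one would express the same formula with tf ops so gradients can flow; names here are illustrative):

```python
import numpy as np

def weibull_nll(x, shape_k, scale_lam):
    # log of the Weibull pdf, term by term:
    # log k - log lam + (k - 1) * log(x / lam) - (x / lam)**k
    z = x / scale_lam
    log_pdf = (np.log(shape_k) - np.log(scale_lam)
               + (shape_k - 1) * np.log(z) - z**shape_k)
    # Negative log-likelihood: minimize this to fit (shape, scale).
    return -np.sum(log_pdf)

data = np.array([0.5, 1.2, 0.8, 2.0])
loss = weibull_nll(data, shape_k=1.5, scale_lam=1.0)
```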
-3 votes, 1 answer

Deep Learning Log Likelihood

I am a newbie to the deep learning field, and I am using the log-likelihood method to compare the MSE metrics. Could anyone show how to calculate the following 2 predicted output examples with 3 output neurons each? Thanks. yt = […
Kev
-4 votes, 1 answer

How can I estimate the standard error in maximum likelihood estimation?

I can’t estimate the standard error for a parameter in maximum likelihood estimation. How can I do it?
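One standard approach: the standard error of an MLE can be approximated from the observed Fisher information, i.e. the second derivative of the negative log-likelihood at the maximum. A minimal NumPy sketch for a Bernoulli sample, where the analytic answer sqrt(p(1-p)/n) is available for comparison (all names illustrative):

```python
import numpy as np

def neg_log_lik(p, x):
    # Bernoulli negative log-likelihood.
    return -np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))

x = np.array([1, 0, 1, 1, 0, 1, 1, 0, 1, 1])
p_hat = x.mean()  # analytic MLE for a Bernoulli sample

# Observed Fisher information via a central-difference second derivative.
h = 1e-5
fisher = (neg_log_lik(p_hat + h, x) - 2 * neg_log_lik(p_hat, x)
          + neg_log_lik(p_hat - h, x)) / h**2
se = 1.0 / np.sqrt(fisher)
# se matches the analytic sqrt(p_hat*(1-p_hat)/n) up to numerical error.
```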