Questions tagged [log-likelihood]

Use this tag only for questions about the implementation and usage of the log-likelihood function.

Given a sample and a parametric family of distributions (i.e., a set of distributions indexed by a parameter) that could have generated the sample, the likelihood is the function that maps each parameter value to the probability (or probability density) of observing the given sample.

The log-likelihood is the natural logarithm of the likelihood function.

For many applications the log-likelihood is more convenient to work with than the likelihood. This is because we are generally interested in where the likelihood reaches its maximum value. Since the logarithm is a strictly increasing function, the logarithm of a function achieves its maximum at the same points as the function itself, so the log-likelihood can be used in place of the likelihood for maximum likelihood estimation and related techniques.

Finding the maximum of a function often involves taking its derivative and solving for the parameter being maximized. This is often easier when the function being maximized is the log-likelihood rather than the original likelihood, because the probability of a conjunction of several independent observations is the product of their individual probabilities, and the logarithm turns that product into a sum; solving an additive equation is usually easier than a multiplicative one.
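As a concrete illustration (a minimal sketch with synthetic data, not taken from any question below), the following estimates a normal distribution's mean and standard deviation by maximum likelihood, minimizing the negative log-likelihood, which is a sum of log-densities rather than a product of densities:

```python
# Minimal sketch: MLE of a normal distribution's mean and standard deviation
# by minimizing the negative log-likelihood. The sample is synthetic.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
sample = rng.normal(loc=5.0, scale=2.0, size=1000)

def neg_log_likelihood(params, data):
    mu, sigma = params
    if sigma <= 0:                      # keep the scale parameter valid
        return np.inf
    # Independent observations: the log-likelihood is a sum of log-densities.
    return -np.sum(norm.logpdf(data, loc=mu, scale=sigma))

result = minimize(neg_log_likelihood, x0=[0.0, 1.0], args=(sample,),
                  method="Nelder-Mead")
print(result.x)                        # MLE of (mu, sigma)
print(sample.mean(), sample.std())     # closed-form MLEs for comparison
```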

232 questions
0 votes · 0 answers

Tensorflow: Modify datapoints used in loss function evaluation after each gradient step using tf optimizer

Typically a tf optimizer flow is as follows: # Create an optimizer. opt = GradientDescentOptimizer(learning_rate=0.1) # Compute the gradients for a list of variables. grads_and_vars = opt.compute_gradients(loss, ) #…
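The excerpt shows the TF1 compute_gradients/apply_gradients flow; below is a minimal TF 2.x sketch of the same idea (swapping in fresh data points before every gradient step) using tf.GradientTape, with a synthetic Gaussian negative log-likelihood as a stand-in loss:

```python
# Sketch only: a TF 2.x training loop where the data points used by the
# (negative log-likelihood) loss are replaced before every gradient step.
# The Gaussian model and the data-generating line are illustrative stand-ins.
import numpy as np
import tensorflow as tf

mu = tf.Variable(0.0)
log_sigma = tf.Variable(0.0)            # optimize log(sigma) so sigma stays positive
opt = tf.keras.optimizers.Adam(learning_rate=0.1)

def neg_log_likelihood(data):
    sigma = tf.exp(log_sigma)
    return tf.reduce_mean(0.5 * tf.math.log(2.0 * np.pi) + log_sigma
                          + 0.5 * ((data - mu) / sigma) ** 2)

for step in range(500):
    # Fresh data points each iteration; swap in whatever update rule you need here.
    data = tf.constant(np.random.normal(3.0, 1.5, size=256), dtype=tf.float32)
    with tf.GradientTape() as tape:
        loss = neg_log_likelihood(data)
    grads = tape.gradient(loss, [mu, log_sigma])
    opt.apply_gradients(zip(grads, [mu, log_sigma]))

print(mu.numpy(), tf.exp(log_sigma).numpy())   # roughly (3.0, 1.5)
```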
0 votes · 1 answer

Likelihood ratio test with Wald statistic

I wrote code for the log-likelihood (LL) of my unrestricted model and my restricted model and optimized this code with optim. My test is to check whether 2 standard deviations are the same. Now I want to check whether my constraint is true or not, and I used the…
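For reference, the likelihood-ratio statistic is twice the difference of the two maximized log-likelihoods, compared against a chi-squared distribution with as many degrees of freedom as there are constraints. A minimal sketch with placeholder log-likelihood values (not the asker's numbers):

```python
# Sketch of a likelihood-ratio test, assuming you already have the maximized
# log-likelihoods of the unrestricted and restricted models.
from scipy.stats import chi2

ll_unrestricted = -410.2   # placeholder value
ll_restricted = -413.9     # placeholder value
df = 1                     # number of constraints imposed by the restriction

lr_stat = 2.0 * (ll_unrestricted - ll_restricted)
p_value = chi2.sf(lr_stat, df)
print(lr_stat, p_value)
```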
0 votes · 2 answers

Why _joint_log_likelihood has large negative values

How should I interpret the large negative values of _joint_log_likelihood? Assume the data has only a T/F class variable. # Programming assignment 2 import pickle from sklearn.model_selection import train_test_split from sklearn.naive_bayes import…
sapy • 8,952 • 7 • 49 • 60
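Roughly speaking, a naive Bayes joint log-likelihood is the log class prior plus a sum of per-feature log-probabilities, each of which is at most 0 (for probabilities) and usually negative, so large negative totals are expected rather than a sign of error. A small illustration with made-up probabilities:

```python
# Why joint log-likelihoods come out large and negative: each feature
# contributes a log-probability <= 0, and the contributions add up.
import numpy as np

feature_probs = np.full(50, 0.1)          # 50 independent features, each with p = 0.1
joint_prob = np.prod(feature_probs)       # shrinks toward 0 very quickly
joint_log_prob = np.sum(np.log(feature_probs))
print(joint_prob)       # 1e-50
print(joint_log_prob)   # about -115: large negative, but perfectly normal
```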
0 votes · 2 answers

Likelihood ratio test in R

I have 2 linear models I have run in R: model_1_regression <- lm(model_1$ff4f_actual_excess_return_month1 ~ model_1$Rm.Rf + model_1$SMB + model_1$HML + model_1$MOM, …
OSW • 13 • 1 • 1 • 3
0 votes · 0 answers

R- Likelihood function for Cox model with frailty

I am trying to write R code for the likelihood function of a Cox model with gamma frailty. I know there are packages in R which would do that easily, but I want to write a customized likelihood function for a Cox model with gamma frailty for my…
0 votes · 0 answers

finding values that maximize log likelihood in python

I am trying to maximize the following function: my goal is to find the values of the vectors x and y that maximize L. K_i^out and K_i^in are the in-degree and the out-degree of a node i in graph G, which are basically integers from 0 to 100.…
Thomas Mc Donald • 407 • 1 • 5 • 11
0 votes · 1 answer

Computing the marginal likelihood of a Gaussian model in R with integrate()

I am trying to compute the marginal likelihood of a Gaussian model in R. More precisely, I am trying to integrate the likelihood over both a Gaussian prior on mu and a Gaussian prior on sigma, with some observations yi. In other words, I am trying…
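A minimal sketch of the idea, in Python rather than R's integrate(): the marginal likelihood is the likelihood integrated against the priors on mu and sigma. The data, prior hyperparameters, and integration limits below are illustrative assumptions, not the asker's setup:

```python
# Sketch: marginal likelihood of a Gaussian model by integrating the likelihood
# against a Gaussian prior on mu and a (positively truncated) Gaussian prior on sigma.
import numpy as np
from scipy import integrate
from scipy.stats import norm

y = np.array([4.2, 5.1, 5.9, 4.7, 5.6])   # illustrative observations

def integrand(sigma, mu):
    likelihood = np.prod(norm.pdf(y, loc=mu, scale=sigma))
    prior_mu = norm.pdf(mu, loc=5.0, scale=2.0)
    prior_sigma = norm.pdf(sigma, loc=1.0, scale=1.0)   # kept positive by the limits
    return likelihood * prior_mu * prior_sigma

# dblquad integrates the first argument (sigma) over the inner limits and the
# second (mu) over the outer ones; limits are chosen to cover the integrand's mass.
marginal, abs_err = integrate.dblquad(integrand, 2.0, 8.0, 0.05, 8.0)
print(marginal, abs_err)
```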
0 votes · 1 answer

Likelihood Ratio Test in R for hypothesis testing

I am trying to compute the log-likelihood ratio test in R, but am having some difficulties. For some reason I keep getting a negative log-likelihood value, which I thought wasn't possible, and I do not know the reason. This is the data I am using. Here is the code…
mrsquid • 605 • 2 • 9 • 24
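Note that a negative log-likelihood value is entirely possible (and common): the log-likelihood is a sum of log densities or probabilities, and the log of anything below 1 is negative. A one-line illustration:

```python
# The log of a density value below 1 is negative, so negative log-likelihoods are normal.
from scipy.stats import norm
print(norm.logpdf(0, loc=0, scale=1))   # about -0.919, i.e. log(1/sqrt(2*pi))
```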
0 votes · 2 answers

Automating a function to return an expression with math constants and unknowns

I am trying to build a transition matrix from panel data observations in order to obtain the ML estimators of a weighted transition matrix. A key step is obtaining the individual likelihood function. Say you have the following data…
Arrebimbomalho • 176 • 1 • 12
0 votes · 1 answer

Sweep a log-dnorm across a training set matrix to find log-likelihood

As part of a machine learning class assignment, I am implementing a NaiveBayes classifier without using any external library. My training data set X has 8 features and one binary label for 800 rows; I have calculated 1:8 vectors for mean and sd for…
Paco Cruz • 73 • 1 • 8
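A numpy analogue of that per-class computation (R's sweep over a matrix corresponds to broadcasting here), assuming per-feature mean and sd vectors for a single class; the data and shapes below are illustrative, not the asker's:

```python
# Sketch: Gaussian naive Bayes log-likelihood for a whole matrix at once.
# X is (n_samples, n_features); mu and sd are per-feature vectors for one class.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
X = rng.normal(size=(800, 8))
mu = X.mean(axis=0)
sd = X.std(axis=0)

# Broadcasting sweeps the per-feature mean/sd across every row of X,
# and summing over axis=1 adds up the per-feature log-densities.
log_lik_per_row = norm.logpdf(X, loc=mu, scale=sd).sum(axis=1)
print(log_lik_per_row.shape)   # (800,)
```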
0 votes · 1 answer

Optimizing weights in logistic regression ( log likelihood )

In logistic regression, the hypothesis function is h(x) = (1 + exp{-wx})^-1, where w denotes the weights/parameters to be fit or optimized. The cost function (negative log-likelihood) is given, for a single training example (x, y), as: …
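A minimal sketch of that optimization, assuming synthetic data and plain gradient descent (not the asker's code): the negative log-likelihood for labels in {0, 1} and its gradient X^T(h(Xw) - y).

```python
# Sketch: negative log-likelihood of logistic regression and its gradient,
# minimized with a few gradient-descent steps on synthetic data.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def neg_log_likelihood(w, X, y):
    p = sigmoid(X @ w)
    eps = 1e-12                               # avoid log(0)
    return -np.sum(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

def gradient(w, X, y):
    return X.T @ (sigmoid(X @ w) - y)         # d(NLL)/dw

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(200), rng.normal(size=(200, 2))])   # intercept + 2 features
true_w = np.array([-0.5, 2.0, -1.0])
y = (rng.random(200) < sigmoid(X @ true_w)).astype(float)

w = np.zeros(3)
for _ in range(500):
    w -= 0.01 * gradient(w, X, y)
print(w, neg_log_likelihood(w, X, y))
```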
0 votes · 1 answer

Why is Log Likelihood behaving strangely when using MFCC and Delta Coefficients

I am working on a project that requires extracting MFCC features from an audio stream. The project consists primarily of classification, although in the interest of expanding our dataset I am working on a detection algorithm to isolate the parts of…
0 votes · 1 answer

Maximizing the Likelihood function

I want to maximize the likelihood function with respect to the theta parameter. The likelihood function is defined as: from scipy.optimize import minimize def prloglik(theta,n,r): N=theta;k=len(n) ar1=np.sum(np.log(np.array(range(N))+1)) …
Alex • 99 • 1 • 7
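If theta is integer-valued (as the use of range(N) suggests), a simple alternative to a continuous optimizer is to evaluate the log-likelihood on a grid of candidate values and take the argmax. A sketch with a stand-in binomial-N log-likelihood, not the asker's prloglik:

```python
# Sketch: grid search over an integer-valued parameter instead of scipy.optimize.minimize.
import numpy as np
from scipy.stats import binom

successes = np.array([7, 9, 8, 6, 10])    # illustrative data
p = 0.5                                   # assumed known success probability

def log_lik(N):
    return np.sum(binom.logpmf(successes, n=N, p=p))

candidates = np.arange(successes.max(), 100)     # N must be at least max(successes)
best_N = candidates[np.argmax([log_lik(N) for N in candidates])]
print(best_N)
```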
0 votes · 1 answer

Log-likelihood of Nakagami distribution is infinite in R

I am fitting the normalized histogram of my dataset $x \in [60,80]$ to the Nakagami distribution. First I estimated the scale and shape parameters using dnaka from the VGAM package through the following MLE code: ll <- function(par) { if(par[1]>0 &…
MM Khan • 203 • 1 • 2 • 7
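One common cause of an infinite log-likelihood is that the density underflows to exactly 0 for some observations, so taking log(density) gives -Inf; asking the density function for the log directly avoids this. A sketch illustrated with a normal density (the same reasoning applies to dnaka in R when log densities aren't used):

```python
# Sketch: density underflow makes log(pdf) = -inf, while logpdf stays finite.
import numpy as np
from scipy.stats import norm

x = np.array([60.0, 70.0, 80.0])
print(np.log(norm.pdf(x, loc=0, scale=1)).sum())    # -inf: the pdf underflows to 0
print(norm.logpdf(x, loc=0, scale=1).sum())         # about -7453: huge but finite
print(norm.logpdf(x, loc=70, scale=10).sum())       # sensible parameters give a usable value
```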
0 votes · 1 answer

Obtain the MLE of betas through iterative re-weighted least squares regression

I have the following data set: y <- c(5,8,6,2,3,1,2,4,5) x <- c(-1,-1,-1,0,0,0,1,1,1) d1 <- as.data.frame(cbind(y=y,x=x)) When I fit a model to this data set with glm(), using a Poisson distribution with a log link: model <- glm(y~x, data=d1,…
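A sketch of the iteratively re-weighted least squares (IRLS) updates that glm performs for a Poisson model with log link, written in Python with the y and x values quoted above; the working weights are the fitted means and the working response is eta + (y - mu)/mu:

```python
# Sketch of IRLS for a Poisson GLM with log link, using the data from the question.
import numpy as np

y = np.array([5, 8, 6, 2, 3, 1, 2, 4, 5], dtype=float)
x = np.array([-1, -1, -1, 0, 0, 0, 1, 1, 1], dtype=float)
X = np.column_stack([np.ones_like(x), x])     # intercept + slope

beta = np.zeros(2)
for _ in range(25):                           # usually converges in a handful of steps
    eta = X @ beta                            # linear predictor
    mu = np.exp(eta)                          # inverse log link
    W = np.diag(mu)                           # working weights for Poisson
    z = eta + (y - mu) / mu                   # working response
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ z)

print(beta)   # should match coef(glm(y ~ x, family = poisson)) in R
```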