Question 2
Part 1
We assume that the data points $x_1, \dots, x_n$ are IID according to a Gaussian distribution with unknown mean $\mu$ and variance $\sigma^2$.
The probability density function of a Gaussian distribution is given by:
\[
f(x \mid \mu, \sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)
\]
Given that the data points are IID, the likelihood of observing the dataset is the product of the individual pdfs:
\[
L(\mu, \sigma^2) = \prod_{i=1}^{n} \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left(-\frac{(x_i-\mu)^2}{2\sigma^2}\right)
\]
The log-likelihood function simplifies this into a sum:
\[
\ell(\mu, \sigma^2) = \log L(\mu, \sigma^2) = -\frac{n}{2}\log(2\pi\sigma^2) - \frac{1}{2\sigma^2}\sum_{i=1}^{n}(x_i-\mu)^2
\]
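As a quick sanity check, the closed-form log-likelihood above can be compared numerically against the log of the product of individual pdfs. The sample values and parameters below are illustrative, not taken from the problem:

```python
import numpy as np

# Illustrative data and parameters (assumed values, not from the problem).
x = np.array([1.2, 0.8, 1.5, 0.3, 1.1])
mu, sigma2 = 1.0, 0.5
n = len(x)

# Log-likelihood via the closed form derived above.
closed_form = -n / 2 * np.log(2 * np.pi * sigma2) \
    - np.sum((x - mu) ** 2) / (2 * sigma2)

# Log-likelihood as the log of the product of individual Gaussian pdfs.
pdfs = np.exp(-(x - mu) ** 2 / (2 * sigma2)) / np.sqrt(2 * np.pi * sigma2)
log_of_product = np.log(np.prod(pdfs))

assert np.isclose(closed_form, log_of_product)
```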
To find the MLE of $\sigma^2$, we differentiate the log-likelihood with respect to $\sigma^2$ and set the derivative equal to zero. Note that the MLE of $\mu$, $\hat{\mu}$, is the sample mean of the data, $\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i$, which simplifies the expression.
Differentiating with respect to $\sigma^2$:
\[
\frac{\partial \ell}{\partial \sigma^2} = -\frac{n}{2\sigma^2} + \frac{1}{2\sigma^4}\sum_{i=1}^{n}(x_i-\bar{x})^2
\]
Setting this equal to zero and solving:
\[
-\frac{n}{2\sigma^2} + \frac{1}{2\sigma^4}\sum_{i=1}^{n}(x_i-\bar{x})^2 = 0
\]
Solving for $\sigma^2$, we find the MLE of the variance:
\[
\hat{\sigma}^2 = \frac{1}{n}\sum_{i=1}^{n}(x_i-\bar{x})^2
\]
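The resulting estimator is the population variance with divisor $n$, which is exactly what NumPy's `np.var` computes by default (`ddof=0`). A minimal check, using made-up data:

```python
import numpy as np

# Illustrative data (assumed values, not from the problem).
x = np.array([2.1, 1.9, 3.2, 2.8, 2.0, 2.5])
n = len(x)

mu_hat = x.mean()                            # MLE of the mean: the sample mean
sigma2_hat = np.sum((x - mu_hat) ** 2) / n   # MLE of the variance

# NumPy's default variance (ddof=0) divides by n, matching the MLE.
assert np.isclose(sigma2_hat, np.var(x))
```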
Part 2
To derive the estimator bias for the MLE of $\sigma^2$, we need to compare the expected value of the MLE of the variance, $\mathbb{E}[\hat{\sigma}^2]$, with the true variance, $\sigma^2$.
The expected value of the MLE of the variance:
\[
\mathbb{E}[\hat{\sigma}^2] = \mathbb{E}\left[\frac{1}{n}\sum_{i=1}^{n}(x_i-\bar{x})^2\right]
\]
Since $\bar{x}$ is the sample mean, we can rewrite the squared difference as:
\[
(x_i - \bar{x})^2 = \big((x_i - \mu) - (\bar{x} - \mu)\big)^2
\]
This can be simplified, using $\sum_{i=1}^{n}(x_i - \mu) = n(\bar{x} - \mu)$, to:
\[
\sum_{i=1}^{n}(x_i-\bar{x})^2 = \sum_{i=1}^{n}(x_i-\mu)^2 - 2(\bar{x}-\mu)\sum_{i=1}^{n}(x_i-\mu) + n(\bar{x}-\mu)^2 = \sum_{i=1}^{n}(x_i-\mu)^2 - n(\bar{x}-\mu)^2
\]
Taking expectations then leads to:
\[
\mathbb{E}[\hat{\sigma}^2] = \frac{1}{n}\left(n\sigma^2 - n\cdot\frac{\sigma^2}{n}\right) = \frac{n-1}{n}\sigma^2
\]
The simplification arises because $\mathbb{E}[(x_i-\mu)^2] = \sigma^2$ for each $i$, so the first term contributes $n\sigma^2$, while the second term corrects for estimating the mean from the data: $\mathbb{E}[(\bar{x}-\mu)^2] = \operatorname{Var}(\bar{x}) = \sigma^2/n$, so $n(\bar{x}-\mu)^2$ has expectation $\sigma^2$.
The bias of the MLE of the variance is then the difference between its expected value and the true variance:
\[
\operatorname{Bias}(\hat{\sigma}^2) = \mathbb{E}[\hat{\sigma}^2] - \sigma^2 = \frac{n-1}{n}\sigma^2 - \sigma^2 = -\frac{\sigma^2}{n}
\]
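The negative bias of $-\sigma^2/n$ can be demonstrated by Monte Carlo simulation: averaging the MLE variance over many samples should give approximately $\frac{n-1}{n}\sigma^2$. The true parameters and trial count below are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma2, n = 0.0, 4.0, 10     # true parameters (illustrative)
trials = 200_000

# Draw many samples of size n and compute the MLE variance for each.
samples = rng.normal(mu, np.sqrt(sigma2), size=(trials, n))
sigma2_hats = samples.var(axis=1)        # ddof=0: the MLE
empirical_mean = sigma2_hats.mean()

# Expected value is (n-1)/n * sigma2 = 3.6, i.e. bias of -sigma2/n = -0.4.
assert abs(empirical_mean - (n - 1) / n * sigma2) < 0.02
```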
Part 3
This is similar to Part 1. Let $\lambda$ denote the rate parameter.
Write the likelihood function as the product of the individual probabilities:
\[
L(\lambda) = \prod_{i=1}^{n} p(x_i \mid \lambda)
\]
Write the log-likelihood to turn the product into a sum:
\[
\ell(\lambda) = \log L(\lambda) = \sum_{i=1}^{n} \log p(x_i \mid \lambda)
\]
To find the MLE, differentiate the log-likelihood with respect to $\lambda$ and set the derivative to zero:
\[
\frac{d\ell}{d\lambda} = 0
\]
Solving this equation for $\lambda$ yields the MLE of the rate parameter, $\hat{\lambda}$.