Maximum likelihood

Maximum likelihood estimation (MLE) is a popular statistical method used to make inferences about parameters of the underlying probability distribution of a given data set.

The method was pioneered by the geneticist and statistician Sir Ronald A. Fisher between 1912 and 1922.


Prerequisites

The following discussion assumes that the reader is familiar with basic notions in probability theory such as probability distributions, probability density functions, random variables and expectation. It also assumes familiarity with standard techniques for maximising continuous real-valued functions, such as using differentiation to find a stationary point.

The philosophy of MLE

Given a discrete probability distribution <math>D</math> with known probability mass function <math>f_D</math> and distributional parameter <math>\theta</math>, we may draw a sample <math>X_1, X_2, \ldots, X_n</math> of <math>n</math> values from this distribution and then use the mass function to compute the probability associated with our observed data:

<math>\mathbb{P}(\mbox{we sample values }x_1,x_2,\dots,x_n) = f_D(x_1,\dots,x_n \mid \theta)</math>

However, it may be that we don't know the value of the parameter <math>\theta</math>, despite knowing (or believing) that our data come from the distribution <math>D</math>. How should we estimate <math>\theta</math>? It is a sensible idea to draw a sample <math>X_1, X_2, \ldots, X_n</math> of <math>n</math> values and use these data to help us make an estimate.

Once we have our sample <math>X_1, X_2, \ldots, X_n</math>, we may seek an estimate of the value of <math>\theta</math> from that sample. MLE seeks the most likely value of the parameter <math>\theta</math>, i.e. we maximise the likelihood of the observed data set over all possible values of <math>\theta</math>. This is in contrast to seeking other estimators, such as an unbiased estimator of <math>\theta</math>, which may not necessarily yield the most likely value of <math>\theta</math> but which will yield a value that (on average) will neither tend to over-estimate nor under-estimate the true value of <math>\theta</math>.

To implement the MLE method mathematically, we define the likelihood:

<math>\mbox{lik}(\theta) = f_D(x_1,\dots,x_n \mid \theta)</math>

and maximise this function over all possible values of the parameter <math>\theta</math>. The value <math>\hat{\theta}</math> which maximises the likelihood is known as the maximum likelihood estimator (MLE) for <math>\theta</math>.
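To make the recipe concrete, here is a minimal Python sketch (an illustration added here, not part of the article's method) that evaluates a likelihood over a grid of candidate parameter values and keeps the maximiser; the Bernoulli data and the grid of candidates are assumptions chosen purely for illustration.

<pre>
# Illustrative sketch: maximum likelihood by brute-force search over candidate parameters.
# Data are hypothetical Bernoulli observations (1 = success, 0 = failure).
data = [1, 0, 1, 1, 0, 1, 0, 1, 1, 1]

def likelihood(p, xs):
    """Probability of observing the sample xs if each trial succeeds with probability p."""
    prob = 1.0
    for x in xs:
        prob *= p if x == 1 else (1.0 - p)
    return prob

# Maximise lik(p) over a grid of candidate values of the parameter p.
candidates = [i / 100 for i in range(1, 100)]
p_hat = max(candidates, key=lambda p: likelihood(p, data))
print(p_hat, likelihood(p_hat, data))  # p_hat lands at the sample proportion of successes, 0.7
</pre>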


Examples

Discrete distribution, discrete and finite parameter space

Consider tossing an unfair coin 80 times (i.e. we sample something like <math>x_1=\mbox{H}</math>, <math>x_2=\mbox{T}</math>, <math>\ldots</math>, <math>x_{80}=\mbox{T}</math> and count the number of HEADS <math>\mbox{H}</math> observed). Call the probability of tossing a HEAD <math>p</math> and the probability of tossing TAILS <math>1-p</math> (so here <math>p</math> is the parameter we referred to as <math>\theta</math> above). Suppose we toss 49 HEADS and 31 TAILS, and suppose the coin was taken from a box containing three coins: one which gives HEADS with probability <math>p=1/3</math>, one which gives HEADS with probability <math>p=1/2</math> and another which gives HEADS with probability <math>p=2/3</math>. The coins have lost their labels, so we don't know which one it was. Using maximum likelihood estimation we can calculate which coin it was most likely to have been, given the data we observed. The likelihood function (defined above) takes one of three values:

<math>
\begin{matrix}
\mathbb{P}(\mbox{we toss 49 HEADS out of 80}\mid p=1/3) & = & \binom{80}{49}(1/3)^{49}(1-1/3)^{31} \approx 0.000 \\ &&\\
\mathbb{P}(\mbox{we toss 49 HEADS out of 80}\mid p=1/2) & = & \binom{80}{49}(1/2)^{49}(1-1/2)^{31} \approx 0.012 \\ &&\\
\mathbb{P}(\mbox{we toss 49 HEADS out of 80}\mid p=2/3) & = & \binom{80}{49}(2/3)^{49}(1-2/3)^{31} \approx 0.054 \\
\end{matrix}
</math>

We see that the likelihood is maximised by <math>\hat{p}=2/3</math>, and so this is our maximum likelihood estimate for <math>p</math>.
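As a quick numerical check, the three likelihood values above can be reproduced directly from the binomial probability mass function; the following Python sketch is an illustration added here, not part of the original article.

<pre>
from math import comb

heads, tosses = 49, 80

def coin_likelihood(p):
    """Probability of exactly `heads` HEADS in `tosses` tosses of a coin whose HEADS-probability is p."""
    return comb(tosses, heads) * p**heads * (1 - p)**(tosses - heads)

for p in (1/3, 1/2, 2/3):
    print(f"p = {p:.4f}: likelihood = {coin_likelihood(p):.3f}")
# Prints approximately 0.000, 0.012 and 0.054; the largest value occurs at p = 2/3.
</pre>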

Discrete distribution, continuous parameter space

Now suppose the box of coins from the previous example contains an infinite number of coins: one for every possible value <math>0\leq p \leq 1</math>. We must maximise the likelihood function:

<math>
\mbox{lik}(p) = f_D(\mbox{observe 49 HEADS out of 80}\mid p) = \binom{80}{49} p^{49}(1-p)^{31}
</math>

over all possible values <math>0\leq p \leq 1</math>.

One may maximise this function by differentiating with respect to <math>p</math> and setting the derivative to zero:

<math>
\begin{matrix}
0 & = & \frac{\partial}{\partial p} \left( \binom{80}{49} p^{49}(1-p)^{31} \right) \\
  &   & \\
  & \propto & 49p^{48}(1-p)^{31} - 31p^{49}(1-p)^{30} \\
  &   & \\
  & = & p^{48}(1-p)^{30}\left[ 49(1-p) - 31p \right] \\
\end{matrix}
</math>

which has solutions <math>p=0</math>, <math>p=1</math> and <math>p=49/80</math>. The solution which maximises the likelihood is clearly <math>p=49/80</math> (since <math>p=0</math> and <math>p=1</math> result in a likelihood of zero). Thus we say the maximum likelihood estimator for <math>p</math> is <math>\hat{p}=49/80</math>.

[Figure BinominalLikelihoodGraph.png: likelihood of different proportion parameter values for a binomial process with t = 3 and n = 10; the maximum likelihood estimate occurs at the peak (maximum) of the curve.]

This result is easily generalised by substituting a letter such as <math>t</math> in place of 49 to represent the observed number of 'successes' of our Bernoulli trials, and a letter such as <math>n</math> in place of 80 to represent the number of Bernoulli trials. Exactly the same calculation yields the maximum likelihood estimator:

<math>\hat{p}=\frac{t}{n}</math>

for any sequence of <math>n</math> Bernoulli trials resulting in <math>t</math> 'successes'.
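The closed form can also be checked against a direct numerical maximisation; the following Python sketch (an illustration, not from the original article) compares the two for the values used above.

<pre>
from math import comb

t, n = 49, 80  # observed 'successes' and number of Bernoulli trials

def lik(p):
    """Binomial likelihood of t successes in n trials with success probability p."""
    return comb(n, t) * p**t * (1 - p)**(n - t)

p_closed = t / n  # closed-form maximum likelihood estimate derived above

# Brute-force maximisation over a fine grid of candidate values in (0, 1).
grid = [i / 10000 for i in range(1, 10000)]
p_grid = max(grid, key=lik)

print(p_closed, p_grid)  # both give 0.6125
</pre>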

Continuous distribution, continuous parameter space

One of the most common continuous probability distributions is the normal distribution, which has probability density function:

<math>f(x\mid \mu,\sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}} e^{-\frac{(x-\mu)^2}{2\sigma^2}}</math>

The corresponding density function for a sample of <math>n</math> independent, identically distributed normal random variables is:

<math>f(x_1,\ldots,x_n \mid \mu,\sigma^2) = \left( \frac{1}{2\pi\sigma^2} \right)^\frac{n}{2} e^{-\frac{ \sum_{i=1}^{n}(x_i-\mu)^2}{2\sigma^2}}</math>

or, writing <math>\bar{x}</math> for the sample mean and using the identity <math>\sum_{i=1}^{n}(x_i-\mu)^2 = \sum_{i=1}^{n}(x_i-\bar{x})^2 + n(\bar{x}-\mu)^2</math>, more conveniently:

<math>f(x_1,\ldots,x_n \mid \mu,\sigma^2) = \left( \frac{1}{2\pi\sigma^2} \right)^\frac{n}{2} e^{-\frac{ \sum_{i=1}^{n}(x_i-\bar{x})^2+n(\bar{x}-\mu)^2}{2\sigma^2}}</math>

This distribution has two parameters: <math>\mu</math> and <math>\sigma^2</math>. This may be alarming to some, given that in the discussion above we only talked about maximising over a single parameter. However, there is no need for alarm: we simply maximise the likelihood <math>\mbox{lik}(\mu,\sigma^2) = f(x_1,\ldots,x_n \mid \mu, \sigma^2)</math> over both parameters, which of course is more work but no more complicated. In the above notation we would write <math>\theta=(\mu,\sigma^2)</math>.

When maximising the likelihood, we may equivalently maximise the log of the likelihood, since the logarithm is a continuous, strictly increasing function over the range of the likelihood. (Note: the log-likelihood is closely related to information entropy and Fisher information.) This often simplifies the algebra somewhat, and indeed does so in this case:

<math>
\begin{matrix}
0 & = & \frac{\partial}{\partial \mu} \log \left( \left( \frac{1}{2\pi\sigma^2} \right)^\frac{n}{2} e^{-\frac{ \sum_{i=1}^{n}(x_i-\bar{x})^2+n(\bar{x}-\mu)^2}{2\sigma^2}} \right) \\
  &   & \\
  & = & \frac{\partial}{\partial \mu} \left( \frac{n}{2}\log\left( \frac{1}{2\pi\sigma^2} \right) - \frac{ \sum_{i=1}^{n}(x_i-\bar{x})^2+n(\bar{x}-\mu)^2}{2\sigma^2} \right) \\
  &   & \\
  & = & 0 - \frac{-2n(\bar{x}-\mu)}{2\sigma^2} \\
\end{matrix}
</math>

which is solved by <math>\hat{\mu} = \bar{x} = \sum^{n}_{i=1}x_i/n</math>. This is indeed the maximum of the function, since it is the only turning point in <math>\mu</math> and the second derivative is strictly less than zero.

Similarly, we differentiate with respect to <math>\sigma</math> and equate to zero to obtain the maximum likelihood estimator <math>\hat{\sigma}^2 = \sum_{i=1}^n(x_i-\hat{\mu})^2/n</math>. This is left as an exercise for the reader.

Formally, we say that the maximum likelihood estimator for <math>\theta=(\mu,\sigma^2)</math> is:

<math>\hat{\theta}=(\hat{\mu},\hat{\sigma}^2) = \left(\bar{x},\ \sum_{i=1}^n(x_i-\bar{x})^2/n\right)</math>.
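The following Python sketch (illustrative only; the sample values are assumptions) computes these two estimators for a small data set. Note that the maximum likelihood estimator of the variance divides by <math>n</math> rather than <math>n-1</math>, so it differs from the usual unbiased sample variance; this foreshadows the discussion of bias below.

<pre>
# Maximum likelihood estimates of the normal parameters from a small illustrative sample.
sample = [4.2, 5.1, 3.8, 4.9, 5.4, 4.6]
n = len(sample)

mu_hat = sum(sample) / n                                 # sample mean
sigma2_hat = sum((x - mu_hat) ** 2 for x in sample) / n  # divisor n, not n - 1

print(mu_hat, sigma2_hat)
</pre>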

Properties

Functional invariance

If <math>\hat{\theta}</math> is the maximum likelihood estimator (MLE) for <math>\theta</math>, then the MLE for <math>\alpha = g(\theta)</math> is <math>\hat{\alpha} = g(\hat{\theta})</math> (provided the function <math>g</math> is one-to-one).
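For example, since <math>g(\theta)=\sqrt{\theta}</math> is one-to-one on the positive reals, the MLE of the standard deviation in the normal example above follows immediately from the MLE of the variance:

<math>\hat{\sigma} = g(\hat{\sigma}^2) = \sqrt{\sum_{i=1}^n(x_i-\bar{x})^2/n}</math>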

Asymptotic behaviour

Under suitable regularity conditions, maximum likelihood estimators achieve minimum variance (as given by the Cramér-Rao lower bound) in the limit as the sample size tends to infinity. When the MLE is unbiased, we may equivalently say that it has minimum mean squared error in the limit.

Bias

The bias of maximum-likelihood estimators can be substantial. Consider a case where n tickets numbered from 1 to n are placed in a box and one is selected at random (see uniform distribution). If n is unknown, then the maximum-likelihood estimator of n is the value on the drawn ticket, even though its expectation is only <math>(n+1)/2</math>; on average, the estimate therefore falls far short of the true value. In estimating the highest number n, we can only be certain that it is greater than or equal to the drawn ticket number.
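A tiny Python simulation (an illustration added here; the true value of n is an assumption) makes the bias visible: averaged over many draws, the maximum likelihood estimate sits near <math>(n+1)/2</math>, well below the true n.

<pre>
import random

n = 100          # true (unknown) number of tickets, fixed here only to run the simulation
trials = 100000

# The MLE of n from a single draw is simply the number on the drawn ticket.
draws = [random.randint(1, n) for _ in range(trials)]
mle_average = sum(draws) / trials

print(mle_average)  # close to (n + 1) / 2 = 50.5, far below the true n = 100
</pre>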

See also

  • The mean squared error is a measure of how 'good' an estimator of a distributional parameter is (be it the maximum likelihood estimator or some other estimator).
  • The article on the Rao-Blackwell theorem discusses finding the best possible unbiased estimator (in the sense of having minimal mean squared error) by a process called Rao-Blackwellisation. The MLE is often a good starting place for the process.
  • The reader may be intrigued to learn that the MLE (if it exists) will always be a function of a sufficient statistic for the parameter in question.

