Likelihood function

In statistics, a likelihood function is a conditional probability function considered a function of its second argument with its first argument held fixed, thus:
 <math>b\mapsto P(A \mid B=b),</math>
and also any other function proportional to such a function. That is, the likelihood function for B is the equivalence class of functions
 <math>L(b \mid A) = \alpha \; P(A \mid B=b)</math>
for any constant of proportionality <math>\alpha > 0</math>. Thus the numerical value <math>L(b \mid A)</math> is immaterial; all that matters are ratios of the form <math>\frac{L(b_2 \mid A)}{L(b_1 \mid A)}</math>, since these are invariant with respect to the constant of proportionality.
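To see why only such ratios matter, note that the constant of proportionality cancels:
 <math>\frac{L(b_2 \mid A)}{L(b_1 \mid A)} = \frac{\alpha \, P(A \mid B=b_2)}{\alpha \, P(A \mid B=b_1)} = \frac{P(A \mid B=b_2)}{P(A \mid B=b_1)}.</math>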
Likelihood as a solitary term is shorthand for likelihood function. In colloquial language, "likelihood" is one of several informal synonyms for "probability", but throughout this article we use only the technical definition.
In a sense, likelihood works backwards from probability: given <math>B</math>, we use the conditional probability <math>P(A \mid B)</math> to reason about <math>A</math>, and, given <math>A</math>, we use the likelihood function <math>P(A \mid B)</math> to reason about <math>B</math>. This mode of reasoning is formalized in Bayes' theorem; note the appearance of a likelihood function for <math>B</math> given <math>A</math> in:
 <math>P(B \mid A) = \frac{P(A \mid B)\;P(B)}{P(A)}</math>
since, as functions of <math>B</math>, both <math>P(A \mid B)</math> and <math>\frac{P(A \mid B)}{P(A)}</math> are likelihood functions for <math>B</math> given <math>A</math>.
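As a minimal numerical sketch of this relationship (the two parameter values and the probabilities below are assumptions made purely for illustration), the posterior is proportional to the likelihood times the prior, normalized by <math>P(A)</math>:
 # Toy example: B takes two assumed values; the priors and likelihoods are illustrative only.
 prior = {"b1": 0.5, "b2": 0.5}        # P(B = b)
 likelihood = {"b1": 0.2, "b2": 0.8}   # P(A | B = b), the likelihood of each b given A
 
 unnormalized = {b: likelihood[b] * prior[b] for b in prior}
 p_a = sum(unnormalized.values())      # P(A) by the law of total probability
 posterior = {b: unnormalized[b] / p_a for b in prior}
 print(posterior)                      # {'b1': 0.2, 'b2': 0.8}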
For more about making inferences via likelihood functions, see also the method of maximum likelihood and likelihood-ratio testing.
Concentrated likelihood
For a likelihood function of more than one parameter, it is sometimes possible to write some parameters as functions of the others, reducing the number of independent parameters. This is called concentration of the parameters, and the result is the concentrated likelihood function.
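As a standard illustration (not part of the original text): for a sample <math>x_1, \ldots, x_n</math> modeled as normal with unknown mean <math>\mu</math> and variance <math>\sigma^2</math>, the likelihood is maximized over <math>\sigma^2</math> for each fixed <math>\mu</math> at
 <math>\hat\sigma^2(\mu) = \frac{1}{n}\sum_{i=1}^n (x_i - \mu)^2,</math>
and substituting this back into the likelihood yields a concentrated likelihood that depends on <math>\mu</math> alone.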
Historical remarks
The use of "likelihood" in the sense explained here, and the distinction between likelihood and probability, were first made by R.A. Fisher in his paper "On the mathematical foundations of theoretical statistics" (1922). In that paper, Fisher also uses the term "method of maximum likelihood". Fisher argues against inverse probability as a basis for statistical inference and instead proposes inferences based on likelihood functions.
Likelihood function of a parametrized model
Among many applications, we consider here one of broad theoretical and practical importance. Given a parametrized family of probability density functions
 <math>x\mapsto f(x\mid\theta),</math>
where θ is the parameter (in the case of discrete distributions, the probability density functions are probability "mass" functions), the likelihood function is
 <math>L(\theta \mid x)=f(x\mid\theta),</math>
where x is the observed outcome of an experiment. In other words, when f(x | θ) is viewed as a function of x with θ fixed, it is a probability density function, and when viewed as a function of θ with x fixed, it is a likelihood function.
Note: This is not the same as the probability that those parameters are the right ones, given the observed sample. Attempting to interpret the likelihood of a hypothesis given observed evidence as the probability of the hypothesis is a common error, with potentially disastrous real-world consequences in medicine, engineering or jurisprudence. See prosecutor's fallacy for an example of this.
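A minimal sketch in Python of the dual reading of the same function (the normal model with known standard deviation and the particular numbers are assumptions made for illustration): the density f(x | θ) serves as a probability density in x for fixed θ, and as a likelihood in θ for fixed x.
 import math
 
 def normal_pdf(x, mu, sigma=1.0):
     """Density f(x | mu) of a normal distribution with known standard deviation sigma."""
     return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))
 
 x_observed = 1.3   # a fixed, observed outcome (assumed value)
 
 def likelihood(mu):
     """L(mu | x_observed) = f(x_observed | mu), viewed as a function of mu."""
     return normal_pdf(x_observed, mu)
 
 # The likelihood is largest at mu = x_observed, the maximum-likelihood estimate in this sketch.
 print(likelihood(0.0), likelihood(1.3))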
Example
For example, if I toss a coin with probability p_{H} of landing heads up ('H'), the probability of getting two heads in two trials ('HH') is p_{H}^{2}. If p_{H} = 0.5, then the probability of seeing two heads is 0.25.
In symbols, we can say the above as
 <math>P(\mbox{HH} \mid p_H = 0.5) = 0.25</math>
Another way of saying this is to reverse it and say that "the likelihood of p_{H} = 0.5 given the observation 'HH' is 0.25", i.e.,
 <math>L(p_H=0.5 \mid \mbox{HH}) = P(\mbox{HH}\mid p_H=0.5) =0.25</math>.
But this is not the same as saying that the probability of p_{H} = 0.5 given the observation is 0.25.
To take an extreme case, on this basis we can say "the likelihood of p_{H} = 1 given the observation 'HH' is 1". But it is clearly not the case that the probability of p_{H} = 1 given the observation is 1: the event 'HH' can occur for any p_{H} > 0 (and often does, in reality, for p_{H} roughly 0.5).
The likelihood function is not a probability density function; for example, the integral of a likelihood function is not in general 1. In this example, the integral of the likelihood function over the interval [0, 1] in p_{H} is 1/3, demonstrating again that the likelihood function cannot be interpreted as a probability density function for p_{H}. On the other hand, given any particular value of p_{H}, e.g. p_{H} = 0.5, the integral of the probability density function over the domain of the random variables is 1.
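A short numerical check of this example (a sketch only; the simple midpoint sum below is just one way of approximating the integral):
 # Likelihood of p_H given the observation 'HH': L(p) = P('HH' | p_H = p) = p**2.
 def likelihood(p):
     return p ** 2
 
 print(likelihood(0.5))   # 0.25, the likelihood of p_H = 0.5 given 'HH'
 print(likelihood(1.0))   # 1.0, the likelihood of p_H = 1 given 'HH'
 
 # Midpoint-rule approximation of the integral of L over [0, 1]; it is 1/3, not 1,
 # so the likelihood function is not a probability density for p_H.
 n = 100000
 integral = sum(likelihood((i + 0.5) / n) for i in range(n)) / n
 print(integral)          # approximately 0.3333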
See also
 Bayes factor
 Bayesian inference
 Conditional probability
 Likelihood principle
 Likelihood-ratio test
 Maximum likelihood
 Principle of maximum entropy
 Score (statistics)
References
 Ronald A. Fisher. "On the mathematical foundations of theoretical statistics". Philosophical Transactions of the Royal Society, A, 222:309–368 (1922). ("Likelihood" is discussed in section 6.)