Likelihoodist statistics or likelihoodism is an approach to statistics that exclusively or primarily uses the likelihood function. Likelihoodist statistics is a smaller school than the main approaches of Bayesian statistics and frequentist statistics, but has some adherents and applications. The central idea of likelihoodism is the likelihood principle: data are interpreted as evidence, and the strength of the evidence is measured by the likelihood function. Beyond this, there are significant differences within likelihood approaches: "orthodox" likelihoodists consider data only as evidence, and do not use it as the basis of statistical inference, while others make inferences based on likelihood, but without using Bayesian inference or frequentist inference. Likelihoodism is thus criticized for either not providing a basis for belief or action (if it fails to make inferences), or for not satisfying the requirements of these other schools.
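As a concrete illustration of measuring evidence by the likelihood function (a minimal sketch with made-up data, not drawn from the sources cited in this article), the following Python snippet compares the support that an observed sample gives to two simple hypotheses via their likelihood ratio, in the spirit of the law of likelihood:

```python
from math import comb

def binomial_likelihood(p, k, n):
    """Likelihood of success probability p, given k successes in n trials."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Hypothetical data: 7 successes observed in 10 trials.
k, n = 7, 10

# Two simple hypotheses about the unknown success probability p.
L_h1 = binomial_likelihood(0.7, k, n)  # H1: p = 0.7
L_h2 = binomial_likelihood(0.5, k, n)  # H2: p = 0.5

# The likelihood ratio quantifies the strength of the evidence:
# here the data favour H1 over H2 by a factor of about 2.28.
print(L_h1 / L_h2)
```

On the "orthodox" reading described above, the ratio only quantifies relative support; it does not by itself prescribe a belief or an action.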
The likelihood function is also used in Bayesian statistics and frequentist statistics, but they differ in how it is used. Some likelihoodists consider their use of likelihood as an alternative to other approaches, while others consider it complementary and compatible with other approaches; see § Relation with other theories.
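The contrast can be sketched with the same kind of binomial likelihood (again a hypothetical example; the flat prior and the grid approximation are illustrative choices, not part of any of the cited treatments):

```python
import numpy as np

def likelihood(p, k=7, n=10):
    # Binomial likelihood up to a constant factor, which cancels in comparisons.
    return p**k * (1 - p)**(n - k)

grid = np.linspace(0.001, 0.999, 999)
lik = likelihood(grid)

# Likelihoodist use: compare hypotheses directly through likelihood ratios.
ratio = likelihood(0.7) / likelihood(0.5)

# Frequentist use: maximize the likelihood to obtain a point estimate
# (the MLE), whose long-run sampling properties are then studied.
mle = grid[np.argmax(lik)]  # close to 0.7

# Bayesian use: the likelihood updates a prior into a posterior distribution.
prior = np.ones_like(grid)                          # illustrative flat prior on p
posterior = prior * lik
posterior /= posterior.sum() * (grid[1] - grid[0])  # normalize numerically

print(ratio, mle)
```

The same function appears in all three computations; what differs is whether it is used for direct comparison of hypotheses, as input to an estimator, or as the updating factor for a prior.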
While likelihoodism is a distinct approach to statistical inference, it can be related to or contrasted with other theories and methodologies in statistics. Here are some notable connections:
While likelihood-based statistics have been widely used and have many advantages, they are not without criticism. Here are some common criticisms of likelihoodist statistics:
Likelihoodism as a distinct school dates to Edwards (1972), which gives a systematic treatment of statistics based on likelihood. This built on significant earlier work; see Dempster (1972) for a contemporary review.
While comparing ratios of probabilities dates to early statistics and probability, notably Bayesian inference as developed by Pierre-Simon Laplace from the late 1700s, likelihood as a distinct concept is due to Ronald Fisher in Fisher (1921). Likelihood played an important role in Fisher's statistics, but he developed and used many non-likelihood frequentist techniques as well. His late writings, notably Fisher (1955), emphasize likelihood more strongly and can be considered precursors to a systematic theory of likelihoodism.
The likelihood principle was proposed in 1962 by several authors, notably Barnard, Jenkins & Winsten (1962), Birnbaum (1962), and Savage (1962), and was followed by the law of likelihood in Hacking (1965); together these laid the foundation for likelihoodism. See Likelihood principle § History for the early history.
While Edwards's version of likelihoodism treated likelihood only as a measure of evidence, a position later followed by Royall (1997), others have proposed inference based purely on likelihood, notably as extensions of maximum likelihood estimation. A notable proponent is John Nelder, who declared in Nelder (1999, p. 264):
At least once a year I hear someone at a meeting say that there are two modes of inference: frequentist and Bayesian. That this sort of nonsense should be so regularly propagated shows how much we have to do. To begin with there is a flourishing school of likelihood inference, to which I belong.
Textbooks that take a likelihoodist approach include the following: Kalbfleisch (1985), Azzalini (1996), Pawitan (2001), Rohde (2014), and Held & Sabanés Bové (2014). A collection of relevant papers is given by Taper & Lele (2004).