
Wald test






From Wikipedia, the free encyclopedia
 


In statistics, the Wald test (named after Abraham Wald) assesses constraints on statistical parameters based on the weighted distance between the unrestricted estimate and its hypothesized value under the null hypothesis, where the weight is the precision of the estimate.[1][2] Intuitively, the larger this weighted distance, the less likely it is that the constraint is true. While the finite-sample distributions of Wald tests are generally unknown,[3]: 138  the test statistic has an asymptotic χ2-distribution under the null hypothesis, a fact that can be used to determine statistical significance.[4]

Together with the Lagrange multiplier test and the likelihood-ratio test, the Wald test is one of three classical approaches to hypothesis testing. An advantage of the Wald test over the other two is that it only requires the estimation of the unrestricted model, which lowers the computational burden as compared to the likelihood-ratio test. However, a major disadvantage is that (in finite samples) it is not invariant to changes in the representation of the null hypothesis; in other words, algebraically equivalent expressions of a non-linear parameter restriction can lead to different values of the test statistic.[5][6] That is because the Wald statistic is derived from a Taylor expansion,[7] and different ways of writing equivalent nonlinear expressions lead to nontrivial differences in the corresponding Taylor coefficients.[8] Another aberration, known as the Hauck–Donner effect,[9] can occur in binomial models when the estimated (unconstrained) parameter is close to the boundary of the parameter space—for instance a fitted probability being extremely close to zero or one—which results in the Wald statistic no longer being monotonically increasing in the distance between the unconstrained and constrained parameter.[10][11]

Mathematical details

Under the Wald test, the estimate $\hat{\theta}$ that was found as the maximizing argument of the unconstrained likelihood function is compared with a hypothesized value $\theta_0$. In particular, the squared difference $\hat{\theta} - \theta_0$ is weighted by the curvature of the log-likelihood function.

Test on a single parameter

If the hypothesis involves only a single parameter restriction, then the Wald statistic takes the following form:

$$W = \frac{(\hat{\theta} - \theta_0)^2}{\widehat{\operatorname{var}}(\hat{\theta})},$$

which under the null hypothesis follows an asymptotic χ2-distribution with one degree of freedom. The square root of the single-restriction Wald statistic,

$$\sqrt{W} = \frac{\hat{\theta} - \theta_0}{\operatorname{se}(\hat{\theta})},$$

can be understood as a (pseudo) t-ratio that is, however, not actually t-distributed except for the special case of linear regression with normally distributed errors.[12] In general, it follows an asymptotic z distribution,[13] where $\operatorname{se}(\hat{\theta})$ is the standard error (SE) of the maximum likelihood estimate (MLE), the square root of the variance. There are several ways to consistently estimate the variance matrix, which in finite samples leads to alternative estimates of standard errors and associated test statistics and p-values.[3]: 129  The validity of still obtaining an asymptotically normal distribution after plugging the MLE estimate of the variance into the SE relies on Slutsky's theorem.
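As a numerical sketch (the estimate, standard error, and null value below are invented for illustration), the single-restriction statistic and its χ2(1) p-value can be computed directly; the equivalent z-ratio route gives the identical p-value:

```python
from scipy import stats

# Hypothetical values: unrestricted MLE, its standard error, and the null.
theta_hat = 0.9
se = 0.3
theta_0 = 0.0

# Wald statistic: squared distance weighted by the precision 1/se^2.
W = ((theta_hat - theta_0) / se) ** 2
p_value = stats.chi2.sf(W, df=1)

# Equivalent z-ratio formulation: a chi2(1) variable is the square
# of a standard normal, so the two p-values coincide.
z = (theta_hat - theta_0) / se
p_from_z = 2 * stats.norm.sf(abs(z))
```
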

Test(s) on multiple parameters

The Wald test can be used to test a single hypothesis on multiple parameters, as well as to test jointly multiple hypotheses on single/multiple parameters. Let $\hat{\theta}_n$ be our sample estimator of P parameters (i.e., $\hat{\theta}_n$ is a $P \times 1$ vector), which is supposed to follow asymptotically a normal distribution with covariance matrix V, $\sqrt{n}(\hat{\theta}_n - \theta) \xrightarrow{d} N(0, V)$. The test of Q hypotheses on the P parameters is expressed with a $Q \times P$ matrix R:

$$H_0 : R\theta = r, \qquad H_1 : R\theta \neq r.$$

The distribution of the test statistic under the null hypothesis is

$$n\,(R\hat{\theta}_n - r)'\,[R \hat{V}_n R']^{-1}\,(R\hat{\theta}_n - r) \xrightarrow{d} \chi^2_Q,$$

which in turn implies

$$(R\hat{\theta}_n - r)'\,[R (\hat{V}_n/n) R']^{-1}\,(R\hat{\theta}_n - r) \xrightarrow{d} \chi^2_Q,$$

where $\hat{V}_n$ is a consistent estimator of the covariance matrix V.[14]
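A minimal sketch of the joint test, with all numbers invented for illustration: here `cov_hat` plays the role of $\hat{V}_n/n$, the estimated covariance of $\hat{\theta}_n$ itself, so no explicit factor of n appears.

```python
import numpy as np
from scipy import stats

# Hypothetical estimates of P = 3 parameters and the estimated
# covariance matrix of theta_hat (i.e., V_hat / n already folded in).
theta_hat = np.array([1.2, -0.5, 0.8])
cov_hat = np.array([[0.04, 0.01, 0.00],
                    [0.01, 0.09, 0.02],
                    [0.00, 0.02, 0.16]])

# Q = 2 linear restrictions, H0: R theta = r
# (theta_1 = 0 and theta_2 = theta_3).
R = np.array([[1.0, 0.0,  0.0],
              [0.0, 1.0, -1.0]])
r = np.zeros(2)

diff = R @ theta_hat - r
# Quadratic form (R theta_hat - r)' [R cov_hat R']^{-1} (R theta_hat - r)
W = diff @ np.linalg.solve(R @ cov_hat @ R.T, diff)
p_value = stats.chi2.sf(W, df=R.shape[0])
```
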

Proof

Suppose $\sqrt{n}(\hat{\theta}_n - \theta) \xrightarrow{d} N(0, V)$. Then, by Slutsky's theorem and by the properties of the normal distribution, multiplying by R gives the distribution

$$\sqrt{n}\,(R\hat{\theta}_n - r) = \sqrt{n}\,R(\hat{\theta}_n - \theta) \xrightarrow{d} N(0, R V R').$$

Recalling that a quadratic form of a normal distribution has a chi-squared distribution,

$$n\,(R\hat{\theta}_n - r)'\,[R V R']^{-1}\,(R\hat{\theta}_n - r) \xrightarrow{d} \chi^2_Q.$$

Rearranging n finally gives

$$(R\hat{\theta}_n - r)'\,[R (V/n) R']^{-1}\,(R\hat{\theta}_n - r) \xrightarrow{d} \chi^2_Q.$$

What if the covariance matrix is not known a priori and needs to be estimated from the data? If we have a consistent estimator $\hat{V}_n$ of V, then replacing V by $\hat{V}_n$ and applying Slutsky's theorem to the expression above yields

$$(R\hat{\theta}_n - r)'\,[R (\hat{V}_n/n) R']^{-1}\,(R\hat{\theta}_n - r) \xrightarrow{d} \chi^2_Q.$$
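The convergence in the proof can be checked by simulation. The sketch below (all numbers invented) draws $\hat{\theta}_n$ directly from its limiting normal distribution under the null and confirms that the resulting statistics average out to Q, the mean of a $\chi^2_Q$ variable.

```python
import numpy as np

rng = np.random.default_rng(42)

P, Q, n = 3, 2, 500
theta = np.array([0.5, -1.0, 2.0])
V = np.array([[1.0, 0.3, 0.0],
              [0.3, 2.0, 0.5],
              [0.0, 0.5, 1.5]])       # asymptotic covariance (invented)
R = np.array([[1.0, 0.0,  0.0],
              [0.0, 1.0, -1.0]])
r = R @ theta                          # H0 holds by construction

L = np.linalg.cholesky(V)
draws = []
for _ in range(2000):
    # theta_hat ~ N(theta, V/n), mimicking the limiting distribution
    theta_hat = theta + (L @ rng.standard_normal(P)) / np.sqrt(n)
    diff = R @ theta_hat - r
    draws.append(n * diff @ np.linalg.solve(R @ V @ R.T, diff))

mean_W = float(np.mean(draws))         # E[chi2_Q] = Q = 2
```
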

Nonlinear hypothesis

In the standard form, the Wald test is used to test linear hypotheses that can be represented by a single matrix R. If one wishes to test a non-linear hypothesis of the form

$$H_0 : c(\theta) = 0, \qquad H_1 : c(\theta) \neq 0,$$

the test statistic becomes

$$c(\hat{\theta}_n)'\,\left[c'(\hat{\theta}_n)\,[\hat{V}_n/n]\,c'(\hat{\theta}_n)'\right]^{-1} c(\hat{\theta}_n) \xrightarrow{d} \chi^2_Q,$$

where $c'(\hat{\theta}_n)$ is the derivative of c evaluated at the sample estimator. This result is obtained using the delta method, which uses a first-order approximation of the variance.
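A delta-method sketch for one invented nonlinear restriction, $c(\theta) = \theta_1 \theta_2 - 1 = 0$; the estimates and covariance below are hypothetical, and the Jacobian is computed analytically:

```python
import numpy as np
from scipy import stats

# Hypothetical nonlinear restriction c(theta) = theta_1 * theta_2 - 1 = 0.
theta_hat = np.array([2.0, 0.6])
cov_hat = np.array([[0.05, 0.01],
                    [0.01, 0.02]])   # estimated covariance of theta_hat

def c(theta):
    return np.array([theta[0] * theta[1] - 1.0])

# Jacobian c'(theta) at theta_hat: dc/dtheta = (theta_2, theta_1).
C = np.array([[theta_hat[1], theta_hat[0]]])

# Delta-method Wald statistic for H0: c(theta) = 0.
val = c(theta_hat)
W = val @ np.linalg.solve(C @ cov_hat @ C.T, val)
p_value = stats.chi2.sf(W, df=len(val))
```
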

Non-invariance to re-parameterisations

The fact that one uses an approximation of the variance has the drawback that the Wald statistic is not invariant to a non-linear transformation/reparametrisation of the hypothesis: it can give different answers to the same question, depending on how the question is phrased.[15][5] For example, asking whether R = 1 is the same as asking whether log R = 0; but the Wald statistic for R = 1 is not the same as the Wald statistic for log R = 0 (because there is in general no neat relationship between the standard errors of R and log R, so it needs to be approximated).[16]
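A small numerical illustration of this non-invariance (estimate and standard error invented): testing θ = 1 directly versus the algebraically equivalent log θ = 0, with the delta-method standard error se/θ̂ for the log form, yields different statistics from the same data.

```python
import numpy as np

# Hypothetical estimate and standard error.
theta_hat, se = 1.5, 0.5

# Wald statistic for H0: theta = 1.
W_level = ((theta_hat - 1.0) / se) ** 2

# Same hypothesis rewritten as H0: log(theta) = 0;
# delta-method standard error of log(theta_hat) is se / theta_hat.
se_log = se / theta_hat
W_log = (np.log(theta_hat) / se_log) ** 2
# W_level and W_log differ even though the hypotheses are equivalent.
```
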

Alternatives to the Wald test

There exist several alternatives to the Wald test, namely the likelihood-ratio test and the Lagrange multiplier test (also known as the score test). Robert F. Engle showed that these three tests, the Wald test, the likelihood-ratio test, and the Lagrange multiplier test, are asymptotically equivalent.[17] Although they are asymptotically equivalent, in finite samples they may disagree enough to lead to different conclusions.

There are several reasons to prefer the likelihood-ratio test or the Lagrange multiplier test to the Wald test.[18][19][20]

See also

References

1. Fahrmeir, Ludwig; Kneib, Thomas; Lang, Stefan; Marx, Brian (2013). Regression: Models, Methods and Applications. Berlin: Springer. p. 663. ISBN 978-3-642-34332-2.
2. Ward, Michael D.; Ahlquist, John S. (2018). Maximum Likelihood for Social Science: Strategies for Analysis. Cambridge University Press. p. 36. ISBN 978-1-316-63682-4.
3. Martin, Vance; Hurn, Stan; Harris, David (2013). Econometric Modelling with Time Series: Specification, Estimation and Testing. Cambridge University Press. ISBN 978-0-521-13981-6.
4. Davidson, Russell; MacKinnon, James G. (1993). "The Method of Maximum Likelihood: Fundamental Concepts and Notation". Estimation and Inference in Econometrics. New York: Oxford University Press. p. 89. ISBN 0-19-506011-3.
5. Gregory, Allan W.; Veall, Michael R. (1985). "Formulating Wald Tests of Nonlinear Restrictions". Econometrica. 53 (6): 1465–1468. doi:10.2307/1913221. JSTOR 1913221.
6. Phillips, P. C. B.; Park, Joon Y. (1988). "On the Formulation of Wald Tests of Nonlinear Restrictions". Econometrica. 56 (5): 1065–1083. doi:10.2307/1911359. JSTOR 1911359.
7. Hayashi, Fumio (2000). Econometrics. Princeton: Princeton University Press. pp. 489–491. ISBN 1-4008-2383-8.
8. Lafontaine, Francine; White, Kenneth J. (1986). "Obtaining Any Wald Statistic You Want". Economics Letters. 21 (1): 35–40. doi:10.1016/0165-1765(86)90117-5.
9. Hauck, Walter W. Jr.; Donner, Allan (1977). "Wald's Test as Applied to Hypotheses in Logit Analysis". Journal of the American Statistical Association. 72 (360a): 851–853. doi:10.1080/01621459.1977.10479969.
10. King, Maxwell L.; Goh, Kim-Leng (2002). "Improvements to the Wald Test". Handbook of Applied Econometrics and Statistical Inference. New York: Marcel Dekker. pp. 251–276. ISBN 0-8247-0652-8.
11. Yee, Thomas William (2022). "On the Hauck–Donner Effect in Wald Tests: Detection, Tipping Points, and Parameter Space Characterization". Journal of the American Statistical Association. 117 (540): 1763–1774. arXiv:2001.08431. doi:10.1080/01621459.2021.1886936.
12. Cameron, A. Colin; Trivedi, Pravin K. (2005). Microeconometrics: Methods and Applications. New York: Cambridge University Press. p. 137. ISBN 0-521-84805-9.
13. Davidson, Russell; MacKinnon, James G. (1993). "The Method of Maximum Likelihood: Fundamental Concepts and Notation". Estimation and Inference in Econometrics. New York: Oxford University Press. p. 89. ISBN 0-19-506011-3.
14. Harrell, Frank E. Jr. (2001). "Section 9.3.1". Regression Modeling Strategies. New York: Springer-Verlag. ISBN 0387952322.
15. Fears, Thomas R.; Benichou, Jacques; Gail, Mitchell H. (1996). "A reminder of the fallibility of the Wald statistic". The American Statistician. 50 (3): 226–227. doi:10.1080/00031305.1996.10474384.
16. Critchley, Frank; Marriott, Paul; Salmon, Mark (1996). "On the Differential Geometry of the Wald Test with Nonlinear Restrictions". Econometrica. 64 (5): 1213–1222. doi:10.2307/2171963. hdl:1814/524. JSTOR 2171963.
17. Engle, Robert F. (1983). "Wald, Likelihood Ratio, and Lagrange Multiplier Tests in Econometrics". In Intriligator, M. D.; Griliches, Z. (eds.). Handbook of Econometrics. Vol. II. Elsevier. pp. 796–801. ISBN 978-0-444-86185-6.
18. Harrell, Frank E. Jr. (2001). "Section 9.3.3". Regression Modeling Strategies. New York: Springer-Verlag. ISBN 0387952322.
19. Collett, David (1994). Modelling Survival Data in Medical Research. London: Chapman & Hall. ISBN 0412448807.
20. Pawitan, Yudi (2001). In All Likelihood. New York: Oxford University Press. ISBN 0198507658.
21. Agresti, Alan (2002). Categorical Data Analysis (2nd ed.). Wiley. p. 232. ISBN 0471360937.

Further reading

External links


    Retrieved from "https://en.wikipedia.org/w/index.php?title=Wald_test&oldid=1215075042"


    This page was last edited on 22 March 2024, at 23:13 (UTC).



