Partition of sums of squares






The partition of sums of squares is a concept that permeates much of inferential statistics and descriptive statistics. More properly, it is the partitioning of sums of squared deviations or errors. Mathematically, the sum of squared deviations is an unscaled, or unadjusted, measure of dispersion (also called variability). When scaled for the number of degrees of freedom, it estimates the variance, or spread of the observations about their mean value. Partitioning the sum of squared deviations into components allows the overall variability in a dataset to be ascribed to different types or sources of variability, with the relative importance of each source quantified by the size of its component of the overall sum of squares.

Background

The distance from any point in a collection of data to the mean of the data is the deviation. This can be written as $y_i - \bar{y}$, where $y_i$ is the ith data point and $\bar{y}$ is the estimate of the mean. If all such deviations are squared, then summed, as in $\sum_{i=1}^n (y_i - \bar{y})^2$, this gives the "sum of squares" for these data.
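As a concrete illustration, here is a minimal sketch in plain Python (the data set is made up for the example) that computes the deviations and their sum of squares as defined above:

    # Made-up sample data; any list of numbers would do.
    data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]

    mean = sum(data) / len(data)                      # estimate of the mean
    deviations = [y - mean for y in data]             # y_i minus the mean
    sum_of_squares = sum(d ** 2 for d in deviations)  # sum of squared deviations

    print(mean)            # 5.0
    print(sum_of_squares)  # 32.0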

When more data are added to the collection, the sum of squares will increase, except in unlikely cases such as a new data point being exactly equal to the mean. So the sum of squares usually grows with the size of the data collection. That is a manifestation of the fact that it is unscaled.

In many cases, the number of degrees of freedom is simply the number of data points in the collection, minus one. We write this as n − 1, where n is the number of data points.

Scaling (also known as normalizing) means adjusting the sum of squares so that it does not grow as the size of the data collection grows. This is important when we want to compare samples of different sizes, such as a sample of 100 people compared to a sample of 20 people. If the sum of squares were not normalized, its value would always be larger for the sample of 100 people than for the sample of 20 people. To scale the sum of squares, we divide it by the degrees of freedom, i.e., calculate the sum of squares per degree of freedom, or variance. Standard deviation, in turn, is the square root of the variance.
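A minimal sketch of the scaling step, reusing the made-up sample from above, dividing the sum of squares by the n − 1 degrees of freedom to obtain the variance and standard deviation:

    import math

    # Same made-up sample as above.
    data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
    n = len(data)
    mean = sum(data) / n
    sum_of_squares = sum((y - mean) ** 2 for y in data)

    # Dividing by the degrees of freedom (n - 1) scales the sum of
    # squares so it no longer grows with the sample size.
    variance = sum_of_squares / (n - 1)
    std_dev = math.sqrt(variance)

    print(variance)  # 32.0 / 7, approximately 4.571
    print(std_dev)   # approximately 2.138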

The above describes how the sum of squares is used in descriptive statistics; see the article on total sum of squares for an application of this broad principle to inferential statistics.

Partitioning the sum of squares in linear regression

Theorem. Given a linear regression model $y_i = \beta_0 + \beta_1 x_{i1} + \cdots + \beta_p x_{ip} + \varepsilon_i$ including a constant $\beta_0$, based on a sample $(y_i, x_{i1}, \ldots, x_{ip})$, $i = 1, \ldots, n$, containing n observations, the total sum of squares (TSS) can be partitioned as follows into the explained sum of squares (ESS) and the residual sum of squares (RSS):

$$\text{TSS} = \text{ESS} + \text{RSS},$$

where this equation is equivalent to each of the following forms:

$$\left\| y - \bar{y}\mathbf{1} \right\|^2 = \left\| \hat{y} - \bar{y}\mathbf{1} \right\|^2 + \left\| \hat{\varepsilon} \right\|^2, \qquad \mathbf{1} = (1, 1, \ldots, 1)^{\mathsf{T}},$$

$$\sum_{i=1}^n (y_i - \bar{y})^2 = \sum_{i=1}^n (\hat{y}_i - \bar{y})^2 + \sum_{i=1}^n \hat{\varepsilon}_i^2,$$

where $\hat{y}_i$ is the value estimated by the regression line having $\hat{\beta}_0$, $\hat{\beta}_1$, ..., $\hat{\beta}_p$ as the estimated coefficients, and $\hat{\varepsilon}_i = y_i - \hat{y}_i$ is the ith residual.[1]
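The partition can be verified numerically. The following sketch, which assumes NumPy is available and uses made-up simulated data, fits an ordinary least-squares model whose design matrix includes a column of ones and checks that TSS = ESS + RSS up to floating-point error:

    import numpy as np

    # Made-up simulated data for illustration.
    rng = np.random.default_rng(0)
    x = rng.uniform(0, 10, size=50)
    y = 2.0 + 0.5 * x + rng.normal(0, 1, size=50)

    # Design matrix with a column of ones (the constant term).
    X = np.column_stack([np.ones_like(x), x])
    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    y_hat = X @ beta_hat        # fitted values
    resid = y - y_hat           # residuals

    tss = np.sum((y - y.mean()) ** 2)      # total sum of squares
    ess = np.sum((y_hat - y.mean()) ** 2)  # explained sum of squares
    rss = np.sum(resid ** 2)               # residual sum of squares

    print(np.isclose(tss, ess + rss))      # True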

Proof

Adding and subtracting $\hat{y}_i$ inside the square gives

$$\sum_{i=1}^n (y_i - \bar{y})^2 = \sum_{i=1}^n \left( (\hat{y}_i - \bar{y}) + \hat{\varepsilon}_i \right)^2 = \sum_{i=1}^n (\hat{y}_i - \bar{y})^2 + 2 \sum_{i=1}^n \hat{\varepsilon}_i (\hat{y}_i - \bar{y}) + \sum_{i=1}^n \hat{\varepsilon}_i^2.$$

The cross term vanishes: the least-squares normal equations make the residuals orthogonal to every column of the design matrix, so $\sum_{i=1}^n \hat{\varepsilon}_i x_{ij} = 0$ for each regressor $j$, and the requirement that the model include a constant, or equivalently that the design matrix contain a column of ones, ensures that $\sum_{i=1}^n \hat{\varepsilon}_i = 0$, i.e. $\hat{\varepsilon}^{\mathsf{T}}\mathbf{1} = 0$.

The proof can also be expressed in vector form, as follows:

$$\begin{aligned} \text{SS}_{\text{total}} = \left\| y - \bar{y}\mathbf{1} \right\|^2 &= \left\| (\hat{y} - \bar{y}\mathbf{1}) + (y - \hat{y}) \right\|^2 \\ &= \left\| \hat{y} - \bar{y}\mathbf{1} \right\|^2 + 2\,\hat{\varepsilon}^{\mathsf{T}} (\hat{y} - \bar{y}\mathbf{1}) + \left\| \hat{\varepsilon} \right\|^2 \\ &= \left\| \hat{y} - \bar{y}\mathbf{1} \right\|^2 + \left\| \hat{\varepsilon} \right\|^2. \end{aligned}$$

The elimination of terms in the last line used the fact that

$$\hat{\varepsilon}^{\mathsf{T}} (\hat{y} - \bar{y}\mathbf{1}) = \hat{\varepsilon}^{\mathsf{T}} (X\hat{\beta} - \bar{y}\mathbf{1}) = (\hat{\varepsilon}^{\mathsf{T}} X)\hat{\beta} - \bar{y}\,\hat{\varepsilon}^{\mathsf{T}}\mathbf{1} = 0,$$

since $X^{\mathsf{T}}\hat{\varepsilon} = 0$ by the normal equations and $\hat{\varepsilon}^{\mathsf{T}}\mathbf{1} = 0$ as noted above.
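Both orthogonality facts used in the proof can be checked numerically as well; the following self-contained sketch (made-up data, NumPy assumed) confirms that the residual vector is orthogonal to the columns of the design matrix and that the cross term vanishes:

    import numpy as np

    # Made-up data; the model includes a constant via the column of ones.
    rng = np.random.default_rng(1)
    x = rng.uniform(0, 10, size=30)
    y = 1.0 + 2.0 * x + rng.normal(0, 1, size=30)

    X = np.column_stack([np.ones_like(x), x])
    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta_hat

    # Normal equations: residuals are orthogonal to every column of X;
    # the column of ones then forces the residuals to sum to zero.
    print(np.allclose(X.T @ resid, 0.0))                       # True
    # Hence the cross term in the proof vanishes.
    print(np.isclose(resid @ (X @ beta_hat - y.mean()), 0.0))  # True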

Further partitioning

Note that the residual sum of squares can be further partitioned as the lack-of-fit sum of squares plus the sum of squares due to pure error, provided the data contain replicate observations at repeated values of the predictors.
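As an illustration of this further partition, the following sketch uses made-up data with replicate observations at each predictor value (NumPy assumed): the pure-error sum of squares measures the variation of replicates about their group means, and the lack-of-fit sum of squares measures how far those group means lie from the fitted line.

    import numpy as np

    # Made-up data with two replicate observations at each x value.
    x = np.array([1.0, 1.0, 2.0, 2.0, 3.0, 3.0, 4.0, 4.0])
    y = np.array([1.1, 0.9, 2.3, 2.1, 2.8, 3.2, 4.4, 3.8])

    X = np.column_stack([np.ones_like(x), x])
    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta_hat) ** 2)  # residual sum of squares

    # Pure error: deviations of replicates about their own group means.
    # Lack of fit: deviations of group means about the fitted line,
    # weighted by the number of replicates in each group.
    pure_error = sum(np.sum((y[x == v] - y[x == v].mean()) ** 2)
                     for v in np.unique(x))
    lack_of_fit = sum(np.sum(x == v)
                      * (y[x == v].mean() - (beta_hat[0] + beta_hat[1] * v)) ** 2
                      for v in np.unique(x))

    print(np.isclose(rss, pure_error + lack_of_fit))  # True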


References

1. "Sum of Squares – Definition, Formulas, Regression Analysis". Corporate Finance Institute. Retrieved 2020-10-16.
