Variance

The '''variance''' of a set of data is the mean squared deviation from the [[Arithmetic Mean]] of the same set of data (a formula making this definition explicit is given after the list below). Because this calculation averages the squared deviations, we can conclude two things:

#The variance is never negative because the squares are positive or zero. When any method of calculating the variance results in a negative number, we know that there has been an error, often due to a poor choice of algorithm.
#The unit of variance is the square of the unit of observation. Thus, the variance of a set of heights measured in inches will be given in square inches. This inconvenience has motivated statisticians to call the square root of the variance the [[standard deviation]] and to quote that value as a summary of dispersion.
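
Written out as a formula, using the usual notation <math>x_1, x_2, \ldots, x_n</math> for the data values and <math>\bar{x}</math> for their arithmetic mean (the notation is introduced here only for illustration), this definition reads

:<math>\mathrm{variance} = \frac{1}{n} \sum_{i=1}^{n} \left( x_i - \bar{x} \right)^2 .</math>

Squaring the deviations is what makes the result non-negative and what squares the unit, as noted in the two points above.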

See [[algorithms for calculating variance]].
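
A poor algorithm, for instance the textbook one-pass formula that subtracts the square of the mean from the mean of the squares, can return a slightly negative variance through rounding and cancellation. As a minimal sketch of one numerically stable alternative, Welford's online algorithm is shown below (Python and the function name are used here purely for illustration):

 def online_variance(data):
     """Welford's online algorithm: one pass, numerically stable."""
     n = 0
     mean = 0.0
     m2 = 0.0                      # running sum of squared deviations from the current mean
     for x in data:
         n += 1
         delta = x - mean          # deviation from the old mean
         mean += delta / n         # update the running mean
         m2 += delta * (x - mean)  # uses the new mean; each increment is non-negative
     return m2 / n                 # population variance; m2 / (n - 1) gives the sample version

 print(online_variance([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]))  # prints 4.0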

When the set of data is a [[population]], we call this the ''population variance''. If the set is a [[sample]], we call it the ''sample variance''.
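
In the usual notation (again introduced here only for illustration), the population variance of <math>N</math> values with population mean <math>\mu</math> is <math>\sigma^2 = \tfrac{1}{N} \sum_{i=1}^{N} (x_i - \mu)^2</math>, and the sample variance is the same average taken over the sample, with the sample mean <math>\bar{x}</math> in place of <math>\mu</math>. Many texts instead divide the sample's sum of squared deviations by <math>n - 1</math> rather than <math>n</math> (Bessel's correction), which makes the sample variance an unbiased estimate of the population variance.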

----

:see also [[standard deviation]]

back to [[statistical dispersion]]