The parameters and variables of factor analysis can be given a geometrical interpretation. The data ($z_{ai}$), the factors ($F_{pi}$) and the errors ($\varepsilon_{ai}$) can be viewed as vectors in an $N$-dimensional Euclidean space (sample space), represented as $\mathbf{z}_a$, $\mathbf{F}_p$ and $\boldsymbol{\varepsilon}_a$ respectively. Since the data is standardized, the data vectors are of unit length ($|\mathbf{z}_a| = 1$). The factor vectors define a $k$-dimensional linear subspace (i.e. a hyperplane) in this space, upon which the data vectors are projected orthogonally. This follows from the model equation

$$\mathbf{z}_a = \sum_{p} \ell_{ap} \mathbf{F}_p + \boldsymbol{\varepsilon}_a$$
and the independence of the factors and the errors: $\mathbf{F}_p \cdot \boldsymbol{\varepsilon}_a = 0$. In the above example, the hyperplane is just a 2-dimensional plane defined by the two factor vectors. The projection of the data vectors onto the hyperplane is given by

$$\hat{\mathbf{z}}_a = \sum_{p} \ell_{ap} \mathbf{F}_p$$
and the errors are vectors from that projected point to the data point and are perpendicular to the hyperplane. The goal of factor analysis is to find a hyperplane which is a "best fit" to the data in some sense. Any set of factor vectors which lie in the hyperplane and are independent will serve to define the hyperplane, so we are free to specify them as both orthogonal and normal ($\mathbf{F}_p \cdot \mathbf{F}_q = \delta_{pq}$) with no loss of generality. After a suitable set of factors is found, they may also be arbitrarily rotated within the hyperplane: any such rotation of the factor vectors will define the same hyperplane and also be a solution. As a result, in the above example, in which the fitting hyperplane is two-dimensional, if we do not know beforehand that the two types of intelligence are uncorrelated, then we cannot interpret the two factors as the two different types of intelligence. Even if they are uncorrelated, we cannot tell which factor corresponds to verbal intelligence and which corresponds to mathematical intelligence, or whether the factors are linear combinations of both, without an outside argument.
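The picture above can be checked numerically. The following is a minimal sketch, assuming synthetic standardized data and an arbitrary pair of orthonormal factor vectors (the variable names Z, F, L and the random setup are illustrative only, not taken from any particular dataset or library): each data vector is projected orthogonally onto the factor hyperplane, the errors come out perpendicular to that hyperplane, and rotating the factor vectors within the hyperplane leaves the projection unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000          # number of observations (dimension of the sample space)
n_vars = 5        # number of observed variables
k = 2             # number of factors (dimension of the fitting hyperplane)

# Hypothetical raw data: rows are observations, columns are variables.
X = rng.normal(size=(N, n_vars)) @ rng.normal(size=(n_vars, n_vars))

# Standardize each variable, then rescale so every data vector z_a has unit length.
Z = (X - X.mean(axis=0)) / X.std(axis=0)
Z /= np.sqrt(N)                        # each column of Z is now a unit vector in R^N

# Two orthonormal factor vectors spanning a k-dimensional hyperplane.
F, _ = np.linalg.qr(rng.normal(size=(N, k)))    # columns F_p with F_p . F_q = delta_pq

# Orthogonal projection of each data vector onto the hyperplane: zhat_a = sum_p l_ap F_p.
L = Z.T @ F                            # loadings l_ap = z_a . F_p
Zhat = F @ L.T                         # projected data vectors (columns)
E = Z - Zhat                           # error vectors

# The errors are perpendicular to the hyperplane ...
print(np.allclose(F.T @ E, 0.0))       # True

# ... and rotating the factors within the hyperplane gives the same projection.
theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
F_rot = F @ R
print(np.allclose(F_rot @ (Z.T @ F_rot).T, Zhat))   # True
```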
The data vectors $\mathbf{z}_a$ have unit length. The elements of the correlation matrix for the data are given by $r_{ab} = \mathbf{z}_a \cdot \mathbf{z}_b$. The element $r_{ab}$ can be geometrically interpreted as the cosine of the angle between the two data vectors $\mathbf{z}_a$ and $\mathbf{z}_b$. The diagonal elements will clearly be 1's and the off-diagonal elements will have absolute values less than or equal to unity. The "reduced correlation matrix" is defined as

$$\hat{r}_{ab} = \hat{\mathbf{z}}_a \cdot \hat{\mathbf{z}}_b.$$
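As a numerical illustration of these definitions, the sketch below (same hypothetical setup as above: unit-length standardized data vectors and orthonormal factor vectors) computes the correlation matrix as dot products of the data vectors and the reduced correlation matrix as dot products of their projections, which also equals $LL^{\mathsf{T}}$ in terms of the loadings.

```python
import numpy as np

rng = np.random.default_rng(0)
N, n_vars, k = 1000, 5, 2

X = rng.normal(size=(N, n_vars)) @ rng.normal(size=(n_vars, n_vars))
Z = (X - X.mean(axis=0)) / X.std(axis=0) / np.sqrt(N)   # unit-length data vectors z_a
F, _ = np.linalg.qr(rng.normal(size=(N, k)))            # orthonormal factor vectors F_p
L = Z.T @ F                                             # loadings l_ap
Zhat = F @ L.T                                          # projections zhat_a

# r_ab = z_a . z_b is the cosine of the angle between data vectors a and b,
# and coincides with the ordinary sample correlation matrix.
R = Z.T @ Z
print(np.allclose(np.diag(R), 1.0))                     # diagonal elements are all 1
print(np.allclose(R, np.corrcoef(X, rowvar=False)))     # matches the usual correlations

# Reduced correlation matrix: rhat_ab = zhat_a . zhat_b = sum_p l_ap l_bp.
R_reduced = Zhat.T @ Zhat
print(np.allclose(R_reduced, L @ L.T))                  # True
```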
The goal of factor analysis is to choose the fitting hyperplane such that the reduced correlation matrix reproduces the correlation matrix as nearly as possible, except for the diagonal elements of the correlation matrix which are known to have unit value. In other words, the goal is to reproduce as accurately as possible the cross-correlations in the data. Specifically, for the fitting hyperplane, the mean square error in the off-diagonal components

$$\varepsilon^2 = \sum_{a \neq b} \left( r_{ab} - \hat{r}_{ab} \right)^2$$
is to be minimized, and this is accomplished by minimizing it with respect to a set of orthonormal factor vectors. It can be seen that

$$r_{ab} - \hat{r}_{ab} = \boldsymbol{\varepsilon}_a \cdot \boldsymbol{\varepsilon}_b.$$
The term on the right is just the covariance of the errors. In the model, the error covariance is stated to be a diagonal matrix, so the above minimization problem will in fact yield a "best fit" to the model: it will yield a sample estimate of the error covariance which has its off-diagonal components minimized in the mean square sense. Since the $\hat{\mathbf{z}}_a$ are orthogonal projections of the data vectors, their lengths will be less than or equal to the lengths of the corresponding data vectors, which are unity. The squares of these lengths are just the diagonal elements of the reduced correlation matrix. These diagonal elements of the reduced correlation matrix are known as "communalities":

$$h_a^2 = |\hat{\mathbf{z}}_a|^2 = \sum_{p} \ell_{ap}^2.$$
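The two identities above, and the bound on the communalities, can be verified directly. The sketch below reuses the same illustrative setup and checks that $r_{ab} - \hat{r}_{ab} = \boldsymbol{\varepsilon}_a \cdot \boldsymbol{\varepsilon}_b$ holds element by element, and that the communalities equal the diagonal of the reduced correlation matrix and never exceed unity.

```python
import numpy as np

rng = np.random.default_rng(0)
N, n_vars, k = 1000, 5, 2

X = rng.normal(size=(N, n_vars)) @ rng.normal(size=(n_vars, n_vars))
Z = (X - X.mean(axis=0)) / X.std(axis=0) / np.sqrt(N)   # unit-length data vectors z_a
F, _ = np.linalg.qr(rng.normal(size=(N, k)))            # orthonormal factor vectors F_p
L = Z.T @ F                                             # loadings l_ap
Zhat = F @ L.T                                          # projections zhat_a
E = Z - Zhat                                            # error vectors eps_a

R = Z.T @ Z                                             # correlation matrix r_ab
R_reduced = Zhat.T @ Zhat                               # reduced correlation matrix rhat_ab

# r_ab - rhat_ab equals the error covariance eps_a . eps_b, element by element.
print(np.allclose(R - R_reduced, E.T @ E))              # True

# Communalities: squared lengths of the projected data vectors, never above unity.
h2 = np.sum(L**2, axis=1)
print(np.allclose(h2, np.diag(R_reduced)))              # True
print(np.all(h2 <= 1.0 + 1e-12))                        # True
```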
Large values of the communalities will indicate that the fitting hyperplane is rather accurately reproducing the correlation matrix. The optimization problem stated above is intractable without high-speed computation. Before the advent of high-speed computers, considerable effort was made to arrive at approximate solutions to the problem, particularly in estimating the communalities by other means, which then simplifies the problem considerably. With the advent of high-speed computers, the minimization problem can be solved quickly and directly, and the communalities are calculated in the process rather than being needed beforehand. The MinRes algorithm is particularly suited to this problem, but is hardly the only means of finding an exact solution.
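For concreteness, the following sketch carries out the minimization stated above with a generic numerical optimizer: it searches for a loading matrix $L$ such that $LL^{\mathsf{T}}$ matches the off-diagonal correlations in the least-squares sense. This is only an illustration of the objective, assuming a small synthetic correlation matrix; it is not the classical MinRes iteration or any particular published implementation, and real implementations typically add constraints (for example, keeping communalities at most one).

```python
import numpy as np
from scipy.optimize import minimize

def fit_offdiag(R, k):
    """Choose an n x k loading matrix L minimizing sum_{a != b} (r_ab - (L L^T)_ab)^2."""
    n = R.shape[0]
    offdiag = ~np.eye(n, dtype=bool)            # mask selecting off-diagonal entries

    def objective(flat_L):
        L = flat_L.reshape(n, k)
        resid = (R - L @ L.T)[offdiag]
        return np.sum(resid**2)

    rng = np.random.default_rng(0)
    L0 = 0.1 * rng.standard_normal(n * k)       # arbitrary small starting point
    res = minimize(objective, L0, method="L-BFGS-B")
    return res.x.reshape(n, k)

# Example: a correlation matrix generated from known (hypothetical) 2-factor loadings.
rng = np.random.default_rng(1)
true_L = rng.uniform(0.2, 0.6, size=(6, 2))
R = true_L @ true_L.T
np.fill_diagonal(R, 1.0)

L_hat = fit_offdiag(R, k=2)
print(np.round(np.sum(L_hat**2, axis=1), 3))    # fitted communalities ...
print(np.round(np.sum(true_L**2, axis=1), 3))   # ... approximately match the generating ones
```

Note that the communalities fall out of the fit as the row sums of squares of the loadings; they do not need to be estimated beforehand.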