Perron–Frobenius theorem
In linear algebra, the Perron–Frobenius theorem, named after Oskar Perron and Georg Frobenius, states results concerning the eigenvalues and eigenvectors of irreducible non-negative square matrices. It has important applications to probability theory (ergodicity of Markov chains) and the theory of dynamical systems (subshifts of finite type).
Historical context
The early results were due to Perron (1907). In general the eigenvalues of a real (or complex) square matrix A are complex numbers and collectively they make up the spectrum of the matrix. The spectral radius (denoted by ρ(A)) is the radius of the smallest disk centred at the origin that covers the entire spectrum. A matrix in which all entries are positive real numbers is called positive. Likewise a matrix with all its entries non-negative real numbers is called non-negative.
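As a minimal computational sketch of these definitions (assuming Python with NumPy, which is an illustrative choice not made by the article itself), the spectrum is the set of eigenvalues and the spectral radius is the largest of their moduli:

```python
import numpy as np

# An illustrative positive matrix (every entry is a positive real number).
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

spectrum = np.linalg.eigvals(A)   # the eigenvalues of A
rho = max(abs(spectrum))          # spectral radius: largest modulus in the spectrum
print(spectrum, rho)
```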
Perron's findings
Let A be a positive n-square matrix with spectral radius r. Then the following statements hold.
- r is a positive real number. It is an eigenvalue of A and any other eigenvalue λ satisfies |λ| < r.
- r is a simple root of the characteristic polynomial of A. Both right and left eigenspaces associated with r are one-dimensional.
- A has an eigenvector w whose components are all positive. In other words Aw = rw and w_i > 0 for 1 ≤ i ≤ n.
- Likewise A has a left eigenvector v (which is a row vector) whose components are all positive and vA = rv.
- The only eigenvectors whose components are all positive are those associated with r.
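These findings can be checked numerically. The following is a hedged sketch (again assuming NumPy; the matrix is an arbitrary illustration, not taken from the article):

```python
import numpy as np

# A small positive matrix on which Perron's findings can be observed.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

vals, vecs = np.linalg.eig(A)
i = int(np.argmax(abs(vals)))
r = vals[i].real                  # the Perron root: real and positive
w = vecs[:, i].real
w = w / w.sum()                   # the components share one sign, so this makes them positive

print(r > 0)                                                      # (1): r is a positive real
print(all(abs(vals[j]) < r for j in range(len(vals)) if j != i))  # (1): |lambda| < r otherwise
print((w > 0).all())                                              # (3): positive eigenvector
print(np.allclose(A @ w, r * w))                                  # Aw = rw
```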
Since positive matrices are dense in the class of non-negative matrices, some of these findings can be extended to the latter class by continuity arguments. Significantly, the spectral radius of any non-negative square matrix is always an eigenvalue; it is called the Perron root, and the corresponding spectral projection is called the Perron projection.

Further, let || · || be a matrix norm induced by a vector norm, and let A be a square matrix with spectral radius r. Then ||A|| ≥ ||Ax||/||x|| = ||λx||/||x|| = |λ| for any eigenvalue λ and corresponding eigenvector x, so r ≤ ||A||. A commonly used matrix norm is the infinity norm ||A||∞, obtained by adding together the moduli of the entries in each row and taking the largest of these row sums. Hence the maximum row sum is an upper bound for r.

Now suppose that A is positive. Then there exists a positive eigenvector w with Aw = rw, and w can be scaled so that its smallest component (say w_i) equals 1. Then r = r·w_i = (Aw)_i = Σ_j a_{ij} w_j ≥ Σ_j a_{ij}, since every component of w is at least 1. Thus the minimum row sum gives a lower bound for r, and this observation extends to all non-negative matrices by continuity. To sum up:
- If A = (a_{ij}) is a non-negative square matrix then ρ(A) is an eigenvalue of A and
$$\min_i \sum_j a_{ij} \;\le\; \rho(A) \;\le\; \max_i \sum_j a_{ij}.$$
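The row-sum bounds are easy to verify numerically; here is a short sketch (NumPy again, with an arbitrarily chosen non-negative matrix):

```python
import numpy as np

# Non-negative matrix with distinct row sums 2 and 5.
A = np.array([[1.0, 1.0],
              [2.0, 3.0]])

row_sums = A.sum(axis=1)                 # [2.0, 5.0]
rho = max(abs(np.linalg.eigvals(A)))     # about 3.732
print(row_sums.min() <= rho <= row_sums.max())   # True
```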
Some simple examples are provided by (0,1)-matrices. The identity matrix has the single eigenvalue 1 with multiplicity n, so its spectral radius equals every row sum. A nilpotent matrix (for example, 1s just above the diagonal and 0s elsewhere) has the single eigenvalue 0, with algebraic multiplicity n and geometric multiplicity 1. These examples show that the bounds are sharp and that r need not be positive (though it is always non-negative). Permutation matrices provide further instructive examples: all their eigenvalues are roots of unity.
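These examples are easy to reproduce; the sketch below (assuming NumPy, with n = 3) lists the spectra of the three kinds of (0,1)-matrix just mentioned:

```python
import numpy as np

I = np.eye(3)                      # identity: eigenvalue 1 with multiplicity 3
N = np.diag([1.0, 1.0], k=1)       # nilpotent: 1s above the diagonal, 0s elsewhere
P = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 0, 0]], float)   # cyclic permutation matrix

print(np.linalg.eigvals(I))        # [1. 1. 1.]
print(np.linalg.eigvals(N))        # [0. 0. 0.] -- here r = 0, not positive
print(np.linalg.eigvals(P))        # the three cube roots of unity
```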
Primitive and irreducible matrices
Frobenius sought to extend Perron's work to non-negative matrices. A full generalisation was clearly impossible, because identity matrices are non-negative and have multi-dimensional eigenspaces. However, Frobenius found even deeper results for square matrices that are both non-negative and irreducible. These were published in 1912 and are known today as the Perron–Frobenius theorem. The statement involves several other classes of square matrices, defined as follows:
- A is primitive ⇔ every entry in A is non-negative and there exists k ∈ ℤ+ such that every entry in A^k is positive.
- A is reducible ⇔ there exists a permutation matrix P such that
$$PAP^{-1} = \begin{pmatrix} E & F \\ 0 & G \end{pmatrix}$$
where E and G are square matrices.
- A is irreducible ⇔ it is a square matrix that is not reducible.
Clearly positive square matrices are primitive, and primitive matrices are irreducible. In fact Perron's findings are equally valid for primitive matrices. The irreducible non-negative matrices were the main focus of Frobenius's investigation. Much of their importance stems from the fact that irreducibility depends purely on the distribution of zeros in the matrix, so it can easily be tested algorithmically, as the sketch below illustrates.
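One way to make that algorithmic claim concrete is the following sketch. It relies on two standard characterisations that the article itself does not state, so treat them as assumptions here: a non-negative n-square matrix A is irreducible exactly when (I + A)^(n−1) is positive, and primitive exactly when A^((n−1)² + 1) is positive (Wielandt's bound).

```python
import numpy as np

def is_irreducible(A: np.ndarray) -> bool:
    # Non-negative n-square A is irreducible iff (I + A)^(n-1) > 0 entrywise.
    n = A.shape[0]
    return bool((np.linalg.matrix_power(np.eye(n) + A, n - 1) > 0).all())

def is_primitive(A: np.ndarray) -> bool:
    # A is primitive iff A^((n-1)^2 + 1) > 0 entrywise (Wielandt's bound on k).
    n = A.shape[0]
    return bool((np.linalg.matrix_power(A, (n - 1) ** 2 + 1) > 0).all())

C = np.array([[0.0, 1.0],
              [1.0, 0.0]])          # a 2-cycle: irreducible but not primitive
print(is_irreducible(C), is_primitive(C))   # True False
```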
With these preliminaries in place, the theorem itself can now be stated. Since every positive square matrix is irreducible and non-negative, the theorem extends Perron's original work. Despite the weaker hypothesis, the only change in statements (1)–(5) is that an irreducible non-negative matrix may have several eigenvalues of modulus r rather than just one; their number h appears in statement (6). For positive, and indeed for primitive, matrices h is always 1, and in fact the condition h = 1 characterises the primitive matrices. The deepest parts of the theorem, namely statements (6)–(8), say nothing new for primitive matrices.
The Perron–Frobenius theorem
Let A be an irreducible non-negative n-square matrix with spectral radius r. Then the following statements hold.
- r is a positive real number and an eigenvalue of A.
- r is a simple root of the characteristic polynomial of A. Both right and left eigenspaces associated with r are one-dimensional.
- A has a right eigenvector w whose components are all positive. In other words Aw = rw and w_i > 0 for 1 ≤ i ≤ n.
- Likewise A has a left eigenvector v (which is a row vector) whose components are all positive and vA = rv.
- The only eigenvectors whose components are all positive are those associated with r.
- If the characteristic polynomial of A has h roots of modulus r then these are the h distinct roots of λ^h = r^h.
- If ω = 2π/h then A is similar to e^{iω}A; consequently the spectrum of A is invariant under a rotation through ω about the origin.
- If h > 1 then there exists a permutation matrix P such that
$$PAP^{-1} = \begin{pmatrix} 0 & A_1 & 0 & \cdots & 0 \\ 0 & 0 & A_2 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & A_{h-1} \\ A_h & 0 & 0 & \cdots & 0 \end{pmatrix}$$
where the zero blocks down the main diagonal are square.
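Statements (6)–(8) can be observed on a weighted 3-cycle (an arbitrary illustration with h = 3; the weights are invented for the example). Its three eigenvalues all have modulus r, they solve λ³ = r³, and the spectrum is invariant under rotation through 2π/3:

```python
import numpy as np

# Irreducible, non-primitive matrix whose directed graph is a 3-cycle (h = 3).
A = np.array([[0.0, 2.0, 0.0],
              [0.0, 0.0, 3.0],
              [4.0, 0.0, 0.0]])

vals = np.linalg.eigvals(A)
r = max(abs(vals))                       # the Perron root, here 24**(1/3)

print(np.allclose(abs(vals), r))         # all three eigenvalues have modulus r
print(np.allclose(vals ** 3, r ** 3))    # they are the roots of lambda^3 = r^3
rotated = vals * np.exp(2j * np.pi / 3)  # rotate the spectrum through 2*pi/3
print(all(abs(rotated[k] - vals).min() < 1e-9 for k in range(3)))  # invariant
```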
All 1-square matrices are irreducible by default, but an exception is often made for the 1-square zero matrix (0), so that every zero matrix can be said to be reducible. In the absence of any definite convention, the status of (0) is a matter of choice.