Appendix B
Matrix Methods in Little’s Model

B.1 A Short Review on Matrix Theory

B.1.A Left and Right Eigenvectors

Assuming A is an M × M square matrix, it has eigenvectors. Denoting ψL and ψR as the left (row) and right (column) eigenvectors, then:

ψL A = λ ψL     (B.1)

and

A ψR = λ ψR     (B.2)

where λ is a scalar known as the eigenvalue.

It should be noted that a left eigenvector ψL belonging to λi and a right eigenvector ψR belonging to λj are orthogonal unless λi = λj.

From Equation (B.2) it follows that:

(A − λIM) ψR = 0     (B.3)

where IM is called the identity matrix of order M.

Equation (B.3) refers to a system of homogeneous linear equations for which a nontrivial solution exists only if the determinant is zero. That is:

det(A − λIM) = 0     (B.4)

The same result of Equation (B.4) is also obtained for the eigenvalues of Equation (B.1), and hence the left and right eigenvectors have the same set of eigenvalues.

Last, Equation (B.3) can be used to find the eigenvectors, if the values of λ are known.
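As a concrete illustration (a minimal numerical sketch, not part of the original text; the 3 × 3 matrix below is arbitrary), the left and right eigenvectors of a small nonsymmetric matrix can be computed with NumPy, confirming that both eigenvalue problems yield the same set of eigenvalues and that a left and a right eigenvector belonging to different eigenvalues are orthogonal:

    import numpy as np

    A = np.array([[2.0, 1.0, 0.0],
                  [0.5, 3.0, 1.0],
                  [0.0, 0.5, 1.0]])

    # Right eigenvectors: A psi_R = lambda psi_R (columns of VR)
    eig_right, VR = np.linalg.eig(A)

    # Left eigenvectors: psi_L A = lambda psi_L, i.e., eigenvectors of A transpose
    eig_left, VL = np.linalg.eig(A.T)

    # Both problems give the same set of eigenvalues (Equations (B.1), (B.2), (B.4))
    print(np.allclose(np.sort(eig_right), np.sort(eig_left)))        # True

    # A left and a right eigenvector of two different eigenvalues are orthogonal
    i = np.argmax(eig_left)              # eigenvalue lambda_i (largest)
    j = np.argmin(eig_right)             # eigenvalue lambda_j (smallest), lambda_i != lambda_j
    print(np.isclose(VL[:, i] @ VR[:, j], 0.0, atol=1e-10))          # True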

B.1.B Degenerate and Nondegenerate Matrices

A matrix which has no two eigenvalues equal and, therefore, has just as many distinct eigenvalues as its dimension is said to be nondegenerate.

A matrix is degenerate if it has a repeated eigenvalue, that is, if two or more of its eigenvalues are equal.

A nondegenerate matrix can always be diagonalized. However, a degenerate matrix may or may not be put into a diagonal form, but it can always be reduced to what is known as the Jordan normal or Jordan canonical form.

B.1.C Diagonalizing a Matrix by Similarity Transform

Consider λi (i = 1, ..., M) as the eigenvalues of A. Let qi be an eigenvector of A associated with λi, i = 1, 2, ..., M. Then:

Step 1: Find the eigenvalues of A. If the eigenvalues are distinct (that is, when the matrix A is nondegenerate), then proceed to Step 2.
Step 2: Find eigenvectors of A.
Step 3: Construct the matrix whose columns are the eigenvectors:

Q = [q1  q2  ...  qM]

Step 4: Calculate Q-1AQ = diag(λ1, λ2, ..., λM), which is the desired diagonal matrix.

Note that if the eigenvalues of A are distinct, a diagonal matrix can always be found. If, on the other hand, the eigenvalues of A are not all distinct (repeated eigenvalues), it is still possible to obtain the diagonal form by implementing the above procedure, provided a linearly independent eigenvector can be found to correspond to each eigenvalue. However, when there are not as many linearly independent eigenvectors as eigenvalues, diagonalization is impossible.
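A minimal numerical sketch of this procedure (the 2 × 2 matrix is arbitrary and chosen only for illustration): with distinct eigenvalues, the matrix Q of eigenvectors returned by NumPy diagonalizes A under the similarity transform Q-1AQ.

    import numpy as np

    A = np.array([[4.0, 1.0],
                  [2.0, 3.0]])           # eigenvalues 5 and 2 (distinct, nondegenerate)

    # Steps 1 and 2: eigenvalues and eigenvectors of A
    eigvals, Q = np.linalg.eig(A)        # columns of Q are the eigenvectors q_i

    # Steps 3 and 4: similarity transform Q^-1 A Q
    Lambda = np.linalg.inv(Q) @ A @ Q

    print(np.round(Lambda, 10))          # diag(5, 2), up to ordering
    print(eigvals)                       # the same eigenvalues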

B.1.D Jordan Canonical Form

When A cannot be transformed into a diagonal form (in certain cases when it has repeated eigenvalues), it can nevertheless be specified in Jordan canonical form, expressed as the block-diagonal matrix:

J = diag(J1, J2, ...)

in which Ji is an (mi × mi) matrix of the form:

Ji = | λi  1   0   ...  0  |
     | 0   λi  1   ...  0  |
     | .   .   .   ...  .  |
     | 0   0   0   ...  1  |
     | 0   0   0   ...  λi |

That is, Ji has all diagonal entries λi, all entries immediately above the diagonal are 1’s, and all other entries are 0’s. Ji is called a Jordan block of size mi.

The procedure for transforming A into Jordan form is similar to the method discussed in Section B.1.C, and the generalized (or principal) eigenvectors can be considered as follows.

A vector ψ is said to be a generalized eigenvector of grade k of A associated with λ iff:

(A − λIM)^k ψ = 0   and   (A − λIM)^(k−1) ψ ≠ 0

Claim: Given an M × M matrix A, let λ be an eigenvalue of multiplicity k; then one can always find k linearly independent generalized eigenvectors associated with λ.
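A small check of the grade-k definition (an illustrative sketch; the 2 × 2 matrix and the test vector are my own choices): for a defective matrix with a repeated eigenvalue, a vector can satisfy (A − λI)^2 ψ = 0 while (A − λI) ψ ≠ 0, which makes it a generalized eigenvector of grade 2.

    import numpy as np

    lam = 3.0
    A = np.array([[lam, 1.0],
                  [0.0, lam]])           # a single 2 x 2 Jordan block for lambda = 3

    N = A - lam * np.eye(2)              # (A - lambda*I)
    psi = np.array([0.0, 1.0])           # candidate generalized eigenvector

    print(N @ psi)                       # [1. 0.] -> nonzero, so psi is not an ordinary eigenvector
    print(N @ N @ psi)                   # [0. 0.] -> zero, so psi has grade 2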

B.1.E Generalized Procedure to Find the Jordan Form

  Factorize the characteristic polynomial of A:

det(λIM − A) = (λ − λ1)^k1 (λ − λ2)^k2 ... (λ − λr)^kr

and carry out the following for each λi, i = 1, 2, 3, ..., r.

  Recall: A null space of a linear operator A is the set N(A) defined by:

N(A) = {all the elements x of (Fn, F) for which Ax = 0}
ν(A) = dim N(A) = nullity of A

Theorem B.1.E.1: Let A be an m × n matrix. Then ρ(A) + ν(A) = n, where ρ(A) is the rank of A equal to the maximum number of linearly independent columns of A.

  Proceed as per the following steps:
Step 1: Find p, denoting the smallest integer such that ν[(A − λiI)^p] = ki; p is equal to the length of the longest chains, and ν signifies the nullity (see the numerical sketch following Step 6).
Step 2: Find the vectors of a basis for N[(A − λiI)^p] that do not lie in N[(A − λiI)^(p−1)]; these vectors start the longest chains.
Step 3: Find the next vector in each chain already started, by multiplying the previous vector by (A − λiI).
These lie in N[(A − λiI)^(p−1)] but not in N[(A − λiI)^(p−2)].
Step 4: Complete the set of vectors as a basis for the solutions of (A − λiI)^(p−1) x = 0 which, however, do not satisfy (A − λiI)^(p−2) x = 0.
Step 5: Repeat Steps (3) and (4) for successively higher levels until a basis (of dimension ki) is found for N[(A − λiI)^p].
Step 6: Now the complete structure of A relative to λi is known.
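A numerical sketch of Step 1 (the example matrix is my own, built from two Jordan blocks for the same eigenvalue): the nullities ν[(A − λiI)^p], obtained through Theorem B.1.E.1 as n minus the rank, reveal the length of the longest chain and the number of chains.

    import numpy as np

    lam = 2.0
    # Eigenvalue 2 of multiplicity k_i = 3: one Jordan block of size 2 and one of size 1
    A = np.array([[lam, 1.0, 0.0],
                  [0.0, lam, 0.0],
                  [0.0, 0.0, lam]])

    N1 = A - lam * np.eye(3)
    n = A.shape[0]

    for p in range(1, 4):
        rank_p = np.linalg.matrix_rank(np.linalg.matrix_power(N1, p))
        nullity_p = n - rank_p           # Theorem B.1.E.1: rank + nullity = n
        print(p, nullity_p)              # p, nullity: 1 2 / 2 3 / 3 3

    # The nullity reaches k_i = 3 at p = 2, so the longest chain has length 2;
    # the nullity at p = 1 is 2, so there are two chains (two Jordan blocks).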

B.1.F Vector Space and Inner Product Space

A set V on which addition and scalar multiplication are defined, and whose operations satisfy the requirements listed below for all x, y, z ∈ V and all real numbers α and β, is said to be a real vector space. The axioms of a vector space are:

1)  x + y = y + x
2)  (x + y) + z = x + (y + z)
3)  There is an element in V, denoted by 0, such that 0 + x = x + 0 = x.
4)  For each x in V, there is an element -x in V such that x + (-x) = (-x) + x = 0.
5)  (α + β)x = αx + βx
6)  α(x + y) = αx + αy
7)  (αβ)x = α(βx)
8)  1·x = x

The inner product of two vectors a = (α1, α2, ..., αM) and b = (β1, β2, ..., βM) in RM, denoted by (a, b), is the real number (a, b) = α1β1 + α2β2 + ... + αMβM. If V is a real vector space, a function that assigns to every pair of vectors x and y in V a real number (x, y) is said to be an inner product on V, if it has the following properties:

1. (x,x) ≥ 0
(x,x) = 0 iff x=0
2. (x,y + z) = (x,y) + (x,z)
(x + y,z) = (x,z) + (y,z)
3. (αx,y) = α(x,y)
(x,βy) = β(x,y)
4. (x,y) = (y,x)
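For instance (an illustrative check with arbitrary vectors), the inner product on RM defined above is the familiar dot product, and properties (1) through (4) can be spot-checked numerically:

    import numpy as np

    x = np.array([1.0, -2.0, 3.0])
    y = np.array([0.5,  4.0, 1.0])
    z = np.array([2.0,  0.0, -1.0])
    alpha = 2.5

    print(np.dot(x, x) >= 0)                                            # property 1
    print(np.isclose(np.dot(x, y + z), np.dot(x, y) + np.dot(x, z)))    # property 2
    print(np.isclose(np.dot(alpha * x, y), alpha * np.dot(x, y)))       # property 3
    print(np.isclose(np.dot(x, y), np.dot(y, x)))                       # property 4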

B.1.G Symmetric Matrices

If A = [aij] is an M × N matrix, its transpose is the N × M matrix AT = [bij] with bij = aji. A square matrix is symmetric if A = AT.

If U is a real M × M matrix and UTU = IM, U is said to be an orthogonal matrix. Clearly, any orthogonal matrix is invertible and U-1 = UT, where UT denotes the transpose of U.

Two real matrices C and D are said to be similar if there is a real invertible matrix B such that B-1CB = D. If A and B are two M × M matrices and there is an orthogonal matrix U such that A = U-1BU, then A and B are orthogonally similar. A real matrix is diagonalizable over the “reals”, if it is similar over the reals to some real diagonal matrix. In other words, an M × M matrix A is diagonalizable over the reals, iff RM has a basis* consisting of eigenvectors of A.


* Definition: Let V be a vector space and {x1, x2, ..., xM} be a collection of vectors in V. The set {x1, x2, ..., xM} is said to be a basis for V if, (1) {x1, x2, ..., xM} is a linearly independent set of vectors; and (2) {x1, x2, ..., xM} spans V.

Theorem B.1.G.1: Let A be an M × M real symmetric matrix. Then any eigenvalue of A is real.

Theorem B.1.G.2: Let T be a symmetric linear operator (that is, T(x) = Ax) on a finite dimensional inner product space V. Then V has an orthonormal basis of eigenvectors of T.

As a consequence of Theorem B.1.G.2, the following is a corollary: If A is a real symmetric matrix, A is orthogonally similar to a real diagonal matrix.

It follows that any M × M symmetric matrix is diagonalizable.
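As an illustration of Theorems B.1.G.1 and B.1.G.2 (a minimal sketch; the symmetric matrix below is arbitrary), numpy.linalg.eigh returns real eigenvalues and an orthonormal basis of eigenvectors, so the matrix is orthogonally similar to a real diagonal matrix:

    import numpy as np

    A = np.array([[2.0, 1.0, 0.0],
                  [1.0, 3.0, 1.0],
                  [0.0, 1.0, 2.0]])      # real symmetric

    eigvals, U = np.linalg.eigh(A)       # real eigenvalues; columns of U are orthonormal

    print(np.allclose(U.T @ U, np.eye(3)))   # U is orthogonal: U^T U = I_M
    print(np.round(U.T @ A @ U, 10))         # U^T A U is diagonal (orthogonal similarity)
    print(eigvals)                           # the real eigenvalues of A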

B.2 Persistent States and Occurrence of Degeneracy in the Maximum Eigenvalue of the Transition Matrix

As discussed in Chapter 5, Little in 1974 [33] defined a 2^M × 2^M matrix TM whose elements give the probability of a particular state |S1, S2, ..., SM> yielding after one cycle the new state |S′1, S′2, ..., S′M>.

Letting Ψ(α) represent the state |S1, S2, ..., SM> and Ψ(α′) represent |S′1, S′2, ..., S′M>, the probability of obtaining the configuration α′ after m cycles is given by:

If TM is diagonalizable, then Ψ(α) can be represented in terms of the eigenvectors alone, and these eigenvectors are linearly independent. Therefore, as discussed in Section B.1, it follows from (B.5) that:

where ψr are the eigenvectors of the operator TM. There are 2^M such eigenvectors, each of which has 2^M components ψr(α), one for each configuration α; λr is the rth eigenvalue.

As discussed in Section B.1, if TM were symmetric, the assumption of its diagonalizability would be valid, since any symmetric matrix is diagonalizable. However, as stated previously, it is reasonably certain that TM may not be symmetric. Little initially assumes that TM is symmetric and then shows how his argument can be extended to the situation in which TM is not diagonalizable.

Assuming TM is diagonalizable, Ψ(α) can be represented as:

as is familiar in quantum mechanics; and assuming that the ψr are normalized to unity, it follows that:

where δrs = 0 if r ≠ s and δrs = 1 if r = s; and noting that Ψ(α′) can be expressed in a form similar to Equation (B.13), so that:

Now the probability Γ(α1) of obtaining the configuration α1 after m cycles can be elucidated. It is assumed that after Mo cycles the system returns to the initial configuration; averaging over all initial configurations:

Using Equation (B.14), Equation (B.15) can be simplified to:

Then, by using Equation (B.13), Equation (B.15) can further be reduced to:

Assuming the eigenvalues are nondegenerate and Mo is a very large number, the only significant contribution to Equation (B.17) comes from the maximum eigenvalue. That is:

However, if the maximum eigenvalue of TM is degenerate, or other eigenvalues are sufficiently close to it in value, then these degenerate eigenvalues contribute and Equation (B.17) becomes:

In an almost identical procedure to that discussed above, Little found the probability of obtaining a configuration α2 after Mo cycles as:

if the eigenvalues are degenerate and Mo and (Mo − m) are large numbers. Examining Equation (B.19), one finds:

Thus, the influence of configuration α1 does not affect the probability of obtaining the configuration α2.

However, considering the case where .

The probability of obtaining a configuration α2 is then dependent upon the configuration α1, and thus the influence of α1 persists for an arbitrarily long time.
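The qualitative point of this section can be illustrated numerically (a toy Markov-chain sketch, not Little's actual TM; the two 4 × 4 column-stochastic matrices below are arbitrary): when the maximum eigenvalue of the transition matrix is nondegenerate, the distribution after many cycles is independent of the initial configuration, whereas a degenerate maximum eigenvalue allows the initial configuration to persist.

    import numpy as np

    def long_time_distribution(T, initial, cycles=1000):
        # Apply the transition matrix `cycles` times to an initial distribution
        p = initial.astype(float)
        for _ in range(cycles):
            p = T @ p
        return p

    # Nondegenerate case: an irreducible column-stochastic matrix (unique maximum eigenvalue 1)
    T_nondeg = np.array([[0.5, 0.2, 0.1, 0.3],
                         [0.2, 0.5, 0.3, 0.1],
                         [0.2, 0.2, 0.4, 0.2],
                         [0.1, 0.1, 0.2, 0.4]])

    # Degenerate case: two disconnected blocks, so the maximum eigenvalue 1 appears twice
    T_deg = np.array([[0.7, 0.4, 0.0, 0.0],
                      [0.3, 0.6, 0.0, 0.0],
                      [0.0, 0.0, 0.5, 0.2],
                      [0.0, 0.0, 0.5, 0.8]])

    start_a = np.array([1.0, 0.0, 0.0, 0.0])   # configuration alpha_1
    start_b = np.array([0.0, 0.0, 1.0, 0.0])   # a different initial configuration

    # Nondegenerate: the same long-time distribution regardless of the start
    print(np.allclose(long_time_distribution(T_nondeg, start_a),
                      long_time_distribution(T_nondeg, start_b)))      # True

    # Degenerate: the long-time distribution still remembers the initial configuration
    print(np.allclose(long_time_distribution(T_deg, start_a),
                      long_time_distribution(T_deg, start_b)))         # False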

B.3 Diagonalizability of the Characteristic Matrix

It was indicated earlier that Little assumed TM is diagonalizable. However, his results can also be generalized to an arbitrary M × M matrix, because any M × M matrix can be put into Jordan form as discussed in Section B.1.D.

In such a case, the representation of Ψ(α) is made in terms of generalized eigenvectors which satisfy:

instead of pertaining to the eigenvectors alone as discussed in Section B.1.D.

The eigenvectors are the generalized vectors of grade 1. For k > 1, Equation (B.25) defines the generalized vectors. Thus, in the particular case k = 1, the results of Equation (B.14) and Equation (B.16) remain the same. Further, an eigenvector Ψ(α) can be derived from Equation (B.25) as follows:

Lemma. Let B be an M × M matrix. Let p be a principal vector of grade g ≥ 1 belonging to the eigenvalue μ. Then for large k one has the asymptotic forms

where the resulting vector is an eigenvector belonging to μ, and where the remainder r(k) = 0 if g = 1, or:

For the general case one must use the asymptotic form for large m. Hence, in the present case:

where r(m) ≈ m^(k−2) |λ|^m, which yields:

where Ψrk(α) is the eigenvector of eigenvalue λr corresponding to the generalized vector of grade k defined in Equation (B.26). By using this form in Equation (B.15), the following is obtained:

The above relation is dependent on m if there is a degeneracy in the maximum eigenvalues. It means that there is an influence from constraints or knowledge of the system at an earlier time. Also, from Equation (B.13) it is required that the generalized vectors must be of the same grade. That is, from Equation (B.15) it follows that:

Therefore, the conditions for the persistent order are: The maximum eigenvalues must be degenerate, and their generalized vectors must be of the same grade.
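As a numerical check of the asymptotic behaviour invoked above (an illustrative sketch; the 3 × 3 Jordan block is my own choice, not TM): applying powers of a Jordan block to a generalized vector of grade g produces an extra polynomial factor in m on top of λ^m, which is the source of the m-dependence, and hence the memory of earlier times, when degenerate maximum eigenvalues have generalized vectors of the same grade.

    import numpy as np

    lam = 0.9
    g = 3
    # A single Jordan block of size g for eigenvalue lam
    J = lam * np.eye(g) + np.diag(np.ones(g - 1), 1)

    p = np.array([0.0, 0.0, 1.0])        # generalized vector of grade g = 3

    for m in (10, 100, 1000):
        v = np.linalg.matrix_power(J, m) @ p
        # The first component of J^m p is C(m, 2) * lam**(m - 2): a polynomial in m times lam**m
        predicted = (m * (m - 1) / 2) * lam ** (m - 2)
        print(m, v[0], predicted)        # the two values agree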



Copyright © CRC Press LLC
