Appendix B
Step 1: Find the eigenvalues of A. If the eigenvalues are distinct (that is, when the matrix A is nondegenerate), proceed to Step 2.
Step 2: Find the eigenvectors x1, x2, ..., xM of A.
Step 3: Construct the matrix Q = [x1 x2 ... xM], whose columns are the eigenvectors of A.
Step 4: Calculate Λ = Q^-1 A Q, the diagonal matrix having the eigenvalues of A on its diagonal.
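As a concrete illustration of Steps 1 through 4, the following is a minimal sketch using NumPy on a small hypothetical matrix (not an example from the text): np.linalg.eig returns the eigenvalues together with a matrix Q whose columns are the eigenvectors, so Q^-1 A Q can be checked directly.

    import numpy as np

    # A minimal sketch of Steps 1-4 on a small hypothetical matrix.
    A = np.array([[4.0, 1.0],
                  [2.0, 3.0]])

    # Steps 1-2: eigenvalues and eigenvectors (the eigenvectors are the columns of Q).
    eigvals, Q = np.linalg.eig(A)

    # Step 3: Q is the matrix whose columns are the eigenvectors of A.
    # Step 4: compute Q^-1 A Q; for distinct eigenvalues this is diagonal.
    Lam = np.linalg.inv(Q) @ A @ Q
    assert np.allclose(Lam, np.diag(eigvals))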
Note that if the eigenvalues of A are distinct, such a diagonal matrix can always be found. If, on the other hand, the eigenvalues of A are not all distinct (repeated eigenvalues), it is still possible to find Λ by the above procedure, provided a linearly independent eigenvector can be found to correspond to each eigenvalue. When there are not as many linearly independent eigenvectors as eigenvalues, however, diagonalization is impossible.
When A cannot be transformed into a diagonal form (in certain cases when it has repeated eigenvalues), it can nevertheless be put into the Jordan canonical form, expressed as:

J = Q^-1 A Q = diag(J1, J2, ..., Jr)

in which Ji is an (mi × mi) matrix of the form:

    Ji = [ λi  1   0  ...  0  ]
         [ 0   λi  1  ...  0  ]
         [ .       .   .   .  ]
         [ 0   0   0  ...  1  ]
         [ 0   0   0  ...  λi ]
That is, Ji has all diagonal entries equal to λi, all entries immediately above the diagonal equal to 1, and all other entries equal to 0. Ji is called a Jordan block of size mi.
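SymPy's jordan_form can exhibit these blocks directly; the sketch below assumes SymPy is available and uses a hypothetical 2 × 2 example with a repeated eigenvalue and only one linearly independent eigenvector.

    from sympy import Matrix

    # A repeated eigenvalue (lambda = 2, multiplicity 2) with only one
    # linearly independent eigenvector: diagonalization is impossible.
    A = Matrix([[2, 1],
                [0, 2]])

    P, J = A.jordan_form()   # P**-1 * A * P == J
    print(J)                 # Matrix([[2, 1], [0, 2]]): one 2x2 Jordan block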
The procedure for transforming A into Jordan form is similar to the method discussed in Section B.1.C, and the generalized (or principal) eigenvectors can be considered as follows.
A vector x is said to be a generalized eigenvector of grade k of A associated with λ iff:

(A − λI)^k x = 0  and  (A − λI)^(k−1) x ≠ 0
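The grade of a given vector can be checked numerically by applying (A − λI) repeatedly until the vector is annihilated. The helper below is a hypothetical illustration of the definition, not a routine from the text.

    import numpy as np

    def grade(A, lam, x, tol=1e-10):
        """Smallest k with (A - lam*I)^k x = 0, or None if x is not a
        generalized eigenvector of lam (numerical sketch)."""
        B = A - lam * np.eye(A.shape[0])
        v = np.asarray(x, dtype=float)
        for k in range(1, A.shape[0] + 1):
            v = B @ v
            if np.linalg.norm(v) < tol:
                return k
        return None

    A = np.array([[2.0, 1.0, 0.0],
                  [0.0, 2.0, 1.0],
                  [0.0, 0.0, 2.0]])
    print(grade(A, 2.0, np.array([0.0, 0.0, 1.0])))  # 3: a grade-3 generalized eigenvector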
Claim: Given an M × M matrix A, let λ be an eigenvalue of multiplicity k; then one can always find k linearly independent generalized eigenvectors associated with λ. This holds for each λi, i = 1, 2, ..., r.
Theorem B.1.E.1: Let A be an m × n matrix. Then ρ(A) + ν(A) = n, where ρ(A), the rank of A, is the maximum number of linearly independent columns of A, and ν(A), the nullity of A, is the dimension of the null space of A.
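Theorem B.1.E.1 is easy to verify numerically; the sketch below (assuming SciPy is available) checks ρ(A) + ν(A) = n for a hypothetical 3 × 4 matrix.

    import numpy as np
    from scipy.linalg import null_space

    # A hypothetical 3x4 matrix of rank 2 (its third row is the sum of the first two).
    A = np.array([[1.0, 2.0, 0.0, 1.0],
                  [0.0, 0.0, 1.0, 1.0],
                  [1.0, 2.0, 1.0, 2.0]])

    rank = np.linalg.matrix_rank(A)      # rho(A) = 2
    nullity = null_space(A).shape[1]     # nu(A) = 2
    assert rank + nullity == A.shape[1]  # rho(A) + nu(A) = n = 4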
Step 1: Find ν(A − λiI), ν((A − λiI)^2), ν((A − λiI)^3), ..., stopping at the first power for which the nullity equals ki, the multiplicity of λi.
Step 2: Find a basis for the null space of (A − λiI); these are the ordinary eigenvectors, and each one starts a chain.
Step 3: Find the next vector in each chain already started by solving (A − λiI)x = v for each chain vector v found at the previous level. These lie in the null space of (A − λiI)^2 but not in that of (A − λiI).
Step 4: Complete the set of vectors as a basis for the solutions of (A − λiI)^2 x = 0.
Step 5: Repeat Steps 3 and 4 for successively higher levels until a basis (of dimension ki) is found for the null space of (A − λiI)^ki.
Step 6: Now the complete structure of A relative to λi is known.
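Step 1's nullity sequence already determines the block structure: the nullity of (A − λiI) counts the Jordan blocks for λi, and each subsequent jump counts the blocks of at least that size. A rough sketch, assuming SciPy and a hypothetical example matrix:

    import numpy as np
    from scipy.linalg import null_space

    def nullity_sequence(A, lam, multiplicity):
        """Nullities of (A - lam*I)^j for j = 1, 2, ... until the
        multiplicity is reached (Step 1 of the procedure above)."""
        B = A - lam * np.eye(A.shape[0])
        P = np.eye(A.shape[0])
        seq = []
        while not seq or seq[-1] < multiplicity:
            P = P @ B
            seq.append(null_space(P).shape[1])
        return seq

    # lambda = 2 with multiplicity 3: one 2x2 block and one 1x1 block.
    A = np.array([[2.0, 1.0, 0.0],
                  [0.0, 2.0, 0.0],
                  [0.0, 0.0, 2.0]])
    print(nullity_sequence(A, 2.0, 3))   # [2, 3]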
A set V whose operations satisfy the requirements listed below is said to be a real vector space. Here V is a set on which addition and scalar multiplication are defined, x, y, z ∈ V, and α and β are real numbers. The axioms of a vector space are:

1. x + y ∈ V and αx ∈ V (closure)
2. x + y = y + x
3. (x + y) + z = x + (y + z)
4. There is a zero vector 0 ∈ V such that x + 0 = x
5. For each x ∈ V there is a −x ∈ V such that x + (−x) = 0
6. α(x + y) = αx + αy and (α + β)x = αx + βx
7. (αβ)x = α(βx)
8. 1·x = x
An inner product of two vectors a = (α1, α2, ..., αM) and b = (β1, β2, ..., βM) in R^M, denoted (a, b), is the real number (a, b) = α1β1 + α2β2 + ... + αMβM. More generally, if V is a real vector space, a function that assigns to every pair of vectors x and y in V a real number (x, y) is said to be an inner product on V if it has the following properties:
1. (x, x) ≥ 0, and (x, x) = 0 iff x = 0
2. (x, y + z) = (x, y) + (x, z) and (x + y, z) = (x, z) + (y, z)
3. (αx, y) = α(x, y) and (x, βy) = β(x, y)
4. (x, y) = (y, x)
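These four properties are easy to confirm numerically for the dot product on R^M; a quick sketch with randomly chosen (hypothetical) vectors and a scalar:

    import numpy as np

    rng = np.random.default_rng(0)
    x, y, z = rng.standard_normal((3, 3))   # three random vectors in R^3
    a = rng.standard_normal()               # a random scalar

    assert np.dot(x, x) >= 0                                          # property 1
    assert np.isclose(np.dot(x, y + z), np.dot(x, y) + np.dot(x, z))  # property 2
    assert np.isclose(np.dot(a * x, y), a * np.dot(x, y))             # property 3
    assert np.isclose(np.dot(x, y), np.dot(y, x))                     # property 4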
If A = [aij] is an M × N matrix, its transpose A^T is the N × M matrix [bij] with bij = aji. A is symmetric if A = A^T (which requires M = N).
If U is a real M × M matrix and U^T U = I_M, U is said to be an orthogonal matrix. Clearly, any orthogonal matrix is invertible and U^-1 = U^T, where U^T denotes the transpose of U.
Two real matrices C and D are said to be similar if there is a real invertible matrix B such that B^-1 C B = D. If A and B are two M × M matrices and there is an orthogonal matrix U such that A = U^-1 B U, then A and B are orthogonally similar. A real matrix is diagonalizable over the reals if it is similar over the reals to some real diagonal matrix. In other words, an M × M matrix A is diagonalizable over the reals iff R^M has a basis* consisting of eigenvectors of A.
* Definition: Let V be a vector space and {x1, x2, ..., xM} be a collection of vectors in V. The set {x1, x2, ..., xM} is said to be a basis for V if, (1) {x1, x2, ..., xM} is a linearly independent set of vectors; and (2) {x1, x2, ..., xM} spans V.
Theorem B.1.G.1: Let A be an M × M real symmetric matrix. Then any eigenvalue of A is real.
Theorem B.1.G.2: Let T be a symmetric linear operator (that is, T(x) = Ax) on a finite dimensional inner product space V. Then V has an orthonormal basis of eigenvectors of T.
As a consequence of Theorem B.1.G.2, the following corollary holds: if A is a real symmetric matrix, then A is orthogonally similar to a real diagonal matrix.
It follows that any M × M symmetric matrix is diagonalizable.
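NumPy's eigh makes the corollary concrete: for a real symmetric A it returns real eigenvalues and an orthogonal U with U^T A U diagonal. A minimal sketch on a hypothetical matrix:

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 2.0]])             # real symmetric

    eigvals, U = np.linalg.eigh(A)         # eigenvalues are real; U is orthogonal
    assert np.allclose(U.T @ U, np.eye(2))             # U^T U = I
    assert np.allclose(U.T @ A @ U, np.diag(eigvals))  # orthogonally similar to diagonal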
As discussed in Chapter 5, Little in 1974 [33] defined a 2^M × 2^M matrix TM whose elements give the probability of a particular state |S1, S2, ..., SM> yielding, after one cycle, the new state |S1′, S2′, ..., SM′>.
Letting Ψ(α) represent the state |S1, S2, ..., SM> and Ψ(α′) represent |S1′, S2′, ..., SM′>, the probability of obtaining the configuration α′ after m cycles is given by:
If TM is diagonalizable, then Ψ(α) can be represented in terms of the eigenvectors alone, since these eigenvectors are linearly independent. Therefore, as referred to in Section B.1, assuming TM is diagonalizable, from (B.5) it can be stated that:

where Ψr are the eigenvectors of the operator TM. There are 2^M such eigenvectors, each of which has 2^M components Ψr(α), one for each configuration α; λr is the rth eigenvalue.
As discussed in Section B.1, if TM were symmetric, the assumption of its diagonalizability would be valid, since any symmetric matrix is diagonalizable. As stated previously, however, it is reasonably certain that TM need not be symmetric. Little initially assumes that TM is symmetric and then shows how his argument can be extended to the situation in which TM is not diagonalizable.
Assuming TM is diagonalizable, Ψ(α) can be represented as:

Ψ(α) = Σr ar Ψr(α)    (B.13)

as is familiar in quantum mechanics; and assuming that the Ψr are normalized to unity, it follows that:

Σα Ψr(α) Ψs(α) = δrs    (B.14)

where δrs = 0 if r ≠ s and δrs = 1 if r = s; and noting that Ψ(α′) can be expressed similarly to Equation (B.13), so that:
Now the probability Γ(α1) of obtaining the configuration α1 after m cycles can be elucidated. It is assumed that after M0 cycles the system returns to the initial configuration; averaging over all initial configurations:
Using Equation (B.14), Equation (B.15) can be simplified to:
Then, by using Equation (B.13), Equation (B.16) can be further reduced to:
Assuming the eigenvalues are nondegenerate and M0 is a very large number, the only significant contribution to Equation (B.17) comes from the maximum eigenvalue. That is:
However, if the maximum eigenvalue of TM is degenerate, or if the largest eigenvalues are sufficiently close in value (for example, |λ1| ≈ |λ2|), then these degenerate eigenvalues all contribute and Equation (B.17) becomes:
In an almost identical procedure to that discussed above, Little found the probability of obtaining a configuration α2 after m′ cycles (m′ > m) as:
if the eigenvalues are nondegenerate and M0 and (m′ − m) are large numbers. Examining Equation (B.19), one finds:
Thus, the influence of the configuration α1 does not affect the probability of obtaining the configuration α2.
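This dominance of the maximum eigenvalue can be illustrated with a toy stochastic matrix standing in for TM (a hypothetical 4 × 4 example, not Little's actual matrix): after many cycles the distribution is determined by the leading eigenvector alone, regardless of the initial configuration.

    import numpy as np

    rng = np.random.default_rng(1)
    T = rng.random((4, 4))
    T /= T.sum(axis=0)              # column-stochastic: maximum eigenvalue is 1

    eigvals, V = np.linalg.eig(T)
    r = np.argmax(np.abs(eigvals))               # index of the maximum eigenvalue
    psi_max = np.real(V[:, r] / V[:, r].sum())   # leading eigenvector, normalized

    p0 = np.zeros(4)
    p0[0] = 1.0                                  # start in one definite configuration
    pm = np.linalg.matrix_power(T, 50) @ p0      # distribution after 50 cycles
    assert np.allclose(pm, psi_max)              # memory of the start is lost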
However, consider the case in which the maximum eigenvalues are degenerate (for example, λ1 = λ2).
The probability of obtaining a configuration α2 is then dependent upon the configuration α1, and thus the influence of α1 persists for an arbitrarily long time.
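Conversely, the persistence of the initial configuration under a degenerate maximum eigenvalue can be seen in a block-diagonal (reducible) toy matrix, again hypothetical: the eigenvalue 1 appears once for each block, and the starting block is remembered indefinitely.

    import numpy as np

    # Eigenvalue 1 appears twice: once for each irreducible block.
    T = np.array([[0.7, 0.3, 0.0, 0.0],
                  [0.3, 0.7, 0.0, 0.0],
                  [0.0, 0.0, 0.6, 0.4],
                  [0.0, 0.0, 0.4, 0.6]])

    Tm = np.linalg.matrix_power(T, 50)
    p_a = Tm @ np.array([1.0, 0.0, 0.0, 0.0])   # started in the first block
    p_b = Tm @ np.array([0.0, 0.0, 1.0, 0.0])   # started in the second block
    assert not np.allclose(p_a, p_b)            # the influence of alpha_1 persists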
It was indicated earlier that Little assumed TM is diagonalizable. However, his results also can be generalized to any arbitrary M × M matrix because any M × M matrix can be put into Jordan form as discussed in Section B.1.D.
In such a case, the representation of Ψ(α) is made in terms of generalized eigenvectors, which satisfy:

(TM − λrI)^k Ψrk = 0  and  (TM − λrI)^(k−1) Ψrk ≠ 0    (B.25)
instead of in terms of the eigenvectors alone, as discussed in Section B.1.D.
The eigenvectors are the generalized vectors of grade 1; for k > 1, Equation (B.25) defines the generalized vectors. Thus, for the particular case k = 1, the results of Equation (B.14) and Equation (B.16) are unchanged. Further, an eigenvector Ψr(α) can be derived from Equation (B.25) as follows:

Ψr(α) = (TM − λrI)^(k−1) Ψrk(α)    (B.26)
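Equation (B.26) is easy to verify numerically on a single Jordan block (a hypothetical 2 × 2 example): applying (A − λI)^(k−1) to a grade-k generalized vector yields an ordinary eigenvector.

    import numpy as np

    lam = 2.0
    A = np.array([[lam, 1.0],
                  [0.0, lam]])           # one 2x2 Jordan block

    e2 = np.array([0.0, 1.0])            # grade 2: (A - lam*I)^2 e2 = 0, (A - lam*I) e2 != 0
    v = (A - lam * np.eye(2)) @ e2       # Equation (B.26) with k = 2
    assert np.allclose(A @ v, lam * v)   # v is an ordinary eigenvector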
Lemma. Let B be an M × M matrix. Let p be a principal vector of grade g ≥ 1 belonging to the eigenvalue μ. Then for large k one has the asymptotic form:

B^k p = C(k, g−1) μ^(k−g+1) e + r(k)

where e is an eigenvector belonging to μ, C(k, g−1) is the binomial coefficient, and where the remainder r(k) = 0 if g = 1 or, for g > 1:

r(k) ≈ k^(g−2) |μ|^k
For the general case one must use the asymptotic form for large m. Hence, in the present case:
where r(m) ≈ m^(k−2) |λ|^m, which yields:
where Ψr(α) is the eigenvector of eigenvalue λr derived from the generalized vector Ψrk(α), as defined in Equation (B.26). By using this form in Equation (B.15), the following is obtained:
The above relation depends on m if there is a degeneracy in the maximum eigenvalues. This means that there is an influence from constraints on, or knowledge of, the system at an earlier time. Also, from Equation (B.13) it is required that the generalized vectors be of the same grade. That is, from Equation (B.15) it follows that:
Therefore, the conditions for persistent order are that the maximum eigenvalues must be degenerate and that their generalized vectors must be of the same grade.