1. Symmetric Matrices
We review some basic results concerning symmetric matrices. All matrices that we discuss are over the real numbers.
Let $A$ be a real, symmetric matrix of size $n \times n$ and let $I$ denote the $n \times n$ identity matrix. Perhaps the most important and useful property of symmetric matrices is that their eigenvalues behave very nicely.
Definition 1 Let $U$ be an $n \times n$ matrix. The matrix $U$ is called an orthogonal matrix if $U^{\mathsf T} U = I$. This implies that $U U^{\mathsf T} = I$, by uniqueness of inverses.
Fact 2 (Spectral Theorem). Let $A$ be any $n \times n$ symmetric matrix. There exists an orthogonal matrix $U$ and a (real) diagonal matrix $D$ such that
$$A = U D U^{\mathsf T}. \qquad (1)$$
This is called a spectral decomposition of $A$. Let $u_i$ be the $i$th column of $U$ and let $\lambda_i$ denote the $i$th diagonal entry of $D$. Then $\{u_1, \ldots, u_n\}$ is an orthonormal basis consisting of eigenvectors of $A$, and $\lambda_i$ is the eigenvalue corresponding to $u_i$. We can also write
$$A = \sum_{i=1}^{n} \lambda_i \, u_i u_i^{\mathsf T}. \qquad (2)$$
The eigenvalues $\lambda_1, \ldots, \lambda_n$ are uniquely determined by $A$, up to reordering.
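A small NumPy sketch of the spectral theorem (the matrix below is my own example; `numpy.linalg.eigh` is NumPy's eigendecomposition routine for symmetric matrices):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])          # a symmetric example matrix

eigvals, U = np.linalg.eigh(A)      # eigh is specialized to symmetric matrices
D = np.diag(eigvals)

# U is orthogonal: U^T U = I
assert np.allclose(U.T @ U, np.eye(2))
# A = U D U^T, as in (1)
assert np.allclose(A, U @ D @ U.T)
# A = sum_i lambda_i u_i u_i^T, as in (2); the columns of U are the u_i
assert np.allclose(A, sum(lam * np.outer(u, u) for lam, u in zip(eigvals, U.T)))
```

`eigh` returns the eigenvalues in ascending order, so reordering them (as the uniqueness statement allows) is all that distinguishes one spectral decomposition from another here.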
Caution. The product of two symmetric matrices is usually not symmetric.
1.1. Positive semi-definite matrices
Definition 3 Let $A$ be any symmetric matrix. The matrix $A$ is called positive semi-definite if all of its eigenvalues are non-negative. This is denoted $A \succeq 0$, where here $0$ denotes the zero matrix. The matrix $A$ is called positive definite if all of its eigenvalues are strictly positive. This is denoted $A \succ 0$.
Another equivalent condition is: $A \succeq 0$ if and only if there exists a matrix $B$ such that $A = B^{\mathsf T} B$.
The positive semi-definite condition can be used to define a partial ordering on all symmetric matrices. This is called the Löwner ordering or the positive semi-definite ordering. For any two symmetric matrices $A$ and $B$, we write $A \succeq B$ if $A - B \succeq 0$.
Fact 4 $A \succeq 0$ if and only if $x^{\mathsf T} A x \ge 0$ for all vectors $x$.
Proof: Note that, by (2), $x^{\mathsf T} A x = \sum_{i=1}^{n} \lambda_i (u_i^{\mathsf T} x)^2$. If every $\lambda_i$ is non-negative then this quantity is non-negative for every $x$. Conversely, taking $x = u_j$ gives $x^{\mathsf T} A x = \lambda_j$, so if the quadratic form is non-negative for all $x$ then every eigenvalue is non-negative, which is the definition of $A \succeq 0$. □
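Both characterizations of positive semi-definiteness are easy to check numerically. A sketch (the random matrix construction is mine; it also illustrates the $A = B^{\mathsf T} B$ condition mentioned above):

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
A = B.T @ B                      # B^T B is always positive semi-definite

# Definition 3: all eigenvalues are non-negative
eigvals = np.linalg.eigvalsh(A)
assert eigvals[0] >= -1e-10      # tolerate floating-point round-off

# Fact 4: x^T A x >= 0, checked on a sample of random vectors x
for _ in range(100):
    x = rng.standard_normal(4)
    assert x @ A @ x >= -1e-10
```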
The set of positive semi-definite matrices is closed under addition and non-negative scaling. (Such a set is called a convex cone.)
Fact 5 Let $A$ and $B$ be positive semi-definite matrices of size $n \times n$. Let $\alpha, \beta$ be non-negative scalars. Then $\alpha A + \beta B \succeq 0$.
Proof: This follows easily from Fact 4, since $x^{\mathsf T} (\alpha A + \beta B) x = \alpha \, x^{\mathsf T} A x + \beta \, x^{\mathsf T} B x \ge 0$ for every vector $x$. □
Caution. The Löwner ordering does not have all of the nice properties that the usual ordering of real numbers has. For example, if $A \succeq B \succeq 0$ then it is not necessarily true that $A^2 \succeq B^2$.
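A concrete counterexample (a standard one, not taken from the text) can be verified numerically:

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 1.0]])
B = np.array([[1.0, 0.0], [0.0, 0.0]])

def is_psd(M, tol=1e-10):
    """Check M >= 0 in the Loewner sense via its smallest eigenvalue (M symmetric)."""
    return np.linalg.eigvalsh(M)[0] >= -tol

assert is_psd(B)                  # B is PSD
assert is_psd(A - B)              # A - B = [[1,1],[1,1]] is PSD, so A >= B
assert not is_psd(A @ A - B @ B)  # but A^2 - B^2 has a negative eigenvalue
```

Here $A^2 - B^2 = \begin{pmatrix} 4 & 3 \\ 3 & 2 \end{pmatrix}$ has determinant $-1 < 0$, so one of its eigenvalues is negative.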
Fact 6 Let $A$ and $B$ be symmetric matrices with $A \succeq B$, and let $C$ be any matrix of compatible size. Then $C^{\mathsf T} A C \succeq C^{\mathsf T} B C$. Conversely, if $C$ is non-singular and $C^{\mathsf T} A C \succeq C^{\mathsf T} B C$, then $A \succeq B$.

Proof: Since $A \succeq B$, then $A - B \succeq 0$, so there exists a matrix $M$ such that $A - B = M^{\mathsf T} M$. Then
$$C^{\mathsf T} (A - B) C = C^{\mathsf T} M^{\mathsf T} M C = (M C)^{\mathsf T} (M C),$$
which shows that $C^{\mathsf T} A C - C^{\mathsf T} B C$ is positive semi-definite.

Conversely, suppose that $C$ is non-singular and $C^{\mathsf T} A C \succeq C^{\mathsf T} B C$. Multiply on the left by $(C^{-1})^{\mathsf T}$ and on the right by $C^{-1}$ (using the first part with the matrix $C^{-1}$) to get $A \succeq B$. □
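A quick numerical sketch of the conjugation fact (the matrices are my own examples, chosen so that $A - B = I$):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
M = rng.standard_normal((n, n))
B = M.T @ M                         # positive semi-definite
A = B + np.eye(n)                   # A - B = I, so A >= B
C = rng.standard_normal((n, n))     # an arbitrary conjugating matrix

diff = C.T @ A @ C - C.T @ B @ C    # equals C^T (A - B) C = C^T C here
diff = (diff + diff.T) / 2          # symmetrize away floating-point noise
assert np.linalg.eigvalsh(diff)[0] >= -1e-8   # C^T A C >= C^T B C
```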
1.2. Spectral norm
Just as there are many norms of vectors, there are many norms of matrices too. We will only consider the spectral norm of $A$.
Definition 7 Let $A$ be a symmetric matrix with eigenvalues $\lambda_1, \ldots, \lambda_n$. The spectral norm of $A$ is denoted $\|A\|$ and is defined by $\|A\| = \max_i |\lambda_i|$.
Recall that an eigenvalue of $A$ is any real number $z$ such that there exists a non-zero vector $u$ for which $A u = z u$. Then for any scalar $t$ we also have $(A - tI) u = (z - t) u$, which implies that $z - t$ is an eigenvalue of $A - tI$. In fact, the entire set of eigenvalues of $A - tI$ is $\{\lambda_1 - t, \ldots, \lambda_n - t\}$.
Fact 8 Let $A$ be a symmetric matrix. Let $\lambda_{\min}$ and $\lambda_{\max}$ respectively denote the smallest and largest eigenvalues of $A$. Then $\lambda_{\min} \cdot I \preceq A \preceq \lambda_{\max} \cdot I$.
Proof: Let $\lambda_1, \ldots, \lambda_n$ be all the eigenvalues of $A$. We will show only the second inequality, $A \preceq \lambda_{\max} \cdot I$. This is equivalent to $\lambda_{\max} \cdot I - A \succeq 0$. As observed above, the eigenvalues of $\lambda_{\max} \cdot I - A$ are
$$\{\lambda_{\max} - \lambda_1, \ldots, \lambda_{\max} - \lambda_n\}.$$
The minimum of these eigenvalues is $\lambda_{\max} - \max_i \lambda_i = 0$, so all eigenvalues of $\lambda_{\max} \cdot I - A$ are non-negative, i.e., $\lambda_{\max} \cdot I - A \succeq 0$. □
In particular, for any symmetric matrix $A$ we have $-\|A\| \cdot I \preceq A \preceq \|A\| \cdot I$.
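A numerical sketch of Definition 7 and the inequality above (the matrix is my own example; for a symmetric matrix, NumPy's 2-norm agrees with $\max_i |\lambda_i|$):

```python
import numpy as np

A = np.array([[0.0, 2.0],
              [2.0, 1.0]])                  # an arbitrary symmetric example

eigvals = np.linalg.eigvalsh(A)
spec_norm = np.max(np.abs(eigvals))         # Definition 7

# For symmetric A this equals the largest singular value, i.e. the matrix 2-norm
assert np.isclose(spec_norm, np.linalg.norm(A, 2))

# -||A|| I <= A <= ||A|| I in the Loewner order
assert np.linalg.eigvalsh(spec_norm * np.eye(2) - A)[0] >= -1e-10
assert np.linalg.eigvalsh(A + spec_norm * np.eye(2))[0] >= -1e-10
```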
1.3. Trace

Definition 9 Let $A$ be an arbitrary square matrix (not necessarily symmetric). The trace of $A$, denoted $\operatorname{tr}(A)$, is the sum of the diagonal entries of $A$.
Fact 10 (Linearity of Trace) Let $A$ and $B$ be arbitrary square matrices of the same size and let $\alpha, \beta$ be scalars. Then $\operatorname{tr}(\alpha A + \beta B) = \alpha \operatorname{tr}(A) + \beta \operatorname{tr}(B)$.
Fact 11 (Cyclic Property of Trace) Let $A$ be an arbitrary $n \times m$ matrix and let $B$ be an arbitrary $m \times n$ matrix. Then $\operatorname{tr}(AB) = \operatorname{tr}(BA)$.
Proof: This easily follows from the definitions. The $i$th diagonal entry of $AB$ is $\sum_{j=1}^{m} A_{ij} B_{ji}$, so
$$\operatorname{tr}(AB) = \sum_{i=1}^{n} \sum_{j=1}^{m} A_{ij} B_{ji} = \sum_{j=1}^{m} \sum_{i=1}^{n} B_{ji} A_{ij}$$
by swapping the order of summation. This equals $\operatorname{tr}(BA)$ because the $j$th diagonal entry of $BA$ is $\sum_{i=1}^{n} B_{ji} A_{ij}$. □
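The cyclic property is easy to check numerically, even for rectangular matrices where $AB$ and $BA$ have different sizes (random example matrices, mine):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 5))     # 3 x 5
B = rng.standard_normal((5, 3))     # 5 x 3

# AB is 3x3 while BA is 5x5, yet their traces agree
assert np.isclose(np.trace(A @ B), np.trace(B @ A))
```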
Fact 12 Let $A$ be a symmetric, $n \times n$ matrix with eigenvalues $\lambda_1, \ldots, \lambda_n$. Then $\operatorname{tr}(A) = \sum_{i=1}^{n} \lambda_i$.

Proof: Writing $A = U D U^{\mathsf T}$ as in (1), the cyclic property gives $\operatorname{tr}(A) = \operatorname{tr}(U D U^{\mathsf T}) = \operatorname{tr}(D U^{\mathsf T} U) = \operatorname{tr}(D) = \sum_{i=1}^{n} \lambda_i$. □

Claim 13 Let $A$ and $B$ be positive semi-definite matrices of the same size. Then $\operatorname{tr}(AB) \ge 0$.

Proof: Suppose additionally that $B$ is rank-one, i.e., $B = x x^{\mathsf T}$ for some vector $x$. Then, by the cyclic property and Fact 4,
$$\operatorname{tr}(AB) = \operatorname{tr}(A x x^{\mathsf T}) = \operatorname{tr}(x^{\mathsf T} A x) = x^{\mathsf T} A x \ge 0.$$
Now we consider the case where $B$ has arbitrary rank. Let $B = \sum_{i=1}^{n} \lambda_i u_i u_i^{\mathsf T}$ be a spectral decomposition of $B$. Since we assume that $B \succeq 0$ we can set $x_i = \sqrt{\lambda_i} \, u_i$ and write $B = \sum_{i=1}^{n} x_i x_i^{\mathsf T}$. Then
$$\operatorname{tr}(AB) = \operatorname{tr}\Bigl( A \sum_{i=1}^{n} x_i x_i^{\mathsf T} \Bigr) = \operatorname{tr}\Bigl( \sum_{i=1}^{n} A x_i x_i^{\mathsf T} \Bigr) = \sum_{i=1}^{n} \operatorname{tr}(A x_i x_i^{\mathsf T}) \ge 0.$$
Here the second and third equalities are by linearity of trace, and the inequality follows from the rank-one case. □
Corollary 14 If $A \succeq 0$ then $\operatorname{tr}(AB) \le \|B\| \cdot \operatorname{tr}(A)$ for every symmetric matrix $B$.

Proof: Apply Claim 13 with the matrices $A$ and $\|B\| \cdot I - B$; the latter is positive semi-definite since every eigenvalue of $B$ is at most $\|B\|$. We get
$$0 \;\le\; \operatorname{tr}\bigl( A (\|B\| \cdot I - B) \bigr) \;=\; \|B\| \cdot \operatorname{tr}(A) - \operatorname{tr}(AB)$$
by linearity of trace, since $\|B\|$ is a scalar. □
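The trace facts above can all be checked on random examples (the matrices below are mine):

```python
import numpy as np

rng = np.random.default_rng(3)
M = rng.standard_normal((4, 4))
A = M.T @ M                          # positive semi-definite
N = rng.standard_normal((4, 4))
C = N.T @ N                          # another PSD matrix
B = rng.standard_normal((4, 4))
B = (B + B.T) / 2                    # an arbitrary symmetric matrix

# Fact 12: the trace equals the sum of the eigenvalues
assert np.isclose(np.trace(A), np.linalg.eigvalsh(A).sum())

# Claim 13: tr(AC) >= 0 for PSD A and C
assert np.trace(A @ C) >= -1e-8

# Corollary 14: tr(AB) <= ||B|| tr(A) for symmetric B
norm_B = np.max(np.abs(np.linalg.eigvalsh(B)))
assert np.trace(A @ B) <= norm_B * np.trace(A) + 1e-8
```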
1.4. Functions applied to matrices
For any function $f : \mathbb{R} \to \mathbb{R}$, we can extend $f$ to a function on symmetric matrices as follows. Let $A = U D U^{\mathsf T}$ be a spectral decomposition of $A$ where $\lambda_i$ is the $i$th diagonal entry of $D$. Then we define
$$f(A) \;=\; U \begin{pmatrix} f(\lambda_1) & & \\ & \ddots & \\ & & f(\lambda_n) \end{pmatrix} U^{\mathsf T} \;=\; \sum_{i=1}^{n} f(\lambda_i) \, u_i u_i^{\mathsf T}.$$
In other words, $f(A)$ is defined simply by applying the function $f$ to the eigenvalues of $A$.
Caution. $f(A)$ is not obtained by applying the function $f$ to the entries of $A$!
One important example of applying a function to a matrix is powering. Let $A$ be any positive semi-definite matrix with spectral decomposition $A = U D U^{\mathsf T}$. For any $c \in \mathbb{R}$ we can define $A^c$ as follows. Let $D^c$ be the diagonal matrix with $(D^c)_{ii} = (D_{ii})^c$. Then we define $A^c = U D^c U^{\mathsf T}$. For example, consider the case $c = 1/2$. Note that
$$A^{1/2} \cdot A^{1/2} \;=\; U D^{1/2} U^{\mathsf T} U D^{1/2} U^{\mathsf T} \;=\; U D^{1/2} D^{1/2} U^{\mathsf T} \;=\; U D U^{\mathsf T} \;=\; A.$$
So the square of the square root is the matrix itself, as one would expect. If $A$ is non-singular, the matrix $A^{-1}$ obtained by taking $c = -1$ is the same as the usual matrix inverse (by uniqueness of inverses, since $A^{-1} A = U D^{-1} U^{\mathsf T} U D U^{\mathsf T} = I$). So we see that the inverse of a non-singular symmetric matrix is obtained by inverting its eigenvalues.
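A sketch of matrix powering via the eigenvalues (the helper `matrix_power_sym` and the example matrix are mine):

```python
import numpy as np

def matrix_power_sym(A, c):
    """A^c for a positive semi-definite symmetric A, via f(x) = x^c on the eigenvalues."""
    eigvals, U = np.linalg.eigh(A)
    return U @ np.diag(eigvals ** c) @ U.T

M = np.array([[2.0, 1.0],
              [1.0, 2.0]])           # PSD: eigenvalues 1 and 3

sqrt_M = matrix_power_sym(M, 0.5)
assert np.allclose(sqrt_M @ sqrt_M, M)          # (M^{1/2})^2 = M

inv_M = matrix_power_sym(M, -1.0)
assert np.allclose(inv_M, np.linalg.inv(M))     # c = -1 gives the usual inverse
```

Note that `sqrt_M` is not the entrywise square root of `M`, matching the caution above.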
Inequalities on real-valued functions also give us inequalities on matrices.

Claim 15 Let $f, g : \mathbb{R} \to \mathbb{R}$ satisfy $f(x) \le g(x)$ for all $x \in [l, u]$. Let $A$ be a symmetric matrix all of whose eigenvalues lie in $[l, u]$. Then $f(A) \preceq g(A)$.

Proof: Let $A = U D U^{\mathsf T}$ be a spectral decomposition of $A$ where $\lambda_i$ is the $i$th diagonal entry of $D$. We wish to show $f(A) \preceq g(A)$, i.e., $g(A) - f(A) \succeq 0$. By Fact 4, this is equivalent to
$$x^{\mathsf T} U \operatorname{diag}\bigl( g(\lambda_1) - f(\lambda_1), \ldots, g(\lambda_n) - f(\lambda_n) \bigr) U^{\mathsf T} x \;\ge\; 0 \qquad \text{for all } x.$$
Since $U$ is non-singular this is equivalent to
$$y^{\mathsf T} \operatorname{diag}\bigl( g(\lambda_1) - f(\lambda_1), \ldots, g(\lambda_n) - f(\lambda_n) \bigr) y \;\ge\; 0 \qquad \text{for all } y,$$
where we have simply replaced $y = U^{\mathsf T} x$. The left-hand side is simply $\sum_{i=1}^{n} \bigl( g(\lambda_i) - f(\lambda_i) \bigr) y_i^2$, which is non-negative by our assumptions on $f$, $g$ and the $\lambda_i$'s. □
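For a numerical illustration of this kind of inequality: since $1 + x \le e^x$ for every real $x$, we get $I + A \preceq e^A$ for any symmetric $A$. A sketch (the helper and the random matrix are mine):

```python
import numpy as np

def apply_fn(A, f):
    """f(A) = U diag(f(lambda_i)) U^T for a symmetric matrix A."""
    eigvals, U = np.linalg.eigh(A)
    return U @ np.diag(f(eigvals)) @ U.T

rng = np.random.default_rng(5)
X = rng.standard_normal((3, 3))
A = (X + X.T) / 2                    # an arbitrary symmetric matrix

# 1 + x <= e^x on all of R, hence I + A <= e^A in the Loewner order
diff = apply_fn(A, np.exp) - (np.eye(3) + A)
assert np.linalg.eigvalsh(diff)[0] >= -1e-8
```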
1.5. The pseudoinverse
Sometimes we would like to invert a matrix which is not quite invertible. A useful alternative is to invert the part of the matrix that is invertible (its image) and to leave alone the part of the matrix that is not invertible (its kernel).
We can do this by applying the real-valued function
$$f(x) = \begin{cases} 1/x & \text{if } x \ne 0 \\ 0 & \text{if } x = 0. \end{cases}$$
The function $f$ inverts all non-zero numbers and maps $0$ to $0$. So if we apply $f$ to a symmetric matrix $A$, all non-zero eigenvalues will be inverted, and the zero eigenvalues will remain unchanged. The resulting matrix is called the pseudoinverse and is denoted $A^{+}$. If $A$ is indeed invertible then $A^{+} = A^{-1}$.
Suppose $A$ is not invertible. We cannot have $A^{+} A = I$ since $I$ has full rank and $A^{+} A$ does not. However, we have the next best thing: $A^{+} A$ is the identity operator on the image of $A$. More precisely, $A^{+} A$ is the orthogonal projection onto the image of $A$, which we denote $I_{\operatorname{im} A}$. In other words, for every $x$ in the image of $A$ we have $A^{+} A x = x$, and for every $x$ in the kernel of $A$ we have $A^{+} A x = 0$.
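A sketch with a singular rank-one matrix (my own example; `numpy.linalg.pinv` computes the Moore-Penrose pseudoinverse, which for symmetric matrices agrees with the eigenvalue definition above):

```python
import numpy as np

v = np.array([1.0, 2.0])
A = np.outer(v, v)                 # rank 1, hence singular

A_pinv = np.linalg.pinv(A)         # Moore-Penrose pseudoinverse
P = A_pinv @ A                     # should be the projection onto im(A)

# P acts as the identity on the image of A (which is spanned by v) ...
assert np.allclose(P @ v, v)
# ... and as zero on the kernel of A (w is orthogonal to v, spanning ker(A))
w = np.array([2.0, -1.0])
assert np.allclose(P @ w, 0.0)
# P is an orthogonal projector: P^2 = P and P = P^T
assert np.allclose(P @ P, P)
assert np.allclose(P, P.T)
```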
Claim 16 Let $A$ and $B$ be symmetric, positive semi-definite matrices of the same size. Let $B^{+/2}$ denote the square root of the pseudoinverse of $B$. Let $\alpha$ and $\beta$ be arbitrary scalars. Suppose that $\operatorname{im}(A)$ is a subspace of $\operatorname{im}(B)$. Then
$$\alpha B \preceq A \preceq \beta B \qquad \iff \qquad \alpha \cdot I_{\operatorname{im} B} \preceq B^{+/2} A B^{+/2} \preceq \beta \cdot I_{\operatorname{im} B}.$$
Proof: ($\Rightarrow$): Since $B^{+/2} (\alpha B) B^{+/2} = \alpha \cdot I_{\operatorname{im} B}$ and $B^{+/2} (\beta B) B^{+/2} = \beta \cdot I_{\operatorname{im} B}$, this follows from Fact 6, conjugating by $B^{+/2}$.

($\Leftarrow$): This also follows from Fact 6, conjugating by $B^{1/2}$, because
$$B^{1/2} \bigl( B^{+/2} A B^{+/2} \bigr) B^{1/2} \;=\; I_{\operatorname{im} B} \, A \, I_{\operatorname{im} B} \;=\; A,$$
where the second equality follows from the assumption that $\operatorname{im}(A)$ is a subspace of $\operatorname{im}(B)$. □
1.6. Matrix exponentials
We will mainly be interested in the case $f(x) = e^x$. For any symmetric matrix $A$, note that $e^A$ is positive semi-definite, since its eigenvalues $e^{\lambda_i}$ are all positive. Whereas in the scalar case $e^{x+y} = e^x e^y$ holds, it is not necessarily true in the matrix case that $e^{A+B} = e^A e^B$. It turns out that there is a useful inequality along these lines. It is called the Golden-Thompson inequality.
Theorem 17 (Golden-Thompson Inequality) Let $A$ and $B$ be symmetric matrices. Then $\operatorname{tr}(e^{A+B}) \le \operatorname{tr}(e^A e^B)$.
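A numerical sketch (the helper `sym_expm` computes $e^A$ by exponentiating the eigenvalues, as in Section 1.4; the random matrices are mine):

```python
import numpy as np

def sym_expm(A):
    """e^A for a symmetric matrix A, via its spectral decomposition."""
    eigvals, U = np.linalg.eigh(A)
    return U @ np.diag(np.exp(eigvals)) @ U.T

rng = np.random.default_rng(4)
X = rng.standard_normal((3, 3))
A = (X + X.T) / 2
Y = rng.standard_normal((3, 3))
B = (Y + Y.T) / 2

# e^{A+B} and e^A e^B differ in general (A and B do not commute here) ...
assert not np.allclose(sym_expm(A + B), sym_expm(A) @ sym_expm(B))
# ... but Golden-Thompson bounds the traces:
assert np.trace(sym_expm(A + B)) <= np.trace(sym_expm(A) @ sym_expm(B)) + 1e-8
```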