**1. Spectral Sparsifiers**

**1.1. Graph Laplacians**

Let $G = (V, E)$ be an unweighted graph. For notational simplicity, we will think of the vertex set as $V = \{1, \dots, n\}$. Let $e_i$ be the $i$th standard basis vector, meaning that $e_i$ has a $1$ in the $i$th coordinate and $0$s in all other coordinates. For an edge $e = uv \in E$, define the vector $u_e$ and the matrix $L_e$ as follows:

$$u_e \;:=\; e_u - e_v, \qquad L_e \;:=\; u_e u_e^{\mathsf{T}} \;=\; (e_u - e_v)(e_u - e_v)^{\mathsf{T}}.$$

In the definition of $u_e$ it does not matter which vertex gets the $+1$ and which gets the $-1$, because the matrix $L_e$ is the same either way.

Definition 1 The *Laplacian matrix* of $G$ is the matrix

$$L_G \;:=\; \sum_{e \in E} L_e.$$

Let us consider an example: if $G$ is the path graph with $V = \{1, 2, 3\}$ and $E = \{12, 23\}$, then

$$L_G \;=\; L_{12} + L_{23} \;=\; \begin{pmatrix} 1 & -1 & 0 \\ -1 & 1 & 0 \\ 0 & 0 & 0 \end{pmatrix} + \begin{pmatrix} 0 & 0 & 0 \\ 0 & 1 & -1 \\ 0 & -1 & 1 \end{pmatrix} \;=\; \begin{pmatrix} 1 & -1 & 0 \\ -1 & 2 & -1 \\ 0 & -1 & 1 \end{pmatrix}.$$

Note that each matrix $L_e$ has only four non-zero entries: for $e = uv$ we have $(L_e)_{u,u} = (L_e)_{v,v} = 1$ and $(L_e)_{u,v} = (L_e)_{v,u} = -1$. Consequently, the $i$th diagonal entry of $L_G$ is simply the degree of vertex $i$. Moreover, we have the following fact.

Fact 2 Let $D$ be the diagonal matrix with $D_{i,i}$ equal to the degree of vertex $i$. Let $A$ be the adjacency matrix of $G$. Then $L_G = D - A$.
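Fact 2 is easy to check numerically. A minimal numpy sketch, using the path graph from the example above (0-indexed in code), compares the sum of the rank-one edge matrices against $D - A$:

```python
import numpy as np

# Path graph on 3 vertices: edges {0,1} and {1,2} (0-indexed for code).
n = 3
edges = [(0, 1), (1, 2)]

# Build L_G as the sum of the rank-one edge matrices u_e u_e^T.
L = np.zeros((n, n))
for u, v in edges:
    u_e = np.zeros(n)
    u_e[u], u_e[v] = 1.0, -1.0
    L += np.outer(u_e, u_e)

# Build D - A directly from the degrees and the adjacency matrix.
A = np.zeros((n, n))
for u, v in edges:
    A[u, v] = A[v, u] = 1.0
D = np.diag(A.sum(axis=1))

assert np.allclose(L, D - A)               # Fact 2: L_G = D - A
assert np.allclose(np.diag(L), [1, 2, 1])  # diagonal entries are the degrees
```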

If $G$ had weights $w : E \to \mathbb{R}_{\geq 0}$ on the edges, we could define the weighted Laplacian $L_w$ as follows:

$$L_w \;:=\; \sum_{e \in E} w_e L_e.$$

Claim 3 Let $G$ be a graph with non-negative weights $w : E \to \mathbb{R}_{\geq 0}$. Then the weighted Laplacian $L_w$ is positive semi-definite.

*Proof:* Since $L_e = u_e u_e^{\mathsf{T}}$, it is positive semi-definite. So $L_w$ is a weighted sum of positive semi-definite matrices with non-negative coefficients. Fact 5 in the Notes on Symmetric Matrices implies that $L_w$ is positive semi-definite.
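Claim 3 can also be checked numerically: every eigenvalue of a weighted Laplacian is (up to round-off) non-negative. A small sketch, where the triangle graph and its weights are an arbitrary illustrative choice:

```python
import numpy as np

n = 3
edges = [(0, 1), (1, 2), (0, 2)]   # triangle graph (illustrative choice)
weights = [2.0, 0.5, 3.0]          # arbitrary non-negative weights

# Weighted Laplacian: L_w = sum_e w_e * u_e u_e^T.
L_w = np.zeros((n, n))
for (u, v), w in zip(edges, weights):
    u_e = np.zeros(n)
    u_e[u], u_e[v] = 1.0, -1.0
    L_w += w * np.outer(u_e, u_e)

eigs = np.linalg.eigvalsh(L_w)
assert eigs.min() > -1e-9          # positive semi-definite, up to round-off
```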

The Laplacian can tell us many interesting things about the graph. For example:

Claim 4 Let $G$ be a graph with Laplacian $L_G$. For any $U \subseteq V$, let $\chi_U \in \{0,1\}^n$ be the characteristic vector of $U$, i.e., the vector with $\chi_U(i)$ equal to $1$ if $i \in U$ and equal to $0$ otherwise. Then $\chi_U^{\mathsf{T}} L_G \chi_U = |\delta(U)|$, where $\delta(U)$ denotes the set of edges with exactly one endpoint in $U$.

*Proof:* For any edge $e = uv$ we have $\chi_U^{\mathsf{T}} L_e \chi_U = \big(\chi_U(u) - \chi_U(v)\big)^2$. But $\chi_U(u) - \chi_U(v)$ is $\pm 1$ if exactly one of $u$ or $v$ is in $U$, and otherwise it is $0$. So $\chi_U^{\mathsf{T}} L_e \chi_U$ is $1$ if $e \in \delta(U)$, and otherwise it is $0$. Summing over all edges proves the claim.

Similarly, if $G$ is a graph with edge weights $w$ and $L_w$ is the weighted Laplacian, then $\chi_U^{\mathsf{T}} L_w \chi_U = w(\delta(U)) := \sum_{e \in \delta(U)} w_e$.
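Both identities can be verified by brute force over all subsets $U$. A numpy sketch on a hypothetical 4-cycle with arbitrary weights:

```python
import itertools
import numpy as np

n = 4
edges = [(0, 1), (1, 2), (2, 3), (0, 3)]   # 4-cycle (illustrative choice)
weights = [1.0, 2.0, 3.0, 4.0]             # arbitrary non-negative weights

L = np.zeros((n, n))       # unweighted Laplacian
L_w = np.zeros((n, n))     # weighted Laplacian
for (u, v), w in zip(edges, weights):
    u_e = np.zeros(n)
    u_e[u], u_e[v] = 1.0, -1.0
    L += np.outer(u_e, u_e)
    L_w += w * np.outer(u_e, u_e)

for r in range(n + 1):
    for U in itertools.combinations(range(n), r):
        chi = np.zeros(n)
        chi[list(U)] = 1.0
        # Edges with exactly one endpoint in U, and their total weight.
        cut = [(u, v) for u, v in edges if (u in U) != (v in U)]
        w_cut = sum(w for (u, v), w in zip(edges, weights)
                    if (u in U) != (v in U))
        assert np.isclose(chi @ L @ chi, len(cut))    # Claim 4
        assert np.isclose(chi @ L_w @ chi, w_cut)     # weighted version
```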

Fact 5 If $G$ is connected then $\operatorname{im}(L_G) = \{\, x \in \mathbb{R}^n : \textstyle\sum_i x_i = 0 \,\}$, which is an $(n-1)$-dimensional subspace.
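Fact 5 can be checked numerically: for a connected graph the Laplacian has rank $n-1$, the all-ones vector spans its kernel, and every vector in its image has coordinates summing to zero. A sketch on a hypothetical path graph:

```python
import numpy as np

n = 5
edges = [(i, i + 1) for i in range(n - 1)]   # path graph: connected

L = np.zeros((n, n))
for u, v in edges:
    u_e = np.zeros(n)
    u_e[u], u_e[v] = 1.0, -1.0
    L += np.outer(u_e, u_e)

assert np.linalg.matrix_rank(L) == n - 1     # image is (n-1)-dimensional
assert np.allclose(L @ np.ones(n), 0)        # all-ones vector is in the kernel
x = np.random.default_rng(0).standard_normal(n)
assert np.isclose((L @ x).sum(), 0)          # image lies in {y : sum(y) = 0}
```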

**1.2. Main Theorem**

Theorem 6 Let $G = (V, E)$ be a graph with $n = |V|$. There is a randomized algorithm to compute weights $w : E \to \mathbb{R}_{\geq 0}$ such that:

- only $O(n \log n / \epsilon^2)$ of the weights are non-zero, and
- with probability at least $1/2$,

$$(1-\epsilon)\, L_G \;\preceq\; L_w \;\preceq\; (1+\epsilon)\, L_G, \tag{1}$$

where $L_w$ denotes the weighted Laplacian of $G$ with weights $w$. By Fact 4 in Notes on Symmetric Matrices, this is equivalent to

$$(1-\epsilon)\, x^{\mathsf{T}} L_G x \;\leq\; x^{\mathsf{T}} L_w x \;\leq\; (1+\epsilon)\, x^{\mathsf{T}} L_G x \qquad \text{for all } x \in \mathbb{R}^n.$$

By (1) and Claim 4, the resulting weights $w$ are a graph sparsifier of $G$: for every $U \subseteq V$,

$$(1-\epsilon)\, |\delta(U)| \;\leq\; w(\delta(U)) \;\leq\; (1+\epsilon)\, |\delta(U)|.$$

The algorithm that proves Theorem 6 is as follows.

- Initially $w_e = 0$ for every $e \in E$.
- Set $k = \Theta(n \log n / \epsilon^2)$.
- For every edge $e \in E$ compute $p_e := \dfrac{u_e^{\mathsf{T}} L_G^{+} u_e}{n-1}$, where $L_G^{+}$ is the Moore-Penrose pseudoinverse of $L_G$.
- For $i = 1, \dots, k$:
    - Let $e$ be a random edge chosen with probability $p_e$.
    - Increase $w_e$ by $\dfrac{1}{k\, p_e}$.
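The steps above can be sketched in numpy, assuming a connected input graph; `np.linalg.pinv` computes $L_G^{+}$ densely, and the constant in the choice of $k$ is an illustrative assumption. This is a sketch of the sampling scheme, not the nearly-linear-time implementation:

```python
import numpy as np

def sparsify(n, edges, eps=0.5, seed=0):
    """Sketch of the sampling algorithm above (dense, for illustration)."""
    rng = np.random.default_rng(seed)

    # Edge vectors u_e as rows; then L_G = sum_e u_e u_e^T = U^T U.
    U = np.zeros((len(edges), n))
    for idx, (u, v) in enumerate(edges):
        U[idx, u], U[idx, v] = 1.0, -1.0
    L = U.T @ U

    # p_e = u_e^T L^+ u_e / (n - 1): normalized effective resistances.
    Lpinv = np.linalg.pinv(L)
    p = np.einsum('ei,ij,ej->e', U, Lpinv, U) / (n - 1)
    p = p / p.sum()                                # guard against round-off

    # k = Theta(n log n / eps^2); the constant 4 is an assumption.
    k = int(np.ceil(4 * n * np.log(n) / eps**2))

    w = np.zeros(len(edges))
    for e in rng.choice(len(edges), size=k, p=p):  # sample edge e w.p. p_e
        w[e] += 1.0 / (k * p[e])                   # increase w_e by 1/(k p_e)
    return w

# Hypothetical example: complete graph on 8 vertices.
n = 8
edges = [(u, v) for u in range(n) for v in range(u + 1, n)]
w = sparsify(n, edges)
```

By Claim 7 the vector `p` is a genuine probability distribution, which is what makes the `rng.choice` call valid.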

*Proof:* (of Theorem 6). How does the matrix $L_w$ change during the $i$th iteration? The edge $e$ is chosen with probability $p_e$ and then $w_e$ increases by $\frac{1}{k p_e}$. Let $X_i$ be this random change in $L_w$ during the $i$th iteration. So $X_i$ equals $\frac{L_e}{k p_e}$ with probability $p_e$. The random matrices $X_1, \dots, X_k$ are mutually independent and they all have this same distribution. Note that

$$\mathbb{E}[X_i] \;=\; \sum_{e \in E} p_e \cdot \frac{L_e}{k\, p_e} \;=\; \frac{1}{k} \sum_{e \in E} L_e \;=\; \frac{L_G}{k}.$$

The final matrix is simply $L_w = \sum_{i=1}^{k} X_i$. To analyze this final matrix, we will use the Ahlswede-Winter inequality. All that we require are the following claims, which we prove in the appendix.

Claim 7 The values $\{p_e : e \in E\}$ form a probability distribution: $p_e \geq 0$ for every edge $e$, and $\sum_{e \in E} p_e = 1$.

Claim 8 For every edge $e$ we have $L_e \preceq (n-1)\, p_e\, L_G$. Consequently, every realization of $X_i$ satisfies $X_i \preceq \frac{n-1}{k}\, L_G$.

We apply Corollary 2 from the previous lecture with $R = n-1$ (as justified by Claims 7 and 8), obtaining

$$\Pr\big[\, (1-\epsilon)\, L_G \preceq L_w \preceq (1+\epsilon)\, L_G \,\big] \;\geq\; 1 - 2n \exp\!\big(-\epsilon^2 k / 4(n-1)\big),$$

which is at least $1/2$ for a suitable choice of $k = \Theta(n \log n / \epsilon^2)$. This proves Theorem 6.

**2. Appendix: Additional Proofs**

*Proof:* (of Claim 7) First we check that the values $p_e$ are non-negative. By the cyclic property of trace,

$$u_e^{\mathsf{T}} L_G^{+} u_e \;=\; \operatorname{tr}\big( L_G^{+} u_e u_e^{\mathsf{T}} \big) \;=\; \operatorname{tr}\big( L_G^{+} L_e \big).$$

This is non-negative since $L_G^{+} \succeq 0$ because $L_G \succeq 0$. Thus $p_e \geq 0$. Next, note that

$$\sum_{e \in E} \operatorname{tr}\big( L_G^{+} L_e \big) \;=\; \operatorname{tr}\big( L_G^{+} L_G \big) \;=\; \operatorname{tr}(\Pi),$$

where $\Pi$ is the orthogonal projection onto the image of $L_G$. The image has dimension $n-1$ by Fact 5, and so $\operatorname{tr}(\Pi) = n-1$. Dividing by $n-1$ gives $\sum_{e \in E} p_e = 1$.
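The trace computation can be confirmed numerically. A sketch on a hypothetical small connected graph:

```python
import numpy as np

n = 4
edges = [(0, 1), (1, 2), (2, 3), (0, 2)]   # connected graph (illustrative)

L = np.zeros((n, n))
edge_vecs = []
for u, v in edges:
    u_e = np.zeros(n)
    u_e[u], u_e[v] = 1.0, -1.0
    edge_vecs.append(u_e)
    L += np.outer(u_e, u_e)

Lpinv = np.linalg.pinv(L)
p = [u_e @ Lpinv @ u_e / (n - 1) for u_e in edge_vecs]

assert all(pe >= 0 for pe in p)                   # p_e >= 0
assert np.isclose(sum(p), 1.0)                    # sum_e p_e = 1
assert np.isclose(np.trace(Lpinv @ L), n - 1)     # tr(Pi) = n - 1
```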

*Proof:* (of Claim 8). The maximum eigenvalue of a positive semi-definite matrix never exceeds its trace, so

$$\lambda_{\max}\big( (L_G^{+})^{1/2} L_e (L_G^{+})^{1/2} \big) \;\leq\; \operatorname{tr}\big( (L_G^{+})^{1/2} L_e (L_G^{+})^{1/2} \big) \;=\; \operatorname{tr}\big( L_G^{+} L_e \big) \;=\; (n-1)\, p_e.$$

By Fact 8 in the Notes on Symmetric Matrices,

$$(L_G^{+})^{1/2} L_e (L_G^{+})^{1/2} \;\preceq\; (n-1)\, p_e \cdot I.$$

So, by Fact 4 in the Notes on Symmetric Matrices, for every vector $x$,

$$x^{\mathsf{T}} (L_G^{+})^{1/2} L_e (L_G^{+})^{1/2} x \;\leq\; (n-1)\, p_e \, x^{\mathsf{T}} x.$$

Applying this with the vector $L_G^{1/2} x$, and using $(L_G^{+})^{1/2} L_G^{1/2} = \Pi$, gives $x^{\mathsf{T}} \Pi L_e \Pi x \leq (n-1)\, p_e\, x^{\mathsf{T}} L_G x$. Now let us write $x = y + z$, where $y$ is the projection of $x$ onto the image of $L_G$ and $z$ is the projection of $x$ onto the kernel of $L_G$. Then $\Pi x = y$ and $L_e z = 0$, since the kernel of $L_G$ is spanned by the all-ones vector, which also lies in the kernel of $L_e$. So

$$x^{\mathsf{T}} L_e x \;=\; y^{\mathsf{T}} L_e y \;=\; x^{\mathsf{T}} \Pi L_e \Pi x \;\leq\; (n-1)\, p_e \, x^{\mathsf{T}} L_G x.$$

Since this holds for every vector $x$, Fact 4 in the Notes on Symmetric Matrices again implies

$$L_e \;\preceq\; (n-1)\, p_e \, L_G.$$

Since $p_e > 0$ for every edge of a connected graph, Claim 16 in the Notes on Symmetric Matrices shows this is equivalent to

$$X_i \;=\; \frac{L_e}{k\, p_e} \;\preceq\; \frac{n-1}{k}\, L_G \qquad \text{whenever edge $e$ is chosen in iteration $i$.}$$
This completes the proof of the claim.
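Claim 8 can be verified numerically by checking, for every edge, both the normalized bound $\lambda_{\max}\big((L_G^{+})^{1/2} L_e (L_G^{+})^{1/2}\big) \leq (n-1)\, p_e$ and the Loewner statement that $(n-1)\, p_e\, L_G - L_e$ is positive semi-definite. A sketch on a hypothetical small connected graph, with $(L_G^{+})^{1/2}$ computed via an eigendecomposition:

```python
import numpy as np

n = 4
edges = [(0, 1), (1, 2), (2, 3), (1, 3)]   # connected graph (illustrative)

L = np.zeros((n, n))
edge_vecs, edge_mats = [], []
for u, v in edges:
    u_e = np.zeros(n)
    u_e[u], u_e[v] = 1.0, -1.0
    edge_vecs.append(u_e)
    edge_mats.append(np.outer(u_e, u_e))
    L += edge_mats[-1]

Lpinv = np.linalg.pinv(L)
vals, vecs = np.linalg.eigh(Lpinv)
S = vecs @ np.diag(np.sqrt(vals.clip(min=0))) @ vecs.T   # (L_G^+)^{1/2}

for u_e, L_e in zip(edge_vecs, edge_mats):
    p_e = u_e @ Lpinv @ u_e / (n - 1)
    lam = np.linalg.eigvalsh(S @ L_e @ S).max()
    assert lam <= (n - 1) * p_e + 1e-9                   # normalized bound
    # Loewner statement: (n-1) p_e L_G - L_e is positive semi-definite.
    assert np.linalg.eigvalsh((n - 1) * p_e * L - L_e).min() > -1e-9
```

Since each $L_e$ has rank one, the maximum eigenvalue above in fact equals the trace $(n-1)\, p_e$, so the first bound is tight.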