
In the last lecture we defined the notion of pairwise independence, and saw some ways to use this concept in reducing the randomness of a randomized algorithm. Today we will apply these ideas to designing hash functions and “perfect” hash … Continue reading
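A minimal sketch of the standard pairwise-independent hash family h(x) = ((ax + b) mod p) mod m, with p prime and a, b drawn uniformly at random (the family names and parameters here are illustrative; the lecture's construction may differ in details):

```python
import random

# Pairwise-independent hashing sketch: for distinct keys x != y, the pair
# (h(x), h(y)) is (close to) uniformly distributed over bucket pairs when
# a, b are chosen uniformly at random and p is prime.
P = 2_147_483_647  # a Mersenne prime, assumed larger than the key universe

def make_hash(m, rng=random):
    """Draw one hash function from the family, mapping keys to [0, m)."""
    a = rng.randrange(1, P)
    b = rng.randrange(0, P)
    return lambda x: ((a * x + b) % P) % m

h = make_hash(100)
buckets = [h(x) for x in range(10)]
assert all(0 <= v < 100 for v in buckets)
```

Only two random seeds (a and b) are needed, which is the randomness saving the excerpt alludes to.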
So far we have seen two concentration bounds for scalar random variables: Markov and Chernoff. For our sort of applications, these are by far the most useful. In most introductory probability courses, you are likely to see another inequality, which … Continue reading
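The "another inequality" from introductory probability is presumably Chebyshev's inequality, Pr[|X − E[X]| ≥ t] ≤ Var[X]/t². A quick numerical illustration of the bound (the choice of test distribution is mine, not the lecture's):

```python
import random
import statistics

# Chebyshev check: X = sum of 12 uniform(0,1) variables, so E[X] = 6 and
# Var[X] = 1. We verify empirically that Pr[|X - E[X]| >= t] <= Var[X]/t^2.
rng = random.Random(0)
samples = [sum(rng.random() for _ in range(12)) for _ in range(100_000)]
mu = statistics.fmean(samples)
var = statistics.pvariance(samples)

t = 2.0
empirical = sum(abs(x - mu) >= t for x in samples) / len(samples)
assert empirical <= var / t**2  # the bound holds (and is quite loose here)
```

As the assertion suggests, Chebyshev is often far from tight, which is why Chernoff-type bounds are preferred when they apply.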
In this lecture we discuss the topic of derandomization: converting a randomized algorithm into a deterministic one. 1. Method of Conditional Expectations One of the simplest methods for derandomizing an algorithm is the “method of conditional expectations”. In some … Continue reading
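A minimal sketch of the method of conditional expectations, applied to MAX-CUT (an illustrative choice; the lecture's worked examples may differ). A uniformly random cut cuts each edge with probability 1/2, so the expected cut size is |E|/2; fixing vertices one at a time while never letting the conditional expectation drop yields a deterministic cut of size at least |E|/2:

```python
def conditional_expectation_cut(n, edges):
    """Deterministically build a cut of size >= |E|/2 by fixing each vertex
    to the side that maximizes the conditional expected cut size."""
    side = {}
    for v in range(n):
        best_side, best_val = 0, -1.0
        for s in (0, 1):
            side[v] = s
            # Conditional expectation: a fully decided edge contributes 0 or 1;
            # an edge with an undecided endpoint contributes 1/2.
            val = 0.0
            for (u, w) in edges:
                if u in side and w in side:
                    val += 1.0 if side[u] != side[w] else 0.0
                else:
                    val += 0.5
            if val > best_val:
                best_side, best_val = s, val
        side[v] = best_side
    return side

edges = [(0, 1), (1, 2), (2, 0), (0, 3)]  # a triangle plus a pendant edge
cut = conditional_expectation_cut(4, edges)
cut_size = sum(cut[u] != cut[w] for (u, w) in edges)
assert cut_size >= len(edges) / 2
```

Each greedy choice keeps the conditional expectation at least |E|/2, so the final (fully determined) cut meets that guarantee.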
1. Low-rank approximation of matrices Let be an arbitrary matrix. We assume . We consider the problem of approximating by a low-rank matrix. For example, we could seek to find a rank matrix minimizing . It is known that a … Continue reading
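The fact the excerpt alludes to is presumably the Eckart–Young theorem: truncating the singular value decomposition to the top k singular values gives the best rank-k approximation in both the spectral and Frobenius norms. A small sketch (matrix sizes here are arbitrary):

```python
import numpy as np

# Best rank-k approximation via truncated SVD (Eckart-Young): the spectral-norm
# error of the truncation equals the (k+1)-st singular value.
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 5))
k = 2

U, s, Vt = np.linalg.svd(A, full_matrices=False)
A_k = U[:, :k] * s[:k] @ Vt[:k]  # keep only the top k singular triples

assert np.linalg.matrix_rank(A_k) == k
assert np.isclose(np.linalg.norm(A - A_k, 2), s[k])
```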
1. Spectral Sparsifiers 1.1. Graph Laplacians Let be an unweighted graph. For notational simplicity, we will think of the vertex set as . Let be the i-th standard basis vector, meaning that it has a 1 in the i-th coordinate and 0s … Continue reading
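A small sketch of the graph Laplacian, assuming (as is standard) that it is assembled edge by edge from rank-one terms of the form (e_u − e_w)(e_u − e_w)ᵀ, where e_i is the i-th standard basis vector; the example graph is mine:

```python
import numpy as np

# Build the Laplacian of a 4-cycle from rank-one edge terms, and check the
# Laplacian quadratic form: x^T L x = sum over edges {u,w} of (x_u - x_w)^2.
n = 4
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]

L = np.zeros((n, n))
for (u, w) in edges:
    b = np.zeros(n)        # b = e_u - e_w
    b[u], b[w] = 1.0, -1.0
    L += np.outer(b, b)

x = np.array([1.0, 2.0, 0.0, -1.0])
assert np.isclose(x @ L @ x, sum((x[u] - x[w]) ** 2 for (u, w) in edges))
assert np.all(np.linalg.eigvalsh(L) >= -1e-9)  # L is positive semidefinite
```

Since each edge contributes a PSD rank-one term, the Laplacian is PSD, which is the starting point for spectral sparsification.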
1. Useful versions of the Ahlswede–Winter Inequality Theorem 1 Let be a random, symmetric, positive semidefinite matrix such that . Suppose for some fixed scalar . Let be independent copies of (i.e., independently sampled matrices with the same distribution as … Continue reading
1. Concentration for sums of random matrices Let be a random real matrix of size . In other words, we have some probability distribution on the space of all matrices, and we let be a matrix obtained by sampling from … Continue reading
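A numerical illustration of the phenomenon these two posts study: the average of independent random symmetric matrices approaches its expectation in spectral norm as the number of samples grows (the quantitative rates are what Ahlswede–Winter-type bounds control). The sampling distribution below is my own illustrative choice:

```python
import numpy as np

# X = v v^T for a standard Gaussian vector v, so E[X] is the identity matrix.
# Averaging n independent samples should concentrate around I for large n.
rng = np.random.default_rng(1)
d = 5

def sample():
    v = rng.standard_normal(d)
    return np.outer(v, v)  # a random rank-one PSD matrix

for n in (10, 10_000):
    avg = sum(sample() for _ in range(n)) / n
    err = np.linalg.norm(avg - np.eye(d), 2)  # spectral-norm deviation
    if n == 10_000:
        assert err < 0.2  # far closer to E[X] = I than at n = 10
```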