## Lecture 18: Perfect Hashing

In the last lecture we defined the notion of pairwise independence, and saw some ways to use this concept in reducing the randomness of a randomized algorithm. Today we will apply these ideas to designing hash functions and “perfect” hash …
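The excerpt only teases the construction, but the standard pairwise-independent family such lectures build on can be sketched quickly. A minimal illustration (the name `make_hash` and the choice of prime are mine, not the lecture's):

```python
import random

# Sketch of the classic pairwise-independent family
#   h_{a,b}(x) = ((a*x + b) mod p) mod m,
# where p is a prime at least as large as the key universe.
# For distinct keys x != y, the pair (h(x), h(y)) is distributed
# as it would be under a truly random function.
def make_hash(p, m):
    a = random.randrange(1, p)   # a uniform over {1, ..., p-1}
    b = random.randrange(p)      # b uniform over {0, ..., p-1}
    return lambda x: ((a * x + b) % p) % m

p = 2_147_483_647                # the Mersenne prime 2^31 - 1 (illustrative)
h = make_hash(p, m=100)
print(0 <= h(42) < 100)          # every key lands in a bucket 0..99 -> True
```

Perfect hashing then removes the remaining collisions, e.g. by rehashing each bucket with a fresh function drawn from the same family.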

## Lecture 17: Chebyshev’s Inequality, Pairwise Independence

So far we have seen two concentration bounds for scalar random variables: Markov and Chernoff. For our sort of applications, these are by far the most useful. In most introductory probability courses, you are likely to see another inequality, which …
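Since the excerpt only names the inequality, a quick numeric sanity check of Chebyshev's bound $\Pr[|X - \mu| \ge t] \le \mathrm{Var}(X)/t^2$ may help; the binomial example and sample size here are mine, not the lecture's:

```python
import random

# Numeric sanity check of Chebyshev's inequality:
#   Pr[|X - mu| >= t] <= Var(X) / t^2.
# X = sum of 10 fair coin flips, so mu = 5 and Var(X) = 10 * 1/4 = 2.5.
random.seed(0)
n = 100_000
samples = [sum(random.randint(0, 1) for _ in range(10)) for _ in range(n)]

mu, var, t = 5.0, 2.5, 3.0
empirical = sum(abs(x - mu) >= t for x in samples) / n
bound = var / t**2               # = 2.5 / 9 ~ 0.278

print(empirical <= bound)        # the empirical tail respects the bound -> True
```

(The true tail probability here is about 0.109, so Chebyshev's bound of about 0.278 is loose but valid, which is exactly the point the comparison with Chernoff makes.)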


## Lecture 16: Derandomization: Method of Conditional Expectations, Method of Pessimistic Estimators

In this lecture we discuss the topic of derandomization — converting a randomized algorithm into a deterministic one. 1. Method of Conditional Expectations One of the simplest methods for derandomizing an algorithm is the “method of conditional expectations”. In some …
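The canonical instance of the method of conditional expectations is derandomizing the random-cut algorithm for MAX-CUT; the excerpt does not show which examples the lecture uses, so the following is a minimal sketch under that assumption:

```python
# Method of conditional expectations, sketched on MAX-CUT:
# fix vertices one at a time, always choosing the side that does not
# decrease the conditional expected cut size. Edges between two unplaced
# vertices contribute 1/2 in expectation either way, so only edges to
# already-placed neighbours matter at each step.
def greedy_cut(n, edges):
    side = {}
    for v in range(n):
        gain_if_0 = sum(1 for (u, w) in edges
                        if (u == v and side.get(w) == 1)
                        or (w == v and side.get(u) == 1))
        gain_if_1 = sum(1 for (u, w) in edges
                        if (u == v and side.get(w) == 0)
                        or (w == v and side.get(u) == 0))
        side[v] = 0 if gain_if_0 >= gain_if_1 else 1
    return side

# Illustrative input: a 4-cycle. The greedy choices guarantee a cut of
# size at least |E|/2, matching the randomized algorithm's expectation.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
side = greedy_cut(4, edges)
cut = sum(side[u] != side[v] for (u, v) in edges)
print(cut)  # -> 4 (here the greedy cut happens to be optimal)
```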


## Lecture 15: Low-rank approximation of matrices

1. Low-rank approximation of matrices Let $A$ be an arbitrary $n \times m$ matrix. We consider the problem of approximating $A$ by a low-rank matrix. For example, we could seek to find a rank-$k$ matrix $B$ minimizing $\|A - B\|$. It is known that a …
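For the Frobenius or spectral norm, the minimizer is the truncated SVD (the Eckart–Young theorem); a small illustration with generic symbols, not necessarily the lecture's notation:

```python
import numpy as np

# Best rank-k approximation via the truncated SVD (Eckart-Young):
# keep the k largest singular values and the corresponding singular vectors.
def best_rank_k(A, k):
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :k] * s[:k] @ Vt[:k, :]   # scale columns of U, then multiply

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))           # illustrative random matrix
B = best_rank_k(A, k=2)

print(np.linalg.matrix_rank(B))           # -> 2
```

The Frobenius error of this approximation equals the norm of the discarded tail of singular values, which is what makes the randomized low-rank methods in the lecture easy to benchmark.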

## Lecture 14: Spectral sparsifiers

1. Spectral Sparsifiers 1.1. Graph Laplacians Let $G = (V, E)$ be an unweighted graph. For notational simplicity, we will think of the vertex set as $V = \{1, \dots, n\}$. Let $e_i$ be the $i$th standard basis vector, meaning that $e_i$ has a $1$ in the $i$th coordinate and $0$s …
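The standard definition this setup is heading toward writes the Laplacian as a sum of edge outer products, $L = \sum_{\{u,v\} \in E} (e_u - e_v)(e_u - e_v)^\top$; a small numeric check on an illustrative triangle graph:

```python
import numpy as np

# Graph Laplacian as a sum of rank-one edge terms:
#   L = sum over edges {u, v} of (e_u - e_v)(e_u - e_v)^T.
# This makes L visibly symmetric and positive semidefinite.
def laplacian(n, edges):
    L = np.zeros((n, n))
    I = np.eye(n)
    for (u, v) in edges:
        x = I[u] - I[v]          # the vector e_u - e_v
        L += np.outer(x, x)
    return L

edges = [(0, 1), (1, 2), (2, 0)]  # illustrative: a triangle
L = laplacian(3, edges)

# Sanity checks: rows sum to zero, and the diagonal holds the degrees.
print(L.sum(axis=1))   # -> [0. 0. 0.]
print(np.diag(L))      # -> [2. 2. 2.]
```

Equivalently, $L = D - A$ for degree matrix $D$ and adjacency matrix $A$; the rank-one form is the one sparsification arguments use, since a sparsifier just reweights and drops edge terms.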