Teaching Assistant: Yuchen Zhang
Class time and Location:
Prerequisites: General mathematical sophistication; and a solid understanding of Algorithms, Linear Algebra, and Probability Theory, at the advanced undergraduate or beginning graduate level, or equivalent.
Course requirements: Most likely, three homeworks (ca. 15-20% each), scribe a lecture (ca. 10%), and a major project (ca. 40%).
Lecture Notes: pdf
Main References:
Mahoney,
"Randomized Algorithms for Matrices and Data,"
FnTML 2011.
(arXiv)
Lecture Notes: pdf
Main References:
Drineas, Kannan, and Mahoney,
"Fast Monte Carlo Algorithms for Matrices I: Approximating Matrix Multiplication"
Lecture Notes: pdf
Main References:
If the material was too foreign, take a look at, e.g., the first few
chapters of "Probability and Computing," by Mitzenmacher and Upfal.
Lecture Notes: pdf
Main References:
Appendix of:
Recht,
"A Simpler Approach to Matrix Completion"
Oliveira,
"Sums of random Hermitian matrices and an inequality by Rudelson"
Drineas, Kannan, and Mahoney,
"Fast Monte Carlo Algorithms for Matrices I: Approximating Matrix Multiplication"
Lecture Notes: pdf
Main References:
Dasgupta and Gupta,
"An elementary proof of a theorem of Johnson and Lindenstrauss"
Appendix of:
Drineas, Mahoney, Muthukrishnan, and Sarlos,
"Faster Least Squares Approximation"
Achlioptas,
"Database-friendly random projections: Johnson-Lindenstrauss with binary coins"
Lecture Notes: pdf
Main References:
Chapter 4 of:
Mahoney,
"Randomized Algorithms for Matrices and Data"
Drineas, Mahoney, Muthukrishnan, and Sarlos,
"Faster Least Squares Approximation"
Sarlos,
"Improved Approximation Algorithms for Large Matrices via Random Projections"
Lecture Notes: pdf
Main References:
Same as last class.
Lecture Notes: pdf
Main References:
Ailon and Chazelle,
"The fast Johnsonâ€“Lindenstrauss transform and approximate nearest neighbors"
Matousek,
"On variants of the Johnson-Lindenstrauss lemma"
Drineas, Magdon-Ismail, Mahoney, and Woodruff,
"Fast approximation of matrix coherence and statistical leverage"
Lecture Notes: pdf
Main References:
Same as last class.
Lecture Notes: pdf
Main References:
Same as last class.
Lecture Notes: pdf
Main References:
Avron, Maymounkov, and Toledo,
"Blendenpik: Supercharging LAPACK's Least-Squares Solver"
Avron, Ng, and Toledo,
"Using Perturbed QR Factorizations to Solve Linear Least-Squares Problems"
Lecture Notes: pdf
Main References:
Same as last class.
Lecture Notes: pdf
Main References:
Same as last class.
Lecture Notes: pdf
Main References:
Drineas, Kannan, and Mahoney,
"Fast Monte Carlo Algorithms for Matrices II: Computing Low-Rank Approximations to a Matrix"
Deshpande and Vempala,
"Adaptive Sampling and Fast Low-rank Matrix Approximation"
Lecture Notes: pdf
Main References:
Same as last class.
Lecture Notes: pdf
Main References:
Drineas, Mahoney, and Muthukrishnan,
"Relative-Error CUR Matrix Decompositions"
Lecture Notes: pdf
Main References:
Same as last class; and
Lemma 2 of the arXiv-v2 version (Lemma 4.2 of the SODA version) of:
Boutsidis, Mahoney, and Drineas,
"An Improved Approximation Algorithm for the Column Subset Selection Problem"
Theorem 9.1 of:
Halko, Martinsson, and Tropp,
"Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions,"
Lecture Notes: pdf
Main References:
Boutsidis, Mahoney, and Drineas,
"An Improved Approximation Algorithm for the Column Subset Selection Problem"
Halko, Martinsson, and Tropp,
"Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions,"
Lecture Notes: pdf
Main References:
Same as last class.
Lecture Notes: pdf
Main References:
Achlioptas and McSherry,
"Fast Computation of Low-Rank Matrix Approximations" (the JACM version)
Lecture Notes: pdf
Main References:
Same as last class.
Lecture Notes: pdf
Main References:
Kundu, Nambirajan, and Drineas,
"Identifying Influential Entries in a Matrix"
Additional References:
Recht,
"A Simpler Approach to Matrix Completion"
Chen, Bhojanapalli, Sanghavi, and Ward,
"Coherent Matrix Completion"
Lecture Notes: pdf
Main References:
Same as last class.
Lecture Notes: pdf
Main References:
Batson, Spielman, Srivastava, and Teng,
"Spectral Sparsification of Graphs: Theory and Algorithms"
Koutis, Miller, and Peng,
"A fast solver for a class of linear systems"
Additional References:
Spielman and Srivastava,
"Graph Sparsification by Effective Resistances"
Drineas and Mahoney,
"Effective Resistances, Statistical Leverage, and Applications to Linear Equation Solving"
Lecture Notes: pdf
Main References:
Same as last class.
Sparsity-preserving Random Projections:
The basic result:
Clarkson and Woodruff,
"Low rank approximation and regression in input sparsity time"
A simpler linear algebraic proof is in Section 3 of:
Meng and Mahoney,
"Low-distortion Subspace Embeddings in Input-sparsity Time and Applications to Robust Linear Regression"
A generalization of the previous result:
Nelson and Nguyen,
"OSNAP: Faster numerical linear algebra algorithms via sparser subspace embeddings"
Low-rank Approximation and Kernel-based Learning:
Gittens and Mahoney,
"Revisiting the Nystrom Method for Improved Large-Scale Machine Learning"
(the arXiv version, which is much more detailed than the ICML version)
Bach,
"Sharp analysis of low-rank kernel matrix approximations"