I am a postdoctoral researcher at the Simons Institute for the Theory of Computing at UC Berkeley, hosted by Peter Bartlett and Bin Yu as part of the NSF/Simons Collaboration on the Theoretical Foundations of Deep Learning. I'm interested in machine learning, statistics, and optimization, especially in deep learning theory.

Before coming to Berkeley, I completed my PhD in Statistics at UCLA under the supervision of Quanquan Gu and Ying Nian Wu. Prior to this, I completed a master's degree in mathematics at the University of British Columbia, Vancouver, where I was supervised by Ed Perkins. Before that, I completed my undergraduate degree in mathematics at McGill University.

My latest CV is here (last updated August 2021).

__2021__
* My paper on proxy convexity as a framework for neural network optimization was accepted at NeurIPS 2021.
* Two new preprints on arXiv: (1) Proxy convexity: a unified framework for the analysis of neural networks trained by gradient descent, and (2) Self-training converts weak learners to strong learners in mixture models.
* I am reviewing for ICLR 2022.
* I am reviewing for the ICML 2021 workshop Overparameterization: Pitfalls and Opportunities (ICMLOPPO2021).
* Three recent papers accepted at ICML, including one as a long talk.
* New preprint on provable robustness of adversarial training for learning halfspaces with noise.
* I'm reviewing for NeurIPS 2021.
* I will be presenting recent work at TOPML2021 as a lightning talk, and at the SoCal ML Symposium as a spotlight talk.
* I'm giving a talk at the ETH Zurich Young Data Science Researcher Seminar on April 16th.
* I'm giving a talk at the Johns Hopkins University Machine Learning Seminar on April 2nd.
* I'm reviewing for the Theory of Overparameterized Machine Learning Workshop.
* I'm giving a talk at the Max-Planck-Institute (MPI) MiS Machine Learning Seminar on March 11th.
* New preprint showing SGD-trained neural networks of any width generalize in the presence of adversarial label noise.
* I'm reviewing for ICML 2021.

__2020__
* New preprint on agnostic learning of halfspaces using gradient descent is now on arXiv.
* My single neuron paper was accepted at NeurIPS 2020.
* I received a Best Reviewer Award for ICML 2020.
* I will be attending the IDEAL Special Quarter on the Theory of Deep Learning hosted by TTIC/Northwestern for the fall quarter.
* I'm reviewing for AISTATS 2021.
* I've been awarded a Dissertation Year Fellowship by UCLA's Graduate Division.
* New preprint on agnostic PAC learning of a single neuron using gradient descent is now on arXiv.
* New paper accepted at *Brain Structure and Function* from work with researchers at UCLA School of Medicine.
* I'll be (remotely) working at Amazon's Alexa AI group for the summer as a research intern, working on natural language understanding.
* I'm reviewing for NeurIPS 2020.
* I'm reviewing for ICML 2020.

__2019__
* My paper with Yuan Cao and Quanquan Gu, "Algorithm-dependent Generalization Bounds for Overparameterized Deep Residual Networks", was accepted at NeurIPS 2019 (arXiv version, NeurIPS version).

I have a monthly radio show where I play music like house, techno, synth pop, new wave, post punk, disco, funk, and reggae.

My partner is a historian.

### Preprints

Self-training converts weak learners to strong learners in mixture models.
__Spencer Frei*__, Difan Zou*, Zixiang Chen*, and Quanquan Gu.
Preprint, 2021.

### Conference Publications

Proxy convexity: a unified framework for the analysis of neural networks trained by gradient descent.
__Spencer Frei__ and Quanquan Gu.
*Advances in Neural Information Processing Systems (NeurIPS)*, 2021.

Provable robustness of adversarial training for learning halfspaces with noise.
Difan Zou*, __Spencer Frei*__, and Quanquan Gu.
*International Conference on Machine Learning (ICML)*, 2021.

Provable generalization of SGD-trained neural networks of any width in the presence of adversarial label noise.
__Spencer Frei__, Yuan Cao, and Quanquan Gu.
*Appeared at the Theory of Overparameterized Machine Learning (TOPML2021) workshop.*
*International Conference on Machine Learning (ICML)*, 2021.

Agnostic learning of halfspaces with gradient descent via soft margins.
__Spencer Frei__, Yuan Cao, and Quanquan Gu.
*International Conference on Machine Learning (ICML)*, 2021.
**Long talk.**

Agnostic learning of a single neuron with gradient descent.
__Spencer Frei__, Yuan Cao, and Quanquan Gu.
*Advances in Neural Information Processing Systems (NeurIPS)*, 2020.

Algorithm-dependent generalization bounds for overparameterized deep residual networks.
__Spencer Frei__, Yuan Cao, and Quanquan Gu.
*Advances in Neural Information Processing Systems (NeurIPS)*, 2019.

### Journal Publications

Hemodynamic latency is associated with reduced intelligence across the lifespan: an fMRI DCM study of aging, cerebrovascular integrity, and cognitive ability.
A.E. Anderson, M. Diaz-Santos, __Spencer Frei__, *et al.*
*Brain Structure and Function*, 2020.

A lower bound for $p_c$ in range-$R$ bond percolation in two and three dimensions.
__Spencer Frei__ and Edwin Perkins.
*Electronic Journal of Probability*, 21(56), 2016.

On thermal resistance in concentric residential geothermal heat exchangers.
__Spencer Frei__, Kathryn Lockwood, Greg Stewart, Justin Boyer, and Burt S. Tilley.
*Journal of Engineering Mathematics*, 86(1), 2014.

* denotes equal contribution.