Learning methods for online prediction problems
Statistics and EECS
UC Berkeley
This short course will provide an introduction to the design and
theoretical analysis of prediction methods for decision problems
that are formulated as a repeated game between a learner and an
adversary. Online learning problems of this kind are often a natural
way to model decision problems: identifying spam email, detecting an
attack on a computer network, and optimizing a financial portfolio all
involve an adversarial component. In addition,
there are many connections between adversarial and probabilistic
prediction problems, and between online prediction strategies and
statistical methods: it is often straightforward to convert a strategy
for an adversarial environment to a method for a probabilistic environment;
there are strong similarities between the performance guarantees in
the two cases, and in particular between their dependence on the
complexity of the class of prediction rules; regularization of some
form plays a central role in the design of methods for both problems;
and many online prediction strategies have a natural interpretation
as a Bayesian statistical method.
This series of lectures will introduce a variety of models of
prediction problems in adversarial environments, present a range of
strategies for these problems, discuss some tools to analyze the
performance of these strategies, and highlight points of contact
between adversarial and probabilistic models.
It is part of the Statistics
and Information Technology Summer School 2010 at Peking University.
Synopsis:
- Overview: Formulation of online prediction problems in adversarial
  environments. Motivations.
- Finite comparison class: Prediction with expert advice. Halving
  algorithm. Exponential weights (sketched below). Extensions.
  Statistical prediction with a finite class.
- Converting online strategies for adversarial environments to batch
  strategies for probabilistic environments (sketched below).
- Online convex optimization: Problem formulation. The limitations of
  empirical minimization. Gradient methods (sketched below).
  Regularized minimization. Bregman divergence. Linearization. Mirror
  descent. Regret bounds. Strongly convex losses.
- Log loss: universal portfolios, universal compression, prequential
  analysis. Normalized maximum likelihood. Sequential investment.
  Constantly rebalanced portfolios (sketched below).
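
To make the exponential weights strategy concrete, here is a minimal
sketch in Python/NumPy. It assumes losses bounded in [0, 1]; the names
exponential_weights, expert_losses, and eta are chosen for this
illustration and are not the course's notation.

    import numpy as np

    def exponential_weights(expert_losses, eta):
        """Exponential weights over a finite set of experts.

        expert_losses: (T, N) array; entry [t, i] is the loss of expert
        i in round t, assumed to lie in [0, 1].  eta is the learning
        rate.  Returns the (T, N) array of weight vectors played.
        """
        T, N = expert_losses.shape
        log_w = np.zeros(N)                  # all experts start with equal weight
        played = np.empty((T, N))
        for t in range(T):
            w = np.exp(log_w - log_w.max())  # subtract max for numerical stability
            played[t] = w / w.sum()          # predict with the normalized weights
            log_w -= eta * expert_losses[t]  # multiplicative update: down-weight losers
        return played

With eta of order sqrt(ln N / T), the standard analysis bounds the
regret against the best expert by O(sqrt(T ln N)), illustrating the
mild (logarithmic) dependence on the size of the comparison class.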
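
The online-to-batch conversion item admits a very short sketch: run the
online learner over i.i.d. samples and average its iterates. The
learner interface below (current_params, update) is hypothetical,
invented for this illustration.

    import numpy as np

    def online_to_batch(learner, samples):
        """Convert an online learner into a batch method by averaging.

        Assumes a hypothetical interface:
          learner.current_params() -> parameters the learner would play now
          learner.update(z)        -> process one sample, update parameters
        For convex losses, the averaged predictor's expected risk
        exceeds the best-in-class risk by at most
        (online regret) / len(samples).
        """
        iterates = []
        for z in samples:
            iterates.append(learner.current_params())
            learner.update(z)
        return np.mean(iterates, axis=0)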
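
For online convex optimization, here is a minimal sketch of projected
online gradient descent, one of the gradient methods listed; mirror
descent generalizes it by replacing the Euclidean projection with a
Bregman projection. The callables subgradient and project are
placeholders to be supplied for the problem at hand.

    import numpy as np

    def online_gradient_descent(subgradient, project, x0, step_sizes):
        """Projected online gradient descent: x_{t+1} = Pi(x_t - eta_t g_t).

        subgradient: callable (t, x) -> a subgradient of the round-t loss at x
        project:     Euclidean projection onto the feasible convex set
        step_sizes:  sequence of step sizes eta_t, one per round
        Yields the point played in each round.
        """
        x = np.asarray(x0, dtype=float)
        for t, eta in enumerate(step_sizes):
            yield x                   # play the current point
            g = subgradient(t, x)     # observe the loss, get a subgradient
            x = project(x - eta * g)  # gradient step, then project back

With step sizes decaying like 1/sqrt(t), the regret is O(sqrt(T)) for
bounded subgradients on a bounded set; strong convexity of the losses
improves this to O(log T).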
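
The connection between sequential investment and log loss in the last
item can be stated in one line: the log-wealth of a constantly
rebalanced portfolio is a sum of terms log <b, x_t>, i.e., a cumulative
negative log loss. A sketch, where price_relatives (an illustrative
name) holds each asset's end-to-start price ratio per round:

    import numpy as np

    def crp_log_wealth(b, price_relatives):
        """Log-wealth of the constantly rebalanced portfolio b.

        b: portfolio weights (nonnegative, summing to 1), restored each
        round.  price_relatives: (T, N) array; entry [t, i] is asset i's
        price ratio over round t.  Wealth is multiplied by <b, x_t> each
        round, so log-wealth is sum_t log <b, x_t>; maximizing wealth is
        minimizing the cumulative log loss of the mixture b.
        """
        return float(np.sum(np.log(price_relatives @ b)))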
Slides:
Tuesday, July 13, 2010, 2-4pm: Lecture1.pdf.
Wednesday, July 14, 2010, 2-3pm: Pao-Lu Hsu Seminar slides.
Friday, July 16, 2010, 10am-12pm: Lecture2.pdf.
Friday, July 16, 2010, 2-4pm: Lecture3.pdf.
Last update: Tue Jun 22 22:05:23 PDT 2010