Statistics 210A: Theoretical Statistics (Fall 2020)
If you are an undergraduate who wants to take the course, please fill out the permission code request form to let me know about your background.
Anyone considering taking the course is encouraged to read the frequently asked questions regarding preparation and review materials.
Course Information
Prof. Will Fithian (Instructor)
Tae Joo Ahn (GSI)
Forest Yang (GSI)
Course schedule (Google Calendar link):
Lectures TuTh 9:30–11
Recitation sections every second F 4pm beginning September 4
Happy hours F 4–5pm on off weeks
Final Exam Review F Dec 11, 4pm
Final Exam Tu Dec 15 (take-home)
Zoom links:
Syllabus
Lecture videos and handouts at bCourses
Email policy: You can email me or the GSIs about administrative questions, with [Stat 210A] in the subject line. No math over email, please.
Piazza for announcements and technical questions (no homework spoilers!)
Gradescope for turning in homework
Link to Google calendar for course
Materials
Lecture notes:
Recitation section materials:
Materials from class:
Assignments:
Relevant articles:
Content
Stat 210A is Berkeley's introductory Ph.D.-level course on theoretical statistics. It is a fast-paced and demanding course intended to prepare students for research careers in statistics.
Topics:
Statistical decision theory, frequentist and Bayesian
Exponential families
Point estimation
Hypothesis testing
Resampling methods
Estimating equations and maximum likelihood
Empirical Bayes
Large-sample theory
High-dimensional testing
Multiple testing and selective inference
References
All texts are available online from Springer Link.
Main text:
Supplementary texts:
Undergrad-level review texts for prerequisites:
Axler, Linear Algebra Done Right, Chapters 1–3, 5–6.
Abbott, Understanding Analysis, Chapters 1–3.
Adhikari & Pitman, Probability for Data Science, Chapters 1–6, 8–9, 13–17, and 23.
Grading
Your final grade is based on:
Weekly problem sets: 80%
Final exam: 20%
Lateness policy: Homework must be submitted to Gradescope by midnight on Wednesday nights. Late problem sets will not be accepted, but we will drop your lowest two grades.
Collaboration policy: For homework, you are welcome to work with each other or consult articles or textbooks online, with the following caveats:
You must write up your solution by yourself.
You may NOT consult any solutions from previous iterations of this course.
If you collaborate or use any resources other than course texts, you must acknowledge your collaborators and the resources you used.
Academic integrity: You are expected to abide by the Berkeley honor code. Violating the collaboration policy, or cheating in any other way, will result in a failing grade for the semester and you will be reported to the University Office of Student Conduct.
Accommodations
Students with disabilities: Please see me as soon as possible if you need particular accommodations, and we will work out the necessary arrangements.
Scheduling conflicts: Please notify me in writing by the second week of the term about any known or potential extracurricular conflicts (such as religious observances, graduate or medical school interviews, or team activities). I will try my best to help you make accommodations, but cannot promise them in all cases. In the event there is no mutually workable solution, you may be dropped from the class.
Lecture schedule
Date  Reading  Topic 
Aug. 27  Chap. 1 and Sec. 3.1 of Keener  Probability models and risk 
Sep. 1  Chap. 2 of Keener  Exponential families 
Sep. 3  Chap. 2 and Sec. 3.2 of Keener  Sufficient statistics 
Sep. 8  Secs. 3.4, 3.5, and 3.6 of Keener  Minimal sufficiency and completeness 
Sep. 10  Secs. 3.6 and 4.1 of Keener  Rao-Blackwell theorem 
Sep. 15  Secs. 4.1 and 4.2 of Keener  UMVU estimation 
Sep. 17  Secs. 4.5 and 4.6 of Keener  Information inequality 
Sep. 22  Secs. 7.1 and 7.2 of Keener  Bayesian estimation 
Sep. 24  Secs. 7.1 and 7.2 of Keener  Conjugate priors 
Sep. 29  Secs. 7.2 and 11.1 of Keener  More on Bayes 
Oct. 1  Secs. 7.2 and 11.1 of Keener  Hierarchical priors, empirical Bayes 
Oct. 6  Secs. 11.1, 11.2 and 9.4 of Keener  James-Stein paradox, confidence intervals 
Oct. 8  Secs. 5.1 and 5.2 of Lehmann-Casella  Minimaxity and admissibility 
Oct. 13  Secs. 12.1, 12.2, 12.3 and 12.4 of Keener  Hypothesis testing, Neyman-Pearson lemma 
Oct. 15  Secs. 12.3, 12.4, 12.5, 12.6 and 12.7 of Keener  UMP tests 
Oct. 20  Secs. 13.1, 13.2, and 13.3 of Keener  Testing with nuisance parameters 
Oct. 22  Secs. 13.1, 13.2, and 13.3 of Keener  UMP unbiased tests 
Oct. 27  Secs. 13.1, 13.2, and 13.3 of Keener  UMP unbiased tests 
Oct. 29  Secs. 14.1, 14.2, 14.4, 14.5, and 14.7 of Keener  Linear models 
Nov. 3  Secs. 8.1, 8.2, and 8.3 of Keener  Asymptotic concepts 
Nov. 5  Secs. 8.3 and 8.4 of Keener  Maximum likelihood estimation 
Nov. 10  Secs. 8.5, 9.1, and 9.2 of Keener  Relative efficiency 
Nov. 12  Secs. 9.1, 9.2, and 9.3 of Keener  Consistency of the MLE 
Nov. 17  Secs. 9.1, 9.2, and 9.3 of Keener  Asymptotic normality of MLE 
Nov. 19  Secs. 9.5 and 9.7 of Keener  Trio of asymptotic likelihood-based tests and CIs 
Nov. 24  Secs. 19.1–19.3 of Keener  Bootstrap and permutation tests 
Nov. 26   No class (Thanksgiving) 
Dec. 1  Secs. 15.1–15.4 of Lehmann-Romano  Bootstrap theory 
Dec. 3  Lecs. 2, 3 of Candes  Testing in high dimensions 
Dec. 8  Lec. 6 of Candes  Multiple testing 
Dec. 10  Lecs. 8 and 9 of Candes  Multiple testing 