Statistics 701 Mathematical Statistics II


Spring 2025 MWF 9-9:50am,    PHY2106

Instructor: Professor Eric Slud, Statistics Program, Math Dept., Rm 2314, x5-5469, slud@umd.edu

Office hours: M 1-2, W 10-11 (initially), or email me to make an appointment (can be on Zoom).

Lecture Handouts  |  Statistical Computing Handouts  |  Homework

Overview: This is the second term of a full-year course sequence introducing mathematical statistics at a theoretical graduate level, using tools of advanced calculus and basic analysis. The material in the fall term emphasized conceptual definitions of the standard framework based on families of probability models for observed-data structures, along with the parameter space indexing the class of assumed models. We explained several senses in which functions of observed-data random variables can give a good idea of which of those probability models governed a particular dataset. In the fall term we emphasized finite-sample properties, while the spring term will emphasize large-sample limit theory. Aspects of the theoretical results will be illustrated using demonstrations with statistical simulation.
Large sample theory for Maximum Likelihood Estimation and Estimating Equations will be discussed in detail; in connection with hypothesis testing, we will prove the large-sample equivalence of Wald tests, Rao Score tests, and Likelihood Ratio tests; confidence intervals -- not covered in the fall semester -- will be introduced through the "test-based confidence region" duality with likelihood ratio tests and other hypothesis tests. Notions of Locally Most Powerful tests and large-sample relative efficiency of tests will be discussed, with application to the determination of sample size required to achieve specified power. Other topics that will be covered as time permits include: misspecified models, inference with missing data, and an introduction to semiparametric models.


Prerequisite: Stat 700 or {Stat 420 and Math 410}, or equivalent.

Required Course Text:   P. Bickel & K. Doksum, Mathematical Statistics, vol.I, 2nd ed., Pearson Prentice Hall, 2007.

Recommended Texts:   (i)   George Casella and Roger Berger, Statistical Inference,   2nd ed., Duxbury, 2002.
(ii)   V. Rohatgi and A.K. Saleh, An Introduction to Probability and Statistics, 2nd ed., Wiley, 2001.
(iii)   Jun Shao, Mathematical Statistics, 2nd ed., Springer, 2003.
(iv)   P. Billingsley, Probability and Measure, 2nd (1986) or later edition, Wiley.


Course Coverage: STAT 700 and 701 divide roughly so that definitions and properties for finite-sample statistics are covered in the Fall (STAT 700), and large-sample limit theory in the Spring (STAT 701). The division is not quite complete, because finite-sample confidence intervals and likelihood ratio tests in Chapter 4 are introduced in the first weeks of Stat 701. We continue Stat 701 by consolidating the topics covered in the Fall, from Chapters 1-4, from the viewpoint of the behavior of statistical procedures when i.i.d. data samples are large. This will involve discussion of consistency and efficiency of estimators from exponential families, large-sample definitions and behavior of hypothesis tests and confidence intervals, and some decision theory topics where probability limit theory plays a role. We will study more deeply some relationships between likelihood estimators and other classes of "estimating equation" estimators, and will discuss the computational solution of the likelihood and estimating equations and the large-sample properties of the resulting estimators. The heart of the spring term material is in Chapters 5 and 6 of Bickel and Doksum. We will cover the EM Algorithm in detail and introduce Bayesian theory and MCMC computation.
Readings in Casella and Berger and other sources will be occasional and topic-based.

This term, we begin by discussing asymptotic topics related to the comparison of large-sample variances of method of moments and ML estimators (Bickel & Doksum Chapter 5, Sec. 3) in the setting of canonical exponential families, to consolidate the fall-term material on exponential families and give associated large-sample theorems. Then we will interrupt our development of Chapter 5 material to return to Chapter 4, Sections 4.4 and 4.5, to give a thorough introduction to confidence intervals, using some large-sample theory (Central Limit Theorem and Law of Large Numbers) to clarify the problem of defining formulas for confidence interval endpoints. We will similarly return to Neyman-Pearson tests in the large-sample setting to clarify the (approximate) definition of rejection region cutoffs, and also incorporate large-sample asymptotics through the notions of asymptotic size and power, asymptotically pivotal quantity, and asymptotic confidence level. (These topics are covered well in statistical terms, although less rigorously, in Casella & Berger.)
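
As one concrete instance of these ideas, the CLT-based (Wald) interval for a binomial proportion treats (p̂ - p)/sqrt(p̂(1-p̂)/n) as an asymptotically pivotal N(0,1) quantity, and a short simulation can then estimate the interval's actual coverage probability. Here is a minimal sketch (in Python; the course demonstrations themselves use R, and the function names here are illustrative):

```python
import math
import random

def wald_ci(x, n, z=1.96):
    """Two-sided CLT-based (Wald) confidence interval for a binomial
    proportion p, using (phat - p)/sqrt(phat*(1-phat)/n) as an
    asymptotically pivotal N(0,1) quantity."""
    phat = x / n
    se = math.sqrt(phat * (1 - phat) / n)
    return phat - z * se, phat + z * se

def coverage(p, n, reps=20000, seed=1):
    """Monte Carlo estimate of the actual coverage probability of the
    nominal 95% Wald interval when X ~ Binomial(n, p)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(reps):
        x = sum(rng.random() < p for _ in range(n))
        lo, hi = wald_ci(x, n)
        hits += lo <= p <= hi
    return hits / reps
```

For example, coverage(0.12, 77) examines a case like the one pictured in the binomial-coverage handout listed below; for p near 0 or 1 and moderate n the actual coverage of the Wald interval typically falls below the nominal 95%, which is exactly the kind of discrepancy that the distinction between exact and asymptotic confidence level addresses.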

Then we re-visit the topic of (multidimensional-parameter) Fisher Information along with numerical maximization of likelihoods, via Newton-Raphson, Fisher scoring, and the EM algorithm. We complete this part of the course by proving that MLE's under regularity conditions are consistent and asymptotically normal, with related facts about the behavior of the likelihood in the neighborhood of the MLE. We follow roughly the development of Chapter 5 of Bickel and Doksum; you can see this material covered non-rigorously in the univariate case in Casella and Berger's Section 10.1. Chapter 6 of Bickel & Doksum covers the asymptotic theory of estimating-equation estimation, unifying linear model theory and maximum likelihood theory in multi-parameter settings. We prove theorems on the asymptotic optimality of MLEs among the large class of "regular" parameter estimators, and complete our discussion of large-sample theory by proving the Wilks theorem on the asymptotic chi-square distribution of Likelihood Ratio Tests along with the large-sample equivalence between Wald tests, score tests, and likelihood ratio tests.
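
To make the distinction between Newton-Raphson (which divides the score by the observed information) and Fisher scoring (which uses the expected information) concrete, here is a minimal one-parameter sketch for the Cauchy location model, a standard example in which the two iterations genuinely differ; the example and function names are illustrative, not taken from the text. For i.i.d. Cauchy(θ, 1) data the per-observation Fisher information is the constant 1/2, so scoring uses the fixed denominator n/2:

```python
def cauchy_score(theta, xs):
    """Score function (d/dtheta of the log-likelihood) for an i.i.d.
    Cauchy(theta, 1) location sample xs."""
    return sum(2 * (x - theta) / (1 + (x - theta) ** 2) for x in xs)

def cauchy_obs_info(theta, xs):
    """Observed information: minus the second derivative of the
    log-likelihood; can be zero or negative far from the MLE."""
    return sum(2 * (1 - (x - theta) ** 2) / (1 + (x - theta) ** 2) ** 2
               for x in xs)

def fisher_scoring(xs, theta0, tol=1e-8, maxit=200):
    """Fisher scoring iteration: like Newton-Raphson, but with the
    observed information replaced by the expected (Fisher) information,
    which for Cauchy(theta, 1) data is the constant n/2."""
    theta = theta0
    expected_info = len(xs) / 2.0
    for _ in range(maxit):
        step = cauchy_score(theta, xs) / expected_info
        theta += step
        if abs(step) < tol:
            break
    return theta
```

Newton-Raphson would instead divide the score by cauchy_obs_info(theta, xs), which can vanish or change sign away from the MLE; scoring's constant positive denominator is what makes it more robust to bad starting values in examples like this.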

Grading: There will be graded homework sets roughly every 1.5--2 weeks (6 or 7 altogether); one in-class test, tentatively on Fri., March 14; and an in-class Final Exam on Monday, May 19. The course grade will be based 45% on homeworks, 20% on the in-class test, and 35% on the Final Exam.
Homework will generally not be accepted late, and must be handed in as an uploaded pdf or png file in ELMS.


HONOR CODE

The University of Maryland, College Park has a nationally recognized Code of Academic Integrity, administered by the Student Honor Council. This Code sets standards for academic integrity at Maryland for all undergraduate and graduate students. As a student you are responsible for upholding these standards for this course. It is very important for you to be aware of the consequences of cheating, fabrication, facilitation, and plagiarism. For more information on the Code of Academic Integrity or the Student Honor Council, please visit http://www.shc.umd.edu.

To further exhibit your commitment to academic integrity, remember to sign the Honor Pledge on all examinations and assignments:
"I pledge on my honor that I have not given or received any unauthorized assistance on this examination (assignment)."


This course web-page serves as the Spring 2025 Course Syllabus for Stat 701.
Also: messages and updates (such as corrections to errors in stated homework problems or changes in due-dates) will generally be posted here, and sometimes also through emails in CourseMail.

Additional information:   Important Dates below; for auxiliary reading, several useful handouts described and linked below.


HANDOUTS & OTHER LINKS

Many relevant handouts can already be found on the Stat 700 web-page. Others will be added here
throughout the Spring 2025 semester.

(O). Have a look at the discussion paper of David Donoho on Fifty Years of Data Science, especially if you are
interested in Machine Learning and Data Science. Where do you think this course fits into his scheme of things?

(I). Union-Intersection Tests covered in Casella and Berger are discussed in a journal article in
connection with applications to so-called Bioequivalence trials.

(II).   Summary of calculations in R comparing three methods for creating (one-sided)
confidence intervals for binomial proportions in moderate sized samples. The assessment of coverage
probabilities for CIs for binomial proportions is done in R code in file "BinCvrgScript.tex" and R
workspace "BinCvrg.RData" in Rscripts and interesting pictures of Coverage Prob's for n=77
and Coverage Prob's for p=0.12.

(III).   Handout containing single page Appendix from Anderson-Gill article (Ann. Statist. 1982)
showing how uniform law of large numbers for log-likelihoods follows from a pointwise strong law.

(IV).   Handout on the 2x2 table asymptotics covered in a 2009 class concerning different sampling
designs and asymptotic distribution theory for the log odds ratio.

(V).   Handout on Wald, Score and LR statistics covered in class April 10 and 13, 2009.

(VI).   Handout on Chi-square multinomial goodness of fit test.

(VII).   Handout on Proof of the Wilks Theorem and the equivalence of the corresponding chi-square statistic with the
Wald & Rao-Score statistics, which will complete the proof steps covered in class.

(IX).   A DIRECTORY OF SAMPLE PROBLEMS FOR OLD IN-CLASS FINALS (with somewhat different
coverage) CAN BE FOUND HERE. A list of course topics in scope for the exam can be found here.

(X).   A directory RScripts containing R scripts and workspace(s) and pdf pictures for class demonstrations of
R code and outputs illustrating large sample theory and estimation algorithms. The first set of code (dated 4/24/23)
illustrates the large-sample behavior of MLE's for Gamma-distributed data, along with the behavior of
the chi-square test of fit.

(XI).   Handout on EM Algorithm from STAT 705.
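
As a minimal complement to the EM handout, here is a sketch (illustrative only, not the handout's own example) of the E- and M-steps for the simplest mixture problem: estimating only the mixing weight π in the two-component normal mixture π·N(μ,1) + (1-π)·N(0,1), with μ known. The E-step computes each observation's posterior probability ("responsibility") of having come from the N(μ,1) component; the M-step replaces π by the average responsibility:

```python
import math

def em_mixture_weight(xs, mu, pi0=0.5, n_iter=200):
    """EM iteration for the mixing weight pi in the two-component
    mixture  pi*N(mu,1) + (1-pi)*N(0,1),  with mu known.
    E-step: posterior responsibility of the N(mu,1) component for
    each observation; M-step: pi <- average responsibility."""
    def phi(x, m):  # N(m, 1) density
        return math.exp(-0.5 * (x - m) ** 2) / math.sqrt(2 * math.pi)
    pi = pi0
    for _ in range(n_iter):
        resp = [pi * phi(x, mu) /
                (pi * phi(x, mu) + (1 - pi) * phi(x, 0.0)) for x in xs]
        pi = sum(resp) / len(xs)
    return pi
```

Each iteration can be shown never to decrease the observed-data log-likelihood, which is the key monotonicity property of the EM algorithm.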

(XII) Background on Markov Chain Monte Carlo: First see Introduction and application of MCMC
within an EM estimation problem in random-intercept logistic regression. For additional pdf files of
"Mini-Course" Lectures, including computer-generated figures, see Lec.1 on Metropolis-Hastings Algorithm,
and Lec.2 on the Gibbs Sampler, with Figures that can be found in Mini-Course Figure Folders.
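
For orientation before the mini-course lectures, the random-walk Metropolis algorithm (the simplest case of Metropolis-Hastings, with a symmetric proposal) can be sketched in a few lines; this Python version is only a schematic stand-in for the course's own demonstrations, with illustrative names:

```python
import math
import random

def metropolis(logpost, theta0, prop_sd, n_iter, seed=0):
    """Random-walk Metropolis sampler for a one-dimensional target
    density known up to a normalizing constant via its log-density."""
    rng = random.Random(seed)
    theta, lp = theta0, logpost(theta0)
    chain = []
    for _ in range(n_iter):
        prop = theta + rng.gauss(0.0, prop_sd)   # symmetric proposal
        lp_prop = logpost(prop)
        # accept with probability min(1, target(prop)/target(theta))
        if rng.random() < math.exp(min(0.0, lp_prop - lp)):
            theta, lp = prop, lp_prop
        chain.append(theta)
    return chain
```

For instance, metropolis(lambda t: -0.5 * t * t, 0.0, 1.0, 20000) produces dependent draws whose empirical mean and variance approximate those of N(0,1); the Gibbs sampler of Lec. 2 replaces the accept/reject step with exact draws from full conditional distributions.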

(XIII).   Zoom lecture (also on ELMS, with recording) from Feb. 12, 2025 on topic of Likelihood Ratio Test and basic MLE consistency.

(XIV).   Because I have always found the proof of Theorem 5.2.2 in Bickel and Doksum impenetrable, here is a cleaned-up version of the proof of that Theorem that I sketched in class on Feb.19, 2025.



Homework: Assignments, including any changes and hints, will continually be posted here. The most current form of the assignment will be posted also on ELMS. Selected homework solutions will be posted to the ELMS course pages.
Homework assignments for Spring 2025 are still under construction.


HW 1, due Saturday 2/8/25 11:59pm (7 Problems)

Read Sections 4.4, 4.5 and 4.9 of Bickel and Doksum, and do problems 4.4.2, 4.4.7, 4.4.10, 4.5.5, 4.9.4
Read Section 5.3 of Bickel and Doksum, and do problem 5.3.10.
In addition, based on exponential family facts (Sections 1.6 and 2.3) or other knowledge about distributions and moment generating functions, do and hand in the following additional problem (A), linked here.


HW 2, due Sunday 2/23/25 11:59pm (7 Problems)

Read Section 5.2 of Bickel and Doksum and the Section 5.3 material on Edgeworth Expansions and Monte Carlo Simulation, plus Section 5.3.3 and Sections 5.4 through 5.4.3, and do problems 4.9.13, 5.2.4, 5.2.5, 5.3.13, 5.3.16, 5.3.28. Also do the extra problem (B), again linked at the Extra Problems Assigned page.


HW 3, due Monday 3/10, 11:59pm (6 problems counting as 7): Reading for this HW set is Bickel and Doksum Sections 5.3.3 through 5.5, Section 2.1.2, and Section 2.4 (on numerical likelihood optimization, including the Newton-Raphson and EM algorithms, plus Fisher scoring as defined in problem 6.5.1). Do the following 6 problems, to be handed in: Bickel and Doksum # 5.4.4, 5.4.5, 2.3.1, 2.4.10, plus the two extra problems (C) and (D) given here:

(C). (i). Suppose you analyze a sample   X1,...,Xn   of real-valued random variables via Maximum Likelihood assuming that they are   N(μ, 1)   when they are really double-exponential with density   g(x,μ) = (1/2) exp(-|x-μ|)   for all real   x. What is the asymptotic relative efficiency of your estimator of   μ?

(ii) Same question if the sample is   N(μ, 1)   distributed but you estimate   μ   via MLE assuming that the sample has density   g(x,μ).

(D). [Counts as 2 problems] Suppose that   f0   is a known probability density on the real line, and that a location-scale family is given (for all real μ, and all positive σ) by

f(x, μ, σ) = (1/σ) f0((x-μ)/σ) ,         all real     x

(i) Find a formula for the 2x2 per-observation Fisher Information Matrix for this kind of data X ~ f(x, μ, σ) , which should involve only   μ, σ,   and some numerical constants which involve integrals defined from f0 and its derivatives.

(ii) Specialize your result in (i) to the logistic location-scale density

f(x,μ, σ) = (1/σ) exp((x-μ)/σ)/{1 + exp((x-μ)/σ)}²,     all real     x

Find an explicit formula in terms of   θ = (μ, σ)   for the 2x2 Fisher information matrix, involving constants that you find by numerical integration.

(iii) Assume that you see "data" Xi,   i=1,...,20,   given by values 0.03 + 0.3*i in the logistic model of part (ii). Find the MLE for (μ, σ) both by Newton-Raphson and by Fisher Scoring, which DIFFER in this problem, using the initial guess μ0 = 0.3 and σ0 = 0.14. In each of the Newton-Raphson and Fisher Scoring solutions, give the entire iteration history needed to obtain the MLE to 4-decimal-place accuracy. NOTE: in this example, one of these methods is much more stable than the other with respect to bad starting points. Which is the stable one? Try   (μ0, σ0) = (.3, .16) or (.2, .2).



Important Dates

  • First Class: January 27, 2025
  • Mid-Term Exam: Fri., March 14
  • Spring Break: March 16-23
  • Last Day of Classes: Tue., May 13
  • Review Session for Exam: Wed., May 15, 9-10:30am (same classroom)
  • Final Examination: Mon., May 19, 1:30-3:30 pm (same classroom)

  • Return to my home page.


    © Eric V Slud, February 24, 2025.