HW 1 due Wednesday Sept. 4, 11:59pm (upload to ELMS). Read Chapter 1, Sec. 1.1, and Appendices A.10-A.14 and B.7 of Bickel and Doksum. In Bickel and Doksum, do problems # 1.1.1(d), 1.1.2(b)-(c), 1.1.15, and B.7.10, along with 3 additional problems:

(A) Suppose that i.i.d. real random variables X1,...,Xn are observed and can be assumed to follow one of the densities   f(x,θ)   from a family with real-valued unknown parameter θ. Suppose that there is a function   r(x)   such that   R(θ) = ∫ r(x) f(x,θ) dx   exists, is finite, and is strictly increasing in   θ.   Show that the parameter   θ   is identifiable from the data.

(B) In the setting of problem (A), explain (as constructively as possible) why there is a consistent (in probability) estimator   gn(X1,...,Xn)   of   θ.   Hint: Start from   n⁻¹ Σ_{1 ≤ j ≤ n} r(Xj),   and assume that   R(θ)   is continuous if you have to. An alternative assumption you may use instead is   ∫ r²(x) f(x,θ) dx < ∞   for all θ.
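
For concreteness, here is a minimal R sketch of the hint in one illustrative case (the choice of family and of r below is mine, purely for illustration, and is not part of the problem): take Xj ~ N(θ,1) and r(x) = x³, so that R(θ) = E_θ r(X1) = θ³ + 3θ, strictly increasing in θ. The estimator inverts R at the empirical mean of r.

    # Hypothetical illustration only: X_j ~ N(theta, 1), r(x) = x^3,
    # so that R(theta) = theta^3 + 3*theta (strictly increasing).
    set.seed(4)
    theta_true <- 0.7
    n <- 1e4
    X <- rnorm(n, mean = theta_true)
    Rbar <- mean(X^3)                             # n^(-1) * sum of r(X_j)
    g_n <- uniroot(function(t) t^3 + 3*t - Rbar,  # g_n = R^(-1)(Rbar)
                   interval = c(-100, 100))$root
    g_n                                           # close to theta_true for large n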

(C) In the setting of i.i.d. vector-valued data Y1,...,Yn   with vector-valued parameter   θ ∈ Θ ⊂ ℝk,   suppose that there exists a consistent (in probability) estimator   gn(Y1,...,Yn)   of   θ.   Then show that   θ   is identifiable from the density family   f(y,θ).

All 7 problems are to be handed in (uploaded to ELMS) by Monday, Sept. 12.


HW 2, due Tuesday September 27, 11:59pm (7 problems total)

Read Chapter 1 Sections 1.2-1.3 of Bickel and Doksum and continue to review Appendix B.7.

In Bickel and Doksum, do problems # 1.2.2, 1.2.8, 1.2.12, 1.3.2, 1.3.3, 1.3.4(a) plus one additional problem:

(D) (a) Show that if a random K-vector v=(v1,...,vK)   is Dirichlet(α)   distributed, then   v1 ~ Beta(α1, α2+...+αK).
     (b) Suppose that in 100 multinomial trials with 3 outcome categories and unknown category probabilities   (p1, p2, p3)   you observe 37, 42, and 21 outcomes in categories 1, 2, and 3 respectively. Assume that the prior density for the unknown   (p1, p2)   is proportional to   p1 * p2,   and find the prior and posterior probability that   p3 > 0.3.
Hint: the probabilities in (b) are cdf's for the Beta distribution, also called incomplete Beta integrals (which you must divide by a complete Beta function value to normalize). You can get them either from tables (not so easy to find these days) or by a one-line invocation of the Beta distribution function pbeta in R, or a similarly named function in your favorite computing language (Matlab, basic, python, ...).
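
For the computation in (b), a minimal R sketch; the shape parameters a and b below are illustrative placeholders only, and you must derive the correct Beta parameters for p3 under the prior and under the posterior (e.g. via part (a)):

    a <- 2; b <- 5                            # placeholders -- derive the real values
    1 - pbeta(0.3, shape1 = a, shape2 = b)    # P(p3 > 0.3) when p3 ~ Beta(a, b)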


HW 3, due Wednesday October 12, 11:59pm (7 problems total)

Read Chapter 1 Sections 1.4, 1.5 and 1.6.1 of Bickel and Doksum.

In Bickel and Doksum, do problems # 1.4.4, 1.4.12, 1.4.24, 1.5.4, 1.5.5, 1.5.14, 1.5.16 (and in 1.5.16, prove minimality).

For #1.4.4, to say Z is of "no value" in predicting Y would mean that   P(Y ≥ t | Z)   is free of Z for all t, or equivalently that Y is independent of Z. To solve 1.4.4,
(a) Prove that   sign(U1),   U1²/(U1² + U2²),   and   U1² + U2²   are jointly independent random variables (a Monte Carlo sanity check of this is sketched after part (b)); and
(b) Show that the best predictor of Y = U1 with respect to mean-square or absolute error loss is 0, but also find a loss function for which the best predictor of Y is a nontrivial function of U1.
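
Before attempting the proof of (a), you can sanity-check the independence claim by simulation. A minimal R sketch, assuming (check this against the statement of 1.4.4 in the book) that U1 and U2 are i.i.d. standard normal:

    set.seed(1)
    m  <- 1e5
    U1 <- rnorm(m); U2 <- rnorm(m)
    S  <- sign(U1); B <- U1^2/(U1^2 + U2^2); R2 <- U1^2 + U2^2
    # Under joint independence, the product rule should hold (up to
    # Monte Carlo error) for indicators of arbitrary events:
    I <- 1 * cbind(S > 0, B < 0.25, R2 > 2)
    mean(I[,1] * I[,2] * I[,3]) - prod(colMeans(I))   # should be near 0
    cor(I)                                            # off-diagonals near 0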


HW 4, due Saturday October 29, 11:59pm (8 problems total)

Read Chapter 1 Section 1.6 of Bickel and Doksum thoroughly. Also look at Sections 3.2-3.3, which will round out our coverage of decision theory before the in-class test on November 2.

Do the following problems from Bickel and Doksum, pp. 87-95: # 1.6.2, 1.6.10, 1.6.17, 1.6.28, and 1.6.35. Then also do and hand in the following 3 problems:

(E) For a Poisson(λ) sample, find the UMVUE (Uniformly Minimum Variance Unbiased Estimator) of e^(λ/2).

(F) For a Poisson(λ) sample X1, ..., Xn with prior distribution Gamma(3,1) for the parameter λ, find the Bayes estimator of e^(λ/2) with respect to mean-squared error loss, and show that the mean-squared errors of both of the estimators found in (E) and (F) (in a frequentist sense, not using the prior) are of order 1/n and differ from each other by an amount of order 1/n².

(G) Suppose that X1, ..., Xn is a sample of nonnegative-integer observations with probability mass function p(k,θ) = θ^k (1-θ) I[k ≥ 0] for unknown parameter θ ∈ (0,1). Find the UMVUE's of 1/(1-θ) and of θ based on the data sample of size n. Hint: finding an unbiased estimator of each of these functions of θ as a function of a single observation X1 is a matter of identifying the coefficients of a power series in θ. Use the result of Bickel & Doksum problem 1.6.3 to do the conditional expectation calculation you need in this problem.
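
Once the power-series matching gives you a candidate unbiased estimator g(X1), you can check unbiasedness numerically before the conditioning step. A minimal R sketch; the g below is a neutral placeholder, not the answer:

    g <- function(k) k             # placeholder candidate -- substitute your own
    target <- function(theta) 1/(1 - theta)    # or theta, for the other part
    for (theta in c(0.2, 0.5, 0.8)) {
      k <- 0:3000                  # truncate the series; terms decay geometrically
      lhs <- sum(g(k) * theta^k * (1 - theta))    # E_theta g(X_1)
      cat("theta =", theta, " E g(X1) =", lhs, " target =", target(theta), "\n")
    }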


HW 5, due Friday 11/18/22 11:59pm (7 Problems)

Reading: Chapter 2 through Section 2.3, also Sections 2.4.2-2.4.3 and 3.4.2.

Do problems 2.2.11(b) (counts as 1/2 problem), 2.2.12, 2.2.21, 3.4.11 (counts as 1.5 problems), and 3.4.12, plus the following two extra problems:

(H) Let X1, ..., Xn be an i.i.d. sample from   N(μ,1)   and let ψ(μ) = μ². (a) Show that the minimum variance for any unbiased estimator of μ² from this sample, according to the Cramér-Rao inequality, is 4μ²/n. (b) Show that the UMVUE of μ² is X̄² - 1/n and that its variance is 4μ²/n + 2/n².
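
The variance formula in (b) can be sanity-checked by Monte Carlo (no substitute for the derivation). A minimal R sketch, with illustrative values μ = 1.5 and n = 20 chosen only for the check:

    set.seed(2)
    mu <- 1.5; n <- 20; reps <- 1e5
    est <- replicate(reps, mean(rnorm(n, mean = mu))^2 - 1/n)  # UMVUE of mu^2
    var(est)                   # Monte Carlo variance of the UMVUE
    4*mu^2/n + 2/n^2           # the theoretical value claimed in (b)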

(I) Find by direct calculation the likelihood equation solved uniquely by the MLE of α based on a Gamma(α, 2) sample W1,...,Wn, and also show by direct calculation that this is the same equation satisfied by the method of moments estimator of α. Why does this follow from Exponential-Family theory?
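
If you want to check your algebra numerically: the R sketch below assumes the reading in which the shape parameter is fixed at 2 and α is the unknown rate, i.e. density α² w e^(-αw) (verify this against the parameterization used in class; if instead α is the shape, the likelihood equation involves the digamma function and the natural sufficient statistic is log W). Under the assumed reading, the likelihood and first-moment equations share the closed-form solution compared below with a numerical maximizer:

    set.seed(3)
    n <- 200; alpha_true <- 1.7
    W <- rgamma(n, shape = 2, rate = alpha_true)
    negll <- function(a) -sum(dgamma(W, shape = 2, rate = a, log = TRUE))
    optimize(negll, interval = c(0.01, 50))$minimum   # numerical MLE of alpha
    2/mean(W)      # solves both the likelihood equation and E W = 2/alpha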


HW 6, due Monday 12/12/22 11:59pm (7.5 Problems)

Reading: Chapter 4 through Section 4.5.

In the Bickel & Doksum problems for Chapter 4, do 4.1.12 (counts as 1.5 problems), 4.2.2, 4.3.5, 4.3.7, 4.3.8, 4.3.10, 4.4.6.