**HW 1 due Wednesday Sept. 4, 11:59pm (upload to ELMS)**.
*Read Chapter 1, Sec. 1.1 and Appendices A.10-A.14 and B.7 of Bickel and Doksum.*
In Bickel and Doksum, do problems # 1.1.1(d), 1.1.2(b)-(c), 1.1.15, and B.7.10, along with 3 additional problems:

(A) Suppose that *i.i.d.* real random variables X_{1},...,X_{n} are observed and can be assumed to follow one of the densities f(x,θ) from a family with real-valued unknown parameter θ. Suppose that there is a function r(x) such that R(θ) = ∫ r(x) f(x,θ) dx exists, is finite, and is strictly increasing in θ. Show that the parameter θ is *identifiable* from the data.

(B) In the setting of problem (A), explain (as constructively as possible) why there is a consistent (in probability) estimator g_{n}(X_{1},...,X_{n}) of θ. *Hint:* Start from n^{-1} ∑_{1≤j≤n} r(X_{j}), and assume that R(θ) is continuous if you need to. Alternatively, you may assume instead that ∫ r^{2}(x) f(x,θ) dx < ∞ for all θ.
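The hint can be seen in action with a small simulation sketch for one concrete family, chosen purely for illustration (it is an assumption, not part of the problem): X ~ Exponential with mean θ and r(x) = x², so R(θ) = E_θ r(X) = 2θ² is strictly increasing on θ > 0 with the explicit continuous inverse R^{-1}(u) = √(u/2).

```python
import math
import random

# Illustrative family (an assumption, not from the problem):
# X ~ Exponential with mean theta, r(x) = x^2, so R(theta) = 2*theta^2.

def g_n(xs):
    """Consistent-estimator sketch: average r(X_j), then invert R."""
    rbar = sum(x * x for x in xs) / len(xs)  # -> R(theta) by the law of large numbers
    return math.sqrt(rbar / 2.0)             # R^{-1}(rbar), a continuous inverse

rng = random.Random(0)
theta = 1.5                                   # hypothetical true parameter
xs = [rng.expovariate(1.0 / theta) for _ in range(200_000)]
theta_hat = g_n(xs)                           # should land close to 1.5
```

This is only a demonstration of the mechanism (LLN plus continuous inversion of R), not the requested proof.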

(C) In the setting of *i.i.d.* vector-valued data Y_{1},...,Y_{n} with vector-valued parameter θ ∈ Θ ⊂ ℝ^{k}, suppose that there exists a consistent (in probability) estimator g_{n}(Y_{1},...,Y_{n}) of θ.
Then show that θ is identifiable from the density family f(y,θ).

**All 7 problems are to be handed in (uploaded to ELMS) by Monday Sept. 12.**

Read Chapter 1 Sections 1.2-1.3 of Bickel and Doksum and continue to review Appendix B.7.

In Bickel and Doksum, do problems # 1.2.2, 1.2.8, 1.2.12, 1.3.2, 1.3.3, 1.3.4(a) plus one additional problem:

(D) (a) Show that if a random K-vector **v**=(v_{1},...,v_{K}) is Dirichlet(**α**) distributed, then v_{1} ~ Beta(α_{1}, α_{2}+...+α_{K}).

(b) Suppose that in 100 multinomial trials with 3 outcome categories and unknown category probabilities (p_{1}, p_{2}, p_{3}) you observe respectively 37, 42, 21 outcomes in categories 1, 2, and 3. Assume that the prior density for the unknown (p_{1}, p_{2}) is proportional to p_{1} p_{2}, and find the prior and posterior probability that p_{3} > 0.3.

*Hint: the probabilities in (b) are values of the Beta distribution cdf, i.e. incomplete Beta integrals divided by a complete Beta function value (the regularized incomplete Beta function). You can get them either from tables (not so easy to find these days) or by a one-line invocation of the Beta distribution function pbeta in R, or of a similarly named function in your favorite computing language (Matlab, Basic, Python, ...).*
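If pbeta or a library equivalent is not at hand, the regularized incomplete Beta integral can also be computed by direct numerical quadrature. The following is a self-contained Python sketch (composite Simpson's rule, valid for shape parameters a, b ≥ 1); the parameter values in the usage line are illustrative, not those of part (b).

```python
import math

def beta_cdf(x, a, b, panels=2000):
    """P(X <= x) for X ~ Beta(a, b): the incomplete Beta integral divided by
    B(a, b), computed by composite Simpson's rule (requires a, b >= 1)."""
    if x <= 0.0:
        return 0.0
    if x >= 1.0:
        return 1.0
    const = math.gamma(a + b) / (math.gamma(a) * math.gamma(b))  # 1 / B(a, b)
    f = lambda t: const * t ** (a - 1) * (1.0 - t) ** (b - 1)
    h = x / (2 * panels)
    s = f(0.0) + f(x)
    for i in range(1, 2 * panels):
        s += (4.0 if i % 2 else 2.0) * f(i * h)
    return s * h / 3.0

# Illustrative use: P(X > 0.3) for X ~ Beta(2, 5)
tail = 1.0 - beta_cdf(0.3, 2, 5)
```

In R the same illustrative number is `1 - pbeta(0.3, 2, 5)`.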

Read Chapter 1 Sections 1.4, 1.5 and 1.6.1 of Bickel and Doksum.

In Bickel and Doksum, do problems # 1.4.4, 1.4.12, 1.4.24, 1.5.4, 1.5.5, 1.5.14, 1.5.16 (and in 1.5.16, prove minimality).

For #1.4.4, to say Z is of "no value" in predicting Y would mean that P(Y ≥ t | Z) is free of Z for all t, or equivalently that Y is independent of Z. To solve 1.4.4,

(a) Prove that sign(U_{1}), U_{1}^{2} / (U_{1}^{2} + U_{2}^{2}), and U_{1}^{2} + U_{2}^{2} are jointly independent random variables; and

(b) Show that the best predictor of Y = U_{1} with respect to mean-square or absolute error loss is 0, but also find a loss function for which the best predictor of Y is a nontrivial function of U_{1}.

Read Chapter 1 Section 1.6 of Bickel and Doksum thoroughly. Also look at Sections 3.2-3.3 which will round out our coverage of decision theory before the in-class test on November 2.

In Bickel and Doksum, do the following problems from pp. 87-95: # 1.6.2, 1.6.10, 1.6.17, 1.6.28, and 1.6.35. Then also do and hand in the following 3 problems:

**(E)** For a Poisson(λ) sample, find the UMVUE (Uniformly Minimum Variance Unbiased Estimator) of e^{λ/2}.

**(F)** For a Poisson(λ) sample X_{1}, ..., X_{n} with prior π(λ) ~ Gamma(3,1) for the parameter λ, find the Bayes estimator of e^{λ/2} with respect to mean-squared error loss, and show that the mean-squared errors of both of the estimators found in (E) and (F) (in a frequentist sense, not using the prior) are of order 1/n and differ from each other by an amount of order 1/n^{2}.

**(G)** Suppose that the sample X_{1}, ..., X_{n} of nonnegative-integer observations has the probability mass function p(k,θ) = θ^{k} (1-θ) I_{[k ≥ 0]} for unknown parameter 0 < θ < 1. Find the UMVUEs of 1/(1-θ) and of θ based on the data sample of size n.
*Hint:* Finding an unbiased estimator of each of these functions of θ as a function of a single observation X_{1} is a matter of identifying the coefficients of a power series in θ. Use the result of Bickel & Doksum problem 1.6.3 to do the conditional expectation calculation you need in this problem.

**HW 5, due Friday 11/18/22 11:59pm (7 Problems)**

Reading: Chapter 2 through Section 2.3, also Sections 2.4.2-2.4.3 and 3.4.2.

Do problems 2.2.11(b) (counts as 1/2 problem), 2.2.12, 2.2.21, 3.4.11 (counts as 1.5 problems), and 3.4.12, plus the following two extra problems:

**(H)** Let X_{1}, ..., X_{n} be an iid sample from N(μ,1) and ψ(μ) = μ^{2}. (a) Show that the minimum variance for any unbiased estimator of μ^{2} from this sample, according to the Cramér-Rao inequality, is 4 μ^{2}/n. (b) Show that the UMVUE of μ^{2} is X̄^{2} - 1/n and that its variance is 4 μ^{2}/n + 2/n^{2}.
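The two formulas in part (b) can be sanity-checked (this checks, but of course does not prove, the claims) by a quick Monte Carlo simulation, with hypothetical values μ = 2 and n = 20:

```python
import random

def mc_check(mu=2.0, n=20, reps=50_000, seed=1):
    """Simulate the estimator Xbar^2 - 1/n from N(mu, 1) samples of size n;
    its mean should be near mu^2 and its variance near 4*mu^2/n + 2/n^2."""
    rng = random.Random(seed)
    ests = []
    for _ in range(reps):
        xbar = sum(rng.gauss(mu, 1.0) for _ in range(n)) / n
        ests.append(xbar * xbar - 1.0 / n)
    m = sum(ests) / reps
    v = sum((e - m) ** 2 for e in ests) / (reps - 1)
    return m, v

m, v = mc_check()  # targets: mu^2 = 4 and 4*mu^2/n + 2/n^2 = 0.805
```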

**(I)** Find by direct calculation the likelihood equation solved uniquely by the MLE of α based on a Gamma(α, 2) sample W_{1},...,W_{n}, and also show by direct calculation that this is the same equation satisfied by the method of moments estimator of α. Why does this follow from Exponential-Family theory?
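As a numerical cross-check for (I): assuming Bickel & Doksum's Γ(α, λ) convention with density λ^α w^{α-1} e^{-λw} / Γ(α) and the second argument λ = 2 known, the likelihood equation for the shape is ψ(α̂) = log 2 + n^{-1} ∑_{j} log W_{j}, where ψ is the digamma function. A self-contained sketch that solves it by bisection (digamma implemented by the standard recurrence plus asymptotic series):

```python
import math

def digamma(x):
    """psi(x), via the recurrence psi(x) = psi(x+1) - 1/x and an
    asymptotic series applied once x >= 6."""
    s = 0.0
    while x < 6.0:
        s -= 1.0 / x
        x += 1.0
    inv2 = 1.0 / (x * x)
    return s + math.log(x) - 0.5 / x - inv2 * (1.0/12 - inv2 * (1.0/120 - inv2 / 252))

def solve_alpha(rhs, lo=1e-6, hi=1e6):
    """Bisection for psi(alpha) = rhs; psi is strictly increasing on (0, inf)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if digamma(mid) < rhs:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def alpha_mle(ws):
    """MLE of the shape alpha for a Gamma(alpha, rate = 2) sample
    (under the assumed shape/rate convention stated above)."""
    rhs = math.log(2.0) + sum(math.log(w) for w in ws) / len(ws)
    return solve_alpha(rhs)
```

This is only a way to evaluate the MLE numerically on a given sample; the problem itself asks for the equation and its exponential-family interpretation, not code.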

**HW 6, due Monday 12/12/22 11:59pm (7.5 Problems)**

Reading: Chapter 4 through Section 4.5.

In the Bickel & Doksum problems for Chapter 4, do 4.1.12 (counts as 1.5 problems), 4.2.2, 4.3.5, 4.3.7, 4.3.8, 4.3.10, 4.4.6.