Find the MLEs of θ1 and θ2 and prove their consistency.

15. If X ~ P(λ) (Poisson) and L(λ, δ) = (λ − δ)², show that for any estimator δ we have sup_{λ>0} R(λ, δ) = ∞.

16. Suppose that X ~ N(θ, 1). Consider the squared error loss function and the improper prior π(θ) = e^θ, −∞ < θ < ∞. Show that the mean of the posterior distribution of θ cannot be minimax.

17. Prove or disprove the following statements.
(a) If an estimator δ has constant risk (i.e., the value of R(θ, δ) does not depend on θ), it is minimax.
(b) A Bayes estimator, if unique, is admissible.
(c) A minimax estimator, if unique, is admissible.
(d) A minimax estimator can always be viewed as a Bayes estimator for a suitable choice of prior.
(e) If an admissible estimator has constant risk, it is minimax.
(f) A minimax estimator can have expected risk smaller than a Bayes estimator.

18. Suppose that X ~ f_θ, where θ ∈ Θ = {1, 2}, f1 is U(0, 1), and f2(x) = 1 + sin(2πkx), 0 ≤ x ≤ 1, for k a positive integer. Consider the uniform prior on Θ and the 0-1 loss function. Show that the Bayes risk does not depend on the value of k.

19. Suppose that X ~ f_θ, where θ ∈ Θ = {1, 2}, f1 is uniform over a d-dimensional unit hypercube, and f2 is uniform over the largest hypersphere inscribed in it. Consider the uniform prior on Θ and the 0-1 loss function. Show that the Bayes risk converges to 0 as d → ∞.

20. Suppose that X ~ N(θμ, σ²), where μ and σ are known, and θ ∈ Θ = {1, 2}. Consider the uniform prior on Θ and the 0-1 loss function. Find the Bayes estimator for θ and check whether the corresponding Bayes risk depends on μ and σ. If X and HX have the same distribution for all orthogonal matrices H, can you find μ and σ?
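As a numerical sanity check (not part of the assignment): with a uniform prior on Θ = {1, 2} and 0-1 loss, the Bayes risk equals (1/2)∫ min(f1, f2). The short sketch below, with function names of my own choosing, uses this identity to illustrate, without proving, the claims in Problems 18 and 19; it assumes the reconstructed density f2(x) = 1 + sin(2πkx) in Problem 18 and an inscribed ball of radius 1/2 in Problem 19.

```python
import math

# Uniform prior on {1, 2} and 0-1 loss: Bayes risk = (1/2) * integral of min(f1, f2).

def bayes_risk_problem18(k, n=200_000):
    # Midpoint Riemann sum of (1/2) * min(1, 1 + sin(2*pi*k*x)) over [0, 1].
    # f1 = 1 on [0, 1]; f2(x) = 1 + sin(2*pi*k*x) as reconstructed above.
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        total += min(1.0, 1.0 + math.sin(2.0 * math.pi * k * x))
    return 0.5 * total * h

def bayes_risk_problem19(d):
    # The ball inscribed in the unit hypercube has radius 1/2 and volume
    # V_d = pi^(d/2) * (1/2)^d / Gamma(d/2 + 1) <= 1.  On the ball f2 = 1/V_d >= 1 = f1
    # and f2 = 0 outside it, so the integral of min(f1, f2) is V_d and the risk is V_d / 2.
    v_d = math.pi ** (d / 2) * 0.5 ** d / math.gamma(d / 2 + 1)
    return 0.5 * v_d
```

For every positive integer k, `bayes_risk_problem18(k)` returns approximately (1 − 1/π)/2 ≈ 0.3408, consistent with the risk being free of k, while `bayes_risk_problem19(d)` decays rapidly to 0 because the inscribed ball's volume vanishes as d grows.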

The exam will be held on 18th October (Tuesday) at 2:15 P.M. You will be asked to solve 2 out of these 20 problems, chosen at random, so two students may get two different sets of questions. You will get 40 minutes (the time cannot be extended) to solve these two problems.