Question:
If $(X_1, X_2)\sim\mathscr N(\mu_1 = 2, \mu_2 = 2, \sigma_1^2 = 1, \sigma_2^2 = 1, \rho = \tfrac{1}{2})$, then $E(T) = E(\max(X_1, X_2)) = ?$
Answer:
According to Basu and Ghosh (1978), writing $A_{x_i}(t) = \frac{t - \mu_i}{\sigma_i}$, the density of $T = \max(X_1, X_2)$ is

\[f(t) = \frac{1}{\sigma_1}\phi(A_{x_1}(t))\Phi\left(\frac{A_{x_2}(t) - \rho A_{x_1}(t)}{\sqrt{1-\rho^2}}\right) + \frac{1}{\sigma_2}\phi(A_{x_2}(t))\Phi\left(\frac{A_{x_1}(t) - \rho A_{x_2}(t)}{\sqrt{1-\rho^2}}\right)\]
and Cain (1994) gave:
\[E(T) = \mu_1\Phi\left(\frac{\mu_1 - \mu_2}{\theta}\right) + \mu_2\Phi\left(\frac{\mu_2 - \mu_1}{\theta}\right) + \theta\phi\left(\frac{\mu_2 - \mu_1}{\theta}\right),\]
where $\theta = (\sigma_1^2 + \sigma_2^2 - 2\rho\sigma_1\sigma_2)^{\frac{1}{2}}$ and $\phi$ is the standard normal density.
In our case, $\theta = (1 + 1 - 2\cdot\tfrac{1}{2})^{\frac{1}{2}} = 1$, so $E(T) = E(\max(X_1, X_2)) = 2\cdot\tfrac{1}{2} + 2\cdot\tfrac{1}{2} + \phi(0) = 2 + \tfrac{1}{\sqrt{2\pi}} \approx 2.399$.
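As a quick numerical sanity check (not from Cain's paper; a minimal Monte Carlo sketch assuming NumPy, with an arbitrary seed and sample size), we can compare a simulated $E(\max(X_1, X_2))$ with $2 + 1/\sqrt{2\pi}$:

import numpy as np

rng = np.random.default_rng(0)
mean = [2.0, 2.0]
cov = [[1.0, 0.5],   # sigma1^2 = 1, rho*sigma1*sigma2 = 0.5
       [0.5, 1.0]]   # sigma2^2 = 1
x = rng.multivariate_normal(mean, cov, size=1_000_000)
mc = x.max(axis=1).mean()            # Monte Carlo estimate of E(max(X1, X2))
exact = 2 + 1 / np.sqrt(2 * np.pi)   # mu1*Phi(0) + mu2*Phi(0) + theta*phi(0) with theta = 1
print(mc, exact)                     # both are approximately 2.399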

Math Stat 20090731-3

July 31, 2009

If $X_i\sim Poisson(i\lambda)$ independently, then $\sum_{i=1}^nX_i\sim Poisson(\frac{n(n+1)}{2}\lambda)$.
This is easy to verify, since
\[M_{X_i}(t) = \exp(i\lambda(e^t - 1)),\]
and by independence,
\[M_{\sum_{i=1}^nX_i}(t) = \prod_{i=1}^nM_{X_i}(t) = \exp\left(\sum_{i=1}^ni\lambda(e^t - 1)\right) = \exp\left(\frac{n(n+1)}{2}\lambda(e^t - 1)\right),\]
thus, $\sum_{i=1}^nX_i\sim Poisson(\frac{n(n+1)}{2}\lambda)$.
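A small simulation (a sketch only; it assumes the $X_i$ are independent and uses NumPy with arbitrary values $n=5$, $\lambda=0.7$, and an arbitrary seed) shows the first two moments of $\sum_{i=1}^n X_i$ matching the claimed $Poisson(\frac{n(n+1)}{2}\lambda)$ distribution:

import numpy as np

rng = np.random.default_rng(1)
n, lam, reps = 5, 0.7, 1_000_000
# draw X_i ~ Poisson(i * lambda) independently for i = 1..n, `reps` times
x = rng.poisson(lam=np.arange(1, n + 1) * lam, size=(reps, n))
s = x.sum(axis=1)
target = n * (n + 1) / 2 * lam       # = 10.5
print(s.mean(), s.var(), target)     # sample mean and variance are both close to 10.5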

Math Stat 20090731-2

July 31, 2009

Let $\{X_i\}_{i=1}^n$ be i.i.d. $\mathscr N(0, \sigma^2)$. Then $\frac{\sum_{i=1}^nX_i^2}{n}$ is an unbiased MLE for $\sigma^2$, and since $E(X_i^4) = 3\sigma^4$,
\[Var\left(\frac{\sum_{i=1}^nX_i^2}{n}\right) = \frac{3n\sigma^4 + n(n-1)\sigma^4}{n^2} - \sigma^4 = \frac{2\sigma^4}{n},\]
thus
\[\lim_{n\rightarrow\infty}Var\left(\frac{\sum_{i=1}^nX_i^2}{n}\right) = \lim_{n\rightarrow\infty}\frac{2\sigma^4}{n} = 0\]
for any $\sigma^2 < \infty$. The estimator is therefore consistent.
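The rate $2\sigma^4/n$ can also be seen empirically; the following sketch (assuming NumPy, with an arbitrary $\sigma^2 = 2$, seed, and replication count) prints the simulated variance of $\frac{1}{n}\sum X_i^2$ next to $2\sigma^4/n$ for a few sample sizes:

import numpy as np

rng = np.random.default_rng(2)
sigma2, reps = 2.0, 10_000
for n in (10, 100, 1000):
    x = rng.normal(0.0, np.sqrt(sigma2), size=(reps, n))
    est = (x ** 2).mean(axis=1)               # sum(X_i^2)/n for each replication
    print(n, est.var(), 2 * sigma2 ** 2 / n)  # empirical variance vs. 2*sigma^4/n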

Math Stat 20090731-1

July 31, 2009

Let $\{X_i\}_{i=1}^n$ be i.i.d. $Pois(\lambda)$ and let $U=X_1X_2$; then $E(U)=\lambda^2$.
Now let $W=E(X_1X_2|\sum_{i=1}^n X_i)$. Is the Rao-Blackwellized estimator $W$ a UMVUE?
Yes: $\sum_{i=1}^n X_i$ is complete and sufficient for $\lambda$, and $W$ is unbiased (verified below), so Lehmann-Scheffé applies. Explicitly,
\[E(X_1X_2|\sum_{i=1}^n X_i) = E(X_1E(X_2|\sum_{i=1}^nX_i = t, X_1=u)|\sum_{i=1}^nX_i=t)\]
\[=E(X_1\sum_{k=0}^{t-u}k\cdot P(X_2=k|\sum_{i=2}^n X_i = t-u)|\sum_{i=1}^n X_i=t)\]
\[ = E(X_1\sum_{k=0}^{t-u}k\cdot\frac{(t-u)!}{k!(t-u-k)!}\left(\frac{n-2}{n-1}\right)^{t-u-k}\left(\frac{1}{n-1}\right)^{k}|\sum_{i=1}^n X_i=t)\]
\[= E(X_1\frac{t-X_1}{n-1}|\sum_{i=1}^nX_i=t) = \frac{1}{n-1}E(X_1(t-X_1)|\sum_{i=1}^nX_i =t)\]
\[=\frac{1}{n-1}\sum_{k=0}^{t}k(t-k)P(X_1=k|\sum_{i=1}^n X_i=t)\]
\[=\frac{1}{n-1}\sum_{k=0}^tk(t-k)\frac{t!}{k!(t-k)!}\left(\frac{n-1}{n}\right)^{t-k}\left(\frac{1}{n}\right)^{k}\]

\[=\frac{1}{n-1}\left\{\frac{t^2}{n} - \frac{t(n-1)}{n^2} - \frac{t^2}{n^2}\right\} = \frac{1}{n-1}\left\{\frac{t^2(n-1)}{n^2} - \frac{t(n-1)}{n^2}\right\}\]
\[= \frac{t^2 -t }{n^2}\]
and
\[E(W) = E\left(\frac{T^2 - T}{n^2}\right) = \frac{n\lambda + n^2\lambda^2 - n\lambda}{n^2} = \lambda^2\]
thus, $W$ is unbiased.
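Unbiasedness can also be checked numerically; this is only a sketch (NumPy, with arbitrary $n=10$, $\lambda=1.5$, and seed), using the fact that $T=\sum_{i=1}^n X_i\sim Poisson(n\lambda)$:

import numpy as np

rng = np.random.default_rng(3)
n, lam, reps = 10, 1.5, 1_000_000
t = rng.poisson(n * lam, size=reps)   # T ~ Poisson(n*lambda)
w = (t ** 2 - t) / n ** 2             # W = T(T - 1)/n^2
print(w.mean(), lam ** 2)             # both are approximately 2.25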
Note that $W$ is also consistent. Since $T\sim Poisson(n\lambda)$,
\[Var\left(\frac{T^2 - T}{n^2}\right)= \frac{1}{n^4}\left\{E(T^4 - 2T^3 + T^2) - \left[E(T^2 - T)\right]^2\right\} = \frac{4n^3\lambda^3 + 2n^2\lambda^2}{n^4},\]
thus,
\[\lim_{n\rightarrow\infty}Var\left(\frac{T^2 - T}{n^2}\right) = 0,\]
and together with unbiasedness this gives $W\rightarrow\lambda^2$ in probability.
Similarly, to find the UMVUE for $\lambda^4$, we have
\[E[T(T-1)(T-2)(T-3)]=\sum_{k=0}^\infty k(k-1)(k-2)(k-3)\frac{e^{-n\lambda}(n\lambda)^k}{k!} \]
\[= n^4\lambda^4\sum_{k=4}^\infty\frac{e^{-n\lambda}(n\lambda)^{k-4}}{(k-4)!} = n^4\lambda^4\sum_{j=0}^\infty\frac{e^{-n\lambda}(n\lambda)^j}{j!} = n^4\lambda^4,\] thus,
$\frac{T(T-1)(T-2)(T-3)}{n^4}$ is the UMVUE for $\lambda^4$.
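The same kind of Monte Carlo check (again only a sketch, assuming NumPy and arbitrary $n=20$, $\lambda=1.2$, and seed) applies to the fourth factorial moment estimator:

import numpy as np

rng = np.random.default_rng(4)
n, lam, reps = 20, 1.2, 1_000_000
t = rng.poisson(n * lam, size=reps).astype(float)
est = t * (t - 1) * (t - 2) * (t - 3) / n ** 4   # proposed estimator of lambda^4
print(est.mean(), lam ** 4)                      # both are approximately 2.0736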

Let $Y_\lambda\sim Poisson(\lambda)$ with $\lambda$ a positive integer. Using results from the last post, we can write $Y_\lambda = \sum_{i=1}^{\lambda}X_i$ with the $X_i$ i.i.d. $Poisson(1)$, so that $\overline{X_\lambda} = Y_\lambda/\lambda$ and $E(\overline{X_\lambda}) = 1$, $Var(X_i) = 1$.
By CLT, we have $\sqrt{\lambda}(\overline{X_\lambda} - 1)\rightarrow\mathscr N(0, 1)$,
and so $\sqrt{\lambda}(\frac{Y_\lambda}{\lambda} - 1) = \frac{Y_\lambda - \lambda}{\sqrt{\lambda}}\rightarrow\mathscr N(0, 1)$ in distribution.
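To see the normal approximation numerically (a sketch assuming SciPy and an arbitrary $\lambda = 400$), compare a Poisson probability of the form $P(Y_\lambda \leq \lambda + z\sqrt{\lambda})$ with $\Phi(z)$:

import numpy as np
from scipy.stats import norm, poisson

lam, z = 400.0, 1.0
cutoff = lam + z * np.sqrt(lam)     # lambda + z * sqrt(lambda)
print(poisson.cdf(cutoff, mu=lam))  # approximately 0.85
print(norm.cdf(z))                  # 0.8413...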
Now suppose $X_n$ follows a binomial distribution with size $n$ and probability $p$, and let $p=\frac{\lambda}{n}$.

\[\lim_{n\rightarrow\infty}P(X_n = j) = \lim_{n\rightarrow\infty}\left(\begin{array}{c}n \\ j\end{array}\right)p^j(1-p)^{n-j} = \lim_{n\rightarrow\infty}\left(1-\frac{\lambda}{n}\right)^n\frac{n!}{j!(n-j)!}\frac{\lambda^j}{(n-\lambda)^j}\]
\[ = \frac{\lambda^j}{j!}\lim_{n\rightarrow\infty}(1-\frac{\lambda}{n})^n\cdot\lim_{n\rightarrow\infty}\prod_{k=0}^{j-1}\frac{n-k}{n-\lambda} = \frac{\lambda^j}{j!}e^{-\lambda}\cdot\prod_{k=0}^{j-1}\lim_{n\rightarrow\infty}\frac{n-k}{n-\lambda} = \frac{\lambda^j}{j!}e^{-\lambda}\cdot\prod_{k=0}^{j-1}1 = \frac{\lambda^j}{j!}e^{-\lambda} \]
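Numerically (a sketch assuming SciPy and an arbitrary $\lambda = 3$), the $Binomial(n, \lambda/n)$ pmf converges to the $Poisson(\lambda)$ pmf as $n$ grows:

import numpy as np
from scipy.stats import binom, poisson

lam = 3.0
j = np.arange(10)
for n in (10, 100, 10_000):
    err = np.max(np.abs(binom.pmf(j, n, lam / n) - poisson.pmf(j, lam)))
    print(n, err)   # the maximum pmf difference over j = 0..9 shrinks toward 0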

To estimate $P(X_1 + X_2 \leq 0)$ when the $X_i$ are i.i.d. $\mathscr N(\mu, \sigma^2)$, we use the Rao-Blackwell estimator $T^* = E[I(X_1+X_2\leq 0)|\overline{X_n}] = P(X_1+X_2\leq 0|\overline{X_n})$. Using the conditional distribution derived below, we have
$T^* = \Phi\left(-\frac{\sqrt{2}\overline{X_n}}{\sigma\sqrt{1-\frac{2}{n}}}\right)$, and the conditional variance of the original indicator is $Var(I(X_1+X_2\leq 0)|\overline{X_n}) = \Phi\left(-\frac{\sqrt{2}\overline{X_n}}{\sigma\sqrt{1-\frac{2}{n}}}\right)\left(1 - \Phi\left(-\frac{\sqrt{2}\overline{X_n}}{\sigma\sqrt{1-\frac{2}{n}}}\right)\right)$.

Note that, by the Cramér-Rao lower bound (treating $\sigma^2$ as known), for any estimator $T(\boldsymbol x)$ such that $E(T) = P(X_1 + X_2 \leq 0) = \Phi\left(-\frac{\sqrt{2}\mu}{\sigma}\right)$,
\[Var(T) \geq \frac{2\left[\phi\left(-\frac{\sqrt{2}\mu}{\sigma}\right)\right]^2}{n}\]

Let $\boldsymbol x\sim\mathscr N_p(\boldsymbol\mu,\mathbf\Sigma)$; then \[\left(\begin{array}{c}\mathbf A \\ \mathbf B\end{array}\right)\boldsymbol x \sim\mathscr N_r\left(\left(\begin{array}{c}\mathbf A \\ \mathbf B\end{array}\right)\boldsymbol\mu, \left(\begin{array}{c}\mathbf A \\ \mathbf B\end{array}\right)\mathbf\Sigma\left(\begin{array}{cc}\mathbf A' & \mathbf B'\end{array}\right)\right)\]
where $\mathbf A$ is an $r_1\times p$ matrix and $\mathbf B$ is an $r_2\times p$ matrix, $r_1+r_2 = r$.
We have,
\[ \mathbf A\boldsymbol x|\mathbf B\boldsymbol x \sim \mathscr N_{r_1}\left(\mathbf A\boldsymbol\mu + \mathbf{A\Sigma B}'(\mathbf{B\Sigma B}')^{-1}(\mathbf B\boldsymbol x - \mathbf B\boldsymbol\mu), \mathbf{A\Sigma A}' - \mathbf{A\Sigma B}'(\mathbf{B\Sigma B}')^{-1}\mathbf{B\Sigma A}'\right)\]
Now let $\boldsymbol x = (X_1, X_2,\ldots, X_p)'$ where the $X_i$ are i.i.d. normal with mean $\mu$ and variance $\sigma^2$; let $\mathbf A = (1, 1, 0, \ldots, 0)$ and $\mathbf B = \frac{1}{p}\boldsymbol J_p'$, so that $\mathbf A\boldsymbol x = X_1 + X_2$ and $\mathbf B\boldsymbol x = \overline{X_p}$.
We have
\[X_1 + X_2|\overline{X_p}\sim\mathscr N\left(2\overline{X_p}, \sigma^2\left(2 - \frac{4}{p}\right)\right)\]
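The conditional variance $\sigma^2(2 - \frac{4}{p})$ can be verified mechanically; the sketch below (NumPy, with arbitrary $p = 6$ and $\sigma^2 = 1.3$) builds $\mathbf A$, $\mathbf B$, $\mathbf\Sigma$ and evaluates $\mathbf{A\Sigma A}' - \mathbf{A\Sigma B}'(\mathbf{B\Sigma B}')^{-1}\mathbf{B\Sigma A}'$:

import numpy as np

p, sigma2 = 6, 1.3
Sigma = sigma2 * np.eye(p)              # i.i.d. case: Sigma = sigma^2 * I
A = np.zeros((1, p)); A[0, :2] = 1.0    # A x = X1 + X2
B = np.full((1, p), 1.0 / p)            # B x = mean of X1, ..., Xp
cond_var = (A @ Sigma @ A.T
            - A @ Sigma @ B.T @ np.linalg.inv(B @ Sigma @ B.T) @ B @ Sigma @ A.T)
print(cond_var[0, 0], sigma2 * (2 - 4 / p))  # both equal sigma^2 * (2 - 4/p)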
In a previous post, we found that $R_n = \frac{n}{T_n+1}\sim\Gamma(n, \frac{1}{\theta + 1})$. Thus
\[\frac{\sqrt{n}(T_n - \theta)}{2(T_n +1)} = \frac{\sqrt{n}}{2} - \frac{\sqrt{n}(\theta +1)}{2(T_n+1)} = \frac{\sqrt{n}}{2} - \frac{R_n(\theta +1)}{2\sqrt{n}}\]
Then we can apply the delta method:
\[\sqrt{n}(g(T_n) - g(\theta))\rightarrow\mathscr N(0, (\sigma g'(\theta))^2)\] in law.

Math Stat 20090729

July 29, 2009

Let $\{X_i\}_{i=1}^n$ be i.i.d. from $F(\cdot)$, where $f(x) = F'(x) = (\theta + 1)x^\theta$ for $0 < x < 1$ and $f(x) = 0$ elsewhere. Consider:
\[T_n = -\frac{n}{\log(\prod_{i=1}^nX_i)}-1,\] we want to show $T_n\rightarrow\theta$ in probability.
Since $Y_i = -\log (X_i)\sim Exp(\theta+1)$, we have $\sum_{i=1}^n Y_i\sim\Gamma(n, \frac{1}{\theta+1})$, and thus $R_n = (\sum_{i=1}^n Y_i)^{-1}\sim\mathscr{IG}(n, \theta+1)$. Note that $T_n = nR_n - 1$. We have
\[\lim_{n\rightarrow\infty}Var(T_n) = \lim_{n\rightarrow\infty}n^2Var(R_n) = \lim_{n\rightarrow\infty}\frac{n^2(\theta+1)^2}{(n-1)^2(n-2)} = 0\]
and
\[\lim_{n\rightarrow\infty}E(T_n) = \lim_{n\rightarrow\infty}E(nR_n - 1) = \lim_{n\rightarrow\infty}\frac{n(\theta + 1)}{n-1} - 1 = \theta.\]
Thus, by Chebyshev's inequality, $T_n\rightarrow\theta$ in probability.
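A quick simulation of the consistency claim (a sketch assuming NumPy, an arbitrary $\theta = 2$ and seed, and drawing $X_i = U_i^{1/(\theta+1)}$ by inverse transform):

import numpy as np

rng = np.random.default_rng(5)
theta = 2.0
for n in (100, 10_000, 1_000_000):
    u = rng.uniform(size=n)
    x = u ** (1.0 / (theta + 1.0))    # inverse-CDF draw from f(x) = (theta+1) x^theta on (0,1)
    t_n = -n / np.log(x).sum() - 1.0  # T_n = -n / log(prod X_i) - 1
    print(n, t_n)                     # approaches theta = 2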