Stat 200A Fall 2018

A. Adhikari

UC Berkeley

Lecture 10, Tuesday 9/25

These notes contain only the terminology and main calculations in the lectures. For the associated discussions, please come to lecture.

This lecture was on densities of transformations of random variables. The first part, on transformations of a single variable, is in Chapter 16 of the Prob 140 textbook.

Here is the part on transformations of two variables that have a joint density.

Change of Variable Formula for Joint Densities

This is an extension of the change of variable formula for a transformation of a single variable. A proof is outlined in Section 2 of a very clear set of notes from MIT. If you remember your multivariable calculus, you will see that the formula is exactly analogous to the single-variable formula discussed in class.

Let $X$ and $Y$ have joint density $f_{X,Y}$, and let $(V, W) = g(X, Y)$ for some smooth and invertible function $g: \mathbb{R}^2 \longrightarrow \mathbb{R}^2$.

Then we can write $(V, W) = g(X, Y) = (g_1(X, Y), g_2(X, Y))$ where the two coordinates are $V = g_1(X, Y)$ and $W = g_2(X, Y)$.

Let $J(x, y)$ be the Jacobian matrix given by

$$ J(x, y) ~ = ~ \begin{bmatrix} \frac{\partial g_1(x,y)}{\partial x} & \frac{\partial g_1(x,y)}{\partial y} \\ \frac{\partial g_2(x,y)}{\partial x} & \frac{\partial g_2(x,y)}{\partial y} \end{bmatrix} $$

and let $\det(J(x,y))$ be the determinant of $J(x,y)$.

Finally, write the inverse in coordinates as $(x, y) = g^{-1}(v, w) = (h_1(v, w), h_2(v, w))$.

First Version of Formula

Then the joint density of $V$ and $W$ is given by

$$ f_{V,W}(v,w) ~ = ~ \frac{f_{X,Y}(x,y)}{\text{abs}(\det(J(x,y)))} ~~~~~~ \text{at the point } (x, y) = g^{-1}(v, w) = (h_1(v, w), h_2(v, w)) $$
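As a sanity check (not from the lecture), here is a minimal numerical sketch assuming numpy and scipy. The transformation $g(x, y) = (x+y, x-y)$ and the i.i.d. standard normal $X$ and $Y$ are illustrative choices for which the answer is known: $V = X+Y$ and $W = X-Y$ are independent normal $(0, 2)$ variables, so the formula should reproduce the product of those two densities.

```python
# Sanity check of the change of variable formula on an illustrative
# example: X, Y i.i.d. standard normal and g(x, y) = (x + y, x - y).
# Here det(J(x, y)) = -2 and the inverse is x = (v+w)/2, y = (v-w)/2.
import numpy as np
from scipy.stats import norm

def f_XY(x, y):
    """Joint density of X and Y: i.i.d. standard normal."""
    return norm.pdf(x) * norm.pdf(y)

def f_VW(v, w):
    """Joint density of (V, W) = g(X, Y) by the change of variable formula."""
    x, y = (v + w) / 2, (v - w) / 2      # the point g^{-1}(v, w)
    return f_XY(x, y) / 2                # divide by abs(det(J)) = 2

# For this g, V and W are known to be independent normal (0, 2),
# so the formula should agree with the product of those densities.
rng = np.random.default_rng(0)
for v, w in rng.normal(size=(5, 2)):
    expected = norm.pdf(v, scale=np.sqrt(2)) * norm.pdf(w, scale=np.sqrt(2))
    assert np.isclose(f_VW(v, w), expected)
```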

Example: Joint Density of $X$ and $X+Y$

Let $g(X, Y) = (X, X+Y)$. In the notation of the formulas above:

  • $g_1(x, y) = x = v$ and $g_2(x, y) = x+y = w$
  • $x = h_1(v, w) = v$ and $y = h_2(v, w) = w-v$

Also $ J(x, y) ~ = ~ \begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix} $ so $\text{abs}(\det(J(x,y))) = 1$. Therefore by the first version of the formula,

$$ f_{X, X+Y}(v, w) ~ = ~ \frac{f_{X,Y}(x,y)}{1} ~~~~~~ \text{at the point } (x,y) = (v, w-v) $$

So

$$ f_{X, X+Y}(v,w) ~ = ~ f_{X,Y}(v, w-v) $$
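The Jacobian computation above can be verified symbolically; here is a minimal sketch, assuming sympy.

```python
# Symbolic check of the Jacobian for g(x, y) = (x, x + y).
import sympy as sp

x, y = sp.symbols('x y')
g = sp.Matrix([x, x + y])       # coordinates g_1 = x, g_2 = x + y
J = g.jacobian([x, y])          # matrix of partial derivatives

print(J)        # Matrix([[1, 0], [1, 1]])
print(J.det())  # 1, so abs(det(J(x, y))) = 1
```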

Main Consequence: Density of $X+Y$

The density of the sum $X+Y$ is the marginal of $f_{X, X+Y}$, obtained by integrating out the first coordinate:

$$ f_{X+Y}(w) ~ = ~ \int_{-\infty}^{\infty} f_{X,Y}(v, w-v)dv $$

If $X$ and $Y$ are independent, then this becomes the density convolution formula

$$ f_{X+Y}(w) ~ = ~ \int_{-\infty}^{\infty} f_X(v)f_Y(w-v)dv $$
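Here is a minimal numerical sketch of the convolution formula, assuming scipy. The standard normal densities are an illustrative choice, using the standard fact that the sum of two independent standard normals is normal $(0, 2)$.

```python
# Numerical check of the convolution formula: for X, Y i.i.d. standard
# normal, X + Y is normal (0, 2), so the convolution integral should
# reproduce the normal (0, 2) density.
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def f_sum(w):
    """Density of X + Y at w by the convolution formula."""
    integrand = lambda v: norm.pdf(v) * norm.pdf(w - v)
    return quad(integrand, -np.inf, np.inf)[0]

for w in [-2.0, 0.0, 0.5, 1.7, 3.0]:
    assert np.isclose(f_sum(w), norm.pdf(w, scale=np.sqrt(2)))
```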

If $X$ and $Y$ are independent as well as non-negative, then $f_X$ and $f_Y$ are 0 on the negative numbers, so $f_X(v) = 0$ for $v < 0$ and $f_Y(w-v) = 0$ for $v > w$, and the convolution formula becomes

$$ f_{X+Y}(w) ~ = ~ \int_0^w f_X(v)f_Y(w-v)dv $$

Sum of Two IID Exponential Variables

Let $X$ and $Y$ be i.i.d. exponential $(\lambda)$ variables. The convolution formula says that the density of $X+Y$ is given by

$$ \begin{align*} f_{X+Y}(w) ~ &= ~ \int_0^w \lambda e^{-\lambda v} \cdot \lambda e^{-\lambda (w-v)} dv \\ &= ~ \lambda^2 e^{-\lambda w} \int_0^w 1 dv \\ &= ~ \lambda^2 w e^{-\lambda w} \end{align*} $$
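As a quick check (not from the lecture), here is a minimal Monte Carlo sketch assuming numpy; $\lambda = 1.5$ and the sample size are arbitrary illustrative choices. It compares a histogram of simulated sums with the density just derived.

```python
# Monte Carlo comparison of simulated sums of two i.i.d. exponential
# (lambda) variables against the derived density lambda^2 * w * e^(-lambda w).
# lambda = 1.5 and the sample size are arbitrary illustrative choices.
import numpy as np

lam = 1.5
rng = np.random.default_rng(0)
# numpy parametrizes the exponential by its mean, so scale = 1/lambda
sums = rng.exponential(scale=1/lam, size=(2, 1_000_000)).sum(axis=0)

# Histogram of the simulated sums versus the derived density at bin midpoints.
edges = np.linspace(0, 6, 61)
hist, _ = np.histogram(sums, bins=edges, density=True)
mids = (edges[:-1] + edges[1:]) / 2
derived = lam**2 * mids * np.exp(-lam * mids)
print(np.abs(hist - derived).max())   # small, up to binning and sampling error
```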

This is the gamma $(2, \lambda)$ density. In the next lecture we will study sums of independent gamma and normal variables.