These notes contain only the terminology and main calculations in the lectures. For the associated discussions, please come to lecture.
If a random variable $X$ has a density given by
$$ f_X(x) ~ = ~ Ce^{-(ax^2 + bx)}, ~~~ x \in \mathbb{R} $$
then $X$ must be normal. Here $C$ and $a$ are positive constants and $b$ is any real number.
This is because
$$ f_X(x) ~ = ~ \frac{1}{\sqrt{2\pi}\sigma} e^{-\frac{1}{2}\big{(}\frac{x-\mu}{\sigma}\big{)}^2} $$
can be written as
$$ f_X(x) ~ = ~ C e^{-\frac{1}{2\sigma^2} (x^2 -2\mu x)} $$
and two densities can't differ by a constant factor, since both must integrate to 1.
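To read the parameters off the exponent directly, you can complete the square; this is a routine check that uses nothing beyond the formulas above:
$$ ax^2 + bx ~ = ~ a\Big{(}x + \frac{b}{2a}\Big{)}^2 - \frac{b^2}{4a} $$
so the exponent is that of the normal density with $\mu = -\frac{b}{2a}$ and $\sigma^2 = \frac{1}{2a}$, and the constant is forced to be $C = \sqrt{a/\pi}\, e^{-b^2/(4a)}$ so that the density integrates to 1.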
As in Lecture 10, let $X$ and $Y$ have joint density $f_{X,Y}$, and let $(V, W) = g(X, Y)$ for some smooth and invertible function $g: \mathbb{R}^2 \longrightarrow \mathbb{R}^2$.
Then we can write $(V, W) = g(X, Y) = (g_1(X, Y), g_2(X, Y))$ where the two coordinates are $V = g_1(X, Y)$ and $W = g_2(X, Y)$.
Let $J(x, y)$ be the Jacobian matrix given by
$$ J(x, y) ~ = ~ \begin{bmatrix} \frac{\partial g_1(x,y)}{\partial x} & \frac{\partial g_1(x,y)}{\partial y} \\ \frac{\partial g_2(x,y)}{\partial x} & \frac{\partial g_2(x,y)}{\partial y} \end{bmatrix} $$
and let $\det(J(x,y))$ be the determinant of $J(x,y)$.
Finally, let $g^{-1}$ be written as $(x, y) = (h_1(v, w), h_2(v, w))$.
Then the joint density of $V$ and $W$ is given by
$$ f_{V,W}(v,w) ~ = ~ \frac{f_{X,Y}(x,y)}{\text{abs}(\det(J(x,y)))} ~~~~~~ \text{at the point } (x, y) = g^{-1}(v, w) = (h_1(v, w), h_2(v, w)) $$
For an equivalent formula, let $K(v, w)$ be the Jacobian matrix given by
$$ K(v, w) ~ = ~ \begin{bmatrix} \frac{\partial h_1(v,w)}{\partial v} & \frac{\partial h_1(v,w)}{\partial w} \\ \frac{\partial h_2(v,w)}{\partial v} & \frac{\partial h_2(v,w)}{\partial w} \end{bmatrix} $$
Then
$$ f_{V,W}(v,w) ~ = ~ f_{X,Y}(h_1(v, w), h_2(v, w))\cdot \text{abs}(\det(K(v, w))) $$
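The two versions give the same answer because $K(v, w)$ is the matrix inverse of $J(x, y)$ evaluated at $(x, y) = g^{-1}(v, w)$, so the two determinants are reciprocals of each other. Below is a minimal symbolic sketch of that check in Python (assuming `sympy` is available; the particular transformation $v = x+y$, $w = x/(x+y)$ is chosen only for illustration and reappears in the gamma example below):

```python
import sympy as sp

x, y, v, w = sp.symbols('x y v w', positive=True)

# Forward map g: (x, y) -> (v, w), chosen only as an illustration
g1, g2 = x + y, x / (x + y)
J = sp.Matrix([[sp.diff(g1, x), sp.diff(g1, y)],
               [sp.diff(g2, x), sp.diff(g2, y)]])

# Inverse map g^{-1}: (v, w) -> (x, y) = (vw, v(1 - w))
h1, h2 = v * w, v * (1 - w)
K = sp.Matrix([[sp.diff(h1, v), sp.diff(h1, w)],
               [sp.diff(h2, v), sp.diff(h2, w)]])

# det J, evaluated at (x, y) = g^{-1}(v, w), times det K should be 1
detJ_at_inverse = J.det().subs({x: h1, y: h2})
print(sp.simplify(detJ_at_inverse * K.det()))   # -> 1
```

In practice you can therefore use whichever of the two Jacobians is easier to write down.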
Let $X$ and $Y$ be i.i.d. standard normal. Let $V = X+Y$ and $W = X-Y$. Then the inverse transformation is given by
$$ x ~ = ~ \frac{v+w}{2} ~~~~ \text{and} ~~~~ y ~ = ~ \frac{v-w}{2} $$
Also $ J(x, y) ~ = ~ \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} $ so $\text{abs}(\det(J(x,y))) = 2$. Therefore by the first version of the formula,
$$ f_{X+Y, X-Y}(v,w) ~ = ~ \frac{f_{X,Y}(x,y)}{2} ~~~~~~ \text{at the point } (x,y) = ((v+w)/2, (v-w)/2) $$
Now
$$ f_{X,Y}(x, y) ~ = ~ \frac{1}{2\pi}e^{-\frac{1}{2}(x^2 + y^2)} $$
and so
$$ \begin{align*} f_{X+Y, X-Y}(v,w) ~ &= ~ \frac{1}{4\pi}e^{-\frac{1}{2}(x^2 + y^2)} ~~~~~~ \text{at the point } (x,y) = ((v+w)/2, (v-w)/2)\\ &= ~ \frac{1}{4\pi} e^{-\frac{1}{2}\cdot\frac{1}{4}(2v^2 + 2w^2)} \\ &= ~ \frac{1}{\sqrt{2\pi}\sqrt{2}}e^{-\frac{1}{2}\cdot\frac{v^2}{2}} \cdot \frac{1}{\sqrt{2\pi}\sqrt{2}}e^{-\frac{1}{2}\cdot\frac{w^2}{2}} \end{align*} $$
Without doing any more algebra, you can conclude the following: the joint density factors into a function of $v$ alone times a function of $w$ alone, so $X+Y$ and $X-Y$ are independent; and each factor is proportional to a normal density, so each of $X+Y$ and $X-Y$ has a normal distribution.
To completely specify these two normal distributions, you need the means and the variances. The means are both 0. Since $X$ and $Y$ are i.i.d. standard normal,
$$ Var(X+Y) = Var(X) + Var(Y) = 2 ~~~ \text{ and } ~~~ Var(X-Y) = Var(X) + (-1)^2Var(Y) = 2 $$
The distribution of the sum is normal (0, 2), as is the distribution of the difference. In fact, any linear combination of independent normal variables is normal. You can show it by convolution, or by trigonometry as in Pitman Section 5.3. Next week you'll have yet another proof.
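As a quick sanity check, here is a minimal simulation sketch (Python, assuming `numpy` and `scipy` are available; the sample size and seed are arbitrary choices for illustration) comparing the simulated sum and difference with the normal $(0, 2)$ distribution:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 100_000

x = rng.standard_normal(n)
y = rng.standard_normal(n)
v, w = x + y, x - y                       # sum and difference

# Each should be close to normal with mean 0 and variance 2
print(v.var(), w.var())                   # each roughly 2
print(stats.kstest(v, 'norm', args=(0, np.sqrt(2))).pvalue)
print(stats.kstest(w, 'norm', args=(0, np.sqrt(2))).pvalue)

# For jointly normal variables, zero correlation is equivalent to independence
print(np.corrcoef(v, w)[0, 1])            # roughly 0
```

The p-values and the near-zero correlation are consistent with the conclusions above, though of course a simulation is not a proof.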
Let $X$ have the gamma $(r, \lambda)$ density. Let $Y$ be independent of $X$ and have the gamma $(s, \lambda)$ density. Let $V = X+Y$ and $W = X/(X+Y)$.
The range of the sum $V$ is $(0, \infty)$ and the range of the ratio $W$ is $(0, 1)$. For $v > 0$ and $0 < w < 1$, the inverse transformation is given by
$$ x ~ = ~ vw ~~~~ \text{and} ~~~~ y ~ = ~ v(1-w) $$
This time, we will use the second version of the change of variable formula. We have $ K(v, w) ~ = ~ \begin{bmatrix} w & v \\ 1 - w & -v \end{bmatrix} $ and so $\text{abs}(\det(K(v,w))) = \text{abs}(-v) = v$ since $v > 0$.
So for $v > 0$ and $0 < w < 1$, the second version of the change of variable formula says
$$ f_{X+Y, X/(X+Y)}(v, w) ~ = ~ f_{X,Y}(vw, v(1-w)) \cdot v $$
Now for positive $x$ and $y$,
$$ f_{X,Y}(x,y) ~ = ~ \frac{\lambda^r}{\Gamma(r)}x^{r-1}e^{-\lambda x} \cdot \frac{\lambda^s}{\Gamma(s)}y^{s-1}e^{-\lambda y} $$
So for $v > 0$ and $0 < w < 1$,
$$ \begin{align*} f_{X+Y, X/(X+Y)}(v, w) ~ &= ~ \frac{\lambda^r}{\Gamma(r)}(vw)^{r-1}e^{-\lambda vw} \cdot \frac{\lambda^s}{\Gamma(s)}(v(1-w))^{s-1}e^{-\lambda v(1-w)} \cdot v \\ &= ~ \frac{\lambda^{r+s}}{\Gamma(r+s)} v^{r+s-1}e^{-\lambda v} \cdot \frac{\Gamma(r+s)}{\Gamma(r)\Gamma(s)} w^{r-1}(1-w)^{s-1} \end{align*} $$
This formula shows that the joint density factors as the gamma $(r+s, \lambda)$ density of $v$ times a function of $w$ alone. Therefore $V = X+Y$ and $W = X/(X+Y)$ are independent, the sum $X+Y$ has the gamma $(r+s, \lambda)$ distribution, and the density of the ratio $W$ is the second factor:
$$ f_W(w) ~ = ~ \frac{\Gamma(r+s)}{\Gamma(r)\Gamma(s)} w^{r-1}(1-w)^{s-1}, ~~~~ 0 < w < 1 $$
This is called the beta $(r, s)$ density.
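Here is a minimal simulation sketch of these conclusions (Python, assuming `numpy` and `scipy` are available; the values $r = 2$, $s = 3$, $\lambda = 1.5$ and the sample size are arbitrary illustrative choices, and numpy's gamma generator is parametrized by shape and scale $= 1/\lambda$):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 100_000
r, s, lam = 2.0, 3.0, 1.5                 # illustrative parameter values

# numpy's gamma uses shape and scale = 1 / lambda
x = rng.gamma(shape=r, scale=1 / lam, size=n)
y = rng.gamma(shape=s, scale=1 / lam, size=n)
v, w = x + y, x / (x + y)

# V should be gamma(r + s, lambda); W should be beta(r, s)
print(stats.kstest(v, 'gamma', args=(r + s, 0, 1 / lam)).pvalue)
print(stats.kstest(w, 'beta', args=(r, s)).pvalue)

# Consistent with independence of V and W (correlation alone does not prove it)
print(np.corrcoef(v, w)[0, 1])            # roughly 0
```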