2.6 Conditioning (2)
Conditional PMFs, given the value of another random variable
Properties of conditional PMFs.
Contents
Conditioning (1)
- conditioning a random variable on an event
- conditioning one random variable on another
- summary of facts about conditional PMFs
Conditioning (2)
- conditional expectation
- total expectation theorem
+ mean and variance of the geometric (Example 2.17)
+ The Two-Envelopes Paradox (Example 2.18; this one will be covered separately)
Conditional Expectation
Let X and Y be two random variables associated with the same experiment.
- The conditional expectation of \(X\) given an event \(A\) with \(P(A) > 0\), is defined by $$E[X|A] = \sum_x x p_{X|A} (x).$$
For a function \(g(X)\), we have (since a function of a random variable is itself a random variable)
$$E[g(X)|A] = \sum_x g(x) p_{X|A} (x).$$
- The conditional expectation of \(X\) given a value \(y\) of \(Y\) is defined by $$E[X|Y=y] = \sum_x x p_{X|Y} (x|y).$$
- If \(A_1, \cdots , A_n\) are disjoint events that form a partition of the sample space, with \(P(A_i)>0\) for all \(i\), then $$E[X] = \sum_{i=1}^n P(A_i)E[X|A_i].$$
Furthermore, for any event \(B\) with \(P(A_i \cap B) > 0\) for all \(i\),
we have $$E[X|B] =\sum_{i=1}^n P(A_i|B)E[X|A_i \cap B].$$
- We have $$E[X] = \sum_y p_Y(y)E[X|Y=y]$$ (see the numerical sketch after this list).
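A minimal numerical sketch of the last identity, using a small made-up joint PMF for \((X, Y)\) (the values and probabilities below are hypothetical, chosen only for illustration):

```python
# Hypothetical joint PMF p_{X,Y}(x, y); the numbers are made up for illustration.
joint = {
    (1, 0): 0.10, (2, 0): 0.20, (3, 0): 0.10,
    (1, 1): 0.30, (2, 1): 0.10, (3, 1): 0.20,
}

# Marginal PMF of Y: p_Y(y) = sum_x p_{X,Y}(x, y)
p_Y = {}
for (x, y), p in joint.items():
    p_Y[y] = p_Y.get(y, 0.0) + p

# Conditional expectation E[X | Y = y] = sum_x x * p_{X|Y}(x | y)
def cond_exp_X_given_y(y):
    return sum(x * p / p_Y[y] for (x, y2), p in joint.items() if y2 == y)

# Total expectation theorem: E[X] = sum_y p_Y(y) * E[X | Y = y]
E_X_conditioned = sum(p_Y[y] * cond_exp_X_given_y(y) for y in p_Y)

# Direct computation from the joint PMF, for comparison
E_X_direct = sum(x * p for (x, y), p in joint.items())

print(E_X_conditioned, E_X_direct)  # both equal 1.9, up to floating-point rounding
```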
Total Expectation Theorem
These identities follow from the total probability theorem.
: "The unconditional average can be obtained by averaging the conditional averages."
They can be used to calculate the unconditional expectation \(E[X]\) from the conditional PMF or expectation, using a divide-and-conquer approach.
Proof.
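For the version conditioned on the values of \(Y\), use the total probability theorem for PMFs, \(p_X(x) = \sum_y p_Y(y) p_{X|Y}(x|y)\):
$$E[X] = \sum_x x\, p_X(x) = \sum_x x \sum_y p_Y(y)\, p_{X|Y}(x|y) = \sum_y p_Y(y) \sum_x x\, p_{X|Y}(x|y) = \sum_y p_Y(y)\, E[X|Y=y].$$
The versions with the partition \(A_1, \dots, A_n\) (and with the extra conditioning on \(B\)) follow from the same steps, with \(p_{X|A_i}\) in place of \(p_{X|Y}\).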
Mean and Variance of the Geometric
Question.
You write a software program over and over, and each time there is probability \(p\) that it works correctly, independently of previous attempts.
What are the mean and variance of \(X\), the number of tries until the program works correctly?
Answer.
We recognize \(X\) as a geometric random variable with PMF
$$p_X(k) = (1-p)^{k-1} p, \space k = 1, 2, \dots .$$
The mean and variance of \(X\) are given by
$$E[X] = \sum_{k=1}^{\infty} k(1-p)^{k-1}p, \qquad \text{var}(X) = \sum_{k=1}^{\infty} (k - E[X])^2 (1-p)^{k-1}p,$$
but evaluating these infinite sums directly is tedious.
As an alternative, we will apply the total expectation theorem, conditioning on the two events
$$A_1 = \{X = 1\} = \{\text{first try is a success}\}, \qquad A_2 = \{X > 1\} = \{\text{first try is a failure}\}.$$
To recap: \(X\) is the geometric random variable counting the number of tries until the program works, and \(E[X]\) is its mean.
- \(E[X|X=1]\) is the expectation given that the first try succeeds; in that case we are done after one try, so \(E[X|X=1] = 1\).
- \(E[X|X>1]\) is the expectation given that the first try fails; we then start over from scratch, but one attempt has already been spent, so \(E[X|X>1] = 1 + E[X]\).
Thus,
$$E[X] = P(X=1)E[X|X=1] + P(X>1)E[X|X>1] = p + (1-p)(1+E[X]),$$
from which we obtain \(E[X]\) as
$$E[X] = \frac{1}{p}.$$
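The question also asks for the variance, and the same conditioning trick applies to \(E[X^2]\): given a failed first try, the remaining number of tries is again geometric, so \(E[X^2|X=1] = 1\) and \(E[X^2|X>1] = E[(1+X)^2] = 1 + 2E[X] + E[X^2]\). Thus
$$E[X^2] = p \cdot 1 + (1-p)\big(1 + 2E[X] + E[X^2]\big),$$
and solving with \(E[X] = 1/p\) gives
$$E[X^2] = \frac{2-p}{p^2}, \qquad \text{var}(X) = E[X^2] - (E[X])^2 = \frac{1-p}{p^2}.$$
As a quick sanity check, a small simulation sketch (the value of \(p\) below is arbitrary, for illustration only):

```python
import random

p = 0.3          # arbitrary success probability, for illustration only
n = 200_000      # number of simulated runs

samples = []
for _ in range(n):
    tries = 1
    # keep retrying until the "program works" (success with probability p)
    while random.random() >= p:
        tries += 1
    samples.append(tries)

mean = sum(samples) / n
var = sum((x - mean) ** 2 for x in samples) / n

print(mean, 1 / p)           # sample mean vs. theoretical 1/p
print(var, (1 - p) / p**2)   # sample variance vs. theoretical (1-p)/p^2
```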
References
Bertsekas, D. P., & Tsitsiklis, J. N. (2008). Introduction to Probability (2nd ed.). Athena Scientific.