Chapter 5. Distributions of Functions of Random Variables

♣ Distributions of Functions of Random Variables
♣ Sampling Distribution Theory
♣ Random Functions Associated with Normal Distributions
♣ The Central Limit Theorem (CLT)
♣ Approximations for Discrete Distributions
♣ Limiting Moment-Generating Functions
♣ Box-Muller Transformation
♣ The Beta, Student's t, and F Distributions

Distributions of Functions of Random Variables

• In this chapter we discuss the distributions of functions of a single random variable X and the distributions of functions of independently distributed random variables.
Example 1. Let X have the p.d.f. f(x) = x e^{−x²/2}, 0 < x < ∞. Then Y = X² has an exponential distribution with mean 2.

Example 2. The p.d.f. of X is f(x) = θx^{θ−1}, 0 < x < 1, 0 < θ < ∞. Then Y = −2θ ln(X) has an exponential distribution with mean 2.

Example 3. Let X have a logistic distribution with p.d.f.

f(x) = \frac{e^{−x}}{(1 + e^{−x})²}, \qquad −∞ < x < ∞.

Then Y = 1/(1 + e^{−X}) has a U(0, 1) distribution.

Example 4. Let X_1 ∼ b(m, p) and X_2 ∼ b(n, p) be independent r.v.'s. Then Y = X_1 + X_2 ∼ b(m + n, p).
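These transformation results are easy to check by simulation. Below is a minimal Python sketch (my addition, not part of the original notes; it assumes NumPy and SciPy are available) that verifies Example 1: X is drawn by inverse transform from its c.d.f. F(x) = 1 − e^{−x²/2}, and Y = X² is compared against an exponential distribution with mean 2.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 100_000

# Example 1: X has p.d.f. f(x) = x e^{-x^2/2}, so F(x) = 1 - e^{-x^2/2}
# and X can be drawn by inverse transform: X = sqrt(-2 ln(1 - U)).
u = rng.uniform(size=n)
x = np.sqrt(-2.0 * np.log(1.0 - u))

# Claim: Y = X^2 is exponential with mean 2 (scale = 2 in SciPy's parametrization).
y = x**2
print("sample mean of Y:", y.mean())                               # close to 2
print("KS p-value vs Exp(2):", stats.kstest(y, "expon", args=(0, 2)).pvalue)
```

The same pattern (simulate, then compare against the claimed distribution) works for Examples 2 through 4.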

Sampling Distribution Theory

♣ The collection of n independent and identically distributed random variables X_1, X_2, \ldots, X_n is called a random sample of size n from their common distribution, e.g., X_j ∼ N(0, 1), 1 ≤ j ≤ n.

♣ Certain functions of a random sample, called statistics, are of interest; for example, the sample mean and the sample variance. Sampling distribution theory refers to the derivation of the distributions of such statistics.

Theorem 1: Let X_1, X_2, \ldots, X_n be n independent r.v.'s with respective means μ_i and variances σ_i². Then Y = \sum_{i=1}^n a_i X_i has mean μ_Y = \sum_{i=1}^n a_i μ_i and variance σ_Y² = \sum_{i=1}^n a_i² σ_i².

Theorem 2: Let X_1, X_2, \ldots, X_n be n independent r.v.'s with respective moment-generating functions M_i(t), 1 ≤ i ≤ n. Then the moment-generating function of Y = \sum_{i=1}^n a_i X_i is M_Y(t) = \prod_{i=1}^n M_i(a_i t).

Corollary: If X_1, X_2, \ldots, X_n are observations of a random sample from a distribution with moment-generating function M(t), then

(a) M_Y(t) = \prod_{i=1}^n M(t) = [M(t)]^n, where Y = \sum_{i=1}^n X_i;

(b) M_{\bar X}(t) = \prod_{i=1}^n M(t/n) = [M(t/n)]^n, where \bar X = \frac{1}{n} \sum_{i=1}^n X_i.

Example 1: Let X_i ∼ b(k, p) be a random sample of size n and define Y = \sum_{i=1}^n X_i. Then M_Y(t) = \prod_{i=1}^n (q + pe^t)^k = (q + pe^t)^{kn}, so Y ∼ b(kn, p).

Example 2: Let X_i ∼ Gamma(1, θ) be a random sample of size n and define Y = \sum_{i=1}^n X_i. Then M_Y(t) = \prod_{i=1}^n (1 − θt)^{−1} = (1 − θt)^{−n}, so Y ∼ Gamma(n, θ).


Random Functions Associated with Normal Distributions

♣ In statistical applications, it is usually assumed that the population from which a sample is taken is N (μ, σ2).

Theorem: Let X_1, X_2, \ldots, X_n be a random sample of size n from N(μ, σ²) and define \bar X = \frac{1}{n} \sum_{i=1}^n X_i. Then \bar X ∼ N(μ, σ²/n).

Theorem: Let X_j ∼ χ²(r_j), 1 ≤ j ≤ n. If X_1, X_2, \ldots, X_n are independent, then Y = \sum_{j=1}^n X_j ∼ χ²(r_1 + r_2 + \cdots + r_n).

Theorem: Let Z_1, Z_2, \ldots, Z_n be a random sample of size n from N(0, 1). Then W = Z_1² + Z_2² + \cdots + Z_n² ∼ χ²(n).

Corollary: Let X_1, \ldots, X_n be independent random variables with X_i ∼ N(μ_i, σ_i²), respectively. Then W = \sum_{i=1}^n (X_i − μ_i)²/σ_i² ∼ χ²(n).

Theorem: Let X_1, \ldots, X_n be observations of a random sample of size n from N(μ, σ²). Define \bar X = \frac{1}{n} \sum_{i=1}^n X_i and S² = \frac{1}{n−1} \sum_{i=1}^n (X_i − \bar X)². Then

(a) \bar X and S² are independent.

(b) \frac{(n−1)S²}{σ²} = \sum_{i=1}^n (X_i − \bar X)²/σ² ∼ χ²(n − 1).

Example 1: Let X_1, X_2, X_3, X_4 be a random sample of size 4 from the normal distribution N(76.4, 383). Then

(a) U = \sum_{i=1}^4 (X_i − 76.4)²/383 ∼ χ²(4), and P(0.711 ≤ U ≤ 7.779) = 0.90 − 0.05 = 0.85.

(b) W = \sum_{i=1}^4 (X_i − \bar X)²/383 ∼ χ²(3), and P(0.352 ≤ W ≤ 6.251) = 0.90 − 0.05 = 0.85.
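Both parts can be confirmed numerically; here is a sketch using SciPy (my addition, not part of the notes): part (a) by a direct χ²(4) c.d.f. lookup in place of the table, and part (b) by simulating samples of size 4 from N(76.4, 383).

```python
import numpy as np
from scipy import stats

# (a) P(0.711 <= U <= 7.779) for U ~ chi-square(4): the table values.
print(stats.chi2.cdf(7.779, 4) - stats.chi2.cdf(0.711, 4))        # ~0.85

# (b) Simulate W = sum (X_i - Xbar)^2 / sigma^2 for samples of size 4
#     from N(76.4, 383) and compare with the chi-square(3) probability.
rng = np.random.default_rng(2)
xs = rng.normal(76.4, np.sqrt(383.0), size=(100_000, 4))
w = ((xs - xs.mean(axis=1, keepdims=True)) ** 2).sum(axis=1) / 383.0
print(np.mean((0.352 <= w) & (w <= 6.251)))                       # ~0.85
```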

Theorem: Let X_i ∼ N(μ_i, σ_i²), 1 ≤ i ≤ n, be independent and define Y = \sum_{i=1}^n a_i X_i. Then Y ∼ N(\sum_{i=1}^n a_i μ_i, \sum_{i=1}^n a_i² σ_i²).

The Central Limit Theorem

Recall: Let X_1, X_2, \ldots, X_n be a random sample of size n from N(μ, σ²) and define \bar X = \frac{1}{n} \sum_{i=1}^n X_i. Then \bar X ∼ N(μ, σ²/n) exactly, for every n.

Theorem (CLT): Let \bar X be the mean of a random sample X_1, X_2, \ldots, X_n of size n from a distribution with mean μ and variance σ². Define W_n = (\bar X − μ)/(σ/\sqrt{n}). Then

(a) W_n = (\sum_{i=1}^n X_i − nμ)/(\sqrt{n}\,σ);

(b) for large n, P(W_n ≤ w) ≈ \int_{−∞}^{w} \frac{1}{\sqrt{2π}} e^{−z²/2} \, dz = Φ(w);

(c) the distribution of W_n converges to N(0, 1) as n → ∞.

Example 1: Let X_1, X_2, \ldots, X_n be a random sample of size n from χ²(1) and define Y = \sum_{i=1}^n X_i. Then

(a) Y ∼ χ²(n);

(b) (Y − n)/\sqrt{2n} ≈ N(0, 1).

Example 2: Let X_1, X_2, \ldots, X_n be a random sample of size n from U(0, 1) and define Y = \sum_{i=1}^n X_i. Then

(Y − 0.5n)/\sqrt{n/12} ≈ N(0, 1).

Example 3: Let X_1, X_2, \ldots, X_n be a random sample of size n from Bernoulli(p) and define Y = \sum_{i=1}^n X_i. Then

(a) Y ∼ b(n, p);

(b) (Y − np)/\sqrt{np(1 − p)} ≈ N(0, 1).

Example 4: Let X_1, X_2, \ldots, X_n be a random sample of size n from an exponential distribution with mean θ and define Y = \sum_{i=1}^n X_i. Then

(a) Y ∼ Gamma(n, θ);

(b) (Y − nθ)/\sqrt{nθ²} ≈ N(0, 1).
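A short simulation (illustrative Python, my addition) makes Example 4 concrete: sums of exponentials with mean θ, standardized as (Y − nθ)/\sqrt{nθ²}, behave like N(0, 1) already for moderate n. Here n = 30 and θ = 2 are arbitrary choices.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n, theta = 30, 2.0                     # arbitrary sample size and mean

# Y = sum of n exponentials with mean theta; standardize as in Example 4(b).
y = rng.exponential(scale=theta, size=(100_000, n)).sum(axis=1)
w = (y - n * theta) / np.sqrt(n * theta**2)

print("mean and variance of W:", w.mean(), w.var())   # ~0 and ~1
print("P(W <= 1.645):", np.mean(w <= 1.645))          # compare with Phi(1.645)
print("Phi(1.645):   ", stats.norm.cdf(1.645))
```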

Approximations for Discrete Distributions

♣ Use the normal distribution to approximate probabilities for certain discrete-type distributions.

Example 1: Let Y ∼ b(10, 1/2), so μ = 5 and σ = \sqrt{10 · (1/2) · (1/2)} = \sqrt{2.5}. Since Y is integer-valued, with the continuity correction,

P(3 ≤ Y < 6) = P(2.5 ≤ Y ≤ 5.5)
≈ Φ((5.5 − 5)/\sqrt{2.5}) − Φ((2.5 − 5)/\sqrt{2.5})
= Φ(0.316) − Φ(−1.581)
= 0.6240 − 0.0570
= 0.5670 (0.5683 by Table II).

Example 2: Let X_1, X_2, \ldots, X_n be a random sample of size n from Poisson(λ) and define Y = \sum_{i=1}^n X_i. Then

(Y − nλ)/\sqrt{nλ} ≈ N(0, 1).

Example 3: Let Y ∼ Poisson(λ = 20). With the continuity correction,

P(16 < Y ≤ 21) = P(16.5 ≤ Y ≤ 21.5)
≈ P[(16.5 − 20)/\sqrt{20} ≤ (Y − 20)/\sqrt{20} ≤ (21.5 − 20)/\sqrt{20}]
= Φ(0.335) − Φ(−0.783)
= 0.4142.

Limiting Moment-Generating Functions
Theorem: If a sequence of moment-generating functions M_n(t) converges to a moment-generating function M(t) for all t in an open interval containing 0, then the corresponding distributions converge to the distribution with moment-generating function M(t).
Example 1: Let Y ∼ b(50, 0.04) and set λ = np = 50 × 0.04 = 2. Then P(Y ≤ 1) = 0.400 exactly, while the Poisson(λ = 2) approximation gives P(Y ≤ 1) ≈ 0.406.
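A quick SciPy check of these two numbers (my addition):

```python
from scipy import stats

print(stats.binom.cdf(1, 50, 0.04))    # exact binomial: ~0.400
print(stats.poisson.cdf(1, 2.0))       # Poisson(lambda = np = 2): ~0.406
```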

Simulating Continuous Distributions
Theorem 5.1-2: Let X have a cumulative distribution function (c.d.f.) F(x) of the continuous type that is strictly increasing (i.e., F(t) > F(s) if t > s) on the support a < x < b. Then the r.v. Y = F(X) has a uniform distribution U(0, 1).
Proof: Since F(a) = 0 and F(b) = 1, for 0 < y < 1 we have

P(Y ≤ y) = P(F(X) ≤ y) = P(X ≤ F^{−1}(y)) = F(F^{−1}(y)) = y.
Thus, Y has a uniform distribution U(0, 1).

• Simulating an exponential distribution with f(x) = \frac{1}{2} e^{−x/2}, 0 < x < ∞:

(1) Y = F(X) = 1 − e^{−X/2} ∼ U(0, 1).
(2) Generate a value y from U(0, 1) and set y = 1 − e^{−x/2}.
(3) Then x = −2 ln(1 − y).
(4) Repeat steps (2) and (3) until the requested sample size is reached.
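Steps (1) through (4) translate directly into code. A minimal Python sketch (the function name is my own):

```python
import numpy as np

def sample_exponential_mean2(size, rng=None):
    """Inverse-c.d.f. sampling: solve y = 1 - e^{-x/2} for x."""
    rng = rng or np.random.default_rng()
    y = rng.uniform(size=size)          # step (2): y ~ U(0, 1)
    return -2.0 * np.log(1.0 - y)       # step (3): x = -2 ln(1 - y)

xs = sample_exponential_mean2(100_000)
print(xs.mean())                        # ~2, the mean of f(x) = (1/2) e^{-x/2}
```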

Example 5.2-6: Box-Muller Transformation

Let X_1 and X_2 be a random sample from U(0, 1), and define

Z_1 = \sqrt{−2 \ln X_1} \cos(2πX_2) \quad and \quad Z_2 = \sqrt{−2 \ln X_1} \sin(2πX_2),

or, equivalently, writing q = z_1² + z_2²,

X_1 = \exp\left(−\frac{Z_1² + Z_2²}{2}\right) = e^{−q/2} \quad and \quad X_2 = \frac{1}{2π} \arctan\frac{Z_2}{Z_1},

which has the Jacobian

J = \begin{vmatrix} −z_1 e^{−q/2} & −z_2 e^{−q/2} \\ \frac{−z_2}{2π(z_1² + z_2²)} & \frac{z_1}{2π(z_1² + z_2²)} \end{vmatrix} = −\frac{1}{2π} e^{−q/2}.

Since the joint p.d.f. of X_1 and X_2 is f(x_1, x_2) = 1, 0 < x_1, x_2 < 1, the joint p.d.f. of Z_1 and Z_2 is

g(z_1, z_2) = |J| · 1 = \frac{1}{2π} \exp[−(z_1² + z_2²)/2], \qquad −∞ < z_1, z_2 < ∞,

i.e., Z_1 and Z_2 are independent N(0, 1) random variables.
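The transformation is equally direct in code. A Python sketch (my addition; the function name is hypothetical):

```python
import numpy as np

def box_muller(n, rng=None):
    """Map pairs (X1, X2) ~ U(0,1) to independent N(0,1) pairs (Z1, Z2)."""
    rng = rng or np.random.default_rng()
    x1 = 1.0 - rng.uniform(size=n)      # in (0, 1], avoids log(0)
    x2 = rng.uniform(size=n)
    r = np.sqrt(-2.0 * np.log(x1))      # radius sqrt(-2 ln X1)
    return r * np.cos(2 * np.pi * x2), r * np.sin(2 * np.pi * x2)

z1, z2 = box_muller(100_000)
print(z1.mean(), z1.var())              # ~0 and ~1
print(np.corrcoef(z1, z2)[0, 1])        # ~0, consistent with independence
```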

The Beta Distribution

Beta: f(x) = \frac{Γ(α + β)}{Γ(α)Γ(β)} x^{α−1}(1 − x)^{β−1}, 0 < x < 1, where α, β > 0.
Example: Let X_1 and X_2 have independent gamma distributions with parameters (α, θ) and (β, θ), respectively. That is, the joint probability density function (p.d.f.) of X_1 and X_2 is

f(x_1, x_2) = \frac{1}{Γ(α)Γ(β)θ^{α+β}} x_1^{α−1} x_2^{β−1} \exp\left(−\frac{x_1 + x_2}{θ}\right), \qquad 0 < x_1, x_2 < ∞, \quad α, β > 0.

Consider

Y_1 = \frac{X_1}{X_1 + X_2}, \qquad Y_2 = X_1 + X_2,

or, equivalently,

X_1 = Y_1 Y_2, \qquad X_2 = Y_2 − Y_1 Y_2.

The Jacobian is

J = \begin{vmatrix} y_2 & y_1 \\ −y_2 & 1 − y_1 \end{vmatrix} = y_2(1 − y_1) + y_1 y_2 = y_2.

Thus, the joint p.d.f. of Y_1 and Y_2 is

g(y_1, y_2) = y_2 \frac{1}{Γ(α)Γ(β)θ^{α+β}} (y_1 y_2)^{α−1} (y_2 − y_1 y_2)^{β−1} e^{−y_2/θ},

where 0 < y_1 < 1 and 0 < y_2 < ∞. The marginal p.d.f. of Y_1 is

g(y_1) = \frac{y_1^{α−1}(1 − y_1)^{β−1}}{Γ(α)Γ(β)} \int_0^∞ \frac{y_2^{α+β−1}}{θ^{α+β}} e^{−y_2/θ} \, dy_2.

The integral equals Γ(α + β) for every θ > 0 (substitute u = y_2/θ), so Y_1 has a beta distribution:

g(y_1) = \frac{Γ(α + β)}{Γ(α)Γ(β)} y_1^{α−1}(1 − y_1)^{β−1}, \qquad 0 < y_1 < 1.

What are E(Y_1) and Var(Y_1)?
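For reference (a standard computation, not worked out in the notes), the beta integral \int_0^1 y^{a−1}(1 − y)^{b−1}\,dy = Γ(a)Γ(b)/Γ(a + b) gives:

```latex
E(Y_1) = \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)}
         \int_0^1 y^{\alpha}(1-y)^{\beta-1}\,dy
       = \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)}
         \cdot \frac{\Gamma(\alpha+1)\,\Gamma(\beta)}{\Gamma(\alpha+\beta+1)}
       = \frac{\alpha}{\alpha+\beta},
\qquad
\operatorname{Var}(Y_1) = \frac{\alpha\beta}{(\alpha+\beta)^{2}(\alpha+\beta+1)}.
```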

[Figure 1: Beta distributions. Panels (a) and (b) plot the p.d.f.s of Beta(2,5), Beta(3,7), Beta(4,4), and Beta(9,2) on (0, 1).]

Student’s t and F Distributions


Random variables whose space is an interval or a union of intervals are said to be of the continuous type. The p.d.f. of a continuous-type r.v. X is an integrable function f(x) satisfying

(a) f(x) > 0, x ∈ R;
(b) \int_R f(x) \, dx = 1;
(c) the probability of the event X ∈ A is P(A) = \int_A f(x) \, dx.

Student's t: Let Z ∼ N(0, 1) and V ∼ χ²(r) be two independent random variables. Define T = Z/\sqrt{V/r}. Then T has a t-distribution with p.d.f.

f(t) = \frac{Γ[(r + 1)/2]}{\sqrt{πr}\, Γ(r/2)} \frac{1}{(1 + t²/r)^{(r+1)/2}}, \qquad −∞ < t < ∞.
F-distribution: Let U ∼ χ²(r_1) and V ∼ χ²(r_2) be two independent random variables. Define W = (U/r_1)/(V/r_2). Then W has an F-distribution with p.d.f.

f(w) = \frac{Γ[(r_1 + r_2)/2] (r_1/r_2)^{r_1/2}}{Γ(r_1/2) Γ(r_2/2)} \frac{w^{(r_1/2)−1}}{(1 + r_1 w/r_2)^{(r_1+r_2)/2}}, \qquad 0 < w < ∞.
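Both constructions can be verified by simulation; a Python sketch (my addition; the degrees of freedom are arbitrary):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n, r, r1, r2 = 100_000, 5, 4, 8        # arbitrary degrees of freedom

# T = Z / sqrt(V/r) with Z ~ N(0,1) and V ~ chi-square(r) independent.
t = rng.standard_normal(n) / np.sqrt(rng.chisquare(r, n) / r)
print("KS p-value vs t(r):     ", stats.kstest(t, "t", args=(r,)).pvalue)

# W = (U/r1) / (V/r2) with U ~ chi-square(r1), V ~ chi-square(r2) independent.
w = (rng.chisquare(r1, n) / r1) / (rng.chisquare(r2, n) / r2)
print("KS p-value vs F(r1, r2):", stats.kstest(w, "f", args=(r1, r2)).pvalue)
```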