# 7. Continuous Random Variables

## Transcript Of 7. Continuous Random Variables

CSE 312, 2011 Autumn, W.L. Ruzzo

7. continuous random variables

continuous random variables

Discrete random variable: takes values in a finite or countable set, e.g.
- X ∈ {1, 2, ..., 6} with equal probability
- X is positive integer i with probability 2^{-i}

Continuous random variable: takes values in an uncountable set, e.g.
- X is the weight of a random person (a real number)
- X is a randomly selected point inside a unit square
- X is the waiting time until the next packet arrives at the server


pdf and cdf

f(x): the probability density function (or simply "density")

F(a) = ∫_{-∞}^{a} f(x) dx

P(X ≤ a) = F(a): the cumulative distribution function (or simply "distribution")

P(a < X ≤ b) = F(b) - F(a)

Need f(x) ≥ 0, and ∫_{-∞}^{+∞} f(x) dx (= F(+∞)) = 1

A key relationship:

f(x) = (d/dx) F(x), since F(a) = ∫_{-∞}^{a} f(x) dx
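A quick numerical sanity check of this relationship (my own sketch, not from the slides; the exponential density with rate λ = 2 is an arbitrary choice):

```python
import math

# Hypothetical example density (not from the slides): exponential with rate lam.
lam = 2.0
f = lambda x: lam * math.exp(-lam * x)   # density f(x), for x >= 0
F = lambda x: 1.0 - math.exp(-lam * x)   # its CDF F(x) = integral of f from -inf to x

# f(x) = (d/dx) F(x): compare a central difference of F against f at x = 0.7.
x, h = 0.7, 1e-6
numeric_derivative = (F(x + h) - F(x - h)) / (2 * h)
assert abs(numeric_derivative - f(x)) < 1e-5

# F(+inf) = 1: total probability is 1 (F tends to 1 as x grows).
assert abs(F(50.0) - 1.0) < 1e-12
```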


densities

Densities are not probabilities; e.g., a density may be > 1.

P(X = a) = P(a ≤ X ≤ a) = F(a) - F(a) = 0

I.e., the probability that a continuous random variable falls at a specified point is zero.

P(a - ε/2 ≤ X ≤ a + ε/2) = F(a + ε/2) - F(a - ε/2) ≈ ε · f(a)

I.e., the probability that it falls near that point is proportional to the density; in a large random sample, expect more samples where the density is higher (hence the name "density").
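The ε · f(a) approximation can be seen by simulation. A small sketch (my own example, not from the slides; it assumes an exponential density with rate 1, sampled by inverse-CDF):

```python
import math
import random

# Hypothetical density (not from the slides): exponential with rate lam = 1.
random.seed(0)
lam = 1.0
f = lambda x: lam * math.exp(-lam * x)

# Inverse-CDF sampling: if U is uniform on [0,1), then X = -ln(1 - U)/lam
# has this exponential density.
n = 200_000
samples = [-math.log(1.0 - random.random()) / lam for _ in range(n)]

# Empirical P(a - eps/2 <= X <= a + eps/2) should be close to eps * f(a).
a, eps = 0.5, 0.1
hits = sum(1 for x in samples if a - eps / 2 <= x <= a + eps / 2)
empirical = hits / n
assert abs(empirical - eps * f(a)) < 0.005
```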


sums and integrals; expectation

Much of what we did with discrete r.v.s carries over almost unchanged, with Σx... replaced by ∫... dx

E.g., for discrete r.v. X, E[X] = Σ_x x p(x); for continuous r.v. X, E[X] = ∫_{-∞}^{+∞} x f(x) dx

Why?

(a) We define it that way.
(b) The probability that X falls "near" x, say within x ± dx/2, is ≈ f(x)dx, so the "average" X should be ≈ Σ x f(x) dx (summed over grid points spaced dx apart on the real line), and the limit of that as dx → 0 is ∫ x f(x) dx.
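The grid-point argument in (b) can be carried out directly. A sketch (the density f(x) = 2x on [0, 1] is my own arbitrary example, for which E[X] = ∫ x · 2x dx = 2/3):

```python
# Hypothetical density (not from the slides): f(x) = 2x on [0, 1], so E[X] = 2/3.
def riemann_expectation(f, lo, hi, n):
    """Approximate E[X] = integral of x f(x) dx by the sum of x * f(x) * dx
    over grid points spaced dx apart (midpoint of each of n cells)."""
    dx = (hi - lo) / n
    return sum((lo + (i + 0.5) * dx) * f(lo + (i + 0.5) * dx) * dx
               for i in range(n))

f = lambda x: 2.0 * x
approx = riemann_expectation(f, 0.0, 1.0, 10_000)
assert abs(approx - 2.0 / 3.0) < 1e-6
```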


example

Let f(x) be the density shown. [Figure: plots of f(x) and F(x), axes marked −1, 0, 1, 2; details not recoverable from the transcript.]


properties of expectation

Linearity:
E[aX + b] = aE[X] + b
E[X + Y] = E[X] + E[Y]
still true, just as for discrete.

Functions of a random variable:
E[g(X)] = ∫ g(x) f(x) dx
just as for discrete, but with an integral.

Alternatively, let Y = g(X), find the density of Y, say f_Y (see B&T 4.1; somewhat like r.v. slides 33-35), and directly compute E[Y] = ∫ y f_Y(y) dy.
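Both routes give the same answer, which is easy to check numerically. A sketch (my own example, not from the slides: X uniform on (0, 1) and g(x) = x², for which Y = X² has density f_Y(y) = 1/(2√y) on (0, 1)):

```python
import math

# Hypothetical example (not from the slides): X uniform on (0,1), g(x) = x^2.
# Route 1: E[g(X)] = integral of g(x) f(x) dx, with f(x) = 1 on (0,1).
# Route 2: Y = g(X) has density f_Y(y) = 1/(2*sqrt(y)); E[Y] = integral of y f_Y(y) dy.
def midpoint_integral(h, lo, hi, n=100_000):
    """Midpoint-rule approximation of the integral of h over [lo, hi]."""
    dx = (hi - lo) / n
    return sum(h(lo + (i + 0.5) * dx) for i in range(n)) * dx

way1 = midpoint_integral(lambda x: x * x * 1.0, 0.0, 1.0)              # g(x) f(x)
way2 = midpoint_integral(lambda y: y / (2.0 * math.sqrt(y)), 0.0, 1.0) # y f_Y(y)

# Both should equal E[X^2] = 1/3.
assert abs(way1 - 1.0 / 3.0) < 1e-6
assert abs(way2 - way1) < 1e-6
```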


variance

Definition is the same as in the discrete case: Var[X] = E[(X - μ)²], where μ = E[X]

Identity still holds: Var[X] = E[X²] - (E[X])²

proof "same"
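A numerical check of the identity (my own example density, not from the slides: f(x) = 2x on [0, 1], for which μ = 2/3, E[X²] = 1/2, and Var[X] = 1/18):

```python
# Hypothetical density (not from the slides): f(x) = 2x on [0, 1].
def midpoint_integral(h, lo=0.0, hi=1.0, n=100_000):
    """Midpoint-rule approximation of the integral of h over [lo, hi]."""
    dx = (hi - lo) / n
    return sum(h(lo + (i + 0.5) * dx) for i in range(n)) * dx

f = lambda x: 2.0 * x
mu = midpoint_integral(lambda x: x * f(x))                    # E[X]   = 2/3
ex2 = midpoint_integral(lambda x: x * x * f(x))               # E[X^2] = 1/2
var_def = midpoint_integral(lambda x: (x - mu) ** 2 * f(x))   # E[(X - mu)^2]
var_id = ex2 - mu ** 2                                        # E[X^2] - (E[X])^2

# The definition and the identity agree, and both give 1/18.
assert abs(var_def - var_id) < 1e-6
assert abs(var_id - 1.0 / 18.0) < 1e-6
```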


example

Let f(x) be the density from the earlier example. [Figure: plots of f(x) and F(x), axes marked −1, 0, 1, 2; details not recoverable from the transcript.]


continuous random variables: summary

Continuous random variable X has density f(x), and
