1.2.2. Random variables
Let us now recall the definition of a generic random variable, and then the specific case of discrete random variables.
DEFINITION 1.9.– Let (Ω, ℱ, ℙ) be a probability space and (E, ε) be a measurable space. A random variable on the probability space (Ω, ℱ, ℙ) taking values in the measurable space (E, ε) is any mapping X : Ω → E such that, for any B in ε, X−1(B) ∈ ℱ; in other words, X : Ω → E is a random variable if it is an (ℱ, ε)-measurable mapping. We then write the event “X belongs to B” as

{X ∈ B} = X−1(B) = {ω ∈ Ω : X(ω) ∈ B}.
In the specific case where E = ℝ and ε = ℬ(ℝ), the mapping X is called a real random variable. If E = ℝᵈ with d ≥ 2 and ε = ℬ(ℝᵈ), the mapping X is said to be a real random vector.
EXAMPLE 1.12.– Let us return to the experiment where a six-sided die is rolled, with the set of possible outcomes Ω = {1, 2, 3, 4, 5, 6}, endowed with the uniform probability. Consider the following game:
– if the result is even, you win €10;
– if the result is odd, you win €20.
This game can be modeled using the random variable X defined by:

X(ω) = 10 if ω ∈ {2, 4, 6},  X(ω) = 20 if ω ∈ {1, 3, 5}.
This mapping is a random variable, since for any B ∈ 𝒫({10, 20}), we have

X−1(B) ∈ {∅, {2, 4, 6}, {1, 3, 5}, Ω},

and all these events are in 𝒫(Ω).
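To make the measurability check concrete, here is a minimal Python sketch (an illustration, not part of the book) that enumerates every subset B of X(Ω) = {10, 20} and prints its preimage; the four preimages are exactly the events listed above.

from itertools import chain, combinations

omega = {1, 2, 3, 4, 5, 6}                       # sample space of the die roll
X = {w: 10 if w % 2 == 0 else 20 for w in omega} # the winnings map of Example 1.12

values = [10, 20]
subsets = chain.from_iterable(combinations(values, r) for r in range(3))
for B in map(set, subsets):
    preimage = {w for w in omega if X[w] in B}   # X^{-1}(B)
    print(sorted(B), "->", sorted(preimage))
# preimages: the empty set, {2, 4, 6}, {1, 3, 5} and {1, ..., 6} = Omega,
# all of which belong to P(Omega), so X is measurable.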
DEFINITION 1.10.– The distribution of a random variable X defined on (Ω, ℱ, ℙ) taking values in (E, ε) is the mapping ℙX : ε → [0, 1] such that, for any B ∈ ε,

ℙX(B) = ℙ(X−1(B)) = ℙ(X ∈ B).
The distribution of X is a probability distribution on (E, ε); it is also called the image distribution of ℙ by X.
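As an illustration (again a sketch, not from the book), the image distribution of the die-game variable can be computed directly from the definition ℙX(B) = ℙ(X−1(B)):

omega = {1, 2, 3, 4, 5, 6}
X = {w: 10 if w % 2 == 0 else 20 for w in omega}

def P(A):                                   # uniform probability on Omega
    return len(A) / len(omega)

def P_X(B):                                 # image distribution of P by X
    return P({w for w in omega if X[w] in B})

print(P_X({10}), P_X({20}), P_X({10, 20}))  # 0.5 0.5 1.0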
DEFINITION 1.11.– A real random variable X is discrete if X(Ω) is at most countable; in other words, if X(Ω) = {xi, i ∈ I}, where I ⊂ ℕ. In this case, the probability distribution of X is characterized by the family (ℙ(X = xi))i∈I.
EXAMPLE 1.13.– Uniform distribution: Let N ∈ ℕ* and {x1, ..., xN} ⊂ ℝ. Let X be a random variable on (Ω, ℱ, ℙ) such that X(Ω) = {x1, ..., xN} and, for any i ∈ {1, ..., N},

ℙ(X = xi) = 1/N.

It is then said that X follows a uniform distribution on {x1, ..., xN}.
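To see the uniform distribution at work, the following sketch (plain Python; the support values are hypothetical) samples it and compares the empirical frequencies with 1/N:

import random
from collections import Counter

xs = [1.5, 2.0, 7.0, 10.0]                 # hypothetical support {x1, ..., xN}
N = len(xs)
draws = [random.choice(xs) for _ in range(100_000)]  # each xi has probability 1/N
freq = Counter(draws)
for x in xs:
    print(x, freq[x] / len(draws), "vs", 1 / N)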
EXAMPLE 1.14.– The Bernoulli distribution: Let p ∈ [0, 1]. Let X be a random variable on (Ω, ℱ, ℙ) such that X(Ω) = {0, 1} and

ℙ(X = 1) = p,  ℙ(X = 0) = 1 − p.
It is then said that X follows a Bernoulli distribution with parameter p, and we write X ∼ ℬ(p).
The Bernoulli distribution models random experiments with two possible outcomes: success, with probability p, and failure, with probability 1 − p. This is the case in the following game. A coin is tossed N times. This experiment is modeled by Ω = {T, H}ᴺ, endowed with the σ-algebra of its subsets and the uniform distribution. For 1 ≤ n ≤ N, we consider the mappings Xn from Ω to ℝ defined by

Xn(ω1, ..., ωN) = 1 if ωn = T, and 0 otherwise,

that is, Xn is the number of tails at the nth toss. Thus, the Xn, 1 ≤ n ≤ N, are real random variables with the Bernoulli distribution of parameter 1/2 if the coin is balanced.
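The coin-toss model can be simulated literally; in the sketch below (plain Python, not from the book, with 0-based toss indices), an outcome ω is drawn from {T, H}ᴺ and the indicators Xn are evaluated, then ℙ(X1 = 1) ≈ 1/2 is checked empirically:

import random

N = 5
def X(n, omega):                           # indicator of tails at toss n (0-based)
    return 1 if omega[n] == "T" else 0

omega = tuple(random.choice("TH") for _ in range(N))  # one outcome in {T, H}^N
print(omega, [X(n, omega) for n in range(N)])

# empirical check that P(X1 = 1) is close to 1/2 for a balanced coin
runs = 100_000
hits = sum(X(0, tuple(random.choice("TH") for _ in range(N))) for _ in range(runs))
print(hits / runs)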
EXAMPLE 1.15.– Binomial distribution: Let N ∈ ℕ*, p ∈ [0, 1], and X be a random variable on (Ω, ℱ, ℙ) such that X(Ω) = {0, 1, ..., N} and, for any k ∈ {0, 1, ..., N},

ℙ(X = k) = C(N, k) p^k (1 − p)^(N−k),

where C(N, k) = N!/(k!(N − k)!) denotes the binomial coefficient.
It is then said that X follows a binomial distribution with parameters N and p, and we write X ∼ ℬ(N, p).
If the Bernoulli experiment with probability of success p is repeated N times independently, then the binomial distribution is the distribution of the random variable counting the number of successes at the end of the N repetitions of the experiment.
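The following sketch (plain Python; the parameters N, p are arbitrary choices) implements this description: it repeats N independent Bernoulli(p) trials, counts the successes, and compares the empirical frequencies with the pmf C(N, k) p^k (1 − p)^(N−k):

import random
from math import comb
from collections import Counter

N, p, runs = 10, 0.3, 200_000
# number of successes among N independent Bernoulli(p) trials, repeated `runs` times
counts = Counter(sum(random.random() < p for _ in range(N)) for _ in range(runs))
for k in range(N + 1):
    pmf = comb(N, k) * p**k * (1 - p) ** (N - k)   # C(N, k) p^k (1-p)^(N-k)
    print(k, round(counts[k] / runs, 4), round(pmf, 4))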
EXAMPLE 1.16.– Hypergeometric distribution: Let n and N be two integers such that n ≤ N, let p ∈ ]0, 1[ be such that pN ∈ ℕ, and let X be a random variable on (Ω, ℱ, ℙ) such that

X(Ω) = {max(0, n − N(1 − p)), ..., min(n, pN)},

and for any k ∈ X(Ω),

ℙ(X = k) = C(pN, k) C(N(1 − p), n − k) / C(N, n).
X is then said to follow a hypergeometric distribution with parameters N, n and p, and we write X ∼ ℋ(N, n, p).
If we consider an urn containing N indistinguishable balls, of which k are red and N − k are white, with k ∈ {1, ..., N − 1}, and if we simultaneously draw n balls, then the random variable X equal to the number of red balls obtained follows a hypergeometric distribution with parameters N, n and p = k/N.
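The urn model can also be simulated directly; the sketch below (plain Python; the urn parameters are hypothetical) draws n balls without replacement and compares the frequency of each red count with the hypergeometric pmf:

import random
from math import comb
from collections import Counter

N, k, n, runs = 20, 8, 5, 200_000          # N balls, k red, n drawn (hypothetical)
urn = ["red"] * k + ["white"] * (N - k)
counts = Counter(random.sample(urn, n).count("red") for _ in range(runs))
for j in range(n + 1):
    pmf = comb(k, j) * comb(N - k, n - j) / comb(N, n)
    print(j, round(counts[j] / runs, 4), round(pmf, 4))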
EXAMPLE 1.17.– Poisson distribution: Let λ > 0 and X be a random variable on (Ω, ℱ, ℙ) such that X(Ω) = ℕ, and for any k ∈ X(Ω),

ℙ(X = k) = e^(−λ) λ^k / k!.
It is then said that X follows a Poisson distribution with parameter λ, and we write X ∼ 𝒫(λ).
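The Poisson pmf is easy to tabulate; this sketch (plain Python; the value of λ is an arbitrary choice) checks that the first terms already sum to nearly 1:

from math import exp, factorial

lam = 2.5                                  # hypothetical parameter, lambda > 0
pmf = [exp(-lam) * lam**k / factorial(k) for k in range(30)]
print([round(q, 4) for q in pmf[:8]])      # P(X = 0), ..., P(X = 7)
print(sum(pmf))                            # partial sum, already close to 1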
DEFINITION 1.12.– Let X be a discrete random variable such that X(Ω) = {xi, i ∈ I}, where I ⊂ ℕ.
– X, or the distribution of X, is said to be integrable (or summable) if

∑_{i∈I} |xi| ℙ(X = xi) < +∞.

– If X is integrable, then the expectation of X is the real number defined by

𝔼[X] = ∑_{i∈I} xi ℙ(X = xi).
EXAMPLE 1.18.– The random variable X defined in Example 1.12 admits an expectation, equal to

𝔼[X] = 10 × ℙ(X = 10) + 20 × ℙ(X = 20) = 10 × 1/2 + 20 × 1/2 = 15.
The average winnings in the die-rolling game are therefore equal to €15.
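In code, this is the definition of the expectation applied verbatim (a trivial sketch, not from the book):

law = {10: 0.5, 20: 0.5}                   # distribution of the winnings X
expectation = sum(x * q for x, q in law.items())
print(expectation)                         # 15.0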
The following proposition establishes a link between the expectation of a discrete random variable and measure theory.
PROPOSITION 1.3.– Let X be a discrete random variable such that X(Ω) = {xi, i ∈ I}, where I ⊂ ℕ. It is assumed that X is integrable. Then,

𝔼[X] = ∫Ω X dℙ.
The above proposition also justifies the concept of integrability introduced in Definition 1.12. Furthermore, in this case (i.e. when X is integrable), we write X ∈ L¹(Ω, ℱ, ℙ).
When |X|^p is integrable for some real number p ≥ 1 (i.e. when 𝔼[|X|^p] < +∞), we write X ∈ Lᵖ(Ω, ℱ, ℙ).
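On a finite Ω, the integral in Proposition 1.3 reduces to a weighted sum, so the identity can be checked directly; the sketch below (plain Python, reusing the die game of Example 1.12) computes 𝔼[X] both as ∫Ω X dℙ and as ∑ xi ℙ(X = xi):

omega = {1, 2, 3, 4, 5, 6}
X = {w: 10 if w % 2 == 0 else 20 for w in omega}
p = {w: 1 / 6 for w in omega}                    # uniform P on Omega

integral = sum(X[w] * p[w] for w in omega)       # integral of X over Omega
law = {x: sum(p[w] for w in omega if X[w] == x) for x in set(X.values())}
by_values = sum(x * q for x, q in law.items())   # sum of xi P(X = xi)
print(integral, by_values)                       # both equal 15.0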
Let us look at some of the properties of the expectation; a numerical illustration follows the proposition.
PROPOSITION 1.4.– Let X and Y be two integrable discrete random variables and a, b ∈ ℝ. Then,

1) Linearity: 𝔼[aX + bY] = a𝔼[X] + b𝔼[Y].

2) Transfer theorem: if g is a measurable function such that g(X) is integrable, then

𝔼[g(X)] = ∑_{i∈I} g(xi) ℙ(X = xi).

3) Monotonicity: if X ≤ Y almost surely (a.s.), then 𝔼[X] ≤ 𝔼[Y].

4) Cauchy–Schwarz inequality: if X² and Y² are integrable, then XY is integrable and

𝔼[|XY|] ≤ √(𝔼[X²] 𝔼[Y²]).

5) Jensen's inequality: if g is a convex function such that g(X) is integrable, then

g(𝔼[X]) ≤ 𝔼[g(X)].
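As announced, here is a small numerical illustration of linearity and of the Jensen inequality with the convex choice g(x) = x² (plain Python; the two random variables are hypothetical, carried by a six-point Ω with uniform probability):

omega = range(6)
X = [float(w) for w in omega]                    # hypothetical r.v. X(w) = w
Y = [(-1.0) ** w for w in omega]                 # hypothetical r.v. Y(w) = (-1)^w

def E(Z):                                        # expectation under the uniform P
    return sum(Z) / len(Z)

a, b = 2.0, -3.0
print(E([a * x + b * y for x, y in zip(X, Y)]), a * E(X) + b * E(Y))  # linearity
print(E(X) ** 2, "<=", E([x**2 for x in X]))     # Jensen with g(x) = x^2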
DEFINITION 1.13.– Let X be a discrete random variable such that X(Ω) = {xi, i ∈ I}, I ⊂ ℕ, and X² is integrable. The variance of X is the real number

Var(X) = 𝔼[(X − 𝔼[X])²].
The variance satisfies the following properties; a numerical check follows the proposition.
PROPOSITION 1.5.– If a discrete random variable X admits a variance, then,

1) Var(X) ≥ 0.

2) Var(X) = 𝔼[X²] − (𝔼[X])².

3) For any (a, b) ∈ ℝ², Var(aX + b) = a² Var(X).
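The announced check (plain Python; the values taken by X are an arbitrary choice) verifies the three properties on a small distribution with uniform probability:

X = [1.0, 2.0, 2.0, 5.0]                         # hypothetical values of X

def E(Z):
    return sum(Z) / len(Z)

def Var(Z):                                      # property 2 used as the formula
    return E([z * z for z in Z]) - E(Z) ** 2

a, b = 3.0, 7.0
print(Var(X) >= 0)                               # property 1: True
print(Var([a * x + b for x in X]), a**2 * Var(X))  # property 3: both 20.25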