1.3. Stochastic processes
The main objective of this book is to study certain families of stochastic (or random) processes in discrete time. There are two ways of seeing such objects:
– as a sequence (Xn)n∈ℕ of real random variables;
– as a single random variable X taking values in the set of real sequences.
The index n represents time. Since n ∈ ℕ, we speak of processes in discrete time. In the rest of this book, unless indicated otherwise, we will only consider processes taking discrete real values. The notation E thus denotes a finite or countable subset of ℝ and ε = 𝒫(E), the set of all subsets of E.
DEFINITION 1.18.– A stochastic process is a sequence X = (Xn)n∈ℕ of random variables taking values in (E, ε). The process X is then a random variable taking values in (Eℕ, ε⊗ℕ).
EXAMPLE 1.22.– A coin is tossed an infinite number of times. This experiment is modeled by Ω = {T, H}ℕ∗. For n ∈ ℕ∗, consider the mappings Xn from Ω to ℝ defined by
Xn(ω) = 1 if ωn = T and Xn(ω) = 0 otherwise,
that is, the number of tails obtained at the nth toss. Therefore, Xn, n ∈ ℕ∗, are discrete, real random variables and the sequence X = (Xn)n∈ℕ∗ is a stochastic process.
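As an illustration (not part of the original text), here is a minimal Python sketch of this experiment truncated to a finite number of tosses: it draws an outcome ω and evaluates the corresponding trajectory (X1(ω), ..., XN(ω)). The truncation level N and all names are ad hoc choices for the sketch.

```python
import random

# Finite truncation of the experiment: draw only the first N tosses of omega.
N = 10
omega = tuple(random.choice("TH") for _ in range(N))

def X(n, omega):
    """X_n(omega) = 1 if the n-th toss (n >= 1) is tails, 0 otherwise."""
    return 1 if omega[n - 1] == "T" else 0

trajectory = [X(n, omega) for n in range(1, N + 1)]
print(omega)       # e.g. ('T', 'H', 'H', 'T', ...)
print(trajectory)  # the corresponding realization of (X_1, ..., X_N)
```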
DEFINITION 1.19.– Let X = (Xn)n∈ℕ be a stochastic process. For all n ∈ ℕ, the distribution of the vector (X0, X1,..., Xn) is denoted by μn. The probability distributions (μn)n∈ℕ are called finite-dimensional distributions or finite-dimensional marginal distributions of the process X = (Xn)n∈ℕ.
PROPOSITION 1.10.– Let X = (Xn)n∈ℕ be a stochastic process and let (μn)n∈ℕ be its finite-dimensional distributions. Then, for all n ∈ ℕ∗ and (A0,..., An−1) ∈ εn, we have
μn−1(A0 × ... × An−1) = μn(A0 × ... × An−1 × E).
In other words, the restriction of the marginal distribution of the vector (X0,..., Xn) to its first n coordinates is exactly the distribution of the vector (X0,..., Xn−1).
PROOF.– This follows directly from the definition of these objects. Since the event {Xn ∈ E} has probability 1, we have
μn(A0 × ... × An−1 × E) = ℙ(X0 ∈ A0, ..., Xn−1 ∈ An−1, Xn ∈ E) = ℙ(X0 ∈ A0, ..., Xn−1 ∈ An−1) = μn−1(A0 × ... × An−1),
and hence the desired equality.
□
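On a finite toy example, the compatibility relation of Proposition 1.10 can be verified by brute force. The following Python sketch (an illustration, not from the book) assumes three independent fair tosses with Xn the indicator of tails at toss n, indexed from 0; it computes μ1 and μ2 by enumeration and checks that summing μ2 over its last coordinate recovers μ1.

```python
from itertools import product
from collections import defaultdict

E = (0, 1)  # values of X_n: 1 for tails, 0 for heads

# Toy experiment: three fair, independent tosses; Omega = {T, H}^3 with uniform probability.
Omega = list(product("TH", repeat=3))
P = {omega: 1 / len(Omega) for omega in Omega}

def X(n, omega):
    return 1 if omega[n] == "T" else 0

def mu(n):
    """Finite-dimensional distribution mu_n = law of (X_0, ..., X_n)."""
    law = defaultdict(float)
    for omega, p in P.items():
        law[tuple(X(k, omega) for k in range(n + 1))] += p
    return dict(law)

mu1, mu2 = mu(1), mu(2)

# Compatibility (Proposition 1.10): summing mu_2 over its last coordinate gives mu_1.
for x in product(E, repeat=2):
    assert abs(mu1[x] - sum(mu2[x + (x2,)] for x2 in E)) < 1e-12
print("mu_1 coincides with the marginal of mu_2 on the first two coordinates")
```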
In fact, this compatibility property completely characterizes the distribution of the process X, as the following theorem shows.
THEOREM 1.4 (Kolmogorov).– The canonical space (Ω, ℱ) is defined in the following manner: let Ω = Eℕ, let the coordinate mappings (Xn)n∈ℕ be defined by Xn(ω) = ωn for any ω = (ωn)n∈ℕ ∈ Ω, and write ℱ = σ(Xn, n ∈ ℕ). Let (μn)n∈ℕ be a family of probability distributions such that
1) for any n ∈ ℕ, μn is defined on (En+1, ε⊗(n+1));
2) for any n ∈ ℕ∗ and (A0,..., An−1) ∈ εn, we have μn−1(A0 × ... × An−1) = μn(A0 × ... × An−1 × E).
Then there exists a unique probability distribution μ on the canonical space (Ω, ℱ) such that the coordinate process X = (Xn)n∈ℕ has distribution μ and finite-dimensional distributions (μn)n∈ℕ.
This result is very important in the theory of processes: it means that specifying (all) the finite-dimensional distributions, provided they are compatible with one another, is enough to define a unique process distribution on the space of infinite random sequences. In practice, it can be used both to justify the construction of processes (existence) and to show that two processes have the same distribution (uniqueness).
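For instance (a standard illustration, not detailed at this point in the text), take any probability distribution ν on (E, ε) and set μn = ν⊗(n+1), the distribution of n + 1 independent draws from ν. Condition 2) is then satisfied because ν(E) = 1:
μn(A0 × ... × An−1 × E) = ν(A0) ··· ν(An−1) ν(E) = ν(A0) ··· ν(An−1) = μn−1(A0 × ... × An−1),
so Theorem 1.4 guarantees the existence of a process whose coordinates are independent with common distribution ν; the coin-tossing process of Example 1.22 corresponds to ν({0}) = ν({1}) = 1/2.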
To study the random variables taking values in the set of sequences, we need new definitions for σ-algebras and measurability.
DEFINITION 1.20.– In a probability space (Ω, ℱ, ℙ), a filtration is a sequence (ℱn)n∈ℕ of sub-σ-algebras of ℱ such that, for any n ∈ ℕ, ℱn ⊂ ℱn+1. This is, thus, a non-decreasing sequence (for inclusion) of sub-σ-algebras of ℱ.
When (ℱn)n∈ℕ is a filtration defined on the probability space (Ω, ℱ, ℙ), the quadruplet (Ω, ℱ, ℙ, (ℱn)n∈ℕ) is said to be a filtered probability space.
EXAMPLE 1.23.– Let (Xn)n∈ℕ be a sequence of random variables and consider, for any n ∈ ℕ, ℱn = σ(X0, X1, ..., Xn), the σ-algebra generated by {X0, ..., Xn}. The sequence (ℱn)n∈ℕ is then a filtration, called the natural filtration of (Xn)n∈ℕ, or the filtration generated by (Xn)n∈ℕ. This filtration represents the information revealed over time by observing the successive values of the sequence X = (Xn)n∈ℕ.
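To visualize a natural filtration, here is a minimal Python sketch (an illustration, not part of the text), assuming the coin-toss setting of Example 1.22 truncated to three tosses and re-indexed from 0. Each ℱn is represented by the partition of Ω into its atoms, i.e. the sets of outcomes that cannot be distinguished after observing X0, ..., Xn; the partition refines as n grows, reflecting the information accumulated over time.

```python
from itertools import product
from collections import defaultdict

# Finite toy version of Example 1.22: Omega = {T, H}^3, X_n = 1 if the (n+1)-th toss is tails.
Omega = list(product("TH", repeat=3))

def X(n, omega):
    return 1 if omega[n] == "T" else 0

def atoms(n):
    """Atoms of F_n = sigma(X_0, ..., X_n): outcomes grouped by the observed values."""
    groups = defaultdict(list)
    for omega in Omega:
        groups[tuple(X(k, omega) for k in range(n + 1))].append(omega)
    return list(groups.values())

for n in range(3):
    print(f"F_{n}: {len(atoms(n))} atoms")  # 2, then 4, then 8: the partition refines with n
```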
DEFINITION 1.21.– Let (Ω, ℱ, ℙ, (ℱn)n∈ℕ) be a filtered probability space and let X = (Xn)n∈ℕ be a stochastic process.
– X is said to be adapted to the filtration (ℱn)n∈ℕ (or (ℱn)n∈ℕ-adapted) if Xn is ℱn-measurable for any n ∈ ℕ;
– X is said to be predictable with respect to the filtration (ℱn)n∈ℕ (or (ℱn)n∈ℕ-predictable) if Xn is ℱn−1-measurable for any n ∈ ℕ∗.
EXAMPLE 1.24.– A process is always adapted to its natural filtration.
As its name suggests, the value Xn of a predictable process is already known at time n − 1.
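As an illustration (not taken from the book), adaptedness and predictability can be checked by hand in a finite setting: a random variable is ℱn-measurable exactly when it is constant on the atoms of ℱn. The following Python sketch assumes the coin-toss setting truncated to three tosses, with the running number of tails Sn = X0 + ... + Xn and the shifted process Yn = Sn−1 (with Y0 := 0); all names are ad hoc.

```python
from itertools import product
from collections import defaultdict

# Finite toy setting (illustration only): Omega = {T, H}^3, indices n = 0, 1, 2.
Omega = list(product("TH", repeat=3))

def X(n, omega):
    """X_n(omega) = 1 if the (n+1)-th toss is tails, 0 otherwise."""
    return 1 if omega[n] == "T" else 0

def atoms(n):
    """Atoms of F_n = sigma(X_0, ..., X_n): outcomes indistinguishable after observing X_0, ..., X_n."""
    groups = defaultdict(list)
    for omega in Omega:
        groups[tuple(X(k, omega) for k in range(n + 1))].append(omega)
    return list(groups.values())

def measurable(Z, n):
    """In this finite setting, Z is F_n-measurable iff Z is constant on every atom of F_n."""
    return all(len({Z(omega) for omega in atom}) == 1 for atom in atoms(n))

def S(n):
    """Running number of tails S_n = X_0 + ... + X_n."""
    return lambda omega: sum(X(k, omega) for k in range(n + 1))

def Y(n):
    """Y_n = S_{n-1} for n >= 1, with Y_0 := 0."""
    return lambda omega: 0 if n == 0 else S(n - 1)(omega)

print(all(measurable(S(n), n) for n in range(3)))             # True: (S_n) is adapted
print(all(measurable(Y(n), n - 1) for n in range(1, 3)))      # True: (Y_n) is predictable
print(any(not measurable(S(n), n - 1) for n in range(1, 3)))  # True: (S_n) is not predictable
```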