
Random process

In probability theory, a stochastic process (pronunciation: /stoʊˈkæstɪk/), or sometimes random process, is a collection of random variables; this is often used to represent the evolution of some random value, or system, over time.
This is the probabilistic counterpart to a deterministic process (or deterministic system).
Instead of describing a process which can only evolve in one way (as in the case, for example, of solutions of
an ordinary differential equation),
in a stochastic or random process there is some indeterminacy: even if the initial condition (or starting point) is
known, there are several (often infinitely many) directions in which the process may evolve.
In the simple case of discrete time, a stochastic process amounts to a sequence of random variables known as a time
series (for example, see Markov chain). Another basic type of a stochastic process is a random field, whose domain is
a region of space, in other words, a random function whose arguments are drawn from a range of continuously
changing values. One approach to stochastic processes treats them as functions of one or several deterministic
arguments (inputs, in most cases regarded as time) whose values (outputs) are random variables: non-deterministic
(single) quantities which have certain probability distributions. Random variables corresponding to various times (or
points, in the case of random fields) may be completely different. The main requirement is that these different random
quantities all have the same type. Type refers to the codomain of the function. Although the random values of a
stochastic process at different times may be independent random variables, in most commonly considered situations
they exhibit complicated statistical correlations.
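To make the discrete-time case concrete, here is a minimal illustrative sketch (our own example, not part of the original article): one sample path of a simple discrete-time process defined by the recursion X_{t+1} = a X_t + noise, where the coefficient and noise scale are assumed values chosen only for demonstration. Rerunning it produces a different path, which is exactly the indeterminacy described above.

    # Illustrative sketch: one sample path of a discrete-time stochastic process.
    # The recursion X_{t+1} = a * X_t + noise and its parameters are assumptions
    # chosen for demonstration, not values from the article.
    import random

    a, noise_scale, n_steps = 0.8, 1.0, 50
    x = 0.0
    path = []
    for t in range(n_steps):
        x = a * x + random.gauss(0.0, noise_scale)  # X_{t+1}, a random variable
        path.append(x)

    print(path[:5])  # the first few random values; they change on every run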
Familiar examples of processes modeled as stochastic time series include stock market and exchange
rate fluctuations, signals such as speech, audio and video, medical data such as a patient's EKG, EEG, blood pressure or temperature, and random movement such as Brownian motion or random walks. Examples of random
fields include static images, random terrain (landscapes), wind waves or composition variations of a heterogeneous
material.

Contents

1 Formal definition and basic properties

o 1.1 Definition

o 1.2 Finite-dimensional distributions

2 Construction

o 2.1 Kolmogorov extension

o 2.2 Separability, or what the Kolmogorov extension does not provide

3 Filtrations

o 3.1 The natural filtration

4 Classification

o 4.1 Discrete time and discrete states

o 4.2 Continuous time and continuous state space

o 4.3 Discrete time and continuous state space

o 4.4 Continuous time and discrete state space


5 See also

6 References

7 Further reading
Formal definition and basic properties

Definition

Given a probability space $(\Omega, \mathcal{F}, P)$ and a measurable space $(S, \Sigma)$, an S-valued stochastic process is a collection of S-valued random variables on $\Omega$, indexed by a totally ordered set T ("time"). That is, a stochastic process X is a collection

$$\{ X_t : t \in T \},$$

where each $X_t$ is an S-valued random variable on $\Omega$. The space S is then called the state space of the process.
Finite-dimensional distributions

Let X be an S-valued stochastic process. For every finite subset $T' = \{ t_1, \ldots, t_k \} \subseteq T$, the k-tuple $X_{T'} = (X_{t_1}, X_{t_2}, \ldots, X_{t_k})$ is a random variable taking values in $S^k$. The distribution $\mathbb{P}_{T'}(\cdot) = \mathbb{P}\bigl(X_{T'}^{-1}(\cdot)\bigr)$ of this random variable is a probability measure on $S^k$. This is called a finite-dimensional distribution of X.
Under suitable topological restrictions, a suitably "consistent" collection of finite-dimensional distributions can be
used to define a stochastic process (see Kolmogorov extension in the next section).
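As a concrete, hedged illustration (our own sketch; the process and sample size are assumptions, not taken from the article), the following estimates one finite-dimensional distribution, the joint law of $(X_1, X_3)$, for a simple symmetric random walk by Monte Carlo relative frequencies.

    # Empirically estimate a finite-dimensional distribution of a simple
    # symmetric random walk X_n = xi_1 + ... + xi_n, with xi_i in {-1, +1}.
    # The walk and the sample size are illustrative assumptions.
    import random
    from collections import Counter

    def sample_walk(n_steps):
        """Return one realization (X_1, ..., X_n) of the walk."""
        x, path = 0, []
        for _ in range(n_steps):
            x += random.choice((-1, 1))
            path.append(x)
        return path

    # The joint law of (X_1, X_3) is a probability measure on S^2 with S = Z;
    # approximate it by relative frequencies over many sample paths.
    counts = Counter()
    n_samples = 100_000
    for _ in range(n_samples):
        path = sample_walk(3)
        counts[(path[0], path[2])] += 1

    for pair, count in sorted(counts.items()):
        print(pair, count / n_samples)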
Construction

In the ordinary axiomatization of probability theory by means of measure theory, the problem is to construct a sigma-algebra of measurable subsets of the space of all functions, and then put a finite measure on it. For this purpose one traditionally uses a method called Kolmogorov extension.[1]
There is at least one alternative axiomatization of probability theory by means of expectations on C*-algebras of random variables. In this case the method goes by the name of Gelfand-Naimark-Segal construction.
This is analogous to the two approaches to measure and integration, where one has the choice to construct
measures of sets first and define integrals later, or construct integrals first and define set measures as integrals
of characteristic functions.
Kolmogorov extension
The Kolmogorov extension proceeds along the following lines: assuming that a probability measure on the space of all functions $f : T \to S$ exists, then it can be used to specify the joint probability distribution of the finite-dimensional random variables $X_{t_1}, \ldots, X_{t_n}$. Now, from this n-dimensional probability distribution we can deduce an (n - 1)-dimensional marginal probability distribution for $X_{t_1}, \ldots, X_{t_{n-1}}$. Note that the obvious compatibility condition, namely, that this marginal probability distribution be in the same class as the one derived from the full-blown stochastic process, is not a requirement. Such a condition only holds, for example, if the stochastic process is a Wiener process (in which case the marginals are all Gaussian distributions of the exponential class) but not in general for all stochastic processes. When this condition is expressed in terms of probability densities, the result is called the Chapman-Kolmogorov equation.
The Kolmogorov extension theorem guarantees the existence of a stochastic process with a given family of finite-dimensional probability distributions satisfying the Chapman-Kolmogorov compatibility condition.
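For reference, when the process is Markov, the compatibility condition written in terms of transition densities is usually quoted in the following form (the notation here is ours, and this is the special Markov case rather than the general consistency condition):

$$p(x_3, t_3 \mid x_1, t_1) = \int_S p(x_3, t_3 \mid x_2, t_2)\, p(x_2, t_2 \mid x_1, t_1)\, dx_2, \qquad t_1 < t_2 < t_3.$$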
Separability, or what the Kolmogorov extension does not provide
Recall that in the Kolmogorov axiomatization, measurable sets are the sets which have a probability or, in other words, the sets corresponding to yes/no questions that have a probabilistic answer.
The Kolmogorov extension starts by declaring to be measurable all sets of functions where finitely many coordinates $[f(t_1), \ldots, f(t_n)]$ are restricted to lie in measurable subsets of $S^n$. In other words, if a yes/no question about f can be answered by looking at the values of at most finitely many coordinates, then it has a probabilistic answer.
In measure theory, if we have a countably infinite collection of measurable sets, then the union and the intersection of all of them are measurable sets. For our purposes, this means that yes/no questions that depend on countably many coordinates have a probabilistic answer.
The good news is that the Kolmogorov extension makes it possible to construct stochastic processes with fairly
arbitrary finite-dimensional distributions. Also, every question that one could ask about a sequence has a
probabilistic answer when asked of a random sequence. The bad news is that certain questions about functions
on a continuous domain don't have a probabilistic answer. One might hope that the questions that depend on uncountably many values of a function would be of little interest, but the really bad news is that virtually all concepts
of calculus are of this sort. For example:

1. boundedness
2. continuity
3. differentiability
all require knowledge of uncountably many values of the function.
One solution to this problem is to require that the stochastic process be separable. In other words, that there be
some countable set of coordinates whose values determine the whole random function f.
The Kolmogorov continuity theorem guarantees that processes that satisfy certain constraints on the moments of
their increments have continuous modifications and are therefore separable.
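For completeness, a standard statement of the Kolmogorov continuity criterion (added here as a reminder, in our notation) is: if there exist constants $\alpha, \beta, C > 0$ such that

$$\mathbb{E}\bigl[ |X_t - X_s|^{\alpha} \bigr] \le C\, |t - s|^{1 + \beta} \quad \text{for all } s, t \in T,$$

then X has a modification whose sample paths are continuous.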
Filtrations

Given a probability space $(\Omega, \mathcal{F}, P)$, a filtration is a weakly increasing collection of sigma-algebras on $\Omega$, $\{ \mathcal{F}_t : t \in T \}$, indexed by some totally ordered set T, and bounded above by $\mathcal{F}$, i.e. for $s, t \in T$ with s < t,

$$\mathcal{F}_s \subseteq \mathcal{F}_t \subseteq \mathcal{F}.$$

A stochastic process X on the same time set T is said to be adapted to the filtration if, for every $t \in T$, $X_t$ is $\mathcal{F}_t$-measurable.[2]
The natural filtration

Given a stochastic process $X = \{ X_t : t \in T \}$, the natural filtration for (or induced by) this process is the filtration where $\mathcal{F}_t$ is generated by all values of $X_s$ up to time t, i.e.

$$\mathcal{F}_t = \sigma\bigl( \{ X_s^{-1}(A) : s \le t,\ A \in \Sigma \} \bigr).$$

A stochastic process is always adapted to its natural filtration.
Classification

Stochastic processes can be classified according to the cardinality of their index set T (usually interpreted as time) and of their state space S.
Discrete time and discrete states

If both $t$ and $X_t$ belong to $\mathbb{N}$, the set of natural numbers, then we have models which lead to Markov chains. For example:

(a) If $X_n$ means the bit (0 or 1) in position $n$ of a sequence of transmitted bits, then $X_n$ can be modelled as a Markov chain with two states. This leads to the error-correcting Viterbi algorithm in data transmission; a minimal simulation of such a chain is sketched after these examples.
(b) If $X_n$ means the combined genotype of a breeding couple in the $n$th generation in an inbreeding model, it can be shown that the proportion of heterozygous individuals in the population approaches zero as $n$ goes to $\infty$.[3]
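A minimal simulation sketch of example (a) follows (our own illustration; the transition probabilities are assumed values, not figures from the article):

    # Example (a) as code: a 2-state Markov chain over bits {0, 1}.
    # The transition matrix below is an illustrative assumption.
    import random

    # P[i][j] = probability of moving from bit i to bit j at the next step.
    P = {0: {0: 0.9, 1: 0.1},
         1: {0: 0.2, 1: 0.8}}

    def simulate_bits(n_steps, start=0):
        """Generate one sample path (X_1, ..., X_n) of the bit-valued chain."""
        state, path = start, []
        for _ in range(n_steps):
            state = 0 if random.random() < P[state][0] else 1
            path.append(state)
        return path

    print(simulate_bits(20))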
Continuous time and continuous state space
The paradigm of continuous stochastic process is that of the Wiener process. In its original form the
problem was concerned with a particle floating on a liquid surface, receiving "kicks" from the molecules of
the liquid. The particle is then viewed as being subject to a random force which, since the molecules are
very small and very close together, is treated as being continuous and, since the particle is constrained to
the surface of the liquid by surface tension, is at each point in time a vector parallel to the surface. Thus the
random force is described by a two-component stochastic process; two real-valued random variables are associated to each point in the index set, time (note that since the liquid is viewed as being homogeneous, the force is independent of the spatial coordinates), with the range of the two random variables being R, giving the x and y components of the force. A treatment of Brownian motion generally
also includes the effect of viscosity, resulting in an equation of motion known as the Langevin equation.[4]
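A minimal simulation sketch of such a process (our own; the horizon and step count are assumed values) approximates the two components of the Wiener process by summing independent Gaussian increments:

    # Approximate a two-component Wiener process (W_x(t), W_y(t)) on [0, T]
    # by summing independent N(0, dt) increments. T and n_steps are
    # illustrative assumptions, not values from the article.
    import math
    import random

    T, n_steps = 1.0, 1000
    dt = T / n_steps
    wx, wy = 0.0, 0.0
    path = [(0.0, wx, wy)]
    for k in range(1, n_steps + 1):
        wx += random.gauss(0.0, math.sqrt(dt))  # Gaussian increment, x component
        wy += random.gauss(0.0, math.sqrt(dt))  # Gaussian increment, y component
        path.append((k * dt, wx, wy))

    print(path[-1])  # (time, W_x, W_y) at the final time T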
Discrete time and continuous state space
If the index set of the process is N (the natural numbers) and the range is R (the real numbers), there are some natural questions to ask about the sample sequences of a process $\{ X_i \}_{i \in \mathbb{N}}$, where a sample sequence is $\{ X_i(\omega) \}_{i \in \mathbb{N}}$.

1. What is the probability that each sample sequence is bounded?


2. What is the probability that each sample sequence is monotonic?
3. What is the probability that each sample sequence has a limit as the index approaches $\infty$?

4. What is the probability that the series $\sum_i X_i(\omega)$ obtained from a sample sequence converges?
5. What is the probability distribution of the sum?
Main applications of discrete-time, continuous-state stochastic models include Markov chain Monte Carlo (MCMC) and the analysis of time series.
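As an illustrative sketch of a discrete-time, continuous-state chain in the MCMC spirit (our own example; the target density and proposal scale are assumptions), here is a random-walk Metropolis sampler whose iterates form a Markov chain on R:

    # Random-walk Metropolis: the iterates (X_0, X_1, ...) are a discrete-time
    # Markov chain with continuous state space R. The standard-normal target
    # and the proposal scale are illustrative assumptions.
    import math
    import random

    def log_target(x):
        return -0.5 * x * x  # log of an unnormalized standard normal density

    def metropolis(n_steps, x0=0.0, step=1.0):
        x, chain = x0, []
        for _ in range(n_steps):
            proposal = x + random.gauss(0.0, step)
            log_accept = log_target(proposal) - log_target(x)
            # Accept with probability min(1, target(proposal) / target(x)).
            if log_accept >= 0 or random.random() < math.exp(log_accept):
                x = proposal
            chain.append(x)
        return chain

    samples = metropolis(10_000)
    print(sum(samples) / len(samples))  # sample mean, close to 0 for long runs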
Continuous time and discrete state space
Similarly, if the index space I is a finite or infinite interval, we can ask about the sample paths $\{ X_t(\omega) \}_{t \in I}$ (a small simulation sketch follows these questions):

1. What is the probability that a sample path is bounded/integrable/continuous/differentiable...?
2. What is the probability that a sample path has a limit as $t \to \infty$?
3. What is the probability distribution of the integral $\int_I X_t(\omega)\, dt$?
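A standard example of a continuous-time, discrete-state process is the Poisson counting process; the sketch below (our own illustration; the rate and horizon are assumed values) generates its jump times from exponential inter-arrival times:

    # Simulate a Poisson process N_t on [0, horizon]: continuous time index,
    # integer (discrete) state space. The rate and horizon are illustrative
    # assumptions, not values from the article.
    import random

    def poisson_jump_times(rate, horizon):
        """Return the jump times of a Poisson process with the given rate on [0, horizon]."""
        t, times = 0.0, []
        while True:
            t += random.expovariate(rate)  # exponential inter-arrival time
            if t > horizon:
                return times
            times.append(t)

    jumps = poisson_jump_times(rate=2.0, horizon=10.0)
    print("number of jumps by the horizon (N_10):", len(jumps))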
