This article is about the univariate normal distribution. For normally distributed vectors, see Multivariate normal distribution.

In probability theory, the normal (or Gaussian) distribution is a very common continuous probability distribution. Normal distributions are important in statistics and are often used in the natural and social sciences to represent real-valued random variables whose distributions are not known.[1][2]

The normal distribution is remarkably useful because of the central limit theorem. In its most general form, under mild conditions, it states that averages of random variables independently drawn from independent distributions are normally distributed. Physical quantities that are expected to be the sum of many independent processes (such as measurement errors) often have distributions that are nearly normal.[3] Moreover, many results and methods (such as propagation of uncertainty and least squares parameter fitting) can be derived analytically in explicit form when the relevant variables are normally distributed.

The normal distribution is sometimes informally called the bell curve. However, many other distributions are bell-shaped (such as Cauchy's, Student's, and logistic). The terms Gaussian function and Gaussian bell curve are also ambiguous because they sometimes refer to multiples of the normal distribution that cannot be directly interpreted in terms of probabilities.

The normal distribution is a subclass of the elliptical distributions. The normal distribution is symmetric about its mean, and is non-zero over the entire real line. As such it may not be a suitable model for variables that are inherently positive or strongly skewed, such as the weight of a person or the price of a share. Such variables may be better described by other distributions, such as the log-normal distribution or the Pareto distribution.

The value of the normal distribution is practically zero when the value x lies more than a few standard deviations away from the mean. Therefore, it may not be an appropriate model when one expects a significant fraction of outliers, values that lie many standard deviations away from the mean; least squares and other statistical inference methods that are optimal for normally distributed variables often become highly unreliable when applied to such data. In those cases, a more heavy-tailed distribution should be assumed and the appropriate robust statistical inference methods applied.

The Gaussian distribution belongs to the family of stable distributions, which are the attractors of sums of independent, identically distributed distributions whether or not the mean or variance is finite. Except for the Gaussian, which is a limiting case, all stable distributions have heavy tails and infinite variance.
1 Definition

The probability density of the normal distribution is

    f(x, μ, σ) = 1/√(2πσ²) · e^(−(x−μ)²/(2σ²))

Here, μ is the mean or expectation of the distribution (and also its median and mode). The parameter σ is its standard deviation; its variance is then σ². A random variable with a Gaussian distribution is said to be normally distributed and is called a normal deviate.
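A minimal Python sketch (an illustration, not part of the article) that evaluates this density directly:

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Density f(x, mu, sigma) = exp(-(x-mu)^2 / (2 sigma^2)) / sqrt(2 pi sigma^2)."""
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / math.sqrt(2 * math.pi * sigma ** 2)

# The density peaks at the mean, where f(mu) = 1 / (sigma * sqrt(2 pi)),
# and is symmetric about mu.
print(normal_pdf(0.0))
print(normal_pdf(5.0, mu=5.0, sigma=2.0))
```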
1.1 Standard normal distribution

The simplest case of a normal distribution is known as the standard normal distribution, with μ = 0 and σ = 1. Its probability density function is

    φ(x) = 1/√(2π) · e^(−x²/2)

Some authors normalize differently, for example taking the density to be e^(−x²)/√π (variance 1/2) or e^(−πx²) (variance 1/(2π)). The latter convention is sometimes advocated because of a much simpler and easier-to-remember formula, the fact that the pdf has unit height at zero, and the simple approximate formulas for the quantiles of the distribution.

1.2 General normal distribution

Every normal distribution is a version of the standard normal distribution whose domain has been stretched by a factor σ (the standard deviation) and then translated by μ (the mean value):

    f(x, μ, σ) = (1/σ) φ((x − μ)/σ)

The probability density must be scaled by 1/σ so that the integral is still 1. Every normal density is the exponential of a quadratic function ax² + bx + c with a < 0.

If Z is a standard normal deviate, then X = σZ + μ will have a normal distribution with expected value μ and standard deviation σ. Conversely, if X is a general normal deviate, then Z = (X − μ)/σ will have a standard normal distribution.

1.3 Notation

The normal distribution with mean μ and variance σ² is denoted

    X ~ N(μ, σ²).
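The standardization relations X = σZ + μ and Z = (X − μ)/σ can be sketched in Python (a trivial illustration, not from the article):

```python
# Turn a standard normal deviate Z into X ~ N(mu, sigma^2), and back.
def from_standard(z, mu, sigma):
    return sigma * z + mu      # X = sigma * Z + mu

def to_standard(x, mu, sigma):
    return (x - mu) / sigma    # Z = (X - mu) / sigma

x = from_standard(1.5, mu=10.0, sigma=2.0)   # 13.0
z = to_standard(x, mu=10.0, sigma=2.0)       # back to 1.5
print(x, z)
```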
2 Properties

The density of a normal deviate satisfies the differential equation

    σ² f′(x) + (x − μ) f(x) = 0,    with f(0) = 1/√(2πσ²) · e^(−μ²/(2σ²)),

or, in the standard case,

    f′(x) + x f(x) = 0,    with f(0) = 1/√(2π).

2.2 Moments

If X has a normal distribution with mean μ = 0, its plain central moments are

    E[X^p] = 0                 if p is odd,
    E[X^p] = σ^p (p − 1)!!     if p is even.

Here n!! denotes the double factorial, that is, the product of every number from n to 1 that has the same parity as n.

The central absolute moments coincide with plain moments for all even orders, but are nonzero for odd orders. For any non-negative integer p,

    E[|X|^p] = σ^p (p − 1)!! · √(2/π)     if p is odd,
    E[|X|^p] = σ^p (p − 1)!!              if p is even.

The last formula is valid also for any non-integer p > −1. When the mean μ is not zero, the plain and absolute moments can be expressed in terms of confluent hypergeometric functions ₁F₁ and U:

    E[X^p] = σ^p (−i√2)^p U(−p/2, 1/2, −(μ/σ)²/2),
    E[|X|^p] = σ^p 2^(p/2) Γ((1+p)/2)/√π · ₁F₁(−p/2, 1/2, −(μ/σ)²/2).

The moment generating function of a real random variable X is the expected value of e^(tX), as a function of the real parameter t. For a normal distribution with mean μ and deviation σ, the moment generating function exists and is equal to

    M(t) = e^(μt) e^(σ²t²/2),

and the characteristic function is φ(t) = M(it) = e^(iμt − σ²t²/2). The cumulant generating function is the logarithm of the moment generating function, namely

    g(t) = ln M(t) = μt + (1/2) σ² t².
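The even-moment formula E[X^p] = σ^p (p − 1)!! can be checked numerically (a Python sketch using a crude midpoint-rule integration, not part of the article):

```python
import math

def double_factorial(n):
    # n!! = n * (n - 2) * (n - 4) * ... down to 1 or 2
    result = 1
    while n > 1:
        result *= n
        n -= 2
    return result

def central_moment_numeric(p, sigma, steps=200000, half_width=12.0):
    # Midpoint-rule approximation of E[(X - mu)^p] for X ~ N(mu, sigma^2);
    # the integrand is negligible beyond a dozen standard deviations.
    dx = 2 * half_width * sigma / steps
    total = 0.0
    for i in range(steps):
        x = -half_width * sigma + (i + 0.5) * dx
        pdf = math.exp(-x * x / (2 * sigma ** 2)) / math.sqrt(2 * math.pi * sigma ** 2)
        total += x ** p * pdf * dx
    return total

sigma = 1.5
for p in (2, 4, 6):
    closed_form = sigma ** p * double_factorial(p - 1)   # sigma^p (p-1)!!
    print(p, closed_form, central_moment_numeric(p, sigma))
```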
The cumulative distribution function (CDF) of the standard normal distribution, usually denoted with the capital Greek letter Φ (phi), is the integral

    Φ(x) = 1/√(2π) ∫_{−∞}^{x} e^(−t²/2) dt,

while the related error function is

    erf(x) = 2/√π ∫₀^{x} e^(−t²) dt.

These integrals cannot be expressed in terms of elementary functions, and are often said to be special functions. However, many numerical approximations are known; see below. The two functions are closely related, namely

    Φ(x) = (1/2) [1 + erf(x/√2)].

Example of a Pascal function to calculate the CDF (as the sum of the first 100 terms of its Taylor series):

function CDF(x: extended): extended;
var
  value, sum: extended;
  i: integer;
begin
  sum := x;
  value := x;
  for i := 1 to 100 do
  begin
    value := value * x * x / (2 * i + 1);
    sum := sum + value;
  end;
  result := 0.5 + (sum / sqrt(2 * pi)) * exp(-(x * x) / 2);
end;
For a generic normal distribution f with mean μ and deviation σ, the cumulative distribution function is

    F(x) = Φ((x − μ)/σ) = (1/2) [1 + erf((x − μ)/(σ√2))].

[Figure: For the normal distribution, the values less than one standard deviation away from the mean account for 68.27% of the set; two standard deviations from the mean account for 95.45%; and three standard deviations account for 99.73%.]
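The 68.27%/95.45%/99.73% figures follow directly from the erf relation above; a quick Python check (illustrative, not from the article):

```python
import math

def normal_cdf(x, mu=0.0, sigma=1.0):
    # F(x) = (1/2) * (1 + erf((x - mu) / (sigma * sqrt(2))))
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

# Probability mass within k standard deviations of the mean:
for k in (1, 2, 3):
    p = normal_cdf(k) - normal_cdf(-k)
    print(k, round(100 * p, 2))   # 68.27, 95.45, 99.73
```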
For the standard normal CDF, the Taylor series used in the Pascal example above is

    Φ(x) = 1/2 + φ(x) [x + x³/3 + x⁵/(3·5) + ⋯] = 1/2 + φ(x) Σ_{n≥0} x^(2n+1)/(2n+1)!!,

where !! denotes the double factorial.

The quantile function of the standard normal distribution is the inverse of Φ, and can be expressed in terms of the inverse error function:

    Φ⁻¹(p) = √2 erf⁻¹(2p − 1),    p ∈ (0, 1).

For a normal random variable with mean μ and variance σ², the quantile function is

    F⁻¹(p) = μ + σ Φ⁻¹(p) = μ + σ√2 erf⁻¹(2p − 1),    p ∈ (0, 1).
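Since the standard library exposes erf but not its inverse, the quantile function can be obtained by inverting Φ numerically; a Python sketch via bisection (an illustration, not the article's method):

```python
import math

def normal_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def normal_quantile(p, mu=0.0, sigma=1.0, tol=1e-12):
    # Bisection on Phi; Phi is strictly increasing, so this converges.
    lo, hi = -40.0, 40.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if normal_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return mu + sigma * 0.5 * (lo + hi)   # F^-1(p) = mu + sigma * Phi^-1(p)

print(normal_quantile(0.975))                 # ~1.96
print(normal_quantile(0.5, mu=3.0, sigma=2.0))  # the median is the mean
```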
[Figure: Probability density functions p(k) for the sum of n fair six-sided dice, for n = 1, 2, 3, 4, 5, illustrating convergence toward a normal density as n grows.]

5 Central limit theorem

The central limit theorem states that, under certain (fairly common) conditions, the sum of many independent random variables will have an approximately normal distribution. In particular, if X₁, …, Xₙ are independent and identically distributed random variables with finite mean and variance, then the sample average

    Z = (1/n) Σ_{i=1}^{n} Xᵢ

is approximately normally distributed for large n.
The central limit theorem also implies that certain distributions can be approximated by the normal distribution, for example:

  - The binomial distribution B(n, p) is approximately normal with mean np and variance np(1 − p) for large n and for p not too close to zero or one.
  - The Poisson distribution with parameter λ is approximately normal with mean λ and variance λ, for large values of λ.[21]
  - The chi-squared distribution χ²(k) is approximately normal with mean k and variance 2k, for large k.

6 Maximum entropy

Of all probability distributions over the reals with a specified mean μ and variance σ², the normal distribution N(μ, σ²) is the one with maximum entropy.[22] If X is a continuous random variable with probability density f(x), then the entropy of X is defined as[23][24][25]

    H(X) = −∫ f(x) ln f(x) dx.

Maximizing H(X) subject to the normalization and variance constraints gives the Lagrangian

    L = −∫ f(x) ln f(x) dx + λ₀ (1 − ∫ f(x) dx) + λ (σ² − ∫ f(x)(x − μ)² dx).

Since the stationarity condition must hold for any small variation δf(x), the term in brackets must be zero, and solving for f(x) yields

    f(x) = e^(−1 − λ₀ − λ(x − μ)²).

Using the constraint equations to solve for λ₀ and λ yields the normal distribution:

    f(x, μ, σ) = 1/√(2πσ²) · e^(−(x−μ)²/(2σ²)).

If X and Y are independent normal deviates with zero mean and variance σ², then X + Y and X − Y are also independent and normally distributed, with zero mean and variance 2σ². This is a special case of the polarization identity.[26]

Also, if X₁, X₂ are two independent normal deviates with mean μ and deviation σ, and a, b are arbitrary real numbers, then the variable

    X₃ = (aX₁ + bX₂ − (a + b)μ)/√(a² + b²) + μ

is also a normal deviate with mean μ and deviation σ.

7.1 Infinite divisibility and Cramér's theorem

Cramér's theorem states that if the sum of two independent random variables is normally distributed, then each of the summands must itself be normal. This result is known as Cramér's decomposition theorem, and is equivalent to saying that the convolution of two distributions is normal if and only if both are normal. Cramér's theorem implies that a linear combination of independent non-Gaussian variables will never have an exactly normal distribution, although it may approach it arbitrarily close.[29]
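The normal approximation to the binomial distribution mentioned above can be checked numerically; a Python sketch comparing the exact pmf with the matching normal density (illustrative only):

```python
import math

def binom_pmf(k, n, p):
    return math.comb(n, k) * p ** k * (1 - p) ** (n - k)

def normal_pdf(x, mu, sigma):
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

n, p = 100, 0.4
mu, sigma = n * p, math.sqrt(n * p * (1 - p))   # mean np, variance np(1-p)
for k in (30, 40, 50):
    print(k, binom_pmf(k, n, p), normal_pdf(k, mu, sigma))
```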
7.2 Bernstein's theorem

Bernstein's theorem states that if X and Y are independent and X + Y and X − Y are also independent, then both X and Y must necessarily have normal distributions.

Relatedly, if x₁, …, xₙ are independent observations from N(μ, σ²) with known variance and the prior for μ is N(μ₀, 1/τ₀), then the posterior for μ is also normal:

    μ | x₁, …, xₙ ~ N((nτ x̄ + τ₀ μ₀)/(nτ + τ₀), 1/(nτ + τ₀)),

where τ = 1/σ² is the precision of the sampling distribution.
Other properties

1. If the characteristic function φ_X of some random variable X is of the form φ_X(t) = e^(Q(t)), where Q(t) is a polynomial, then the Marcinkiewicz theorem (named after Józef Marcinkiewicz) asserts that Q can be at most a quadratic polynomial, and therefore X is a normal random variable.[29] The consequence of this result is that the normal distribution is the only distribution with a finite number (two) of non-zero cumulants.

2. If X and Y are jointly normal and uncorrelated, then they are independent. The requirement that X and Y should be jointly normal is essential; without it the property does not hold.[32][33][proof] For non-normal random variables uncorrelatedness does not imply independence.

3. The Kullback–Leibler divergence of one normal distribution X₁ ~ N(μ₁, σ₁²) from another X₂ ~ N(μ₂, σ₂²) is given by:[34]

    D_KL(X₁ ‖ X₂) = (μ₁ − μ₂)²/(2σ₂²) + (1/2) (σ₁²/σ₂² − 1 − ln(σ₁²/σ₂²)).

The Hellinger distance between the same distributions is equal to

    H²(X₁, X₂) = 1 − √(2σ₁σ₂/(σ₁² + σ₂²)) · e^(−(1/4)(μ₁ − μ₂)²/(σ₁² + σ₂²)).

4. The Fisher information matrix for a normal distribution is diagonal and takes the form

    I = diag(1/σ², 1/(2σ⁴)).

7. The family of normal distributions forms a manifold with constant curvature −1. The same family is flat with respect to the (±1)-connections ∇(e) and ∇(m).[36]

9 Related distributions

9.1 Operations on a single random variable

If X is distributed normally with mean μ and variance σ², then:

  - The exponential of X is distributed log-normally: e^X ~ ln(N(μ, σ²)).
  - The absolute value of X has a folded normal distribution: |X| ~ N_f(μ, σ²). If μ = 0 this is known as the half-normal distribution.
  - The square of X/σ has the noncentral chi-squared distribution with one degree of freedom: X²/σ² ~ χ²₁(μ²/σ²). If μ = 0, the distribution is called simply chi-squared.
  - The distribution of the variable X restricted to an interval [a, b] is called the truncated normal distribution.
  - (X − μ)⁻² has a Lévy distribution with location 0 and scale σ⁻².

If X₁ and X₂ are two independent standard normal random variables, their product Z = X₁X₂ follows the product-normal distribution[37] with density function f_Z(z) = π⁻¹ K₀(|z|), where K₀ is the modified Bessel function of the second kind. This distribution is symmetric around zero, unbounded at z = 0, and has the characteristic function φ_Z(t) = (1 + t²)^(−1/2).
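The Kullback–Leibler divergence formula above can be verified against a direct numerical integration; a Python sketch (illustrative only):

```python
import math

def kl_normal(mu1, s1, mu2, s2):
    # Closed form from the text (s1, s2 are standard deviations)
    r = (s1 / s2) ** 2
    return (mu1 - mu2) ** 2 / (2 * s2 ** 2) + 0.5 * (r - 1 - math.log(r))

def kl_numeric(mu1, s1, mu2, s2, steps=200000, half_width=15.0):
    # Integrate f1(x) * ln(f1(x) / f2(x)) over a wide grid around mu1
    def pdf(x, mu, s):
        return math.exp(-(x - mu) ** 2 / (2 * s * s)) / (s * math.sqrt(2 * math.pi))
    lo, hi = mu1 - half_width * s1, mu1 + half_width * s1
    dx = (hi - lo) / steps
    total = 0.0
    for i in range(steps):
        x = lo + (i + 0.5) * dx
        f1, f2 = pdf(x, mu1, s1), pdf(x, mu2, s2)
        total += f1 * math.log(f1 / f2) * dx
    return total

print(kl_normal(0.0, 1.0, 1.0, 2.0), kl_numeric(0.0, 1.0, 1.0, 2.0))
```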
9.3 Combination of two or more independent random variables

  - If X₁, X₂, …, Xₙ are independent standard normal random variables, then the sum of their squares has the chi-squared distribution with n degrees of freedom: X₁² + ⋯ + Xₙ² ~ χ²ₙ.
  - If X₁, X₂, …, Xₙ are independent normally distributed random variables with means μ and variances σ², then their sample mean is independent from the sample standard deviation,[38] which can be demonstrated using Basu's theorem or Cochran's theorem.[39] The ratio of these two quantities will have the Student's t-distribution with n − 1 degrees of freedom.
  - If X₁, …, Xₙ, Y₁, …, Y_m are independent standard normal random variables, then the ratio of their normalized sums of squares will have the F-distribution:

        ((X₁² + X₂² + ⋯ + Xₙ²)/n) / ((Y₁² + Y₂² + ⋯ + Y_m²)/m) ~ F_{n,m}.

9.5 Extensions

  - Gaussian processes are normally distributed stochastic processes; examples include Brownian motion, the Brownian bridge, and the Ornstein–Uhlenbeck process.
  - The Gaussian q-distribution is an abstract mathematical construction that represents a "q-analogue" of the normal distribution.
  - The q-Gaussian is an analogue of the Gaussian distribution, in the sense that it maximises the Tsallis entropy, and is one type of Tsallis distribution. Note that this distribution is different from the Gaussian q-distribution above.

One of the main practical uses of the Gaussian law is to model the empirical distributions of many different random variables encountered in practice. In such a case a possible extension would be a richer family of distributions, having more than two parameters and therefore being able to fit the empirical distribution more accurately. Examples of such extensions are:

  - Pearson distribution: a four-parameter family of probability distributions that extend the normal law to include different skewness and kurtosis values.
10 Normality tests

  - Jarque–Bera test
  - Empirical distribution function tests:
      - Lilliefors test (an adaptation of the Kolmogorov–Smirnov test)
      - Anderson–Darling test
Visual tests are more intuitively appealing but subjective at the same time, as they rely on informal human judgement to accept or reject the null hypothesis.

  - Q-Q plot: a plot of the sorted values from the data set against the expected values of the corresponding quantiles from the standard normal distribution. That is, it is a plot of points of the form (Φ⁻¹(p_k), x₍k₎), where the plotting points p_k are equal to p_k = (k − α)/(n + 1 − 2α) and α is an adjustment constant, which can be anything between 0 and 1. If the null hypothesis is true, the plotted points should approximately lie on a straight line.

11 Estimation of parameters

See also: Standard error of the mean, Standard deviation § Estimation, Variance § Estimation, and Maximum likelihood § Continuous distribution, continuous parameter space

For independent, identically distributed observations x₁, …, xₙ from N(μ, σ²), the log-likelihood is

    ln L(μ, σ²) = Σ_{i=1}^{n} ln f(xᵢ; μ, σ²) = −(n/2) ln(2π) − (n/2) ln σ² − (1/(2σ²)) Σ_{i=1}^{n} (xᵢ − μ)².

Taking derivatives with respect to μ and σ² and solving the resulting system of first order conditions yields the maximum likelihood estimates:

    μ̂ = x̄ = (1/n) Σ_{i=1}^{n} xᵢ,    σ̂² = (1/n) Σ_{i=1}^{n} (xᵢ − x̄)².

Estimator μ̂ is called the sample mean, since it is the arithmetic mean of all observations. The statistic x̄ is complete and sufficient for μ, and therefore by the Lehmann–Scheffé theorem μ̂ is the uniformly minimum variance unbiased (UMVU) estimator. The sample mean is itself an exact normal deviate:

    μ̂ ~ N(μ, σ²/n).

The variance of this estimator is equal to the μμ-element of the inverse Fisher information matrix I⁻¹. This implies that the estimator is finite-sample efficient. Of practical importance is the fact that the standard error of μ̂ is proportional to 1/√n. Asymptotically,

    √n (μ̂ − μ) →d N(0, σ²).

The maximum likelihood estimator σ̂² is usually replaced by the unbiased sample variance

    s² = (n/(n − 1)) σ̂² = (1/(n − 1)) Σ_{i=1}^{n} (xᵢ − x̄)²,

and the statistic

    t = (x̄ − μ)/(s/√n) ~ t_{n−1}

has the Student's t-distribution with n − 1 degrees of freedom. The sample variance itself satisfies

    s² ~ (σ²/(n − 1)) χ²_{n−1}.

Confidence intervals at confidence level 1 − α are

    μ ∈ [μ̂ + t_{n−1,α/2} s/√n,  μ̂ + t_{n−1,1−α/2} s/√n],
    σ² ∈ [(n − 1)s²/χ²_{n−1,1−α/2},  (n − 1)s²/χ²_{n−1,α/2}],

with the approximate large-sample forms

    μ ≈ μ̂ ± |z_{α/2}| s/√n,    σ² ≈ s² ± |z_{α/2}| √(2/(n − 1)) s².

Applying the asymptotic theory, both estimators s² and σ̂² are consistent, that is they converge in probability to σ² as the sample size n → ∞. The two estimators are also both asymptotically normal:

    √n (σ̂² − σ²) ≃ √n (s² − σ²) →d N(0, 2σ⁴).

In particular, both estimators are asymptotically efficient for σ². By Cochran's theorem, for normal distributions the sample mean μ̂ and the sample variance s² are independent.

12 Bayesian analysis of the normal distribution

Bayesian analysis of normally distributed data is complicated by the many different possibilities that may be considered: which of the mean and variance (or both) are unknown, and what form of prior is placed on the unknown parameters.
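The maximum likelihood estimates above are straightforward to compute; a short Python sketch (illustrative, not from the article):

```python
def normal_mle(xs):
    # Maximum likelihood estimates: sample mean and the 1/n variance,
    # plus the unbiased 1/(n-1) sample variance s^2 for comparison.
    n = len(xs)
    mu_hat = sum(xs) / n
    var_hat = sum((x - mu_hat) ** 2 for x in xs) / n        # biased MLE
    s2 = sum((x - mu_hat) ** 2 for x in xs) / (n - 1)       # unbiased s^2
    return mu_hat, var_hat, s2

xs = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
mu_hat, var_hat, s2 = normal_mle(xs)
print(mu_hat, var_hat, s2)   # 5.0, 4.0, ~4.571
```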
12.1 The sum of two quadratics

12.1.1 Scalar form

For real numbers a, b, x, y, z the following auxiliary formula holds:

    a(x − y)² + b(x − z)² = (a + b)(x − (ay + bz)/(a + b))² + (ab/(a + b))(y − z)².

This shows that the sum of two quadratics in x is again a quadratic in x, centred at the weighted average (ay + bz)/(a + b).

12.1.2 Vector form

A similar formula can be written for the sum of two vector quadratics: If x, y, z are vectors of length k, and A and B are symmetric, invertible matrices of size k × k, then

    (y − x)ᵀ A (y − x) + (x − z)ᵀ B (x − z) = (x − c)ᵀ (A + B)(x − c) + (y − z)ᵀ (A⁻¹ + B⁻¹)⁻¹ (y − z),

where

    c = (A + B)⁻¹ (A y + B z).

12.2 The sum of differences from the mean

Another useful formula decomposes a sum of squared deviations about μ into deviations about the sample mean:

    Σ_{i=1}^{n} (xᵢ − μ)² = Σ_{i=1}^{n} (xᵢ − x̄)² + n(x̄ − μ)²,

where x̄ = (1/n) Σ_{i=1}^{n} xᵢ.
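The scalar identity above is easy to verify numerically; a Python sketch (illustrative only):

```python
# Check a(x-y)^2 + b(x-z)^2 = (a+b)(x - (ay+bz)/(a+b))^2 + (ab/(a+b))(y-z)^2
def lhs(x, y, z, a, b):
    return a * (x - y) ** 2 + b * (x - z) ** 2

def rhs(x, y, z, a, b):
    c = (a * y + b * z) / (a + b)
    return (a + b) * (x - c) ** 2 + (a * b / (a + b)) * (y - z) ** 2

for x in (-2.0, 0.5, 3.0):
    print(lhs(x, 1.0, 4.0, 2.0, 3.0), rhs(x, 1.0, 4.0, 2.0, 3.0))
```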
12.3 With known variance

For a set of i.i.d. normally distributed data points X of size n where each point x follows x ~ N(μ, σ²) with known variance σ², the conjugate prior for the mean is itself normal. Writing the prior as μ ~ N(μ₀, 1/τ₀) and the sampling precision as τ = 1/σ², the likelihood is

    p(X | μ, τ) = Π_{i=1}^{n} √(τ/(2π)) exp(−(τ/2)(xᵢ − μ)²)
                = (τ/(2π))^(n/2) exp(−(τ/2) [Σ_{i=1}^{n} (xᵢ − x̄)² + n(x̄ − μ)²]),

and the posterior satisfies

    p(μ | X) ∝ exp(−(τ/2) n(x̄ − μ)²) exp(−(τ₀/2)(μ − μ₀)²)
            ∝ exp(−(1/2) [nτ(x̄ − μ)² + τ₀(μ − μ₀)²]).

In the above derivation, we used the formula for the sum of two quadratics (with a = nτ, b = τ₀, y = x̄, z = μ₀) and eliminated all constant factors not involving μ:

    p(μ | X) ∝ exp(−((nτ + τ₀)/2) (μ − (nτ x̄ + τ₀ μ₀)/(nτ + τ₀))²).

The result is the kernel of a normal distribution, with mean (nτ x̄ + τ₀ μ₀)/(nτ + τ₀) and precision nτ + τ₀, i.e.

    μ | X ~ N((nτ x̄ + τ₀ μ₀)/(nτ + τ₀), 1/(nτ + τ₀)).

The same formulas can be written in terms of variance by reciprocating all the precisions, yielding the more ugly formulas

    σ₀′² = 1/(n/σ² + 1/σ₀²),    μ₀′ = (n x̄/σ² + μ₀/σ₀²) σ₀′².
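The precision-weighted update can be sketched numerically in Python (an illustration with made-up data, not from the article):

```python
import math

# Known-variance posterior update in precision form:
#   tau0' = tau0 + n*tau,  mu0' = (n*tau*xbar + tau0*mu0) / (n*tau + tau0)
def posterior_update(xs, sigma, mu0, sigma0):
    n = len(xs)
    xbar = sum(xs) / n
    tau, tau0 = 1.0 / sigma ** 2, 1.0 / sigma0 ** 2   # precisions
    tau_post = tau0 + n * tau
    mu_post = (n * tau * xbar + tau0 * mu0) / tau_post
    return mu_post, 1.0 / math.sqrt(tau_post)         # posterior mean and sd

# A vague prior (sigma0 = 10) barely shifts the data mean of 5.1:
xs = [4.8, 5.2, 5.0, 5.4]
print(posterior_update(xs, sigma=1.0, mu0=0.0, sigma0=10.0))
```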
This can be written as a set of Bayesian update equations for the posterior parameters in terms of the prior parameters:

    τ₀′ = τ₀ + nτ,
    μ₀′ = (nτ x̄ + τ₀ μ₀)/(nτ + τ₀),
    x̄ = (1/n) Σ_{i=1}^{n} xᵢ.

That is, to combine n data points with total precision of nτ (or equivalently, total variance of σ²/n) and mean of values x̄, derive a new total precision simply by adding the total precision of the data to the prior total precision, and form a new mean through a precision-weighted average, i.e. a weighted average of the data mean and the prior mean, each weighted by the associated total precision. This makes logical sense if the precision is thought of as indicating the certainty of the observations: In the distribution of the posterior mean, each of the input components is weighted by its certainty, and the certainty of this distribution is the sum of the individual certainties. (For the intuition of this, compare the expression "the whole is (or is not) greater than the sum of its parts". In addition, consider that the knowledge of the posterior comes from a combination of the knowledge of the prior and likelihood, so it makes sense that we are more certain of it than of either of its components.)

12.4 With known mean

For a set of i.i.d. normally distributed data points X of size n where each individual point x follows x ~ N(μ, σ²) with known mean μ, the conjugate prior of the variance has an inverse gamma distribution or a scaled inverse chi-squared distribution. The two are equivalent except for having different parameterizations. Although the inverse gamma is more commonly used, we use the scaled inverse chi-squared for the sake of convenience. The prior for σ² is as follows:
    p(σ² | ν₀, σ₀²) = ((σ₀² ν₀/2)^(ν₀/2) / Γ(ν₀/2)) · exp(−ν₀ σ₀²/(2σ²)) / (σ²)^(1+ν₀/2)
                    ∝ exp(−ν₀ σ₀²/(2σ²)) / (σ²)^(1+ν₀/2).

The likelihood function, written in terms of the variance, is

    p(X | μ, σ²) = (1/(2πσ²))^(n/2) exp(−(1/(2σ²)) Σ_{i=1}^{n} (xᵢ − μ)²)
                 = (1/(2πσ²))^(n/2) exp(−S/(2σ²)),

where

    S = Σ_{i=1}^{n} (xᵢ − μ)².

The above formula reveals why it is more convenient to do Bayesian analysis of conjugate priors for the normal distribution in terms of the precision. Then:
    p(σ² | X) ∝ p(X | σ²) p(σ² | ν₀, σ₀²)
             ∝ (σ²)^(−n/2) exp(−S/(2σ²)) · exp(−ν₀σ₀²/(2σ²)) / (σ²)^(1+ν₀/2)
             = exp(−(ν₀σ₀² + S)/(2σ²)) / (σ²)^(1+(ν₀+n)/2).

The above is also a scaled inverse chi-squared distribution where

    ν₀′ = ν₀ + n,
    ν₀′ σ₀′² = ν₀ σ₀² + Σ_{i=1}^{n} (xᵢ − μ)²,

or equivalently

    σ₀′² = (ν₀ σ₀² + Σ_{i=1}^{n} (xᵢ − μ)²)/(ν₀ + n).

12.5 With unknown mean and unknown variance

2. From the analysis of the case with unknown variance but known mean, we see that the update equations involve sufficient statistics over the data consisting of the number of data points and sum of squared deviations.

4. To handle the case where both mean and variance are unknown, we could place independent priors over the mean and variance, with fixed estimates of the average mean, total variance, number of data points used to compute the variance prior, and sum of squared deviations. Note however that in reality, the total variance of the mean depends on the unknown variance, and the sum of squared deviations depends on the unknown mean; in practice the latter dependence is relatively unimportant: Shifting the actual mean shifts the generated points by an equal amount, and on average the squared deviations will remain the same. This is not the case, however, with the total variance of the mean: As the unknown variance increases, the total variance of the mean will increase proportionately, and we would like to capture this dependence.

The priors are normally defined as follows:

    p(μ | σ²; μ₀, n₀) ~ N(μ₀, σ²/n₀),
    p(σ²; ν₀, σ₀²) ~ Scale-inv-χ²(ν₀, σ₀²) = IG(ν₀/2, ν₀σ₀²/2).
Writing the likelihood in terms of deviations from the sample mean x̄ = (1/n) Σ_{i=1}^{n} xᵢ,

    p(X | μ, σ²) = (1/(2πσ²))^(n/2) exp(−(1/(2σ²)) [Σ_{i=1}^{n} (xᵢ − x̄)² + n(x̄ − μ)²])
                 ∝ (σ²)^(−n/2) exp(−(1/(2σ²)) [S + n(x̄ − μ)²]),

where S = Σ_{i=1}^{n} (xᵢ − x̄)². This leads to the following update equations for the posterior hyperparameters:

    μ₀′ = (n₀ μ₀ + n x̄)/(n₀ + n),
    n₀′ = n₀ + n,
    ν₀′ = ν₀ + n,
    ν₀′ σ₀′² = ν₀ σ₀² + Σ_{i=1}^{n} (xᵢ − x̄)² + (n₀ n/(n₀ + n)) (x̄ − μ₀)².
In detail, the posterior is proportional to prior times likelihood; collecting terms and applying the sum-of-quadratics identity:

    p(μ, σ² | X) ∝ (σ²)^(−(ν₀+n+3)/2) exp(−(1/(2σ²)) [ν₀σ₀² + S + n₀(μ − μ₀)² + n(x̄ − μ)²])
                = (σ²)^(−(ν₀+n+3)/2) exp(−(1/(2σ²)) [ν₀σ₀² + S + (n₀n/(n₀+n))(x̄ − μ₀)² + (n₀+n)(μ − (n₀μ₀ + nx̄)/(n₀+n))²]).

Separating the factor that involves μ from the factor that involves σ² alone:

    p(μ, σ² | X) ∝ (σ²)^(−1/2) exp(−((n₀+n)/(2σ²)) (μ − (n₀μ₀ + nx̄)/(n₀+n))²)
                   × (σ²)^(−(ν₀/2+n/2+1)) exp(−(1/(2σ²)) [ν₀σ₀² + S + (n₀n/(n₀+n))(x̄ − μ₀)²])
                = N(μ | (n₀μ₀ + nx̄)/(n₀+n), σ²/(n₀+n)) · IG(σ² | (1/2)(ν₀+n), (1/2)[ν₀σ₀² + S + (n₀n/(n₀+n))(x̄ − μ₀)²]).

In other words, the posterior has the same normal-inverse-gamma form as the prior, with the updated hyperparameters given above.
The joint conjugate prior used above factors as

    p(μ, σ²; μ₀, n₀, ν₀, σ₀²) = p(μ | σ²; μ₀, n₀) p(σ²; ν₀, σ₀²)
                              ∝ (σ²)^(−(ν₀+3)/2) exp(−(1/(2σ²)) [ν₀σ₀² + n₀(μ − μ₀)²]),

and the likelihood function from the section above with known variance is:

    p(X | μ, σ²) = (1/(2πσ²))^(n/2) exp(−(1/(2σ²)) Σ_{i=1}^{n} (xᵢ − μ)²).

13 Occurrence

The occurrence of the normal distribution in practical problems can be loosely classified into categories, among them:

1. Exactly normal distributions.

2. Approximately normal laws, for example when such approximation is justified by the central limit theorem.

3. Distributions modeled as normal: the normal distribution being the distribution with maximum entropy for a given mean and variance.

4. Regression problems: the normal distribution being found after systematic effects have been modeled sufficiently well.
13.2 Approximate normality

Approximately normal distributions occur in many situations, as explained by the central limit theorem. When the outcome is produced by many small effects acting additively and independently, its distribution will be close to normal. The normal approximation will not be valid if the effects act multiplicatively (instead of additively), or if there is a single external influence that has a considerably larger magnitude than the rest of the effects.

  - In counting problems, where the central limit theorem includes a discrete-to-continuum approximation and where infinitely divisible and decomposable distributions are involved, such as
      - binomial random variables, associated with binary response variables;
      - Poisson random variables, associated with rare events.
  - Thermal light has a Bose–Einstein distribution on very short time scales, and a normal distribution on longer timescales due to the central limit theorem.

13.3 Assumed normality
There are statistical methods to empirically test that assumption; see the above Normality tests section.

  - In biology, the logarithm of various variables tend to have a normal distribution, that is, they tend to have a log-normal distribution (after separation on male/female subpopulations), with examples including:
      - measures of size of living tissue (length, height, skin area, weight);[44]
      - the length of inert appendages (hair, claws, nails, teeth) of biological specimens, in the direction of growth; presumably the thickness of tree bark also falls under this category;
      - certain physiological measurements, such as blood pressure of adult humans.
  - In finance, in particular the Black–Scholes model, changes in the logarithm of exchange rates, price indices, and stock market indices are assumed normal (these variables behave like compound interest, not like simple interest, and so are multiplicative). Some mathematicians such as Benoît Mandelbrot have argued that log-Lévy distributions, which possess heavy tails, would be a more appropriate model, in particular for the analysis of stock market crashes.
  - Measurement errors in physical experiments are often modeled by a normal distribution. This use of a normal distribution does not imply that one is assuming the measurement errors are normally distributed; rather, using the normal distribution produces the most conservative predictions possible given only knowledge about the mean and variance of the errors.[45]
  - In regression analysis, lack of normality in residuals simply indicates that the model postulated is inadequate in accounting for the tendency in the data and needs to be augmented; in other words, normality in residuals can always be achieved given a properly constructed model.

14 Generating values from normal distribution
  - An easy to program approximate approach that relies on the central limit theorem is as follows: generate 12 uniform U(0,1) deviates, add them all up, and subtract 6; the resulting random variable will have approximately standard normal distribution. In truth, the distribution will be Irwin–Hall, which is a 12-section eleventh-order polynomial approximation to the normal distribution. This random deviate will have a limited range of (−6, 6).[48]
  - There is also some investigation[51] into the connection between the fast Hadamard transform and the normal distribution, since the transform employs just addition and subtraction and by the central limit theorem random numbers from almost any distribution will be transformed into the normal distribution.
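The 12-uniforms trick above is a one-liner in Python (an illustration, not from the article); the sample mean and variance come out near 0 and 1:

```python
import random

def approx_standard_normal(rng=random):
    # Sum of 12 U(0,1) deviates has mean 6 and variance 1 (Irwin-Hall);
    # subtracting 6 gives an approximate standard normal in (-6, 6).
    return sum(rng.random() for _ in range(12)) - 6.0

random.seed(42)
sample = [approx_standard_normal() for _ in range(10000)]
mean = sum(sample) / len(sample)
var = sum((z - mean) ** 2 for z in sample) / len(sample)
print(round(mean, 2), round(var, 2))   # close to 0 and 1
```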
  - The Box–Muller method in its polar (Marsaglia) form: generate uniform deviates U and V on (−1, 1) until S = U² + V² lies in (0, 1); then

        X = U √(−2 ln S / S),    Y = V √(−2 ln S / S)

    are returned. Again, X and Y will be independent and standard normally distributed.

  - The Ratio method[49] is a rejection method. The algorithm proceeds as follows:
      - Generate two independent uniform deviates U and V;
      - Compute X = √(8/e) (V − 0.5)/U;
      - Optional: if X² ≤ 5 − 4e^(1/4) U then accept X and terminate the algorithm;
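The polar method described above can be sketched in Python (an illustration, not the article's code):

```python
import math, random

def polar_normal_pair(rng=random):
    # Marsaglia polar method: rejection-sample (U, V) in the unit disk,
    # then scale by sqrt(-2 ln S / S) with S = U^2 + V^2.
    while True:
        u = 2.0 * rng.random() - 1.0
        v = 2.0 * rng.random() - 1.0
        s = u * u + v * v
        if 0.0 < s < 1.0:
            factor = math.sqrt(-2.0 * math.log(s) / s)
            return u * factor, v * factor   # two independent N(0,1) deviates

random.seed(7)
xs = []
for _ in range(5000):
    x, y = polar_normal_pair()
    xs.extend((x, y))
mean = sum(xs) / len(xs)
var = sum((z - mean) ** 2 for z in xs) / len(xs)
print(round(mean, 2), round(var, 2))
```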
16 History

16.1 Development

Maxwell demonstrated that the normal distribution is not just a convenient mathematical tool, but may also occur in natural phenomena:[64] the number of particles whose velocity, resolved in a certain direction, lies between x and x + dx is

    N (1/(α√π)) e^(−x²/α²) dx.

16.2 Naming

In the early 20th century Pearson popularized the term "normal" as a designation for this distribution.[67]

    Many years ago I called the Laplace-Gaussian curve the normal curve, which name, while it avoids an international question of priority, has the disadvantage of leading people to believe that all other distributions of frequency are in one sense or another 'abnormal'.
    Pearson (1920)

It was also Pearson who first wrote the distribution in terms of the standard deviation σ, and soon afterwards the location parameter m was added, giving the density in the notation

    df = (1/(σ√(2π))) e^(−(x−m)²/(2σ²)) dx,

of which the standard normal form (1/√(2π)) e^(−x²/2) dx is the special case with m = 0 and σ = 1.
17 See also

  - Behrens–Fisher problem: the long-standing problem of testing whether two normal samples with different variances have same means
  - Bhattacharyya distance: method used to separate mixtures of normal distributions
  - Erdős–Kac theorem: on the occurrence of the normal distribution in number theory
  - Gaussian blur: convolution, which uses the normal distribution as a kernel
  - Sum of normally distributed random variables
  - Normally distributed and uncorrelated does not imply independent
  - Tweedie distribution: the normal distribution is a member of the family of Tweedie exponential dispersion models
  - Rayleigh distribution

18 Notes
[1] For example, this algorithm is given in the article Bc programming language.

[2] De Moivre first published his findings in 1733, in a pamphlet Approximatio ad Summam Terminorum Binomii (a + b)^n in Seriem Expansi that was designated for private circulation only. But it was not until the year 1738 that he made his results publicly available. The original pamphlet was reprinted several times; see for example Walker (1985).

[5] Besides those specifically referenced here, such use is encountered in the works of Peirce, Galton (Galton (1889, chapter V)) and Lexis (Lexis (1878), Rohrbasser & Véron (2003)) c. 1875.

19 Citations

[25] O'Hagan, A. (1994) Kendall's Advanced Theory of Statistics, Vol 2B, Bayesian Inference, Edward Arnold. ISBN 0-340-52922-9 (Section 5.40)

[26] Bryc (1995, p. 27)

[27] Patel & Read (1996, [2.3.6])

[28] Galambos & Simonelli (2004, Theorem 3.5)

[38] Eugene Lukacs (1942). "A Characterization of the Normal Distribution". The Annals of Mathematical Statistics 13 (1): 91–93. doi:10.1214/aoms/1177731647.

[39] D. Basu and R. G. Laha (1954). "On Some Characterizations of the Normal Distribution". Sankhyā 13 (4): 359–362.

[40] Lehmann, E. L. (1997). Testing Statistical Hypotheses (2nd ed.). Springer. p. 199. ISBN 0-387-94919-4.

[41] Krishnamoorthy (2006, p. 127)

[42] Krishnamoorthy (2006, p. 130)

[43] Krishnamoorthy (2006, p. 133)

[44] Huxley (1932)

[45] Jaynes, Edwin T. (2003). Probability Theory: The Logic of Science. Cambridge University Press. pp. 592–593.

[46] Oosterbaan, Roland J. (1994). "Chapter 6: Frequency and Regression Analysis of Hydrologic Data". In Ritzema, Henk P. Drainage Principles and Applications, Publication 16 (PDF) (second revised ed.). Wageningen, The Netherlands: International Institute for Land Reclamation and Improvement (ILRI). pp. 175–224. ISBN 90-70754-33-9.

[47] Wichura, Michael J. (1988). "Algorithm AS241: The Percentage Points of the Normal Distribution". Applied Statistics (Blackwell Publishing) 37 (3): 477–484. doi:10.2307/2347330. JSTOR 2347330.

[48] Johnson, Kotz & Balakrishnan (1995, Equation (26.48))

[49] Kinderman & Monahan (1977)

[50] Marsaglia & Tsang (2000)
20 References

  - Aldrich, John; Miller, Jeff. "Earliest Uses of Symbols in Probability and Statistics".
  - Aldrich, John; Miller, Jeff. "Earliest Known Uses of Some of the Words of Mathematics". In particular, the entries for "bell-shaped and bell curve", "normal (distribution)", "Gaussian", and "Error, law of error, theory of errors, etc.".
  - Amari, Shun-ichi; Nagaoka, Hiroshi (2000). Methods of Information Geometry. Oxford University Press. ISBN 0-8218-0531-2.
  - Bernardo, José M.; Smith, Adrian F. M. (2000). Bayesian Theory. Wiley. ISBN 0-471-49464-X.
  - Bryc, Wlodzimierz (1995). The Normal Distribution: Characterizations with Applications. Springer-Verlag. ISBN 0-387-97990-5.
  - Casella, George; Berger, Roger L. (2001). Statistical Inference (2nd ed.). Duxbury. ISBN 0-534-24312-6.
  - Cody, William J. (1969). "Rational Chebyshev Approximations for the Error Function". Mathematics of Computation 23 (107): 631–638. doi:10.1090/S0025-5718-1969-0247736-4.
  - Cover, Thomas M.; Thomas, Joy A. (2006). Elements of Information Theory. John Wiley and Sons.
  - Galambos, Janos; Simonelli, Italo (2004). Products of Random Variables: Applications to Problems of Physics and to Arithmetical Functions. Marcel Dekker, Inc. ISBN 0-8247-5402-6.
  - Gauss, Carolo Friderico (1809). Theoria motvs corporvm coelestivm in sectionibvs conicis Solem ambientivm [Theory of the Motion of the Heavenly Bodies Moving about the Sun in Conic Sections] (in Latin). English translation.
  - (entry on Symbols and Notation). The American Statistician 19 (3): 12–14. doi:10.2307/2681417. JSTOR 2681417.
  - Hart, John F. et al. (1968). Computer Approximations. New York, NY: John Wiley & Sons, Inc. ISBN 0-88275-642-7.
  - Hazewinkel, Michiel, ed. (2001), "Normal Distribution", Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4.
  - Herrnstein, Richard J.; Murray, Charles (1994). The Bell Curve: Intelligence and Class Structure in American Life. Free Press. ISBN 0-02-914673-9.
  - Huxley, Julian S. (1932). Problems of Relative Growth. London. ISBN 0-486-61114-0. OCLC 476909537.
  - Johnson, Norman L.; Kotz, Samuel; Balakrishnan, Narayanaswamy (1994). Continuous Univariate Distributions, Volume 1. Wiley. ISBN 0-471-58495-9.
  - Johnson, Norman L.; Kotz, Samuel; Balakrishnan, Narayanaswamy (1995). Continuous Univariate Distributions, Volume 2. Wiley. ISBN 0-471-58494-0.
  - Kinderman, Albert J.; Monahan, John F. (1977). "Computer Generation of Random Variables Using the Ratio of Uniform Deviates". ACM Transactions on Mathematical Software 3 (3): 257–260. doi:10.1145/355744.355750.
  - Krishnamoorthy, Kalimuthu (2006). Handbook of Statistical Distributions with Applications. Chapman & Hall/CRC. ISBN 1-58488-635-8.
  - Kruskal, William H.; Stigler, Stephen M. (1997). Spencer, Bruce D., ed. "Normative Terminology: 'Normal' in Statistics and Elsewhere". Statistics and Public Policy. Oxford University Press. ISBN 0-19-852341-6.
  - Laplace, Pierre-Simon de (1774). "Mémoire sur la probabilité des causes par les événements" [Memoir on the probability of the causes of events]. Mémoires de l'Académie royale des Sciences de Paris (Savants étrangers), tome 6: 621–656. Translated by Stephen M. Stigler in Statistical Science 1 (3), 1986: JSTOR 2245476.
  - Laplace, Pierre-Simon (1812). Théorie analytique des probabilités [Analytical theory of probabilities].
  - Le Cam, Lucien; Lo Yang, Grace (2000). Asymptotics in Statistics: Some Basic Concepts (second ed.). Springer. ISBN 0-387-95036-2.
  - Lexis, Wilhelm (1878). "Sur la durée normale de la vie humaine et sur la théorie de la stabilité des rapports statistiques" [On the normal duration of human life and on the theory of the stability of statistical ratios]. Annales de démographie internationale (Paris) II: 447–462.
  - Walker, Helen M. (1985). "De Moivre on the Law of Normal Probability" (PDF). In Smith, David Eugene. A Source Book in Mathematics. Dover. ISBN 0-486-64690-4.
  - Weisstein, Eric W. "Normal Distribution". MathWorld.
21 External links

  - Hazewinkel, Michiel, ed. (2001), "Normal distribution", Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4
  - Normal Distribution Video Tutorial Part 1-2 on YouTube
  - Normal distribution calculator
  - An 8-foot-tall (2.4 m) Probability Machine (named Sir Francis) comparing stock market returns to the randomness of the beans dropping through the quincunx pattern, on YouTube. Link originating from Index Funds Advisors
24
22
22
22.1
Text
Normal distribution Source: http://en.wikipedia.org/wiki/Normal%20distribution?oldid=660334802
22.2 Images

22.3 Content license