
Sampling Distribution

Statistical Inference

Suppose we want to know the average height of an Indian, or the average life length of a bulb manufactured by a company. Obviously we cannot burn out every bulb and find the mean life length. One chooses at random, say, n bulbs, finds their life lengths X1, X2, …, Xn and takes the mean life length

X̄ = (X1 + X2 + … + Xn)/n

as an 'approximation' to the actual (unknown) mean life length. Thus we make a statement about the "population" (of all life lengths) by looking at a sample of it. This is the basis of statistical inference. The theory of statistical inference tells us how close we are to the true (unknown) characteristic of the population.

Random Sample of size n

In the above example, let X be the life length of a bulb manufactured by the company. Thus X is a rv which can assume values > 0. It will have a certain distribution, a certain mean µ, etc. When we make n independent observations, we get n values x1, x2, …, xn. Clearly, if we again take n observations, we would get different values y1, y2, …, yn. Thus we are led to the following definition.

Definition

Let X be a random variable. A random sample of size n from X is a finite ordered sequence {X1, X2, …, Xn} of n independent rvs such that each Xi has the same distribution as X.

Sampling from a finite population

Suppose there is a universe having only a finite number of elements (like the number of Indians, or the number of females in the USA who are blondes). A sample of size n from the above is a subset of n elements such that each subset of n elements has the same prob of being selected.

Statistics

Whenever we sample, we use a characteristic of the sample to make a statement about the population. For example, suppose the true mean height of an Indian is µ (cms). To make a statement about µ, we randomly select n Indians, find their heights {X1, X2, …, Xn} and then their mean, namely

X̄ = (X1 + X2 + … + Xn)/n

We then use X̄ as an estimate of the unknown parameter µ. Remember µ is a parameter, a constant that is unchanged, but the sample mean X̄ is a r.v.: it may assume different values depending on the sample of n Indians chosen.

Definition: Let X be a r.v. and {X1, X2, …, Xn} a random sample of size n from X. A statistic is a function of the sample {X1, X2, …, Xn}.

Some Important Statistics

1. The sample mean X̄ = (X1 + X2 + … + Xn)/n

2. The sample variance S² = (1/(n − 1)) Σᵢ₌₁ⁿ (Xi − X̄)²

3. The minimum of the sample K = min {X1, X2, …, Xn}

4. The maximum of the sample M = max {X1, X2, …, Xn}

5. The range of the sample R = M − K

Definition

If X1, …, Xn is a random sample of size n and θ̂ is a statistic, then we remember θ̂ is also a r.v. Its distribution is referred to as the sampling distribution of θ̂.
The Sampling Distribution of the Sample Mean X .

Suppose X is a r.v. with mean µ and variance σ². Let X1, X2, …, Xn be a random sample of size n from X, and let X̄ = (X1 + X2 + … + Xn)/n be the sample mean. Then

(a) E(X̄) = µ.

(b) Var(X̄) = σ²/n.

(c) If X1, …, Xn is a random sample from a finite population with N elements, then

Var(X̄) = (σ²/n) · (N − n)/(N − 1).

(d) If X is normal, X̄ is also normal.

(e) Whatever be the distribution of X, if n is "large", (X̄ − µ)/(σ/√n) has approximately the standard normal distribution. (This result is known as the central limit theorem.)

Explanation

(a) tells us that we can "expect" the sample mean X̄ to be an approximation to the population mean µ.

(b) tells us that the spread of X̄ about µ is small when the sample size n is large.

(d) says that if X has a normal distribution, (X̄ − µ)/(σ/√n) has exactly the standard normal distribution.

(e) says that whatever be the distribution of X, discrete or continuous, (X̄ − µ)/(σ/√n) has approximately the standard normal distribution if n is large.
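Properties (a) and (b) can be checked by simulation. The sketch below is not from the notes (all variable names are mine): it draws many samples of size n from a normal population and verifies that the sample means average to µ with variance close to σ²/n.

```python
# Simulation sketch: the sample mean has expectation mu and variance sigma^2/n.
import random
import statistics

random.seed(0)
mu, sigma, n, trials = 50.0, 10.0, 25, 20000

# Draw many samples of size n; record each sample mean.
means = []
for _ in range(trials):
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    means.append(sum(sample) / n)

est_mean = statistics.mean(means)      # should be close to mu = 50
est_var = statistics.variance(means)   # should be close to sigma^2/n = 4
print(round(est_mean, 1), round(est_var, 1))
```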
Example 1 (See exercise 6.14, page 207)

The mean of a random sample of size n = 25 is used to estimate the mean of an infinite
population with standard deviation σ = 2.4. What can we assert about the prob that the
error will be less than 1.2 if we use

(a) Chebyshev’s theorem


(b) The central limit theorem?

Solution

(a) We know the sample mean X̄ is a rv with E(X̄) = µ and Var(X̄) = σ²/n.

Chebyshev's theorem tells us that for any r.v. T,

P(|T − E(T)| < k √Var(T)) ≥ 1 − 1/k².

Taking T = X̄, and noting E(T) = E(X̄) = µ and Var(T) = Var(X̄) = σ²/n = (2.4)²/25, we find

P(|X̄ − µ| < k · (2.4/5)) ≥ 1 − 1/k².

We desire P(|X̄ − µ| < 1.2). Setting k · (2.4/5) = 1.2 gives k = 5/2.

Thus we can assert using Chebyshev's theorem that

P(|X̄ − µ| < 1.2) ≥ 1 − 4/25 = 21/25 = 0.84

(b) The central limit theorem says (X̄ − µ)/(σ/√n) = (X̄ − µ)/(2.4/5) is approximately standard normal.

Thus P(|X̄ − µ| < 1.2)

= P(|X̄ − µ|/(2.4/5) < 1.2/(2.4/5))

≈ P(|Z| < 5/2) = 2F(5/2) − 1

= 2 × F(2.5) − 1 = 2 × 0.9938 − 1 = 0.9876
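A quick numerical check of both parts of Example 1 (a sketch using only the standard library; `phi` is my name for the standard normal CDF built from `math.erf`):

```python
import math

sigma, n, error = 2.4, 25, 1.2
se = sigma / math.sqrt(n)      # standard error sigma/sqrt(n) = 0.48
k = error / se                 # k = 2.5

cheb = 1 - 1 / k**2            # Chebyshev lower bound

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

clt = 2 * phi(k) - 1           # CLT approximation: P(|Z| < 2.5)
print(round(cheb, 2), round(clt, 4))   # 0.84 0.9876
```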

Example 2 (See exercise 6.15 on page 207)

A random sample of size 100 is taken from an infinite population having mean µ = 76 and variance σ² = 256. What is the prob that X̄ will be between 75 and 78?

Solution

We use the central limit theorem, namely that (X̄ − µ)/(σ/√n) is approximately standard normal. Here σ = 16 and σ/√n = 16/10 = 1.6.

Required: P(75 < X̄ < 78)

= P((75 − 76)/1.6 < (X̄ − µ)/(σ/√n) < (78 − 76)/1.6)

≈ P(−10/16 < Z < 20/16) = P(−5/8 < Z < 5/4)

= F(5/4) − F(−5/8) = F(5/4) + F(5/8) − 1

= 0.8944 + 0.7340 − 1 = 0.6284
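The same normal-CDF helper reproduces the probability in Example 2 (a sketch; stdlib only, `phi` is my name):

```python
import math

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

mu, sigma, n = 76, 16, 100
se = sigma / math.sqrt(n)                      # 1.6
p = phi((78 - mu) / se) - phi((75 - mu) / se)  # P(75 < Xbar < 78)
print(round(p, 4))
```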

Example 3 (See Exercise 6.17 on page 217)

If the distribution of weights of all men travelling by air between Dallas and El Paso has a mean of 163 pounds and a s.d. of 18 pounds, what is the prob. that the combined gross weight of 36 men travelling on a plane between these two cities is more than 6000 pounds?

Solution

Let X be the weight of a man travelling by air between D and E. It is given that X is a rv with mean E(X) = µ = 163 lbs and sd σ = 18 lbs.

Let X1, X2, …, X36 be the weights of 36 men travelling on a plane between these two cities. Thus we can regard {X1, X2, …, X36} as a random sample of size 36 from X.

Required: P(X1 + X2 + … + X36 > 6000)

= P(X̄ > 6000/36)

= P((X̄ − µ)/(σ/√n) > (1000/6 − 163)/(18/6)) by the central limit theorem

≈ P(Z > 22/18)

= 1 − P(Z ≤ 22/18) = 1 − F(1.22)

= 1 − 0.8888 = 0.1112
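Example 3 in the same style (a sketch; note that keeping z = 11/3 ≈ 1.222 unrounded gives a tail slightly different from the table answer at z = 1.22):

```python
import math

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

mu, sigma, n, total = 163, 18, 36, 6000
z = (total / n - mu) / (sigma / math.sqrt(n))  # about 1.22
p = 1 - phi(z)                                 # P(combined weight > 6000)
print(round(z, 2), round(p, 4))
```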

The sampling distribution of the sample mean X (when σ is unknown).

Theorem

Let X be a rv having a normal distribution with mean E(X) = µ. Let X̄ be the sample mean and S² the sample variance of a random sample of size n from X.

Then the rv t = (X̄ − µ)/(S/√n) has (Student's) t-distribution with n − 1 degrees of freedom.

Remark

(1) The shape of the density curve of the t-distribution (with parameter ν, Greek nu) is like that of the standard normal distribution and is symmetrical about the y-axis.

(2) tν,α is that unique number such that

P(t > tν,α) = α

(ν → the parameter).

By symmetry, tν,1−α = −tν,α.

The values of tν,α for various ν and α are tabulated in Table 4.

For ν large, tν,α ≈ zα.

Example 4 (See exercise 6.20 on page 213)

A random sample of size 25 from a normal population has the mean x = 47.5 and the s.d.
s = 8.4. Does this information tend to support or refute the claim that the mean of the
population is µ = 42.1?

Solution:

t = (x̄ − µ)/(s/√n) has a t-distribution with parameter ν = n − 1.

Here µ = 42.1, s = 8.4, n = 25.

tn−1,α = t24,0.005 = 2.797

Thus P(t > 2.797) = 0.005

or P((X̄ − µ)/(s/√n) > 2.797) = 0.005

or P(X̄ > 42.1 + 2.797 × 8.4/5) = 0.005

or P(X̄ > 46.80) = 0.005.

This means that when µ = 42.1, only in about 0.5 percent of the cases would we get an X̄ > 46.80. Since the observed x̄ = 47.5 exceeds this, we refute the claim µ = 42.1 (in favour of µ > 42.1).

Example 5 (See exercise 6.21 on page 213)

The following are the times between six calls for an ambulance (in a certain city) and the patient's arrival at the hospital: 27, 15, 20, 32, 18 and 26 minutes. Use these figures to judge the reasonableness of the ambulance service's claim that it takes on the average 20 minutes between the call for an ambulance and the patient's arrival at the hospital.

Solution

Let X = time (in minutes) between the call for an ambulance and the patient’s arrival at
the hospital. We assume X has a normal distribution. (When nothing is given, we assume
normality). We want to judge the reasonableness of the claim that E(X ) = µ = 20 minutes.
For this we recorded the times for 6 calls. So we have a random sample of size 6 from X
with
X1 = 27, X2 = 15, X3 = 20, X4 = 32, X5 = 18, X6 = 26. Thus

X̄ = (27 + 15 + 20 + 32 + 18 + 26)/6 = 138/6 = 23.

S² = (1/(6 − 1)) [(27 − 23)² + (15 − 23)² + (20 − 23)² + (32 − 23)² + (18 − 23)² + (26 − 23)²]

= (1/5) [16 + 64 + 9 + 81 + 25 + 9] = 204/5

Hence S = √(204/5).

We calculate

t = (x̄ − µ)/(s/√n) = (23 − 20)/(√(204/5)/√6) = 1.150.

Now tn−1,α = t5,α = 2.015 for α = 0.05, and = 1.476 for α = 0.10.

Since our observed t = 1.150 < t5,0.10, we can say that it is reasonable to assume that the average time is µ = 20 minutes.
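The t-statistic of Example 5 computed directly from the six observations (a sketch; `statistics.stdev` uses the same n − 1 divisor as S):

```python
import math
import statistics

times = [27, 15, 20, 32, 18, 26]   # minutes, from Example 5
n = len(times)
xbar = statistics.mean(times)      # 23
s = statistics.stdev(times)        # sqrt(204/5)
t = (xbar - 20) / (s / math.sqrt(n))
print(round(t, 3))                 # 1.15
```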

Example 6
A process for making certain bearings is under control if the diameters of the bearings have a mean of 0.5000 cm. What can we say about this process if a sample of 10 of these bearings has a mean diameter of 0.5060 cm and sd 0.0040 cm?

Hint: Since t9,0.005 = 3.25,

P(−3.25 < (X̄ − 0.5)/(0.0040/√10) < 3.25) = 0.99,

or P(0.4959 < X̄ < 0.5041) = 0.99.

Since x̄ = 0.5060 > 0.5041, the process is not under control.
Sampling Distribution of S2 (The sample variance)

Theorem

If S² is the sample variance of a random sample of size n taken from a normal population with (population) variance σ², then

χ² = (n − 1)S²/σ² = (1/σ²) Σᵢ₌₁ⁿ (Xi − X̄)²

is a random variable having the chi-square distribution with parameter ν = n − 1.

Remark

Since S² > 0, the rv χ² has positive density only to the right of the origin. χ²ν,α is that unique number such that P(χ² > χ²ν,α) = α, and is tabulated for some α's and ν's in Table 5.

Example 7 (See exercise 6.24 on page 213)

A random sample of 10 observations is taken from a normal population having the variance σ² = 42.5. Find approximately the prob of obtaining a sample standard deviation S between 3.14 and 8.94.

Solution

Required: P(3.14 < S < 8.94)

= P((3.14)² < S² < (8.94)²)

= P((9/42.5) × (3.14)² < (n − 1)S²/σ² < (9/42.5) × (8.94)²)

= P(2.088 < χ² < 16.925)

(From Table 5, χ²9,0.05 = 16.919 and χ²9,0.99 = 2.088.)

= P(χ² > 2.088) − P(χ² > 16.919) (approx)

= 0.99 − 0.05 = 0.94 (approx)
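Example 7 can also be sanity-checked by Monte Carlo, without any chi-square table (a sketch; the sampling setup and names are mine):

```python
import math
import random
import statistics

random.seed(1)
sigma = math.sqrt(42.5)
n, trials = 10, 20000

# Estimate P(3.14 < S < 8.94) for the sample sd of n normal observations.
hits = 0
for _ in range(trials):
    sample = [random.gauss(0.0, sigma) for _ in range(n)]
    s = statistics.stdev(sample)
    if 3.14 < s < 8.94:
        hits += 1

p_hat = hits / trials
print(round(p_hat, 2))   # close to 0.94
```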

Example 8 (See exercise 6.23 on page 213)

The claim that the variance of a normal population is σ 2 = 21.3 is rejected if the
variance of a random sample of size 15 exceeds 39.74. What is the prob that the claim
will be rejected even though σ 2 = 21.3 ?

Solution

The prob that the claim is rejected

= P(S² > 39.74)

= P((n − 1)S²/σ² > (14/21.3) × 39.74) = P(χ² > 26.12)

= 0.025 (as, from Table 5, χ²14,0.025 = 26.12).
Theorem

If S1², S2² are the variances of two independent random samples of sizes n1, n2 respectively, taken from two normal populations having the same variance, then

F = S1²/S2²

is a rv having the (Snedecor's) F distribution with parameters ν1 = n1 − 1 and ν2 = n2 − 1.

Remark

1. n1 − 1 is called the numerator degrees of freedom and n2 − 1 is called the denominator degrees of freedom.

2. If F is a rv having (ν1, ν2) degrees of freedom, then Fν1,ν2,α is that unique number such that P(F > Fν1,ν2,α) = α; it is tabulated for α = 0.05 in Table 6(a) and for α = 0.01 in Table 6(b).

We also note the fact: Fν1,ν2,1−α = 1/Fν2,ν1,α.

Thus F10,20,0.95 = 1/F20,10,0.05 = 1/2.77 = 0.36

Example 9

(a) F12,15,0.95 = 1/F15,12,0.05 = 1/2.62 = 0.38

(b) F6,20,0.99 = 1/F20,6,0.01 = 1/7.40 = 0.135

Example 10 (See Exercise on page 213)

If independent random samples of size n1 = n2 = 8 come from two normal populations


having the same variance, what is the prob that either sample variance will be at least
seven times as large as the other?

Solution

Let S1², S2² be the sample variances of the two samples.

Reqd: P(S1² > 7S2² or S2² > 7S1²)

= P(S1²/S2² > 7 or S2²/S1² > 7)

= 2P(F > 7)

where F is a rv having the F distribution with (7, 7) degrees of freedom

= 2 × 0.01 = 0.02 (from Table 6(b)).
Example 11 (see exercise 6.38 on page 215)

If two independent random samples of size n1 = 9 and n2 = 16 are taken from a normal
population, what is the prob that the variance of the first sample will be at least four times
as large as the variance of the second sample?

Hint: Reqd prob = P(S1² > 4S2²)

= P(S1²/S2² > 4) = P(F > 4)

= 0.01 (as F8,15,0.01 = 4)

Example 12 (See Exercise 6.29 on page 214)

The F distribution with (4, 4) degrees of freedom is given by

f(F) = 6F(1 + F)⁻⁴ for F > 0, and f(F) = 0 for F ≤ 0.

If random samples of size 5 are taken from two normal populations having the same variance, find the prob that the ratio of the larger to the smaller sample variance will exceed 3.

Solution

Let S1², S2² be the sample variances of the two random samples.

Reqd: P(S1² > 3S2² or S2² > 3S1²)

= 2P(S1²/S2² > 3) = 2P(F > 3)

where F is a rv having the F distribution with (4, 4) degrees of freedom

= 2 ∫₃^∞ 6F/(1 + F)⁴ dF = 12 ∫₃^∞ [1/(1 + F)³ − 1/(1 + F)⁴] dF

= 12 [−1/(2(1 + F)²) + 1/(3(1 + F)³)] evaluated from 3 to ∞

= 12 (1/32 − 1/192) = (5 × 12)/192 = 5/16

Inferences Concerning Means

We shall discuss how we can make a statement about the mean of a population from the knowledge of the mean of a random sample. That is, we 'estimate' the mean of a population based on a random sample.

Point Estimation

Here we use a statistic to estimate the parameter of a distribution representing a population. For example, if we can assume that the life length of a transistor is a r.v. having exponential distribution with (unknown) parameter β, then β can be estimated by some statistic, say X̄, the mean of a random sample. We then say the sample mean is an estimate of the parameter β.

Definition


Let θ be a parameter associated with the distribution of a r.v. A statistic θ̂ (based on a random sample of size n) is said to be an unbiased estimate (≡ estimator) of θ if E(θ̂) = θ. That is, θ̂ will on the average be close to θ.

Example

Let X be a rv and µ the mean of X. If X̄ is the sample mean, then we know E(X̄) = µ. Thus we may say the sample mean X̄ is an unbiased estimate of µ. (Note X̄ is a rv, a statistic, X̄ = (X1 + X2 + … + Xn)/n, a function of the random sample (X1, X2, …, Xn).) If ω1, ω2, …, ωn are any n non-negative numbers ≤ 1 such that ω1 + ω2 + … + ωn = 1, then we can easily see that ω1X1 + ω2X2 + … + ωnXn is also an unbiased estimate of µ. (Prove this.) X̄ is obtained as the special case ω1 = ω2 = … = ωn = 1/n. Thus we have a large number of unbiased estimates for µ. Hence the question arises: if θ̂1, θ̂2 are both unbiased estimates of θ, which one do we prefer? The answer is given by the following definition.

Definition

Let θ̂1, θ̂2 be both unbiased estimates of the parameter θ. We say θ̂1 is more efficient than θ̂2 if Var(θ̂1) ≤ Var(θ̂2).

Remark

That is, the above definition says: prefer the unbiased estimate which is "closer" to θ. Remember the variance is a measure of the "closeness" of θ̂ to θ.

Maximum Error in estimating µ by X

Let X̄ be the sample mean of a random sample of size n from a population with (unknown) mean µ. Suppose we use X̄ to estimate µ. X̄ − µ is called the error in estimating µ by X̄. Can we find an upper bound on this error? We know if X is normal (or if n is large) then, by the central limit theorem,

(X̄ − µ)/(σ/√n) is a r.v. having (approximately) the standard normal distribution. And we can say

P(−zα/2 < (X̄ − µ)/(σ/√n) < zα/2) = 1 − α.

Thus we can say with prob (1 − α) that the max absolute error |X̄ − µ| in estimating µ by X̄ is at most zα/2 · σ/√n. (Here obviously we assume σ, the population s.d., is known; and zα/2 is that unique number such that P(Z > zα/2) = α/2.)

We also say that we can assert with 100(1 − α) percent confidence that the max. abs. error is at most zα/2 · σ/√n. The book denotes this by E.

Estimation of n

Thus, to find the size n of the sample so that we may say with 100(1 − α) percent confidence that the max. abs. error is a given quantity E, we solve for n the equation

zα/2 · σ/√n = E,

or n = (zα/2 · σ/E)².

Example 1

What is the maximum error one can expect to make with prob 0.90 when using the mean
of a random sample of size n = 64 to estimate the mean of a population with σ 2 = 2.56 ?

Solution

Substituting n = 64, σ = 1.6 and zα/2 = z0.05 = 1.645 (note 1 − α = 0.90 implies α/2 = 0.05) in the formula for the maximum error E = zα/2 · σ/√n, we get

E = 1.645 × 1.6/√64 = 1.645 × 1.6/8 = 1.645 × 0.2 = 0.3290.

Thus the maximum error one can expect to make with prob 0.90 is 0.3290.

Example 2

If we want to determine the average mechanical aptitude of a large group of workers, how large a random sample will we need to be able to assert with prob 0.95 that the sample mean will not differ from the population mean by more than 3.0 points? Assume that it is known from past experience that σ = 20.

Solution

Here 1 − α = 0.95 so that α/2 = 0.025, hence zα/2 = z0.025 = 1.96.

Thus we want n so that we can assert with prob 0.95 that the max error E = 3.0:

n = (zα/2 · σ/E)² = (1.96 × 20/3)² = 170.74.

Since n must be an integer, we take it as 171.
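The computation in Example 2 as a short sketch, rounding up because n must be an integer:

```python
import math

z, sigma, E = 1.96, 20, 3.0        # z_{0.025}, population sd, max error
n = (z * sigma / E) ** 2           # required sample size before rounding
print(round(n, 2), math.ceil(n))   # 170.74 171
```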

Small Samples

If the population is normal and we take a random sample of size n (n small) from it, we note that

t = (X̄ − µ)/(S/√n)   (X̄ = sample mean, S = sample s.d.)

is a rv having the t-distribution with (n − 1) degrees of freedom.

Thus we can assert with prob 1 − α that |t| ≤ tn−1,α/2, where tn−1,α/2 is that unique number such that P(t > tn−1,α/2) = α/2. Thus, if we use X̄ to estimate µ, we can assert with prob (1 − α) that the max error will be

E = tn−1,α/2 · S/√n.

(Note: if n is large, then t is approximately standard normal. Thus for n large the above formula becomes E = zα/2 · S/√n.)

Example 3

20 fuses were subjected to a 20% overload, and the times it took them to blow had a mean x̄ = 10.63 minutes and a s.d. S = 2.48 minutes. If we use x̄ = 10.63 minutes as a point estimate of the true average time it takes such fuses to blow with a 20% overload, what can we assert with 95% confidence about the maximum error?

Solution

Here n = 20 (fuses), x̄ = 10.63, S = 2.48.

1 − α = 95/100 = 0.95 so that α/2 = 0.025.

Hence tn−1,α/2 = t19,0.025 = 2.093.

Hence we can assert with 95% confidence (i.e. with prob 0.95) that the max error will be

E = tn−1,α/2 · S/√n = 2.093 × 2.48/√20 = 1.16

Interval Estimation

If X̄ is the mean of a random sample of size n from a population with known sd σ, then we know by the central limit theorem that

Z = (X̄ − µ)/(σ/√n)

is (approximately) standard normal. So we can say with prob (1 − α) that

−zα/2 < (X̄ − µ)/(σ/√n) < zα/2,

which can be rewritten as

X̄ − zα/2 · σ/√n < µ < X̄ + zα/2 · σ/√n.

Thus we can assert with prob (1 − α) (i.e. with (1 − α) × 100% confidence) that µ lies in the interval

(X̄ − zα/2 · σ/√n, X̄ + zα/2 · σ/√n).

We refer to the above interval as a (1 − α)100% confidence interval for µ. The end points X̄ ± zα/2 · σ/√n are known as (1 − α)100% confidence limits for µ.
n 2

Example 4

Suppose the mean of a random sample of size 25 from a normal population (with σ = 2 )
is x = 78.3. Obtain a 99% confidence interval for µ , the population mean.

Solution

Here n = 25, σ = 2, (1 − α) = 99/100 = 0.99.

∴ α/2 = 0.005 and zα/2 = z0.005 = 2.575.

x̄ = 78.3.

Hence a 99% confidence interval for µ is

(x̄ − zα/2 · σ/√n, x̄ + zα/2 · σ/√n)

= (78.3 − 2.575 × 2/√25, 78.3 + 2.575 × 2/√25)

= (78.3 − 1.0300, 78.3 + 1.0300)

= (77.27, 79.33)
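Example 4 numerically (a sketch; 2.575 is the table value z0.005):

```python
import math

xbar, sigma, n = 78.3, 2, 25
z = 2.575                          # z_{0.005}, taken from the normal table
half = z * sigma / math.sqrt(n)    # half-width of the interval
print(round(xbar - half, 2), round(xbar + half, 2))   # 77.27 79.33
```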

σ unknown

Suppose X̄ is the sample mean and S is the sample sd of a random sample of size n taken from a normal population with (unknown) mean µ. Then we know the r.v.

t = (X̄ − µ)/(S/√n)

has a t-distribution with (n − 1) degrees of freedom. Thus we can say with prob 1 − α that

−tn−1,α/2 < t < tn−1,α/2,

or −tn−1,α/2 < (X̄ − µ)/(S/√n) < tn−1,α/2,

or X̄ − tn−1,α/2 · S/√n < µ < X̄ + tn−1,α/2 · S/√n.

Thus a (1 − α)100% confidence interval for µ is

(X̄ − tn−1,α/2 · S/√n, X̄ + tn−1,α/2 · S/√n).

Note:

(1) If n is large, t has approximately the standard normal distribution, in which case the (1 − α)100% confidence interval for µ becomes

(x̄ − zα/2 · S/√n, x̄ + zα/2 · S/√n).

(2) If nothing is mentioned, we assume that the sample is taken from a normal population so that the above is valid.

Example 5

Material manufactured continuously before being cut and wound into large rolls must be
monitored for thickness (caliper). A sample of ten measurements on paper, in mm,
yielded

32.2, 32.0, 30.4, 31.0, 31.2, 31.2, 30.3, 29.6, 30.5, 30.7

Obtain a 95% confidence interval for the mean thickness.

Solution

Here n = 10,

x̄ = 30.91, S = 0.7880,

1 − α = 0.95 or α/2 = 0.025.

∴ tn−1,α/2 = t9,0.025 = 2.262.

Hence a 95% confidence interval for µ is

(30.91 − 2.262 × 0.7880/√10, 30.91 + 2.262 × 0.7880/√10)

= (30.35, 31.47)
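The interval of Example 5 recomputed from the raw thickness data (a sketch; t9,0.025 = 2.262 is taken from Table 4):

```python
import math
import statistics

data = [32.2, 32.0, 30.4, 31.0, 31.2, 31.2, 30.3, 29.6, 30.5, 30.7]
n = len(data)
xbar = statistics.mean(data)       # 30.91
s = statistics.stdev(data)         # about 0.788
t = 2.262                          # t_{9, 0.025} from Table 4
half = t * s / math.sqrt(n)
print(round(xbar - half, 2), round(xbar + half, 2))   # 30.35 31.47
```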

Example 6:

Ten bearings made by a certain process have a mean diameter of 0.5060 cm with a sd of
0.0040 cm. Assuming that the data may be looked upon as a random sample from a
normal population, construct a 99% confidence interval for the actual average diameter of
bearings made by this process.

Solution

Here n = 10, x̄ = 0.5060, S = 0.0040,

(1 − α) = 99/100 = 0.99, hence α/2 = 0.005.

∴ tn−1,α/2 = t9,0.005 = 3.250.

Thus a 99% confidence interval for the mean is

(x̄ − tn−1,α/2 · S/√n, x̄ + tn−1,α/2 · S/√n)

= (0.5060 − 3.250 × 0.0040/√10, 0.5060 + 3.250 × 0.0040/√10)

= (0.5019, 0.5101)

Example 7

In a random sample of 100 batteries the lifetimes have a mean of 148.2 hours with a s.d.
of 24.9 hours. Construct a 76.60% confidence interval for the mean life of the batteries.

Solution

Here n = 100, x̄ = 148.2, S = 24.9,

1 − α = 76.60/100 = 0.7660 so that α/2 = 0.1170.

Thus tn−1,α/2 = t99,0.1170 ≈ z0.1170 = 1.19.

Hence a 76.60% confidence interval is

(148.2 − 1.19 × 24.9/√100, 148.2 + 1.19 × 24.9/√100)

= (145.2, 151.2).

Example 8

A random sample of 100 teachers in a large metropolitan area revealed a mean weekly
salary of $487 with a sd of $48. With what degree of confidence can we assert that the
average weekly salary of all teachers in the metropolitan area is between $472 and $502?

Solution

Suppose the degree of confidence is (1 − α) × 100%.

Thus x̄ + tn−1,α/2 · S/√n = $502.

Here x̄ = 487, S = 48, n = 100, and t99,α/2 ≈ zα/2.

Thus we get 487 + zα/2 × 48/10 = 502,

or zα/2 = 15/4.8 = 3.125.

∴ α/2 = 0.0009 or 1 − α = 0.9982.

∴ We can assert with 99.82% confidence that the true mean salary lies between $472 and $502.

Maximum Likelihood Estimates (See exercise 7.23, 7.24)

Definition

Let X be a rv. Let f(x; θ) = P(X = x) be the point prob function if X is discrete, and let f(x; θ) be the pdf of X if X is continuous (here θ is a parameter). Let X1, X2, …, Xn be a random sample of size n from X. Then the likelihood function based on the random sample is defined as

L(θ) = L(x1, x2, …, xn; θ) = f(x1; θ) f(x2; θ) … f(xn; θ).

Thus the likelihood function L(θ) = P(X1 = x1) P(X2 = x2) … P(Xn = xn) if X is discrete, and is the joint pdf of X1, …, Xn when X is continuous. The maximum likelihood estimate (MLE) of θ is that θ̂ which maximizes L(θ).

Example 8

Let X be a rv having the Poisson distribution with parameter λ. Thus

f(x; λ) = P(X = x) = e^(−λ) λ^x / x!,  x = 0, 1, 2, …

Hence the likelihood function is

L(λ) = (e^(−λ) λ^(x1)/x1!)(e^(−λ) λ^(x2)/x2!) … (e^(−λ) λ^(xn)/xn!)

= e^(−nλ) λ^(x1+x2+…+xn) / (x1! x2! … xn!),  xi = 0, 1, 2, …

To find λ̂, the value of λ which maximizes L(λ), we use calculus. First we take ln (log to base e, the natural logarithm):

ln L(λ) = −nλ + (x1 + … + xn) ln λ − ln(x1! … xn!).

Differentiating w.r.t. λ (noting x1, …, xn are not to be varied), we get

(1/L(λ)) ∂L/∂λ = −n + (x1 + … + xn)/λ;

setting this = 0 gives λ = (x1 + … + xn)/n.

We can 'easily' verify that ∂²L/∂λ² < 0 for this λ.

Hence the MLE of λ is λ̂ = (x1 + … + xn)/n = x̄ (the sample mean).

Example 9 MLE of Proportion

Suppose p is the proportion of defective bolts produced by a factory. To estimate p, we proceed as follows. We take n bolts at random and calculate fD = sample proportion of defectives

= (no. of defectives found among the n chosen)/n.

We show fD is the MLE of p.

We define a rv X as follows:

X = 0 if the bolt chosen is not defective, and X = 1 if the bolt chosen is defective.

Thus X has the prob distribution

x        0       1
Prob   1 − p     p

It is clear that the point prob function f(x; p) of X is given by

f(x; p) = p^x (1 − p)^(1−x),  x = 0, 1.

(Note f(0; p) = P(X = 0) = 1 − p and f(1; p) = P(X = 1) = p.)

Choosing n bolts at random amounts to choosing a random sample {X1, X2, …, Xn} from X, where Xi = 0 if the ith bolt chosen is not defective and Xi = 1 if it is defective (i = 1, 2, …, n).

Hence X1 + X2 + … + Xn (can you guess?) = number of defective bolts among the n chosen.

The likelihood function of the sample is

L(p) = f(x1; p) f(x2; p) … f(xn; p)

= p^(x1+…+xn) (1 − p)^(n−(x1+x2+…+xn)),  xi = 0 or 1 for all i = 1, …, n

= p^s (1 − p)^(n−s)  (s = x1 + … + xn).

Taking ln and differentiating (partially) w.r.t. p, we get

(1/L) ∂L/∂p = s/p − (n − s)/(1 − p).

For a maximum, ∂L/∂p = 0, or s/p = (n − s)/(1 − p),

i.e. p = s/n = (x1 + x2 + … + xn)/n

= (no. of defectives among the n chosen)/n

= sample proportion of defectives.

(One can easily see this p makes ∂²L/∂p² < 0, so that L is maximum for this p.)
Example 10

Let X be a rv having the exponential distribution with parameter β (unknown). Hence the density of X is

f(x; β) = (1/β) e^(−x/β)  (x > 0).

Let {X1, X2, …, Xn} be a random sample of size n. Hence the likelihood function is

L(β) = f(x1; β) f(x2; β) … f(xn; β)

= (1/β^n) e^(−(x1+x2+…+xn)/β)  (xi > 0).

Taking ln and differentiating (partially) w.r.t. β, we get

(1/L) ∂L/∂β = −n/β + (x1 + … + xn)/β² = 0 (for maximum),

which gives β = (x1 + x2 + … + xn)/n = x̄.

Thus the sample mean x̄ is the MLE of β.

Example 11

A r.v. X has density

f(x; β) = (β + 1)x^β,  0 < x < 1.

Obtain the ML estimate of β based on a random sample {X1, X2, …, Xn} of size n from X.

Solution

The likelihood function is

L(β) = (β + 1)^n (x1 x2 … xn)^β,  0 < xi < 1.

Taking ln and differentiating (partially) w.r.t. β, we get

(1/L) ∂L/∂β = n/(β + 1) + ln(x1 … xn) = 0 (for L to be maximum),

which gives β = −1 − n/ln(x1 … xn),

which is the ML estimate for β.

So far we have considered situations where the ML estimate is obtained by differentiating L and equating the derivative to zero. The following example is one where differentiation will not work.

Example 12

A rv X has uniform density over [0, β],

i.e. the density of X is f(x; β) = 1/β for 0 ≤ x ≤ β (and 0 elsewhere).

The likelihood function based on a random sample of size n from X is

L(β) = f(x1; β) f(x2; β) … f(xn; β)

= 1/β^n,  0 ≤ x1 ≤ β, 0 ≤ x2 ≤ β, …, 0 ≤ xn ≤ β.

This is a maximum when the denominator is least, i.e. when β is least. But β ≥ xi for all i = 1, 2, …, n. Hence the least such β is max{x1, …, xn}, which is the MLE of β.
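The uniform-density MLE in code: the estimate is just the sample maximum, which can never exceed the true β (a sketch with a simulated sample; the parameter values are mine):

```python
import random

random.seed(3)
beta = 5.0                                     # true (unknown) parameter
sample = [random.uniform(0, beta) for _ in range(50)]
beta_hat = max(sample)                         # MLE: the largest observation
print(beta_hat <= beta)                        # True: the MLE never overshoots
```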

Estimation of Sample proportion

We have just seen above that if p = population proportion (i.e. the proportion of persons, things, etc. having a characteristic), then the ML estimate of p = sample proportion. Now we would like to find a (1 − α)100% confidence interval for p.

(This is treated in chapter 9 of your text book.)

Large Samples

Suppose we have a 'dichotomous' universe; that is, a population whose members are either "haves" or "have-nots"; that is, a member either has a property or not.

For example, we can think of the population of all bulbs produced by a factory. Any bulb is either a "have" (i.e. defective) or a "have-not" (i.e. it is good), and p = proportion of haves = prob that a randomly chosen member is a "have".

As another example, we can think of the population of all females in the USA. A member is a "have" (= is a blonde) or a "have-not" (= is not a blonde). As a last example, consider the population of all voters in India. A member is a "have" if he follows the BJP and a "have-not" otherwise.

To estimate p, we choose n members at random and count the number X of "haves". Thus X is a rv having the binomial distribution with parameters n and p:

P(X = x) = f(x; p) = C(n, x) p^x (1 − p)^(n−x),  x = 0, 1, 2, …, n,

and if n is large, we know the standardized binomial is approximately standard normal; i.e., for large n, (X − np)/√(np(1 − p)) has approximately the standard normal distribution. So we can say with prob (1 − α) that

−zα/2 < (x − np)/√(np(1 − p)) < zα/2,

or −zα/2 < (x/n − p)/√(p(1 − p)/n) < zα/2,

or x/n − zα/2 √(p(1 − p)/n) < p < x/n + zα/2 √(p(1 − p)/n).

In the end points, we replace p by the MLE x/n (= sample proportion). Thus we can say with prob (1 − α) that

x/n − zα/2 √((x/n)(1 − x/n)/n) < p < x/n + zα/2 √((x/n)(1 − x/n)/n).

Hence a (1 − α)100% confidence interval for p is

(x/n − zα/2 √((x/n)(1 − x/n)/n), x/n + zα/2 √((x/n)(1 − x/n)/n)).

Remark: We can say with prob (1 − α) that the max error |X/n − p| in approximating p by X/n is

E = zα/2 √(p(1 − p)/n).

We can replace p by X/n and say the

max error = zα/2 √((X/n)(1 − X/n)/n).

Or we note that p(1 − p), for 0 ≤ p ≤ 1, has maximum 1/4 (obtained when p = 1/2). Thus we can also say with prob (1 − α) that the max error

= zα/2 √(1/(4n)).

This last equation tells us that to assert with prob (1 − α) that the max error is E, n must be

n = (1/4)(zα/2 / E)².

Example 13

In a random sample of 400 industrial accidents, it was found that 231 were due at least
partially to unsafe working conditions. Construct a 99% confidence interval for the
corresponding true proportion p.

Solution

Here n = 400, x = 231, (1 − α) = 0.99,

so that α/2 = 0.005; hence zα/2 = 2.575.

Thus a 99% confidence interval for p will be

(x/n − zα/2 √((x/n)(1 − x/n)/n), x/n + zα/2 √((x/n)(1 − x/n)/n))

= (231/400 − 2.575 √((231/400)(1 − 231/400)/400), 231/400 + 2.575 √((231/400)(1 − 231/400)/400))

= (0.5139, 0.6411)
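Example 13 numerically (a sketch; 2.575 is the table value z0.005):

```python
import math

x, n = 231, 400
z = 2.575                                      # z_{0.005} from the normal table
phat = x / n                                   # sample proportion 0.5775
half = z * math.sqrt(phat * (1 - phat) / n)
print(round(phat - half, 4), round(phat + half, 4))   # 0.5139 0.6411
```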

Example 14

In a sample survey of the 'safety explosives' used in certain mining operations, explosives containing potassium nitrate were found to be used in 95 out of 250 cases. If 95/250 = 0.38 is used as an estimate of the corresponding true proportion, what can we say with 95% confidence about the maximum error?

Solution

Here n = 250, X = 95, 1 − α = 0.95,

so that α/2 = 0.025; hence zα/2 = 1.96.

Hence we can say with 95% confidence that the max. error is

E = zα/2 √((x/n)(1 − x/n)/n) = 1.96 × √(0.38 × 0.62/250)

= 0.0602

Example 15:

Among 100 fish caught in a large lake, 18 were inedible due to the pollution of the environment. If we use 18/100 = 0.18 as an estimate of the corresponding true proportion, with what confidence can we assert that the error of this estimate is at most 0.065?

Solution

Here n = 100, X = 18, max error E = 0.065.

We note E = zα/2 √((X/n)(1 − X/n)/n) = zα/2 √(0.18 × 0.82/100) = zα/2 × 0.03842.

∴ zα/2 = 0.065/0.03842 = 1.69.

Hence α/2 = 1 − 0.9545 = 0.0455,

∴ α = 0.0910 or 1 − α = 0.9090.

So we can assert with (1 − α) × 100% = 90.9% confidence that the error is at most 0.065.

Example 16

What is the size of the smallest sample required to estimate an unknown proportion to
within a max. error of 0.06 with at least 95% confidence?

Solution
α
Here E = 0.06 ;1 − α = 0.95 or = 0.025
2

∴ Z α = Z 0.025 = 1.96
2

Hence the smallest sample size n is

147
2
Zα 2
1 1 1.96
n= 2
=
4 E 4 0.06

= 266.77

Since n must be an integer, we take the size to be 267.
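Example 16 as a sketch, using the worst case p(1 − p) = 1/4:

```python
import math

z, E = 1.96, 0.06
n = 0.25 * (z / E) ** 2            # worst case over p, since p(1-p) <= 1/4
print(round(n, 2), math.ceil(n))   # 266.78 267
```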

Remark

Read the relevant material in your text on pages 279-281 of finding the confidence
interval for the proportion in case of small samples.

Tests of Statistical Hypothesis

In many problems, instead of estimating the parameter, we must decide whether a statement concerning a parameter is true or false. For instance, one may like to test the truth of the statement: the mean life length of a bulb is 500 hours.

In fact we may even have to decide whether the mean life is 500 hours or more (!)

In such situations, we have a statement whose truth or falsity we want to test. We then say we want to test the null hypothesis H0: the mean life length is 500 hours. (From here onwards, when we say we want to test a statement, it shall mean we want to test whether the statement is true.) We then have another hypothesis (usually called the alternative hypothesis). We make some 'experiment' and on the basis of that we 'decide' whether to accept the null hypothesis or reject it. (When we reject the null hypothesis, we automatically accept the alternative hypothesis.)

Example

Suppose we wish to test the null hypothesis H0: the mean life length of a bulb is 500 hours, against the alternative H1: the mean life length is > 500 hours. Suppose we take a random sample of 50 bulbs and find that the sample mean is 520 hours. Should we accept H0 or reject H0? We have to note that even though the population mean is 500 hours, the sample mean could be more or less. Similarly, even though the population mean is > 500 hours, say 550 hours, the sample mean could be less than 550 hours.
Thus whatever decision we may make, there is a possibility of making an error: falsely rejecting H0 (when it should have been accepted) or falsely accepting H0 (when it should have been rejected). We put this in tabular form as follows:

Accept H0 Reject H0
H0 is true Correct Decision Type I error
H0 is false Type II Error Correct Decision

Thus the type I error is the error of falsely rejecting H0 and the type II error is the error of
falsely accepting H0. A good decision ( ≡ test) is one where the prob of making the errors
is small.

Notation

The prob of committing a type I error is denoted by α . It is also referred to as the size of
the test or the level of significance of the test. The prob of committing Type II error is
denoted by β .

Example 1

Suppose we want to test the null hypothesis µ = 80 against the alternative hypothesis µ = 83 on the basis of a random sample of size n = 100 (assume that the population s.d. σ = 8.4). The null hypothesis is rejected if the sample mean x > 82; otherwise it is accepted. What is the prob of Type I error? The prob of Type II error?

Solution

We know that when µ = 80 (and σ = 8.4) the r.v. (X − µ)/(σ/√n) has a standard normal
distribution. Thus,

P (Type I error)
= P (Rejecting the null hyp when it is true)

= P(X > 82 given µ = 80)

= P( (X − µ)/(σ/√n) > (82 − 80)/(8.4/10) )

= P(Z > 2.38)

= 1 − P(Z ≤ 2.38) = 1 − 0.9913 = 0.0087

Thus in roughly about 1% of the cases we will be (falsely) rejecting H0. Recall this is also
called the size of the test or level of significance of the test.

P (Type II error) = P (Falsely accepting H0)

= P (Accepting H0 when it is false)


= P(X ≤ 82 given µ = 83)

= P( (X − µ)/(σ/√n) ≤ (82 − 83)/(8.4/10) )

= P(Z ≤ −1.19)

= 1 − P(Z ≤ 1.19) = 1 − 0.8830 = 0.1170

Thus roughly in 12% of the cases we will be falsely accepting H0.
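Both error probabilities of Example 1 can be reproduced with the standard normal cdf. A sketch (exact quantiles may differ from four-figure tables in the last digit):

```python
from math import sqrt
from statistics import NormalDist

Z = NormalDist()
mu0, mu1, sigma, n, cutoff = 80, 83, 8.4, 100, 82
se = sigma / sqrt(n)  # sigma/sqrt(n) = 0.84

alpha = 1 - Z.cdf((cutoff - mu0) / se)  # Type I: P(Xbar > 82 | mu = 80)
beta = Z.cdf((cutoff - mu1) / se)       # Type II: P(Xbar <= 82 | mu = 83)
print(round(alpha, 3), round(beta, 3))  # ≈ 0.009 0.117
```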

Definition (Critical Region)

In the previous example we rejected the null hypothesis when x > 82, i.e. when x lies in the 'region' x > 82 (of the x axis). This portion of the horizontal axis is then called the critical region and denoted by C. Thus the critical region for the above situation is C = { x > 82 }, and remember we reject H0 when the (test) statistic X lies in the critical region (i.e. takes a value > 82). So the size of the critical region (≡ prob that X lies in C when H0 is true) is the size of the test or level of significance.

(Figure: the shaded portion is the critical region; the dotted portion is the region of false acceptance of H0.)

Critical regions for Hypothesis Concerning the means

Let X be a rv having a normal distribution with (unknown) mean µ and (known) s.d. σ .

Suppose we wish to test the null hypothesis µ = µ 0 .

The following table gives the critical regions (criteria for rejecting H0) for various alternative hypotheses.

Null hypothesis: µ = µ0 (Normal population, σ known)

Z = (x − µ0)/(σ/√n)

Alternative Hypothesis H1 | Reject H0 if | Prob of Type I error | Prob of Type II error
µ = µ1 (< µ0) | Z < −Z_α | α | 1 − F( (µ0 − µ1)/(σ/√n) − Z_α )
µ < µ0 | Z < −Z_α | α | —
µ = µ1 (> µ0) | Z > Z_α | α | F( (µ0 − µ1)/(σ/√n) + Z_α )
µ > µ0 | Z > Z_α | α | —
µ ≠ µ0 | Z < −Z_{α/2} or Z > Z_{α/2} | α | —

F(x) = cdf of standard normal distribution.

Remark:

The prob of Type II error is left blank when H1 (the alternative hypothesis) is one of the composite forms µ < µ0, µ > µ0, µ ≠ µ0. This is because under such an alternative the true mean could be many different values, so the Type II error can happen in various ways and we cannot determine a single prob of its occurrence.
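For a simple alternative, the Type II error entry in the table above can be evaluated directly. A sketch with illustrative numbers (not taken from the text):

```python
from math import sqrt
from statistics import NormalDist

Z = NormalDist()

def beta_right_sided(mu0, mu1, sigma, n, alpha):
    # Type II error for H1: mu = mu1 (> mu0): F((mu0 - mu1)/(sigma/sqrt(n)) + z_alpha)
    z_alpha = Z.inv_cdf(1 - alpha)
    return Z.cdf((mu0 - mu1) / (sigma / sqrt(n)) + z_alpha)

# Illustrative values: mu0 = 80, mu1 = 83, sigma = 8.4, n = 100, alpha = 0.05
print(round(beta_right_sided(80, 83, 8.4, 100, 0.05), 3))  # ≈ 0.027
```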

Example 2:

According to norms established for a mechanical aptitude test, persons who are 18 years
old should average 73.2 with a standard deviation of 8.6. If 45 randomly selected persons
averaged 76.7, test the null hypothesis µ = 73.2 against the alternative µ > 73.2 at the
0.01 level of significance.

Solution

Step I Null hypothesis H 0 : µ = 73.2


Alternative hypothesis H 1 : µ > 73.2
(Thus here µ 0 = 73.2 )

Step II The level of significance

= α = 0.01

Step III Reject the null hypothesis if Z > Z α = Z 0.01 = 2.33

Step IV Calculations

Z = (x − µ0)/(σ/√n) = (76.7 − 73.2)/(8.6/√45) = 2.73

Step V Decision: Since Z = 2.73 > Z_α = 2.33,

we reject H0 (at the 0.01 level of significance),
i.e. we would say µ > 73.2 (and the prob of falsely saying this is ≤ 0.01).
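The five steps of Example 2 reduce to a few lines. A sketch using exact normal quantiles:

```python
from math import sqrt
from statistics import NormalDist

mu0, sigma, n, xbar, alpha = 73.2, 8.6, 45, 76.7, 0.01
z_stat = (xbar - mu0) / (sigma / sqrt(n))
z_crit = NormalDist().inv_cdf(1 - alpha)  # about 2.33
print(round(z_stat, 2), z_stat > z_crit)  # 2.73 True -> reject H0
```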

Example 3

It is desired to test the null hypothesis µ = 100 against the alternative hypothesis
µ < 100 on the basis of a random sample of size n = 40 from a population with σ = 12.
For what values of x must the null hypothesis be rejected if the prob of Type I error is to
be α = 0.01?

Solution

Z_α = Z_{0.01} = 2.33. Hence from the table we reject H0 if Z < −Z_α = −2.33, where

Z = (x − µ0)/(σ/√n) = (x − 100)/(12/√40) < −2.33, which gives

x < 100 − 2.33 × 12/√40 = 95.58

Example 4

To test a paint manufacturer’s claim that the average drying time of his new “fast-drying”
paint is 20 minutes, a ‘random sample’ of 36 boards is painted with his new paint and his
claim is rejected if the mean drying time x is > 20.50 minutes. Find

(a) The prob of type I error


(b) The prob of type II error when µ = 21 minutes.
(Assume that σ = 2.4 minutes)

Solution

Here null hypothesis H 0 : µ = 20


Alt hypothesis H 1 : µ > 20

P (Type I error) = P (Rejecting H0 when it is true)

Now when H0 is true, µ = 20 and hence

(X − µ)/(σ/√n) = (X − 20)/(2.4/√36) = (6/2.4)(X − 20) is standard normal.

Thus P (Type I error)

= P(X > 20.50 given that µ = 20)

= P( (X − µ)/(σ/√n) > (20.50 − 20)/(2.4/√36) )
= P(Z > 1.25) = 1 − P(Z ≤ 1.25) = 1 − F(1.25)

= 1 − 0.8944 = 0.1056

(b) P (Type II error when µ = 21 )

=P (Accepting H0 when µ = 21 )
= P(X ≤ 20.50 when µ = 21)

= P( (X − µ)/(σ/√n) ≤ (20.50 − 21)/(2.4/√36) ) = P(Z ≤ −1.25) = P(Z > 1.25)

= 0.1056
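Example 4's two error probabilities follow the same pattern. A sketch:

```python
from math import sqrt
from statistics import NormalDist

Z = NormalDist()
mu0, sigma, n, cutoff = 20, 2.4, 36, 20.50
se = sigma / sqrt(n)  # 2.4/6 = 0.4

alpha = 1 - Z.cdf((cutoff - mu0) / se)  # Type I: P(Xbar > 20.50 | mu = 20)
beta = Z.cdf((cutoff - 21) / se)        # Type II at mu = 21: P(Xbar <= 20.50 | mu = 21)
print(round(alpha, 4), round(beta, 4))  # 0.1056 0.1056
```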

Example 5

It is desired to test the null hypothesis µ = 100 pounds against the alternative hypothesis
µ < 100 pounds on the basis of a random sample of size n=40 from a population with
σ = 12. For what values of x must the null hypothesis be rejected if the prob of type I
error is to be α = 0.01?

Solutions

We want to test the null hypothesis H0: µ = 100 against the alt hypothesis H1: µ < 100, given σ = 12, n = 40.
Suppose we reject H0 when x < C.

Thus P (Type I error)

= P (Rejecting H0 when it is true)
= P(X < C given µ = 100)

= P( (X − µ)/(σ/√n) < (C − 100)/(12/√40) ) = P( Z < (C − 100)/(12/√40) )

= F( (C − 100)/(12/√40) ) = 0.01

implies (C − 100)/(12/√40) = −2.33

or C = 100 − 2.33 × 12/√40 = 95.58

Thus reject H0 if X < 95.58

Example 6

Suppose that for a given population with σ = 8.4 in², we want to test the null hypothesis µ = 80.0 in² against the alternative hypothesis µ < 80.0 in² on the basis of a random sample of size n = 100.

(a) If the null hypothesis is rejected for x < 78.0 in² and otherwise it is accepted, what is the probability of Type I error?
(b) What is the answer to part (a) if the null hypothesis is µ ≥ 80.0 in² instead of µ = 80.0 in²?

Solution

(a) null hypothesis H 0 : µ = 80


Alt hypothesis H 1 : µ < 80
Given σ = 8.4, n = 100

P (Type I error) = P (Rejecting H0 when it is true)

= P(X < 78.0 given µ = 80)

= P( (X − µ)/(σ/√n) < (78.0 − 80.0)/(8.4/√100) ) = P(Z < −10/4.2)

= 1 − P(Z < 10/4.2) = 1 − F(2.38)
= 1 − 0.9913 = 0.0087

(b) In this case we define the Type I error as the max prob of rejecting H0 when it is true = P(x < 78.0 given µ is a number ≥ 80.0).
Now P(x < 78.0 when the population mean is µ)

= P( (x − µ)/(σ/√n) < (78.0 − µ)/(8.4/√100) ) = P( Z < (10/8.4)(78 − µ) )

= F(1.19(78 − µ))

We note that the cdf of Z, viz. F(z), is an increasing function of z. Thus when µ ≥ 80, F(1.19(78 − µ)) is largest when µ is smallest, i.e. µ = 80. Hence

P (Type I error) = max over µ ≥ 80 of F(1.19(78 − µ)) = F(1.19 × (78 − 80))

= 0.0087

Example 7

If the null hypothesis µ = µ 0 is to be tested against the one-sided alternative hypothesis


µ < µ 0 (or µ > µ 0 ) and if the prob of Type I error is to be α and the prob of Type II
error is to be β when µ = µ1 , it can be shown that this is possible when the required
sample size is

n = σ²(Z_α + Z_β)² / (µ1 − µ0)²

where σ 2 is the population variance.

(a) It is desired to test the null hypothesis µ = 40 against the alternative hypothesis
µ < 40 on the basis of a large random sample from a population with σ = 4.
If the prob of type I error is to be 0.05 and the prob of Type II error is to be 0.12
for µ = 38, find the required size of the sample.

(b) Suppose we want to test the null hypothesis µ = 64 against the alternative hypothesis µ < 64 for a population with standard deviation σ = 7.2. How large a sample must we take if α is to be 0.05 and β is to be 0.01 for µ = 61? Also, for what values of x will the null hypothesis have to be rejected?

Solution

(a) Here α = 0.05, β = 0.12, µ0 = 40, µ1 = 38, σ = 4

Z_α = Z_{0.05} = 1.645, Z_β = Z_{0.12} = 1.175


Thus the required sample size

n = 16(1.645 + 1.175)² / (38 − 40)² = 31.81 ∴ n ≥ 32.

(b) Here α = 0.05, β = 0.01, µ0 = 64, µ1 = 61, σ = 7.2

∴ n ≥ (7.2)²(1.645 + 2.33)² / (61 − 64)² = 91.01 ∴ n ≥ 92

We reject H0 if Z < −Z_α, i.e. (X − 64)/(7.2/√92) < −1.645, or X < 62.76

Tests concerning mean when the sample is small

If X is the sample mean and S the sample s.d. of a (small) random sample of size n from a normal population (with mean µ0), we know that the statistic t = (X − µ0)/(S/√n) has a t-distribution with (n − 1) degrees of freedom. Thus to test the null hypothesis H0: µ = µ0 against the alternative hypothesis H1: µ > µ0, we note that when H0 is true, i.e. when µ = µ0, P(t > t_{n−1,α}) = α.

Thus if we reject the null hypothesis when t > t_{n−1,α}, i.e. when X > µ0 + t_{n−1,α} S/√n, we shall be committing a Type I error with prob α.

The corresponding tests when the alternative hypothesis is µ < µ 0 (& µ ≠ µ 0 ) are
described below.

Note: If n is large, we can approximate t n −1,α by Z α in these tests.

Critical Regions for Testing H 0 : µ = µ 0 (Normal population, σ unknown )

Alt Hypothesis | Reject Null Hypothesis if
µ < µ0 | t < −t_{n−1,α}
µ > µ0 | t > t_{n−1,α}
µ ≠ µ0 | t < −t_{n−1,α/2} or t > t_{n−1,α/2}

t = (X − µ0)/(S/√n)   (n → sample size)

In each case P(Type I error) = α

Example 8

A random sample of six steel beams has a mean compressive strength of 58,392 psi
(pounds per square inch) with a s.d. of 648 psi. Use this information and the level of
significance α = 0.05 to test whether the true average compressive strength of the steel
from which this sample came is 58,000 psi. Assume normality.

Solution

1. Null Hypothesis µ = µ 0 = 58,000


Alt hypothesis µ > 58,000 (why!)
2. Level of significance α = 0.05
3. Criterion: Reject the null hypothesis if t > t_{n−1,α} = t_{5,0.05} = 2.015
4. Calculations

t = (X − µ0)/(S/√n) = (58,392 − 58,000)/(648/√6) = 1.48

5. Decision
Since t observed = 1.48 ≤ 2.015

we cannot reject the null hypothesis. That is we can say the true average compressive
strength is 58,000 psi.
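The t-statistic of Example 8 can be computed from the summary figures; the critical value t_{5,0.05} = 2.015 is taken from a t-table, since Python's standard library has no t-distribution. A sketch:

```python
from math import sqrt

def t_from_summary(xbar, s, n, mu0):
    # t = (Xbar - mu0) / (S / sqrt(n))
    return (xbar - mu0) / (s / sqrt(n))

t = t_from_summary(58392, 648, 6, 58000)
print(round(t, 2), t > 2.015)  # 1.48 False -> cannot reject H0
```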

Example 9

Test runs with six models of an experimental engine showed that they operated for
24,28,21,23,32 and 22 minutes with a gallon of a certain kind of fuel. If the prob of type I
error is to be at most 0.01, is this evidence against a hypothesis that on the average this
kind of engine will operate for at least 29 minutes per gallon with this kind of fuel?
Assume normality.

Solution

1. Null hypothesis H 0 : µ ≥ µ 0 = 29
Alt hypothesis: H 1 : µ < µ 0
2. Level of significance α = 0.01 (the prob of Type I error is to be at most 0.01)
3. Criterion: Reject the null hypothesis if t < −t_{n−1,α} = −t_{5,0.01} = −3.365 (note n = 6), where t = (X − µ0)/(S/√n)
4. Calculations

X = (24 + 28 + 21 + 23 + 32 + 22)/6 = 25
6

S² = (1/(6 − 1))[(24 − 25)² + (28 − 25)² + (21 − 25)² + (23 − 25)² + (32 − 25)² + (22 − 25)²] = 17.6

∴ t = (25 − 29)/√(17.6/6) = −2.34

5. Decision

Since t obs = −2.34 ≥ −3.365, we cannot reject the null hypothesis. That is, we can say that this kind of engine will operate for at least 29 minutes per gallon with this kind of fuel.
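Example 9's mean, variance, and t-statistic can be computed from the raw data. A sketch (`variance` uses the n − 1 divisor, matching S² above):

```python
from math import sqrt
from statistics import mean, variance

data = [24, 28, 21, 23, 32, 22]
mu0, n = 29, len(data)

xbar = mean(data)    # 25
s2 = variance(data)  # sample variance with n-1 divisor: 17.6
t = (xbar - mu0) / sqrt(s2 / n)
print(xbar, s2, round(t, 2))  # 25 17.6 -2.34
```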

Example 10

A random sample from a company’s very extensive files shows that orders for a certain
piece of machinery were filled, respectively in 10,12,19,14,15,18,11 and 13 days. Use the
level of significance α = 0.01 to test the claim that on the average such orders are filled in 10.5 days. Choose the alternative hypothesis so that rejection of the null hypothesis µ = 10.5 indicates that it takes longer than indicated. Assume normality.

Solution

1. Null hypothesis H0: µ = µ0 = 10.5

Alt hypothesis: H1: µ > 10.5
2. Level of significance α = 0.01
3. Criterion: Reject the null hypothesis if t > t_{n−1,α} = t_{8−1,0.01} = t_{7,0.01} = 2.998, where t = (X − µ0)/(S/√n) (here µ0 = 10.5, n = 8)
4. Calculations

X = (10 + 12 + 19 + 14 + 15 + 18 + 11 + 13)/8 = 14

S² = (1/(8 − 1))[(10 − 14)² + (12 − 14)² + (19 − 14)² + (14 − 14)² + (15 − 14)² + (18 − 14)² + (11 − 14)² + (13 − 14)²] = 10.29

∴ t = (14 − 10.5)/√(10.29/8) = 3.09

5. Decision
Since t observed = 3.09 > 2.998, we have to reject the null hypothesis. That is, we can say that on the average such orders are filled in more than 10.5 days.

Example 11

Tests performed with a random sample of 40 diesel engines produced by a large


manufacturer show that they have a mean thermal efficiency of 31.4% with a sd of 1.6%.
At the 0.01 level of significance, test the null hypothesis µ = 32.3% against the
alternative hypothesis µ ≠ 32.3%

Solution

1. Null hypothesis µ = µ 0 = 32.3


Alt hypothesis µ ≠ 32.3
2. Level of significance α = 0.01
3. Criterion: Reject H0 if t < −t_{n−1,α/2} or t > t_{n−1,α/2}, i.e. if t < −t_{39,0.005} or t > t_{39,0.005}.

Now t_{39,0.005} ≈ Z_{0.005} = 2.575


Thus we reject H0 if t < −2.575 or t > 2.575, where t = (X − µ0)/(S/√n)
4. Calculations

t = (31.4 − 32.3)/(1.6/√40) = −3.558

5. Decision
Since t observed = −3.558 < −2.575, we reject H0; that is, we can say the mean thermal efficiency ≠ 32.3%.
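Example 11's two-sided large-sample test in code (a sketch; 2.575 is Z_{0.005} from the table):

```python
from math import sqrt

xbar, mu0, s, n = 31.4, 32.3, 1.6, 40
t = (xbar - mu0) / (s / sqrt(n))
print(round(t, 3), abs(t) > 2.575)  # -3.558 True -> reject H0
```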

Example 12

In 64 randomly selected hours of production, the mean and the s.d. of the number of
acceptable pieces produced by an automatic stamping machine are
X = 1,038 and S = 146. At the 0.05 level of significance, does this enable us to reject the
null hypothesis µ = 1000 against the alt hypothesis µ > 1000 ?

Solution

1. The null hypothesis H 0 : µ = µ 0 = 1000


Alt hypothesis H 1 : µ > 1000
2. Level of significance α = 0.05
3. Criterion: Reject H0 if t > t_{n−1,α} = t_{64−1,0.05} = t_{63,0.05}
Now t_{63,0.05} ≈ Z_{0.05} = 1.645
Thus we reject H0 if t > 1.645
4. Calculations: t = (X − µ0)/(S/√n) = (1,038 − 1,000)/(146/√64) = 2.082
5. Decision: Since t obs = 2.082 > 1.645

we reject H 0 at 0.05 level of significance.

