
Chapter 1 Data and Statistics

Motivation: the following kinds of statements appear very frequently in newspapers and magazines:
Sales of new homes are occurring at a rate of 70,300 homes per year.
The unemployment rate has dropped to 4.0%.
The Dow Jones Industrial Average closed at 10,000.
Census

The above numerical descriptions are very familiar to most of us since we use them in
everyday life. As a matter of fact, they are part of statistics. Therefore, statistics is in
our everyday life. We now give a description of statistics.

Definition of statistics:
Statistics is the art and science of collecting, analyzing, presenting, interpreting
and predicting data.

Objective of this course: to use statistics to give managers and decision makers a
better understanding of the business and economic environment, and thus to enable them
to make more informed and better decisions.

1.1 Data

(I) Basic components of a data set:

Usually, a data set consists of the following components:


Element: the entities on which data are collected.
Variable: a characteristic of interest for the element.
Observation: the set of measurements collected for a particular element.

Example 1:
We have a data set for the following 5 stocks:

Stock             Annual Sales ($ million)   Earnings per Share ($)   Exchange (where traded)
Cache Inc.        86.6                       0.25                     OTC
Koss Corp         36.1                       0.89                     OTC
Par Technology    81.2                       0.32                     NYSE
Scientific Tech.  17.3                       0.46                     OTC
Western Beef      273.7                      0.78                     OTC
Note: OTC stands for over the counter while NYSE stands for New York Stock
Exchange.

In the above data set,


Elements: Cache Inc., Koss Corp, Par Technology, Scientific Tech., Western Beef
Variables: Annual Sales, Earnings per Share, Exchange
Observations: (86.6, 0.25, OTC), (36.1, 0.89, OTC), (81.2, 0.32, NYSE), (17.3, 0.46, OTC), (273.7, 0.78, OTC)

(II) Qualitative and Quantitative Data:

Qualitative data: labels or names used to identify an attribute of each element.


Quantitative data: indicating how much or how many

Example 1 (continued):
Qualitative data: OTC, OTC, NYSE, OTC and OTC
Quantitative data: 86.6, 36.1, 81.2, 17.3, 273.7, 0.25, 0.89, 0.32, 0.46 and 0.78

The variable Exchange is referred to as a qualitative variable.


The variables Annual Sales and Earnings per share are referred to as quantitative
variables.

Note: quantitative data are always numeric, but qualitative data may be either
numeric or nonnumeric, for example, id numbers and automobile license plate
numbers are qualitative data.

Note: ordinary arithmetic operations are meaningful only with quantitative data
and are not meaningful with qualitative data.

(III) Cross-Sectional and Time Series Data:

Cross-sectional data: data collected at the same or approximately the same point in time.
Time series data: data collected over several time periods.
Online Exercise:
Exercise 1.1.1
Exercise 1.1.2

1.2 Data Source:


There are two sources for data collection: one is existing sources and the other is
statistical studies.

(I) Existing Sources:


There are two main existing sources:
Company: some data are commonly available from the internal information
sources of most companies, including employee records, production records, inventory
records, sales records, credit records and customer profiles, etc.
Government Agency: Department of Labor, Bureau of the Census, Federal Reserve
Board, Office of Management and Budget, Department of Commerce.

(II) Statistical Studies:


A statistical study can be conducted to obtain data not readily available from
existing sources. Such statistical studies can be classified as either experimental or
observational.
Experimental Study: an attempt is made to control or influence the variables of interest,
for example, a drug test or an industrial product test.
Observational Study: no attempt is made to control or influence the variables of
interest, for example, a survey.

Online Exercise:
Exercise 1.2.1

1.3 Descriptive Statistics:


There are two classes of descriptive statistics: one class includes tables and graphs, and
the other class includes numerical measures and index numbers.

(I) Tabular and Graphical Approaches:

Example 1 (continued):
Tabular approach for stock data:

Exchange   Frequency   Percent Frequency
OTC        4           80%
NYSE       1           20%
Total      5           100%
Graphical approach for stock data:
[Bar graph of percent frequency by exchange: OTC 80%, NYSE 20%]

(II) Numerical Measures and Index Numbers:

Some numerical quantities can be used to provide important information about the
data, for example, the average or mean. Index numbers are widely used in business,
for example, the Consumer Price Index (CPI) and the Dow Jones Industrial Average
(DJIA).
Online Exercise:
Exercise 1.3.1
Exercise 1.3.2

1.4 Statistical Inference:

Descriptive statistics, introduced in Section 1.3, can provide important and intuitive
information about the data of interest. However, these statistical measures are mainly
exploratory. For more detailed, rigorous and accurate results, a statistical inference
procedure is required. To conduct a statistical inference, data need to be drawn from a
set of elements of interest. We now introduce some basic components in the statistical
inference procedure. They are:

Population: the set of all elements of interest in a particular study.


Sample: a subset of the population.

Data from a sample can be used to make estimates and test hypotheses about the
characteristics of a population

Example 2:

Suppose there are 100,000 bulbs produced in a bulb factory.

Objective: we want to know the average lifetime of the 100,000 bulbs.

The 100,000 bulbs are the population of interest. In practice, it is not possible (and not
realistic) to test all 100,000 bulbs for their lifetime. One workable way is to draw a sample,
say 100 bulbs, and then test their lifetimes. Suppose the average lifetime of the 100
bulbs is 750 hours. Then, the estimate (guess) of the average lifetime of the 100,000
bulbs is 750 hours.

Note: the process of making estimates and testing hypotheses about the
characteristics of a population is referred to as statistical inference.

Online Exercise:
Exercise 1.4.1

Chapter 2 Descriptive Statistics: Table and Graph

The logical flow of this chapter:


Summarizing qualitative data using tables and graphs (2.1)
Summarizing quantitative data using tables and graphs (2.2)
Exploratory data analysis using simple arithmetic and easy-to-draw graphs such that the data can be summarized quickly (2.3)
2.1 Summarizing Qualitative Data:

For qualitative data, we can use frequency distributions. We now introduce the
frequency distribution, the relative frequency and the percent frequency.

Frequency distribution: a tabular summary of data indicating the number of data
values in each of several nonoverlapping classes.

Relative frequency: (frequency of a class)/n, where n is the total number of data values.

Percent frequency: (relative frequency) × 100%.

Based on the frequency distribution, relative frequency, and percent frequency of the
data, we can use tables and graphs to display these frequencies.

Example:

Forbes investigated the degrees of the 25 best-paid CEOs (chief executive officers).

Tabular summary:

Degrees Frequency Relative Frequency Percent Frequency

None 2 0.08 8

Bachelor 11 0.44 44

Master 7 0.28 28

Doctorate 5 0.20 20

Total 25 1.00 100
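The same table can be produced with a few lines of code. The following is a minimal Python sketch (not part of the original notes); the list of 25 degree labels is reconstructed from the frequency column above, and all names are for illustration only.

# Frequency, relative frequency and percent frequency for qualitative data.
from collections import Counter

degrees = ["None"] * 2 + ["Bachelor"] * 11 + ["Master"] * 7 + ["Doctorate"] * 5
n = len(degrees)

for label, freq in Counter(degrees).items():
    rel = freq / n                                   # relative frequency
    print(f"{label}: {freq}, {rel:.2f}, {rel * 100:.0f}%")   # percent frequency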

Graphical display:

Bar Graph:
[Bar graph of degree frequencies: None 2, Bachelor 11, Master 7, Doctorate 5]

Pie Graph:
[Pie chart of CEO degrees: None, Bachelor, Master, Doctorate]

Note: most statisticians recommend that from 5 to 20 classes be used in


a frequency distribution; classes with smaller frequencies should
normally be grouped!!

Online Exercise:
Exercise 2.1.1
Exercise 2.1.2

2.2 Summarizing Quantitative Data:

Determine the classes:

For quantitative data, we need to define the classes first. There are 3 steps to define
the classes for a frequency distribution:

Step 1: Determine the number of nonoverlapping classes, usually 5 to 20 classes.

Step 2: Determine the width of each class,

class width = (largest data value − smallest data value) / (number of classes)

Note: the number of classes and the approximate class width are determined
by trial and error!!

Step 3: Determine the class limits: the lower class limit of the first class should not be
larger than the smallest data value, while the upper class limit of the last class should not
be smaller than the largest data value.

Example:

Suppose we have the following data (in days):


12 14 19 18 15 15 18 17 20 27
22 23 22 21 33 28 14 18 16 13
We applied the above procedure to this data.

Step 1:
We choose 5 to be the number of classes.
Step 2:
class width = (largest data value − smallest data value) / (number of classes) = (33 − 12)/5 = 4.2.
Therefore, we use 5 as the class width.
Step 3:
The 5 classes we choose are
10-14 15-19 20-24 25-29 30-34

Note: the lower class limit of the first class (10) is smaller than the
smallest data value 12. Also, the upper class limit of the last class (34) is
larger than the largest data value 33.

Summarizing quantitative data:

Tabular summary:

In addition to frequency, relative frequency and percent frequency, another tabular


summary of quantitative data is the cumulative frequency distribution.

Cumulative frequency distribution: the number of data items with values less than
or equal to the upper class limit of each class.

Graphical display:

In addition to histogram, another graphical display of quantitative data is ogive.

Ogive: a graph of the cumulative frequency (the number of data items with values less
than or equal to the upper class limit) plotted against the upper class limit of each class.

Example (continued):

Classes Frequency Relative Frequency Percent Frequency

10-14 4 0.2 20

15-19 8 0.4 40
20-24 5 0.25 25

25-29 2 0.1 10

30-34 1 0.05 5
Total 20 1 100

Classes (upper limit)   Cumulative Frequency   Cumulative Relative Frequency   Cumulative Percent Frequency
14                      4                      0.2                             20
19                      4+8=12                 0.2+0.4=0.6                     20+40=60
24                      4+8+5=17               0.2+0.4+0.25=0.85               20+40+25=85
29                      4+8+5+2=19             0.2+0.4+0.25+0.1=0.95           20+40+25+10=95
34                      4+8+5+2+1=20           0.2+0.4+0.25+0.1+0.05=1         20+40+25+10+5=100

The histogram is
[Histogram of the data with classes 10-14, 15-19, 20-24, 25-29, 30-34 and frequencies 4, 8, 5, 2, 1]

The ogive plot is
[Ogive plot: cumulative frequency plotted against the upper class limits 14, 19, 24, 29, 34]
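The class frequencies and cumulative frequencies above can also be checked with a short program. Here is a minimal Python sketch (illustration only, not part of the original notes) using the 20 data values of this example.

data = [12, 14, 19, 18, 15, 15, 18, 17, 20, 27,
        22, 23, 22, 21, 33, 28, 14, 18, 16, 13]
classes = [(10, 14), (15, 19), (20, 24), (25, 29), (30, 34)]

cumulative = 0
for lower, upper in classes:
    freq = sum(lower <= x <= upper for x in data)   # count values in this class
    cumulative += freq
    print(f"{lower}-{upper}: frequency={freq}, "
          f"relative={freq / len(data):.2f}, cumulative={cumulative}")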

Online Exercise:
Exercise 2.2.1
Exercise 2.2.2

2.3 Exploratory Data Analysis:

Stem-and-leaf display is a useful exploratory data analysis tool which can provide an
idea of the shape of the distribution of a set of quantitative data.

Example:

Suppose the following data are the midterm scores of 10 students,


17, 22, 93, 82, 95, 87, 66, 68, 71, 52.
Then, the stem-and-leaf display is

1 | 7
2 | 2
3 |
4 |
5 | 2
6 | 6 8
7 | 1
8 | 2 7
9 | 3 5
Online Exercise:
Exercise 2.3.1

Chapter 3 Descriptive Statistics: Numerical Methods

Suppose y_1, y_2, ..., y_N are all the elements in the population and x_1, x_2, ..., x_n are
a sample drawn from y_1, y_2, ..., y_N, where N is referred to as the population size
and n is the sample size. In this chapter, we introduce several numerical measures to
obtain important information about the population. The numerical measures
computed from a sample are called sample statistics while those numerical
measures computed from a population are called population parameters.

In practice, it is often not realistic or not possible to obtain a population parameter from a
population, for example, the average lifetime of 100,000 bulbs. Therefore, the sample
statistic can be used to estimate the population parameter; for example, the average
lifetime of 100 bulbs can be used to estimate the average lifetime of the 100,000 bulbs.

3.1 Measure of Location:

Example:
Suppose the following data are the scores of 10 students in a quiz,
1, 3, 5, 7, 9, 2, 4, 6, 8, 10.
Some measures need to be used to provide information about the performance of the
10 students in this quiz.

(I) Mean:

Sample mean: x̄ = (Σ_{i=1}^{n} x_i) / n   (sample statistic)

Population mean: μ = (Σ_{i=1}^{N} y_i) / N   (population parameter)

Basically, the mean can provide the information about the center of the data.
Intuitively, it can measure the rough location of the data.

Example (continued):

x̄ = (1 + 3 + 5 + 7 + 9 + 2 + 4 + 6 + 8 + 10) / 10 = 5.5

(II) Median:

The data are arranged in ascending (or descending) order. Then,


1. When the sample size is odd, the median is the middle value.
2. When the sample size is even, the median is the mean of the middle two values.

Example (continued):

median = (5 + 6) / 2 = 5.5
If the data are 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, then
median = 6

Note: the median is less sensitive to the data with extreme values than
the mean. For example, in the previous data, suppose the last data has
been wrongly typed, the data become 1, 3, 5, 7, 9, 2, 4, 6, 8, 100. Then
the median is still 5.5 while the mean becomes 14.5.

(III) Mode:
The data value that occurs with the greatest frequency (not necessarily numerical).

Note: if the data have exactly two modes, we say that the data are
bimodal. If the data have more than two modes, we say that the data are
multimodal.

(IV) Percentile:
The pth percentile is a value such that at least p percent of the data have this value or
less (and at least (100 − p) percent have this value or more).

Note: 50th percentile = median!!

The procedure to calculate the pth percentile:


1. Arrange the data in ascending order.
2. Compute an index i = (p/100) × n.
3. (a) If i is not an integer, round up, i.e., the next integer value greater than i denotes
the position of the pth percentile.
(b) If i is an integer, the pth percentile is the average of the data values in positions
i and i+1.

Example (continued):

Please find the 40th percentile and the 26th percentile for the previous data.

[Solution]

Step 1: the data in ascending order are
1, 2, 3, 4, 5, 6, 7, 8, 9, 10.
Step 2:
For the 40th percentile, i = (40/100) × 10 = 4.
For the 26th percentile, i = (26/100) × 10 = 2.6.
Step 3:
Since i = 4 is an integer, the 40th percentile = (4 + 5) / 2 = 4.5.
Since i = 2.6 is not an integer, round up to 3, so the 26th percentile = 3 (the value in position 3).

(V) Quartiles:
When dividing data into 4 parts, the division points are referred to as the quartiles!!
That is,
Q1 = the first quartile, or 25th percentile
Q2 = the second quartile, or 50th percentile
Q3 = the third quartile, or 75th percentile

Example (continued):
Find the first quartile and the third quartile for the previous example.

Step 2:
For the first quartile, i = (25/100) × 10 = 2.5.
For the third quartile, i = (75/100) × 10 = 7.5.
Step 3:
Rounding up gives positions 3 and 8, so
Q1 = 3
and
Q3 = 8
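The percentile procedure above is easy to code. The following minimal Python sketch (illustration only; the helper function percentile is not part of the original notes) reproduces the results of the last two examples.

import math

def percentile(data, p):
    values = sorted(data)
    n = len(values)
    i = (p / 100) * n
    if i != int(i):                       # i not an integer: round up
        return values[math.ceil(i) - 1]   # positions are 1-based
    i = int(i)                            # i an integer: average of positions i and i+1
    return (values[i - 1] + values[i]) / 2

scores = [1, 3, 5, 7, 9, 2, 4, 6, 8, 10]
print(percentile(scores, 40))                          # 4.5
print(percentile(scores, 26))                          # 3
print(percentile(scores, 25), percentile(scores, 75))  # Q1 = 3, Q3 = 8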

Online Exercise:
Exercise 3.1.1
Exercise 3.1.2

3.2 Measure of Dispersion:

Example:
Suppose there are two factories producing the batteries. From each factory, 10
batteries are drawn to test for the lifetime (in hours). These lifetimes are:

Factory 1: 10.1, 9.9, 10.1, 9.9, 9.9, 10.1, 9.9, 10.1, 9.9, 10.1
Factory 2: 16, 5, 7, 14, 6, 15, 3, 13, 9, 12.

The mean lifetimes of the two factories are both 10. However, by looking at the data,
it is obvious that the batteries produced by factory 1 are much more reliable than the
ones by factory 2. This implies other measures for measuring the dispersion or
variation of the data are required.

(I) Range:
range = (largest value of the data) − (smallest value of the data).

Example (continued):

Range of lifetime data for factory 1 = 10.1 − 9.9 = 0.2
Range of lifetime data for factory 2 = 16 − 3 = 13
The range of battery lifetimes for factory 1 is much smaller than the one for
factory 2.

Note: the range is seldom used as the only measure of dispersion. The
range is highly influenced by an extremely large or an extremely small
data value.

(II) Interquartile Range:

The interquartile range (IQR) is the difference between the third and the first quartiles. That is,
IQR = Q3 − Q1.

Example:

The first quartile and the third quartile for the data from factory 1 are 9.9 and 10.1,
respectively, and 6 and 14 for the data from factory 2. Therefore,
IQR (factory 1) = 10.1 − 9.9 = 0.2
IQR (factory 2) = 14 − 6 = 8.
The interquartile range of battery lifetimes for factory 1 is much smaller than the one
for factory 2.

(III) Variance and Standard Deviation:

population deviation about the mean: y_i − μ, i = 1, 2, ..., N
sample deviation about the mean: x_i − x̄, i = 1, 2, ..., n
Intuitively, the population deviation and the sample deviation measure how far the
data are from the center of the data. Then, the population variance and the sample
variance are based on the sums of squares of the population deviations and sample deviations,

σ² = Σ_{i=1}^{N} (y_i − μ)² / N

and

s² = Σ_{i=1}^{n} (x_i − x̄)² / (n − 1) = (Σ_{i=1}^{n} x_i² − n x̄²) / (n − 1),

respectively. The population standard deviation and sample standard deviation are the
square roots of the population variance and sample variance:

σ = √σ²   and   s = √s²,

respectively.
Large sample variance or sample standard deviation implies the data are dispersed
or are highly varied.

Note: Σ_{i=1}^{n} (x_i − x̄) = Σ_{i=1}^{n} x_i − n x̄ = Σ_{i=1}^{n} x_i − n (Σ_{i=1}^{n} x_i / n) = Σ_{i=1}^{n} x_i − Σ_{i=1}^{n} x_i = 0

Example:

s² (factory 1) = [(10.1 − 10)² + (9.9 − 10)² + ... + (10.1 − 10)²] / (10 − 1) = 0.0111

s² (factory 2) = [(16 − 10)² + (5 − 10)² + ... + (12 − 10)²] / (10 − 1) = 21.1111

The sample variance of battery lifetimes for factory 2 is about 1900 times larger than
the one for factory 1.
The sample standard deviations for the data from factories 1 and 2 are
√0.0111 ≈ 0.1054 and √21.1111 ≈ 4.5946,
respectively.
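As a quick numerical check, here is a minimal Python sketch (illustration only, not part of the original notes) computing the sample variance and standard deviation for the two factories with the (n − 1) divisor defined above.

factory1 = [10.1, 9.9, 10.1, 9.9, 9.9, 10.1, 9.9, 10.1, 9.9, 10.1]
factory2 = [16, 5, 7, 14, 6, 15, 3, 13, 9, 12]

def sample_variance(x):
    n = len(x)
    mean = sum(x) / n
    return sum((xi - mean) ** 2 for xi in x) / (n - 1)

for data in (factory1, factory2):
    s2 = sample_variance(data)
    # roughly 0.011 and 0.105 for factory 1; 21.11 and 4.59 for factory 2
    print(s2, s2 ** 0.5)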

(IV) Coefficient of Variation:


The coefficient of variation is another useful statistic for measuring the dispersion of
the data. The coefficient of variation is

C.V. = (s / x̄) × 100.

The coefficient of variation is invariant with respect to the scale of the data. On the
other hand, the standard deviation is not scale-invariant. The following example
demonstrates this property.

Example (continued):

In the battery data from factory 1, suppose the measurement is in minutes rather than
hours. Then, the data are 606, 594, 606, 594, 594, 606, 594, 606, 594, 606.
Thus, the standard deviation becomes 6.3245, which is 60 times larger than the value
0.1054 based on the original data measured in hours. However, no matter whether the data
are measured in hours or in minutes, the coefficient of variation is

C.V. = (0.1054 / 10) × 100 = (6.3245 / 600) × 100 = 1.054.

Note: since the coefficient of variation is scale-invariant, it is very useful
for comparing the dispersion of different data sets. For example, in the
previous battery data, if the lifetimes of the batteries from factory 1 and
factory 2 are measured in minutes and hours, respectively, the standard
deviation for factory 1, 6.3245, would be larger than the one for factory 2,
4.5946. However, the coefficient of variation for factory 1, 1.054, is still
much smaller than the one for factory 2, 45.946.

Online Exercise:
Exercise 3.2.1
Exercise 3.2.2

3.3 Exploratory Data Analysis:

(I) Five-Number Summary:

The five number summary can provide important information about both the location
and the dispersion of the data. They are
Smallest value
First quartile
Median
Third quartile
Largest value

Example (continued):

The original data (in hours) are:


Factory 1: 10.1, 9.9, 10.1, 9.9, 9.9, 10.1, 9.9, 10.1, 9.9, 10.1
Factory 2: 16, 5, 7, 14, 6, 15, 3, 13, 9, 12.

The five-number summary for the data from both factories is

Smallest Q1 Median Q3 Largest


Factory 1 9.9 9.9 10 10.1 10.1
Factory 2 3 6 10.5 14 16

(II) Box Plot:

The box plot is a commonly used graphical method that provides information about both
the location and the dispersion of the data. Especially when the interest is in comparing
data from different populations, the box plot can provide insight. The box plot is drawn as
a box from Q1 to Q3 (whose length is the IQR), with whiskers extending to the
lower limit = Q1 − 1.5 × IQR and the upper limit = Q3 + 1.5 × IQR.

Note: data outside upper limit and lower limit are called outliers.

Example (continued):

The box plots for the data from the two factories are
[Box plots of battery lifetimes for factory 1 and factory 2]

Online Exercise:
Exercise 3.3.1

3.4 Measures of Relative Location:

The z-score is a quantity which can be used to measure the relative location of the data.
The z-score, referred to as the standardized value for observation i, is defined as

z_i = (x_i − x̄) / s.

Note: z_i is the number of standard deviations x_i is from the mean x̄.

Example (continued):

Factory 1:

xi 10.1 9.9 10.1 9.9 9.9 10.1 9.9 10.1 9.9 10.1
zi 0.948 -0.948 0.948 -0.948 -0.948 0.948 -0.948 0.948 -0.948 0.948

Factory 2:

xi 16 5 7 14 6 15 3 13 9 12
zi 1.305 -1.088 -0.652 0.870 -0.870 1.088 -1.523 0.652 -0.217 0.435
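The z-scores in the two tables can be reproduced with a few lines of code. A minimal Python sketch (illustration only, not part of the original notes) for the factory 2 data:

data = [16, 5, 7, 14, 6, 15, 3, 13, 9, 12]
n = len(data)
mean = sum(data) / n
s = (sum((x - mean) ** 2 for x in data) / (n - 1)) ** 0.5   # sample standard deviation

z = [(x - mean) / s for x in data]
print(z)   # matches the factory 2 table above, up to rounding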

There are two results related to the location of the data. The first result is Chebyshev's
theorem.

Chebyshev's Theorem:
For any population, within k standard deviations of the mean there are at least

(1 − 1/k²) × 100%

of the data, where k is any value greater than 1.

Based on Chebyshev's theorem, for any data set, it can be roughly estimated that at
least (1 − 1/k²) × 100% of the data lie within k sample standard deviations of the mean.
Example (continued):

As k = 2, based on Chebyshev's theorem, at least

(1 − 1/2²) × 100% = 75%

of the data are estimated to be within 2 standard deviations of the mean. For the data from
factory 1 and factory 2, all the data are within 2 sample standard deviations of the mean, i.e., all
the data have z-scores with absolute values smaller than 2.

The second result is based on the empirical rule. The rule is especially applicable when
the data have a bell-shaped distribution. The empirical rule is

Approximately 68% of the data will be within one standard deviation of the
mean (−1 ≤ z_i ≤ 1).

Approximately 95% of the data will be within two standard deviations of the
mean (−2 ≤ z_i ≤ 2).

Almost all of the data will be within three standard deviations of the mean
(−3 ≤ z_i ≤ 3).

Example (continued):

For the data from factory 1, all the data are within one standard deviation of the mean,
while 60% of the data are within one standard deviation of the mean for the data from
factory 2. The result based on the empirical rule is not applicable to these two data
sets since the two data sets are not bell-shaped. However, for the following data,

2.11 -0.83 -1.43 1.35 -0.42 -0.69 -0.65 -0.29 -0.54 1.92
0.53 -0.27 1.7 0.88 1.25 0.32 -2.18 0.68 0.85 0.34

The histogram of the above data (not shown here) indicates the data are roughly bell-shaped.

Approximately 65% of the data are within one standard deviation of the mean, which
is similar to the result based on the empirical rule (68%).

Detecting Outliers:
To identify outliers, we can use either the box plot or the z-score. The outliers
identified by the box plot are those data values outside the upper limit or the lower limit, while
the outliers identified by the z-score are those with z-score smaller than −3 or
greater than +3.
Note: the outliers identified by the box plot might be different from
those identified by using the z-score.

Online Exercise:
Exercise 3.4.1

3.5 The Weighted Mean and Grouped Data:

Weighted Mean:

x̄_w = (Σ_{i=1}^{n} w_i x_i) / (Σ_{i=1}^{n} w_i).

Note: when data values vary in importance, the analyst must choose the
weight that best reflects the importance of each data value in the
determination of the mean.

Example 1:

The following are 5 purchases of a raw material over the past 3 months.
Purchase Cost per Pound ($) Number of Pounds
1 3.00 1200
2 3.40 500
3 2.80 2750
4 2.90 1000
5 3.25 800

Find the mean cost per pound.

[solution:]
w_1 = 1200, w_2 = 500, w_3 = 2750, w_4 = 1000, w_5 = 800
and
x_1 = 3.00, x_2 = 3.40, x_3 = 2.80, x_4 = 2.90, x_5 = 3.25.

Then,

x̄_w = (Σ_{i=1}^{5} w_i x_i) / (Σ_{i=1}^{5} w_i)
    = (1200 × 3.00 + 500 × 3.40 + 2750 × 2.80 + 1000 × 2.90 + 800 × 3.25) / (1200 + 500 + 2750 + 1000 + 800)
    = 2.96
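A minimal Python sketch (illustration only, not part of the original notes) of the same weighted mean:

costs  = [3.00, 3.40, 2.80, 2.90, 3.25]   # x_i, cost per pound
pounds = [1200, 500, 2750, 1000, 800]     # w_i, number of pounds

x_w = sum(w * x for w, x in zip(pounds, costs)) / sum(pounds)
print(round(x_w, 2))   # 2.96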

Population Mean for Grouped Data:

μ_g = (Σ_{k=1}^{m} F_k M_k) / N = (Σ_{k=1}^{m} F_k M_k) / (Σ_{k=1}^{m} F_k),

where
M_k: the midpoint of class k,
F_k: the frequency of class k in the population,
N = Σ_{k=1}^{m} F_k: the population size.

Sample Mean for Grouped Data:

x̄_g = (Σ_{k=1}^{m} f_k M_k) / n = (Σ_{k=1}^{m} f_k M_k) / (Σ_{k=1}^{m} f_k),

where
f_k: the frequency of class k in the sample,
n = Σ_{k=1}^{m} f_k: the sample size.

Population Variance for Grouped Data:

σ_g² = Σ_{k=1}^{m} F_k (M_k − μ_g)² / N

Sample Variance for Grouped Data:

s_g² = Σ_{k=1}^{m} f_k (M_k − x̄_g)² / (n − 1) = (Σ_{k=1}^{m} f_k M_k² − n x̄_g²) / (n − 1)
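As a quick numerical check of these grouped-data formulas, here is a minimal Python sketch (illustration only, not part of the original notes) using the frequency distribution from Section 2.2 (classes 10-14, ..., 30-34 with frequencies 4, 8, 5, 2, 1); the same computation is carried out by hand in Example 2 below.

midpoints   = [12, 17, 22, 27, 32]   # M_k, class midpoints
frequencies = [4, 8, 5, 2, 1]        # f_k, sample frequencies

n = sum(frequencies)
mean_g = sum(f * M for f, M in zip(frequencies, midpoints)) / n
var_g = sum(f * (M - mean_g) ** 2 for f, M in zip(frequencies, midpoints)) / (n - 1)
print(mean_g, var_g)   # 19.0 and 30.0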
Example 2:

The following are the frequency distribution of the time in days required to complete
year-end audits:
Audit Time (days) Frequency
10-14 4
15-19 8
20-24 5
25-29 2
30-34 1
What is the mean and the variance of the audit time?

[solution:]

f_1 = 4, f_2 = 8, f_3 = 5, f_4 = 2, f_5 = 1,

n = f_1 + f_2 + f_3 + f_4 + f_5 = 4 + 8 + 5 + 2 + 1 = 20,
and
M_1 = 12, M_2 = 17, M_3 = 22, M_4 = 27, M_5 = 32.
Thus,

x̄_g = (Σ_{i=1}^{5} f_i M_i) / n = (4×12 + 8×17 + 5×22 + 2×27 + 1×32) / (4 + 8 + 5 + 2 + 1) = 19

and

s_g² = Σ_{i=1}^{5} f_i (M_i − x̄_g)² / (n − 1)
     = [4(12 − 19)² + 8(17 − 19)² + 5(22 − 19)² + 2(27 − 19)² + 1(32 − 19)²] / (20 − 1)
     = 30

Online Exercise:
Exercise 3.5.1

Chapter 4 Association Between Two Variables

In this chapter, we introduce several methods to measure the association between two variables.
They are:

Crosstabulations and scatter diagrams

Numerical measures of association

4.1 Crosstabulations and Scatter Diagrams:


The crosstabulation (table) and the scatter diagram (graph) can help us understand the
relationship between two variables.

1. Crosstabulations

Example:

Objective: explore the association of the quality and the price for the
restaurants in the Los Angeles area.

The following table is the crosstabulation of the quality rating (good, very good
and excellent) and the meal price ($10-19, $20-29, $30-39, and $40-49) data
collected for a sample of 300 restaurants located in the Los Angeles area.

Meal Price
Quality $10-19 $20-29 $30-39 $40-49 Total
Rating
Good 42 40 2 0 84
Very Good 34 64 46 6 150
Excellent 2 14 28 22 66
Total 78 118 76 28 300

The above crosstabulation provides insight about the relationship between the two
variables, quality rating and meal price. It seems that higher meal prices appear to be
associated with the higher quality restaurants and lower meal prices appear to be
associated with the lower quality restaurants. For example, for the most expensive
restaurants ($40-49), none of these restaurants is rated the lowest quality but most of
them are rated the highest quality. On the other hand, for the least expensive restaurants
($10-19), only 2 of these 78 restaurants are rated the highest quality (2/78 ≈ 2.56%), but
over half of them are rated the lowest quality.

2. Scatter Diagram

Suppose we have the following scatter diagrams for the weights and heights of the
students:

[Three scatter diagrams of height (vertical axis) against weight (horizontal axis), titled
"Scatter Diagram of Weight vs. Height"]

The left scatter diagram indicates the positive relationship between weight and
height while the right scatter diagram implies the negative relationship between
the two variables. The middle scatter diagram shows that there is no apparent
relationship between the weight and height.

Online Exercise:
Exercise 4.1.1

4.2 Numerical Measures of Association:

There are several numerical measures of association. We first introduce the covariance
of two variables.

(I) Covariance:
Suppose we have two populations,

population 1: y_1, y_2, ..., y_N and population 2: w_1, w_2, ..., w_N.
Also, let
sample 1: x_1, x_2, ..., x_n and sample 2: z_1, z_2, ..., z_n
be drawn from population 1 and population 2, respectively.

Let μ_y and μ_w be the population means of populations 1 and 2, respectively.

Let

x̄ = (Σ_{i=1}^{n} x_i) / n  and  z̄ = (Σ_{i=1}^{n} z_i) / n

be the sample means of samples 1 and 2, respectively.

Then, the population covariance is

σ_yw = Σ_{i=1}^{N} (y_i − μ_y)(w_i − μ_w) / N,

while the sample covariance is

s_xz = Σ_{i=1}^{n} (x_i − x̄)(z_i − z̄) / (n − 1) = (Σ_{i=1}^{n} x_i z_i − n x̄ z̄) / (n − 1).

Intuitively, s_xz will be large and positive when the observations in the two samples tend
to be larger or smaller than their sample means simultaneously; that is, the observations
are positively correlated. On the other hand, s_xz will be negative when the observations
in one sample are larger than their sample mean while the ones in the other sample are
smaller than their sample mean; that is, the observations are negatively correlated. Finally,
s_xz will be close to 0 when, as the observations in one sample are larger than their sample
mean, the ones in the other sample are sometimes larger but sometimes smaller than their
sample mean, i.e., the observations in the two samples are not correlated.

Example:

Let x_i be the total money spent on advertisement for some product and z_i be the
sales volume (1 unit = 1000 packs).

x_i:                 2   5   1   3   4   1   5   3   4   2
z_i:                50  57  41  54  54  38  63  48  59  46
(x_i − x̄)(z_i − z̄):  1  12  20   0   3  26  24   0   8   5

s_xz = Σ_{i=1}^{10} (x_i − x̄)(z_i − z̄) / (10 − 1) = 99 / 9 = 11.

Note: s_xz is not scale-invariant. For example, in the above example, if
the sales volume is measured as 1 unit = 1 pack, then the z_i would be 50000, 57000, 41000,
54000, 54000, 38000, 63000, 48000, 59000, 46000. Thus, s_xz would be 11000, which
is 1000 times larger than the original one. This is not plausible, since the strength of the
association between the total money spent on advertisement and the sales
volume should not change as the measurement unit changes. The quantity
introduced next is scale-invariant and can be used to measure the
correlation of two populations.

(II) Correlation Coefficient:

Let
σ_y: population standard deviation for y_1, y_2, ..., y_N
σ_w: population standard deviation for w_1, w_2, ..., w_N
s_x: sample standard deviation for x_1, x_2, ..., x_n
s_z: sample standard deviation for z_1, z_2, ..., z_n.

Then, the population correlation coefficient is

ρ_yw = σ_yw / (σ_y σ_w),

while the sample correlation coefficient is

r_xz = s_xz / (s_x s_z).

Note: |ρ_yw| ≤ 1 and |r_xz| ≤ 1

Example (continued):

s_x = √( Σ_{i=1}^{10} (x_i − x̄)² / (10 − 1) ) = 1.4907  and  s_z = √( Σ_{i=1}^{10} (z_i − z̄)² / (10 − 1) ) = 7.9303

Then,

r_xz = s_xz / (s_x s_z) = [Σ_{i=1}^{10} (x_i − x̄)(z_i − z̄)] / √( Σ_{i=1}^{10} (x_i − x̄)² × Σ_{i=1}^{10} (z_i − z̄)² ) = 0.93.

Note: r_xz is scale-invariant. For example, even if the sales volume is
measured as 1 pack per unit, the value of r_xz is still the same, 0.93.
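The sample covariance and correlation for the advertisement/sales data can be checked with a minimal Python sketch (illustration only, not part of the original notes):

x = [2, 5, 1, 3, 4, 1, 5, 3, 4, 2]
z = [50, 57, 41, 54, 54, 38, 63, 48, 59, 46]
n = len(x)

mx, mz = sum(x) / n, sum(z) / n
s_xz = sum((xi - mx) * (zi - mz) for xi, zi in zip(x, z)) / (n - 1)
s_x = (sum((xi - mx) ** 2 for xi in x) / (n - 1)) ** 0.5
s_z = (sum((zi - mz) ** 2 for zi in z) / (n - 1)) ** 0.5
print(s_xz, round(s_xz / (s_x * s_z), 2))   # 11.0 and 0.93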

Example:

Let z_i = 2 x_i, i = 1, 2, 3, 4, 5.

x_i: 1 2 3 4 5
z_i: 2 4 6 8 10
Then,

x̄ = 3, z̄ = 6, s_x = √( Σ_{i=1}^{5} (x_i − x̄)² / (5 − 1) ) = √(5/2), s_z = √( Σ_{i=1}^{5} (z_i − z̄)² / (5 − 1) ) = √10,

s_xz = Σ_{i=1}^{5} (x_i − x̄)(z_i − z̄) / (5 − 1) = 5.

Thus,

r_xz = s_xz / (s_x s_z) = 5 / (√(5/2) × √10) = 5/5 = 1.

Note: when there is a perfect positive linear relationship between the
variables x and z, then r_xz = 1. A value of r_xz close to 1 indicates a strong positive
linear relationship.

Online Exercise:
Exercise 4.2.1

Chapter 5 Introduction to Probability

5.1. Experiments, Counting Rules, and Probabilities

Experiment: any process that generates well-defined outcomes.


Example:

Experiment Outcomes
Toss a coin Head, Tail
Roll a die 1, 2, 3, 4, 5, 6
Play a football game Win, Lose, Tie
Rain tomorrow Rain, No rain

Sample Space: the set of all experimental outcomes, denoted by S

Example:

Experiment Sample Space


Toss a coin S={Head, Tail}
Roll a die S={1, 2, 3, 4, 5, 6}
Play a football game S={Win, Lose, Tie}
Rain tomorrow S={Rain, No rain}

Counting Rules: the rules for counting the number of the


experimental outcomes.

We have the following counting rules:

Multiple Step Experiment:


Permutations
Combinations

1. Multiple Step Experiment:

Example:

Step 1: roll a die (6 possible outcomes). Step 2: toss a coin (2 possible outcomes, T or H).
Each outcome of step 1 can be followed by either T or H in step 2, giving the pairs
(1,T), (1,H), (2,T), (2,H), ..., (6,T), (6,H).

S = {(1,T), (1,H), (2,T), (2,H), (3,T), (3,H), (4,T), (4,H), (5,T), (5,H), (6,T), (6,H)}
The total number of experimental outcomes = 12 = 6 × 2

Counting rule for multiple step experiments:

If there are k steps in an experiment, with n_1 possible outcomes on the first step,
n_2 possible outcomes on the second step, and so on, then the total number of
experimental outcomes is given by n_1 × n_2 × ⋯ × n_k.

2. Permutations:
n objects are to be selected from a set of N objects, where the order is
important.

Example:

Suppose we take 3 balls from 5 balls numbered 1, 2, 3, 4 and 5, where different orders
count as different permutations (for example, (1,2,3) and (2,1,3) are two different permutations).

With N = 5 and n = 3, there are 5 choices for the first ball, 4 for the second and 3 for the
third, so the number of permutations is 5 × 4 × 3 = 60.

In general, selecting n objects in order from N objects gives

N × (N − 1) × (N − 2) × ⋯ × [N − (n − 1)] = N! / (N − n)!

permutations.
Counting rule for permutations:
As n objects are taken from N objects, the total number of permutations is given by

P_n^N = N! / (N − n)! = N × (N − 1) × ⋯ × (N − n + 1),

where N! = 1 × 2 × 3 × ⋯ × N and 0! = 1.

3. Combinations:
n objects are to be selected from a set of N objects, where the order
is not important.

Example:

If 3 particular balls are selected, the 3! = 6 different orderings of these balls count as
6 permutations but only 1 combination.

Example:

P_2^5 = 5 × 4 = 20

C_2^5 = P_2^5 / 2! = 20 / 2 = 10

That is, the 20 permutations correspond to 10 combinations (each combination of 2 balls
can be ordered in 2! = 2 ways).
Example:

P_3^5 = 5 × 4 × 3 = 60

C_3^5 = P_3^5 / 3! = 60 / 6 = 10

Each group of 3 balls can be ordered in 3! = 6 ways, so every 6 permutations correspond
to 1 combination; in total there are 10 combinations.

Example:

In general, there are P_n^N permutations of n objects taken from N objects. Each
combination of n objects can be arranged in P_n^n = n! / (n − n)! = n! / 0! = n! different
orders, so every n! permutations correspond to 1 combination. Therefore, the number of
combinations is

C_n^N = P_n^N / n!.

Counting rule for combinations:

As n objects are taken from N objects, the total number of combinations is given by

C_n^N = N! / [n! (N − n)!] = P_n^N / n!
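The counting rules above are available in the Python standard library. A minimal sketch (illustration only, not part of the original notes; math.perm and math.comb require Python 3.8 or later):

import math

print(math.perm(5, 2), math.comb(5, 2))   # 20 and 10, as in the examples above
print(math.perm(5, 3), math.comb(5, 3))   # 60 and 10
print(math.factorial(5) // math.factorial(5 - 3))   # 60, i.e. N!/(N-n)!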
Online Exercise:
Exercise 5.1.1
Exercise 5.1.2

5.2. Events and Their Probability

Modern probability theory: a probability value is specified that expresses our degree of belief
that the experimental outcome will occur.

Basic requirement for assigning probabilities:


1. Let e_i denote the ith experimental outcome and P(e_i) be its probability. Then
0 ≤ P(e_i) ≤ 1.
2. If there are n experimental outcomes, e_1, e_2, ..., e_n, then
P(e_1) + P(e_2) + ⋯ + P(e_n) = 1

Example:

Roll a fair die. Let e_i be the outcome that the point is i. Then,

0 ≤ P(e_1) = P(e_2) = P(e_3) = P(e_4) = P(e_5) = P(e_6) = 1/6 ≤ 1

Event: an event is a collection (set) of sample points


(experimental outcomes).

Example:

E_1 = the event that the point is even.
E_2 = the event that the point is odd.

E_1 = {2, 4, 6} and E_2 = {1, 3, 5}

Probability of an event: the probability of any event is equal to the sum of the
probabilities of the sample points in the event.

Example:

P(E_1) = P({2, 4, 6}) = P(e_2) + P(e_4) + P(e_6) = 1/6 + 1/6 + 1/6 = 1/2.

Note: P(S) = 1

Online Exercise:
Exercise 5.2.1

5.3. Some Basic Relationships of Probability


A^c: the complement of A, the event containing all sample points that are not in A.
A ∪ B: the union of A and B, the event containing all sample points belonging to A or
B or both.
A ∩ B: the intersection of A and B, the event containing all sample points belonging
to both A and B.

Example:

E_1 = {2, 4, 6}, E_2 = {1, 3, 5} and E_3 = {1, 2, 3} = {points ≤ 3}. Then,

E_1^c = E_2.
E_1 ∪ E_3 = {even points, or points ≤ 3, or both} = {1, 2, 3, 4, 6}
E_1 ∩ E_3 = {even points and points ≤ 3} = {2}
Note: two events having no sample points in common are called
mutually exclusive events. That is, if A and B are mutually
exclusive events, then A ∩ B = ∅ (the empty event).

Example:

E_1 = {2, 4, 6} and E_2 = {1, 3, 5} have no sample points in common, so E_1 and E_2 are
mutually exclusive events.

Results:
1. P(A^c) = 1 − P(A)

2. If A and B are mutually exclusive events, then

P(A ∩ B) = 0 and P(A ∪ B) = P(A) + P(B).

3. (addition law) For any two events A and B,

P(A ∪ B) = P(A) + P(B) − P(A ∩ B)

[Intuition of the addition law]:

Split A ∪ B into three disjoint regions: I = A ∩ B^c, II = A ∩ B, and III = A^c ∩ B. Then

P(A ∪ B) = P(I) + P(II) + P(III)
         = [P(I) + P(II)] + [P(II) + P(III)] − P(II)
         = P(A) + P(B) − P(A ∩ B)

Example:

1. P(E_2) = P({1, 3, 5}) = P({2, 4, 6}^c) = P(E_1^c) = 1 − P(E_1) = 1 − 1/2 = 1/2
2. P(E_1 ∩ E_2) = 0, P(E_1 ∪ E_2) = P(E_1) + P(E_2) = 1/2 + 1/2 = 1
3. P(E_1 ∪ E_3) = P({1, 2, 3, 4, 6}) = 5/6. We can also use the addition law:
P(E_1 ∪ E_3) = P(E_1) + P(E_3) − P(E_1 ∩ E_3) = P({2, 4, 6}) + P({1, 2, 3}) − P({2})
             = 1/2 + 1/2 − 1/6 = 5/6
Online Exercise:
Exercise 5.3.1
Exercise 5.3.2

5.4. Conditional Probability

A|B: event A given the condition that event B has

occurred.

Example:

{2}| E1 : point 2 occurs given that the point is known to be even.

P(A | B): the conditional probability of A given B (given that the event B has
occurred, the chance that the event A then occurs!!)

Formula of the conditional probability:

P(A | B) = P(A ∩ B) / P(B)
and
P(B | A) = P(A ∩ B) / P(A).

Example:

P({2} | E_1) = P(E_1 ∩ {2}) / P(E_1) = P({2}) / P(E_1) = (1/6) / (1/2) = 1/3

Note: P(A | B) + P(A^c | B) = 1

Note: P(A ∩ B) = P(B) P(A | B) = P(A) P(B | A)
Independent Events:
P(A | B) = P(A) or P(B | A) = P(B)  ⇔  A and B are independent events.

Dependent Events:
P(A | B) ≠ P(A) or P(B | A) ≠ P(B)  ⇔  A and B are dependent events.

Intuitively, if events A and B are independent, then the chance of event A occurring is
the same no matter whether event B has occurred. That is, event A occurring is
independent of event B occurring. On the other hand, if events A and B are
dependent, then the chance of event A occurring given that event B has occurred will
be different from the one with event B not occurring.

Example:

A: the event that a police officer gets a promotion.

M: the event that a police officer is a man.
W: the event that a police officer is a woman.

P(A) = 0.27, P(A | M) = 0.30, P(A | W) = 0.15

The above result implies that the chance of a promotion knowing the candidate is male
is twice as high as the chance knowing the candidate is female. In addition, the chance of
a promotion knowing the candidate is female (0.15) is much lower than the
overall promotion rate (0.27). That is, the promotion event A is dependent on the
gender event M or W.
A promotion is related to gender.
Note: P(A ∩ B) = P(A) P(B) when events A and B are independent.

Online Exercise:
Exercise 5.4.1
Exercise 5.4.2

5.5. Bayes' Theorem

Example 1:

B: test positive    A: no AIDS
B^c: test negative   A^c: AIDS
From past experience and records, we know
P(A) = 0.99, P(B | A) = 0.03, P(B | A^c) = 0.98.
That is, we know the probability of a patient having no AIDS, the conditional
probability of testing positive given having no AIDS (wrong diagnosis), and the
conditional probability of testing positive given having AIDS (correct diagnosis).

Our objective is to find P(A | B), i.e., we want to know the probability that a patient
has no AIDS even when it is known that this patient tests positive.

Example 2:

A_1: the finance of the company being good.
A_2: the finance of the company being O.K.
A_3: the finance of the company being bad.

B_1: good finance assessment for the company.
B_2: O.K. finance assessment for the company.
B_3: bad finance assessment for the company.

From the past records, we know

P(A_1) = 0.5, P(A_2) = 0.2, P(A_3) = 0.3,
P(B_1 | A_1) = 0.9, P(B_1 | A_2) = 0.05, P(B_1 | A_3) = 0.05.
That is, we know the chances of the different finance situations of the company and
the conditional probabilities of the different assessments for the company given the
finance situation of the company; for example, P(B_1 | A_1) = 0.9 indicates a 90% chance
that a good finance year of the company is predicted correctly by the finance
assessment.

Our objective is to obtain the probability P(A_1 | B_1), i.e., the conditional probability
that the finance of the company is good in the coming year given a good
finance assessment for the company in this year.

To find the required probability in the above two examples, the following Bayes'
theorem can be used.

Bayes' Theorem (two events):

P(A | B) = P(A ∩ B) / P(B) = P(A) P(B | A) / [P(A) P(B | A) + P(A^c) P(B | A^c)]

[Derivation of Bayes' theorem (two events)]:

The event B can be split into the two disjoint pieces B ∩ A and B ∩ A^c. We want to know
P(A | B) = P(B ∩ A) / P(B). Since
P(B ∩ A) = P(A) P(B | A),
and
P(B) = P(B ∩ A) + P(B ∩ A^c) = P(A) P(B | A) + P(A^c) P(B | A^c),
thus,
P(A | B) = P(B ∩ A) / P(B) = P(B ∩ A) / [P(B ∩ A) + P(B ∩ A^c)]
         = P(A) P(B | A) / [P(A) P(B | A) + P(A^c) P(B | A^c)]

Example 1:

P(A^c) = 1 − P(A) = 1 − 0.99 = 0.01. Then, by Bayes' theorem,

P(A | B) = P(A) P(B | A) / [P(A) P(B | A) + P(A^c) P(B | A^c)]
         = 0.99 × 0.03 / (0.99 × 0.03 + 0.01 × 0.98) = 0.7519

A patient who tests positive still has a high probability (0.7519) of having no AIDS.

Bayes' Theorem (general):
Let A_1, A_2, ..., A_n be mutually exclusive events with

A_1 ∪ A_2 ∪ ⋯ ∪ A_n = S,

then

P(A_i | B) = P(A_i ∩ B) / P(B)
           = P(A_i) P(B | A_i) / [P(A_1) P(B | A_1) + P(A_2) P(B | A_2) + ⋯ + P(A_n) P(B | A_n)],

i = 1, 2, ..., n.

[Derivation of Bayes' theorem (general)]:

Since A_1, A_2, ..., A_n are mutually exclusive and together cover S, the event B can be split
into the disjoint pieces B ∩ A_1, B ∩ A_2, ..., B ∩ A_n. Since
P(B ∩ A_i) = P(A_i) P(B | A_i),
and
P(B) = P(B ∩ A_1) + P(B ∩ A_2) + ⋯ + P(B ∩ A_n)
     = P(A_1) P(B | A_1) + P(A_2) P(B | A_2) + ⋯ + P(A_n) P(B | A_n),
thus,
P(A_i | B) = P(B ∩ A_i) / P(B)
           = P(A_i) P(B | A_i) / [P(A_1) P(B | A_1) + P(A_2) P(B | A_2) + ⋯ + P(A_n) P(B | A_n)]

Example 2:

P(A_1 | B_1) = P(A_1) P(B_1 | A_1) / [P(A_1) P(B_1 | A_1) + P(A_2) P(B_1 | A_2) + P(A_3) P(B_1 | A_3)]
            = 0.5 × 0.9 / (0.5 × 0.9 + 0.2 × 0.05 + 0.3 × 0.05) ≈ 0.95

A company with a good finance assessment has a very high probability (0.95) of a
good finance situation in the coming year.
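The general form of Bayes' theorem is also easy to compute directly. A minimal Python sketch (illustration only; the helper function bayes is not part of the original notes), applied to the finance-assessment data of Example 2:

def bayes(priors, likelihoods):
    # priors: P(A_i); likelihoods: P(B | A_i); returns the posteriors P(A_i | B)
    joint = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(joint)               # P(B), by the law of total probability
    return [j / total for j in joint]

posteriors = bayes([0.5, 0.2, 0.3], [0.9, 0.05, 0.05])
print(round(posteriors[0], 2))       # 0.95, i.e. P(A1 | B1)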

Online Exercise:
Exercise 5.5.1

Chapter 6 Probability Distribution

6.1. Random Variable

Example:

Suppose we gamble in a casino and the possible result is as follows.

Outcome Token (X) Money (Y)

Win 3 30

Lose -4 -40

Tie 0 0

In this example, the sample space is S = {Win, Lose, Tie}, containing 3 outcomes. X
is the quantity representing the tokens won or lost under the different results, while Y is
the one representing the money won or lost.

In the above example, X and Y provide a numerical summary corresponding to
the experimental outcome. A formal definition for these numerical quantities is the
following.

Definition (random variable): A random variable is a numerical


description of the outcome of an experiment.

Example:

In the previous example,

X: the random variable representing the tokens won or lost corresponding to the
different outcomes.
Y: the random variable representing the money won or lost corresponding to the
different outcomes.

X has 3 possible values corresponding to the 3 outcomes:

X({Win}) = 3, X({Lose}) = −4, X({Tie}) = 0

Y has 3 possible values corresponding to the 3 outcomes:

Y({Win}) = 30, Y({Lose}) = −40, Y({Tie}) = 0.

Note that
Y = 10 X,
since

Y({Win}) = 30 = 10 X({Win}), Y({Lose}) = −40 = 10 X({Lose}), Y({Tie}) = 0 = 10 X({Tie}).

That is, Y is 10 times X under all possible experimental outcomes.

There are two types of random variables. They are:


Discrete random variable: a quantity that assumes either a finite number of values or an
infinite sequence of values such as 0, 1, 2, ...

Continuous random variable: a quantity that assumes any numerical value in an interval
or collection of intervals, such as time, weight, distance, and temperature.

Example:

Let the sample space be

S = {z | z is the delay time (in hours) for a flight, 0 ≤ z ≤ 1}.
Let Z be the random variable representing the flight delay time, defined as
Z({the flight is t hours late}) = t, 0 ≤ t ≤ 1.
For example, Z = 0.5 corresponds to the outcome that the flight is 0.5 hour (30
minutes) late.

Online Exercise:
Exercise 6.1.1

6.2. Probability Distribution

Definition (probability distribution): a function that describes how
probabilities are distributed over the values of the random variable.

(I): Discrete Random Variable:

Example:

Suppose the probabilities for the outcomes in the gambling example are

P({Win}) = P(X = 3) = P(Y = 30) = 1/6
P({Lose}) = P(X = −4) = P(Y = −40) = 2/3
P({Tie}) = P(X = 0) = P(Y = 0) = 1/6

Let f_x(x) be the function giving the probability of the gambling
outcomes for the random variable X, defined as
f_x(3) = P(X = 3) = 1/6
f_x(−4) = P(X = −4) = 2/3
f_x(0) = P(X = 0) = 1/6
f_x(x) is referred to as the probability distribution of the random variable X.
Similarly, the probability distribution f_y(x) of the random variable Y is
f_y(30) = P(Y = 30) = 1/6
f_y(−40) = P(Y = −40) = 2/3
f_y(0) = P(Y = 0) = 1/6

Required conditions for a discrete probability distribution:
Let a_1, a_2, ..., a_n be all the possible values of the discrete random
variable X. Then, the required conditions for f_x(x) to be the discrete
probability distribution for X are
(a) f_x(a_i) ≥ 0, for every i.
(b) Σ_i f_x(a_i) = f_x(a_1) + f_x(a_2) + ⋯ + f_x(a_n) = 1

Example:

In the gambling example, f_x(x) is a discrete probability distribution for the random
variable X since
(a) f_x(3) ≥ 0, f_x(−4) ≥ 0, and f_x(0) ≥ 0.
(b) f_x(3) + f_x(−4) + f_x(0) = 1.
Similarly, f_y(x) is also a discrete probability distribution for the random variable Y.

Note: the discrete probability distribution describes the probability


of a discrete random variable at different values.

(II): Continuous Random Variable:

For a continuous random variable, it is impossible to assign a probability to every
numerical value since there are uncountably many values in an interval. Instead,
a probability can be assigned to a small interval. The probability density function
describes how the probability is distributed over such small intervals.

Example:

In the flight delay time example, suppose the probability of being late within 0.5
hour is two times the probability of being late more than 0.5 hour, i.e.,

P(0 ≤ Z ≤ 0.5) = 2/3 and P(0.5 < Z ≤ 1) = 1/3.
Then, the probability density function f_1(x) for the random variable Z is
f_1(x) = 4/3, 0 ≤ x ≤ 0.5;  f_1(x) = 2/3, 0.5 < x ≤ 1.
[Plot of the density f_1(x) against x on [0, 1]]

The area corresponding to an interval is the probability of the random variable Z
taking values in this interval. For example, the probability of the flight being late
within 0.5 hour (the random variable Z taking a value in the interval [0, 0.5]) is

P(the flight being late within 0.5 hour) = P(0 ≤ Z ≤ 0.5) = ∫_0^0.5 f_1(x) dx = (4/3) × 0.5 = 2/3.

Similarly, the probability of the flight being late more than 0.5 hour (the random
variable Z taking a value in the interval (0.5, 1]) is

P(the flight being late more than 0.5 hour) = P(0.5 < Z ≤ 1) = ∫_0.5^1 f_1(x) dx = (2/3) × 0.5 = 1/3.

On the other hand, if the probability of being late within 0.5 hour is the same as the
probability of being late more than 0.5 hour, i.e.,

P(0 ≤ Z ≤ 0.5) = P(0.5 < Z ≤ 1) = 1/2,

then the probability density function f_2(x) for the random variable Z is
f_2(x) = 1, 0 ≤ x ≤ 1.

Note that the probability density function corresponds to the probability of the random
variable taking values in some interval. However, the probability density function
evaluated at a single value, unlike the discrete probability distribution, cannot be
interpreted as the probability of the random variable Z taking that value.

Required conditions for a continuous probability density:

Let the continuous random variable Z take values in [a, b]. Then,
the required conditions for f(x) to be the continuous probability
density for Z are
(a) f(x) ≥ 0, a ≤ x ≤ b.
(b) ∫_a^b f(x) dx = 1

Note: P(c ≤ Z ≤ d) = ∫_c^d f(x) dx, a ≤ c ≤ d ≤ b. That is, the area under the
graph of f(x) corresponding to a given interval is the probability of
the random variable Z taking a value in this interval.

Example:

In the flight time example, f_1(x) is a valid probability density for the random
variable Z since
(a) f_1(x) ≥ 0, 0 ≤ x ≤ 1.
(b) ∫_0^0.5 (4/3) dx + ∫_0.5^1 (2/3) dx = 1.

Similarly, f_2(x) is also a valid probability density for the random variable Z.

Online Exercise:
Exercise 6.2.1

6.3. Expected Value and Variance:

(I): Discrete Random Variable:

(a) Expected Value:

Example:

X: the random variable representing the point of rolling a fair die. Then,
P(X = i) = f_x(i) = 1/6, i = 1, 2, 3, 4, 5, 6.
Intuitively, the average point of rolling a fair die is
(1 + 2 + 3 + 4 + 5 + 6) / 6 = 3.5.
The expected value of the random variable X is just this average,

E(X) = Σ_{i=1}^{6} i f_x(i) = 1 × (1/6) + 2 × (1/6) + 3 × (1/6) + 4 × (1/6) + 5 × (1/6) + 6 × (1/6) = 3.5 = average point.

Formula for the expected value of a discrete random variable:

Let a_1, a_2, ..., a_n be all the possible values of the discrete random
variable X and let f_x(x) be its probability distribution. Then, the
expected value of the discrete random variable X is

E(X) = Σ_i a_i f_x(a_i) = a_1 f_x(a_1) + a_2 f_x(a_2) + ⋯ + a_n f_x(a_n)

Example:

In the gambling example, the expected value of the random variable X is

E(X) = 3 f_x(3) + (−4) f_x(−4) + 0 f_x(0) = 3 × (1/6) + (−4) × (2/3) + 0 × (1/6) = −13/6.

Therefore, on the average, the gambler will lose 13/6 tokens for every bet.
Similarly, the expected value of the random variable Y is

E(Y) = 30 f_y(30) + (−40) f_y(−40) + 0 f_y(0) = 30 × (1/6) + (−40) × (2/3) + 0 × (1/6) = −130/6.

(b) Variance:

Example:

Suppose we want to measure the variation of the random variable X in the die
example. The squared distances between the values of X and its mean E(X) = 3.5
can be used, i.e., (1 − 3.5)², (2 − 3.5)², (3 − 3.5)², (4 − 3.5)², (5 − 3.5)², (6 − 3.5)².
The average squared distance is

[(1 − 3.5)² + (2 − 3.5)² + (3 − 3.5)² + (4 − 3.5)² + (5 − 3.5)² + (6 − 3.5)²] / 6 = 8.75/3.

Intuitively, a large average squared distance implies the values of X are widely scattered.

The variance of the random variable X is just this average squared distance (the
expected value of the squared distance). The variance for the die example is

Var(X) = E[(X − E(X))²] = E[(X − 3.5)²] = Σ_{i=1}^{6} (i − 3.5)² f(i)
       = (1 − 3.5)² × (1/6) + (2 − 3.5)² × (1/6) + (3 − 3.5)² × (1/6) + (4 − 3.5)² × (1/6) + (5 − 3.5)² × (1/6) + (6 − 3.5)² × (1/6)
       = 8.75/3
       = the average squared distance

Formula for the variance of a discrete random variable:

Let a_1, a_2, ..., a_n be all the possible values of the discrete random
variable X, let f_x(x) be its probability distribution, and let μ = E(X) be
the expected value of X. Then, the variance of the discrete random
variable X is

Var(X) = σ² = E[(X − E(X))²] = Σ_i (a_i − μ)² f_x(a_i)
       = (a_1 − μ)² f_x(a_1) + (a_2 − μ)² f_x(a_2) + ⋯ + (a_n − μ)² f_x(a_n)

Example:

In the gambling example, the variance of the random variable X is

Var(X) = (3 − (−13/6))² f_x(3) + (−4 − (−13/6))² f_x(−4) + (0 − (−13/6))² f_x(0)
       = (31/6)² × (1/6) + (−11/6)² × (2/3) + (13/6)² × (1/6)
       ≈ 7.472.

Similarly, the variance of the random variable Y is

Var(Y) = (30 − (−130/6))² f_y(30) + (−40 − (−130/6))² f_y(−40) + (0 − (−130/6))² f_y(0)
       = (310/6)² × (1/6) + (−110/6)² × (2/3) + (130/6)² × (1/6)
       ≈ 747.2
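These expected-value and variance computations are one-liners in code. A minimal Python sketch (illustration only, not part of the original notes) for the random variable X of the gambling example:

values = [3, -4, 0]
probs  = [1/6, 2/3, 1/6]

mean = sum(v * p for v, p in zip(values, probs))
var = sum((v - mean) ** 2 * p for v, p in zip(values, probs))
print(round(mean, 4), round(var, 3))   # -2.1667 (= -13/6) and 7.472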

(II): Continuous Random Variable:

(a) Expected Value:

Example:

Z: the random variable representing the flight delay time, taking values in [0, 1], with

P(0 ≤ Z ≤ 0.5) = P(0.5 < Z ≤ 1) = 1/2.

Then, the probability density function for Z is

f_2(x) = 1, 0 ≤ x ≤ 1.
Intuitively, since there is an equal chance for any delay time in [0, 1], 0.5 hour seems to
be a sensible estimate of the average delay time.

The expected value of the random variable Z is just the average delay time:

E(Z) = ∫_0^1 x f_2(x) dx = ∫_0^1 x dx = [x²/2]_0^1 = 0.5 = average delay time.

Formula for the expected value of a continuous random variable:

Let the continuous random variable X take values in [a, b] and let
f(x) be its probability density function. Then, the expected value of
the continuous random variable X is

E(X) = ∫_a^b x f(x) dx.

Example:

In the flight time example, suppose the probability density function for Z is
f_1(x) = 4/3, 0 ≤ x ≤ 0.5;  f_1(x) = 2/3, 0.5 < x ≤ 1.
Then, the expected value of the random variable Z is

E(Z) = ∫_0^1 x f_1(x) dx = ∫_0^0.5 x (4/3) dx + ∫_0.5^1 x (2/3) dx
     = (4/3) [x²/2]_0^0.5 + (2/3) [x²/2]_0.5^1
     = (4/3)(0.5²/2 − 0) + (2/3)(1²/2 − 0.5²/2)
     = 1/6 + 1/4 = 5/12.

Therefore, on the average, the flight delay time is 5/12 hour.

(b) Variance:

Example:

Suppose we want to measure the variation of the random variable Z in the flight time
example, and suppose f_2(x) is the probability density function for Z. Then, the squared
distance between the values of Z and its mean E(Z) = 1/2 can be used, i.e.,
(x − 1/2)², 0 ≤ x ≤ 1. The average squared distance is

E[(Z − 1/2)²] = ∫_0^1 (x − 1/2)² f_2(x) dx = [x³/3 − x²/2 + x/4]_0^1 = 1/3 − 1/2 + 1/4 = 1/12.

The variance of the random variable Z is just this average squared distance (the
expected value of the squared distance). The variance for the flight time example is

Var(Z) = E[(Z − E(Z))²] = E[(Z − 1/2)²] = 1/12 = the average squared distance.

Formula for the variance of a continuous random variable:

Let the continuous random variable X take values in [a, b], let f(x) be its
probability density function, and let μ = E(X) be the expected
value of X. Then, the variance of the continuous random variable X is

Var(X) = σ² = E[(X − E(X))²] = ∫_a^b (x − μ)² f(x) dx

Example:

In the flight time example, suppose f_1(x) is the probability density function for Z.
Then, the variance of the random variable Z is

Var(Z) = E[(Z − E(Z))²] = ∫_0^1 (x − 5/12)² f_1(x) dx
       = ∫_0^0.5 (x − 5/12)² (4/3) dx + ∫_0.5^1 (x − 5/12)² (2/3) dx
       = 11/144
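The two integrals above can be checked numerically. A minimal Python sketch (illustration only, not part of the original notes), approximating E(Z) = 5/12 and Var(Z) = 11/144 for the density f_1 by a simple midpoint rule:

def f1(x):
    return 4/3 if x <= 0.5 else 2/3

N = 100_000
dx = 1 / N
xs = [(i + 0.5) * dx for i in range(N)]   # midpoints of N subintervals of [0, 1]

mean = sum(x * f1(x) * dx for x in xs)
var = sum((x - mean) ** 2 * f1(x) * dx for x in xs)
print(mean, var)   # about 0.4167 (= 5/12) and 0.0764 (= 11/144)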

Online Exercise:
Exercise 6.3.1
Exercise 6.3.2

Chapter 7 Discrete Probability Distribution

7.1. The Binomial Probability Distribution

Example:

X_2: representing the number of heads when flipping a fair coin twice.

H: head, T: tail.

X_2 = 0: the outcome is TT, so P(X_2 = 0) = (1/2)(1/2) = C_0^2 (1/2)² (1 combination)

X_2 = 1: the outcome is HT or TH, so P(X_2 = 1) = 2 (1/2)(1/2) = C_1^2 (1/2)² (2 combinations)

X_2 = 2: the outcome is HH, so P(X_2 = 2) = (1/2)(1/2) = C_2^2 (1/2)² (1 combination)

P(X_2 = i) = f_2(i) = C_i^2 (1/2)²
           = (number of combinations) × (the probability of every combination), i = 0, 1, 2.

X_3: representing the number of heads when flipping a fair coin 3 times.

X_3 = 0: TTT, so P(X_3 = 0) = (1/2)³ = C_0^3 (1/2)³ (1 combination)

X_3 = 1: HTT, THT or TTH, so P(X_3 = 1) = 3 (1/2)³ = C_1^3 (1/2)³ (3 combinations)

X_3 = 2: HHT, HTH or THH, so P(X_3 = 2) = 3 (1/2)³ = C_2^3 (1/2)³ (3 combinations)

X_3 = 3: HHH, so P(X_3 = 3) = (1/2)³ = C_3^3 (1/2)³ (1 combination)

P(X_3 = i) = f_3(i) = C_i^3 (1/2)³
           = (number of combinations) × (the probability of every combination), i = 0, 1, 2, 3.

X_n: representing the number of heads when flipping a fair coin n times.

Then,

X_n = 0: TT...T, so P(X_n = 0) = (1/2)^n = C_0^n (1/2)^n (1 combination)

X_n = 1: HT...T, THT...T, ..., TT...TH, so P(X_n = 1) = n (1/2)^n = C_1^n (1/2)^n (n combinations)

...

P(X_n = i) = f_n(i) = C_i^n (1/2)^n
           = (number of combinations) × (the probability of every combination)

Note: the number of combinations is the number of ways of
drawing i balls (heads) from n balls (n flips).

Example:

Z_3: representing the number of successes over 3 trials.

S: success, F: failure.

Suppose the probability of a success is 1/3 while the probability of a failure is 2/3.
Then,

Z_3 = 0: FFF, so P(Z_3 = 0) = (1/3)^0 (2/3)^3 = C_0^3 (1/3)^0 (2/3)^3 (1 combination)

Z_3 = 1: SFF, FSF or FFS, so P(Z_3 = 1) = 3 (1/3) (2/3)² = C_1^3 (1/3) (2/3)² (3 combinations)

Z_3 = 2: SSF, SFS or FSS, so P(Z_3 = 2) = 3 (1/3)² (2/3) = C_2^3 (1/3)² (2/3) (3 combinations)

Z_3 = 3: SSS, so P(Z_3 = 3) = (1/3)^3 (2/3)^0 = C_3^3 (1/3)^3 (2/3)^0 (1 combination)

P(Z_3 = i) = f_3(i) = C_i^3 (1/3)^i (2/3)^{3−i}
           = (number of combinations) × (the probability of every combination), i = 0, 1, 2, 3.

Z_n: representing the number of successes over n trials.

Then,

Z_n = 0: FF...F, so P(Z_n = 0) = (2/3)^n = C_0^n (1/3)^0 (2/3)^n (1 combination)

Z_n = 1: SF...F, FSF...F, ..., FF...FS, so P(Z_n = 1) = n (1/3) (2/3)^{n−1} = C_1^n (1/3) (2/3)^{n−1} (n combinations)

...

P(Z_n = i) = f_n(i) = C_i^n (1/3)^i (2/3)^{n−i}
           = (number of combinations) × (the probability of every combination)
From the above example, we readily describe the binomial experiment.

Properties of Binomial Experiment


X: representing the number of successes over n independent
identical trials.
The probability of a success in a trial is p while the probability of
a failure is (1-p).

Binomial Probability Distribution:

Let X be the random variable representing the number of successes
of a binomial experiment. Then, the probability distribution function
for X is

P(X = i) = f_x(i) = C_i^n p^i (1 − p)^{n−i} = [n! / (i! (n − i)!)] p^i (1 − p)^{n−i}, i = 0, 1, 2, ..., n.

Properties of the Binomial Probability Distribution:

If a random variable X has the binomial probability distribution f(x)
with parameter p, then
E(X) = np
and
Var(X) = np(1 − p).

[Derivation:]

E(X) = Σ_{i=0}^{n} i f(i) = Σ_{i=0}^{n} i C_i^n p^i (1 − p)^{n−i} = Σ_{i=0}^{n} i [n! / (i! (n − i)!)] p^i (1 − p)^{n−i}
     = Σ_{i=1}^{n} i [n! / (i! (n − i)!)] p^i (1 − p)^{n−i} = Σ_{i=1}^{n} [n! / ((i − 1)! (n − i)!)] p^i (1 − p)^{n−i}
     = np Σ_{i=1}^{n} [(n − 1)! / ((i − 1)! (n − 1 − (i − 1))!)] p^{i−1} (1 − p)^{(n−1)−(i−1)}
     = np Σ_{j=0}^{n−1} [(n − 1)! / (j! (n − 1 − j)!)] p^j (1 − p)^{n−1−j}     (j = i − 1)
     = np   (since [(n − 1)! / (j! (n − 1 − j)!)] p^j (1 − p)^{n−1−j} is the probability
             distribution of a binomial random variable over n − 1 trials, so the sum equals 1)

The derivation of Var(X) = np(1 − p) is left as an exercise.

How to obtain the binomial probability distribution:

(a) Using a table of the binomial distribution.
(b) Using a computer:
with some software, for example, Excel or Minitab;
with some computing resource on the internet, for example,
http://home.kimo.com.tw/g894730/stat/ca1/index.html
or http://140.128.104.155/wenwei/stat-life/stat/ca1/index.html
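The binomial probabilities can also be computed directly. A minimal Python sketch (illustration only, not part of the original notes) for n = 3 trials with success probability p = 1/3, as in the example above, together with a check of E(X) = np and Var(X) = np(1 − p):

from math import comb

n, p = 3, 1/3
pmf = [comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n + 1)]
print([round(v, 4) for v in pmf])            # [0.2963, 0.4444, 0.2222, 0.037]
mean = sum(i * v for i, v in enumerate(pmf))
var = sum((i - mean) ** 2 * v for i, v in enumerate(pmf))
print(round(mean, 4), round(var, 4))         # 1.0 and 0.6667, i.e. np and np(1-p)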

Online Exercise:
Exercise 7.1.1
Exercise 7.1.2

7.2. The Poisson Probability Distribution:

Properties of a Poisson Experiment:

X: representing the number of occurrences in a continuous interval.
μ: the expected value of the number of occurrences in this interval.
The probability of an occurrence is the same for any two
intervals of equal length!! The expected value of the number of occurrences in
an interval is proportional to the length of this interval.
The occurrence or nonoccurrence in any interval is independent
of the occurrence or nonoccurrence in any other interval.
The probability of two or more occurrences in a very small
interval is close to 0.

Poisson Probability Distribution:

Let X be the random variable representing the number of
occurrences of a Poisson experiment in some interval. Then, the
probability distribution function for X is

P(X = i) = f_x(i) = e^{−μ} μ^i / i!, i = 0, 1, 2, ...,

where
e ≈ 2.71828
and μ is the parameter (the expected number of occurrences).

Properties of the Poisson Probability Distribution:
If a random variable X has the Poisson probability distribution f(x)
with parameter μ, then
E(X) = μ = the expected number of occurrences
and
Var(X) = μ.

The derivations of the above properties are similar to the ones for the binomial
random variable and are left as exercises.

Example:

Suppose the average number of car accidents on the highway in one day is 4. What is
the probability of no car accident in one day? What is the probability of 1 car
accident in two days?

[solution:]

It is sensible to use a Poisson random variable to represent the number of car accidents
on the highway. Let X represent the number of car accidents on the highway in
one day. Then,

P(X = i) = f_x(i) = e^{−4} 4^i / i!, i = 0, 1, 2, ...
and
E(X) = μ = 4.
Then,

P(no car accident in one day) = P(X = 0) = f_x(0) = e^{−4} 4^0 / 0! = e^{−4} ≈ 0.0183.

Since the average number of car accidents in one day is 4, the average number of car
accidents in two days should be 8. Let Y represent the number of car accidents in two
days. Then,

P(Y = i) = f_y(i) = e^{−8} 8^i / i!, i = 0, 1, 2, ...
and
E(Y) = 8.
Then,

P(1 car accident in two days) = P(Y = 1) = f_y(1) = e^{−8} 8^1 / 1! = 8 e^{−8} ≈ 0.0027.

Example:

Suppose the average number of calls received by 104 in one minute is 2. What is the
probability of 10 calls in 5 minutes?

[solution:]

Since the average number of calls received by 104 in one minute is 2, the average number
of calls in 5 minutes is 10. Let X represent the number of calls in 5 minutes. Then,

P(X = i) = f_x(i) = e^{−10} 10^i / i!, i = 0, 1, 2, ...
and
E(X) = 10.
Then,

P(10 calls in 5 minutes) = P(X = 10) = f_x(10) = e^{−10} 10^{10} / 10! ≈ 0.1251.

How to obtain the Poisson probability distribution:

(a) Using a table of the Poisson distribution.
(b) Using a computer:
    by some software, for example, Excel or Minitab;
    by some computing resource on the internet, for example,
    http://home.kimo.com.tw/g894730/stat/ca1/index.html
    or http://140.128.104.155/wenwei/stat-life/stat/ca1/index.html
    (a Python sketch is also given below).
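
As an alternative to the table, the car-accident example above can be checked with a short Python sketch (not part of the original notes), assuming SciPy is available:

    # Poisson probabilities for the car-accident example
    from scipy.stats import poisson

    X = poisson(4)     # accidents in one day, mean 4
    Y = poisson(8)     # accidents in two days, mean 8
    print(X.pmf(0))    # P(no accident in one day) = e^(-4)  ~ 0.0183
    print(Y.pmf(1))    # P(1 accident in two days) = 8e^(-8) ~ 0.0027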

Online Exercise:
Exercise 7.2.1
Exercise 7.2.2

7.3. The Hypergeometric Probability Distribution:

Example:

Suppose there are 50 officers, 10 female officers and 40 male officers. Suppose 20 of
them will be promoted. Let X represent the number of female promotions. Then,

P(X = 0) = C(10, 0) C(40, 20) / C(50, 20)
         = (# of combinations for 0 female promotions) × (# of combinations for 20 male promotions)
           / (# of combinations for 20 promotions),

P(X = 1) = C(10, 1) C(40, 19) / C(50, 20)
         = (# of combinations for 1 female promotion) × (# of combinations for 19 male promotions)
           / (# of combinations for 20 promotions),

P(X = i) = C(10, i) C(40, 20-i) / C(50, 20)
         = (# of combinations for i female promotions) × (# of combinations for 20-i male promotions)
           / (# of combinations for 20 promotions),

P(X = 10) = C(10, 10) C(40, 10) / C(50, 20)
          = (# of combinations for 10 female promotions) × (# of combinations for 10 male promotions)
            / (# of combinations for 20 promotions).

Therefore, the probability distribution function for X is

P(X = i) = C(10, i) C(40, 20-i) / C(50, 20),  i = 0, 1, ..., 10.

Hypergeometric Probability Distribution:


There are N elements in the population, r elements in group 1 and the
other N-r elements in group 2. Suppose we select n elements from the
two groups and let the random variable X represent the number of
elements selected from group 1. Then, the probability distribution
function for X is

P(X = i) = f_X(i) = C(r, i) C(N-r, n-i) / C(N, n),  0 ≤ i ≤ r.

Note: C(r, i) is the number of combinations when selecting i elements from
group 1, while C(N-r, n-i) is the number of combinations when selecting n-i
elements from group 2. C(N, n) is the total number of combinations when
selecting n elements from the two groups, while C(r, i) C(N-r, n-i) is the
number of combinations when selecting i and n-i elements from groups 1
and 2, respectively.

How to obtain the hypergeometric probability distribution:

(a) Using a table of the hypergeometric distribution.
(b) Using a computer:
    by some software, for example, Excel or Minitab;
    by some computing resource on the internet, for example,
    http://home.kimo.com.tw/g894730/stat/ca1/index.html
    or http://140.128.104.155/wenwei/stat-life/stat/ca1/index.html
    (a Python sketch is also given below).
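
A hedged Python sketch (not part of the original notes) for the officer-promotion example; scipy.stats.hypergeom takes the population size, the group-1 size, and the number of draws:

    # Hypergeometric probabilities for the promotion example
    from scipy.stats import hypergeom

    X = hypergeom(50, 10, 20)   # 50 officers, 10 female, 20 promotions
    print(X.pmf(0))             # P(X = 0) = C(10,0)C(40,20)/C(50,20)
    print(X.pmf(4))             # P(X = 4)
    print(X.mean())             # 20*10/50 = 4 expected female promotions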

Online Exercise:
Exercise 7.3.1

Chapter 8 Continuous Probability Density

8.1. The Uniform Probability Density:

Example:

X: the random variable representing the flight time from Taipei to Kaohsiung.
Suppose the flight time can be any value in the interval from 30 to 50 minutes. That
is, 30 ≤ X ≤ 50.
Question: if the probability of a flight time falling within any time interval
is the same as that for any other time interval of the same
length, what density f(x) is sensible for describing the
probability?

Recall that the area under the graph of f(x) corresponding to any interval is the
probability of the random variable X taking values in this interval. Since the
probabilities of X taking values in any two intervals of equal length are the same, the
areas under the graph of f(x) corresponding to any two intervals of equal length are the
same. Thus, f(x) must take the same value over the whole interval. For example,
using one-minute intervals,

P(30 ≤ X ≤ 31) = ∫[30,31] f(x) dx = P(31 ≤ X ≤ 32) = ∫[31,32] f(x) dx = ... = P(49 ≤ X ≤ 50) = ∫[49,50] f(x) dx.

Therefore, we have

f(x) = 1/20, 30 ≤ x ≤ 50;  f(x) = 0, otherwise.

Note: since we know f(x) = c (some constant) on [30, 50], then by the property that the
total area under the density is 1,

∫[30,50] f(x) dx = ∫[30,50] c dx = 1  ⟹  20c = 1  ⟹  c = 1/20.

In the above example, the probability density takes the same value over the whole
interval on which the random variable takes values. Such a probability density is
referred to as the uniform probability density function.

Uniform Probability Density Function:


A random variable X taking values in [a,b] has the uniform
probability density function f(x) if

f(x) = 1/(b-a), a ≤ x ≤ b;  f(x) = 0, otherwise.

The graph of f(x) is a horizontal line at height 1/(b-a) over the interval [a, b] and 0 elsewhere.

Properties of Uniform Probability Density Function:


If a random variable X taking values in [a,b] has the uniform
probability density function f(x), then

E(X) = (a+b)/2,  Var(X) = (b-a)^2 / 12.

[Derivation:]

E(X) = ∫[a,b] x f(x) dx = ∫[a,b] x/(b-a) dx = [1/(b-a)] (x^2/2) |[a,b]
     = (b^2 - a^2) / (2(b-a)) = (b-a)(b+a) / (2(b-a)) = (a+b)/2.

The derivation of Var(X) = (b-a)^2 / 12 is left as an exercise.

Example:

In the flight time example, b = 50 and a = 30, so

E(X) = (50 + 30)/2 = 40,  Var(X) = (50 - 30)^2 / 12 ≈ 33.33.
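
These values can also be checked with a small Python sketch (not from the original notes); scipy.stats.uniform(loc, scale) is the uniform density on [loc, loc + scale]:

    # Uniform density for the flight-time example
    from scipy.stats import uniform

    X = uniform(loc=30, scale=20)   # flight time uniform on [30, 50]
    print(X.mean(), X.var())        # 40.0 and 33.33...
    print(X.cdf(35) - X.cdf(30))    # P(30 <= X <= 35) = 5/20 = 0.25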

Online Exercise:
Exercise 8.1.1
Exercise 8.1.2

8.2. The Normal Probability Density

The normal probability density, also called the Gaussian density, might be the most
commonly used probability density function in statistics.

Normal Probability Density Function:


A random variable X taking values in (-∞, ∞) has the normal
probability density function f(x) if

f(x) = [1 / (σ√(2π))] e^( -(x-μ)^2 / (2σ^2) ),  -∞ < x < ∞,

where
μ = E(X),  σ^2 = Var(X),  π ≈ 3.14159.

The graph of f(x) is the familiar bell-shaped curve centered at μ.

Properties of Normal Density Function:

(a)
E(X) = ∫ x f(x) dx = ∫ x [1/(σ√(2π))] e^( -(x-μ)^2 / (2σ^2) ) dx = μ
and
Var(X) = ∫ (x-μ)^2 f(x) dx = ∫ (x-μ)^2 [1/(σ√(2π))] e^( -(x-μ)^2 / (2σ^2) ) dx = σ^2.

(b)
μ = the mean of the normal random variable X
  = the median of the normal random variable X   ( P(X ≤ μ) = P(X ≥ μ) )
  = the mode of the normal probability density    ( f(μ) ≥ f(x) for all x ).

(c) If X is a random variable with the normal density function, X is
denoted by
X ~ N(μ, σ^2).

(d) The standard deviation σ determines the width of the curve. The
normal density with the larger standard deviation is more
dispersed than the one with the smaller standard deviation.
(The textbook's graph shows two normal density functions with the same
mean but different standard deviations, one with σ = 1, the solid line,
and the other with σ = 2, the dotted line.)

(e) The normal density is symmetric with respect to the mean. That is,
f(μ - c) = f(μ + c), where c is any number.

(f) The probability of a normal random variable follows the
empirical rule introduced previously. That is,
P(μ - σ ≤ X ≤ μ + σ) = 0.6826 ≈ 68%
P(μ - 2σ ≤ X ≤ μ + 2σ) = 0.9544 ≈ 95%
P(μ - 3σ ≤ X ≤ μ + 3σ) = 0.9973 ≈ 99.7%,

i.e., the probability of X taking values within one standard
deviation of the mean is about 0.68, within two standard deviations about
0.95, and within three standard deviations about 0.997.

Standard Normal Probability Density Function:


A random variable Z taking values in (-∞, ∞) has the standard
normal probability density function f(x) if

f(x) = [1/√(2π)] e^( -x^2 / 2 ),  -∞ < x < ∞,

where
μ = E(Z) = 0,  σ^2 = Var(Z) = 1.

Note: we denote Z as Z ~ N(0,1).

The probability of Z taking values in some interval can be found from the normal
table. In particular, the probability of Z taking values in [0, z], z ≥ 0, can be obtained
from the normal table. That is,

P(0 ≤ Z ≤ z) = the area under the density between the vertical lines at 0 and z
             = ∫[0,z] [1/√(2π)] e^( -x^2 / 2 ) dx.

(The textbook's graph shades this area under the standard normal curve between 0 and z.)

Example:

P(0 ≤ Z ≤ 1.0) = 0.3413

P(0 ≤ Z ≤ 1.03) = 0.3485

P(-1.0 ≤ Z ≤ 1.0) = P(-1 ≤ Z ≤ 0) + P(0 ≤ Z ≤ 1)
                  = 2 P(0 ≤ Z ≤ 1) = 0.6826    (symmetry of Z)

P(Z ≥ 1.5) = 1 - P(Z ≤ 1.5) = 1 - [ P(Z ≤ 0) + P(0 ≤ Z ≤ 1.5) ]
           = 1 - (1/2 + 0.4332) = 0.0668

P(Z ≤ 1.5) = P(Z ≤ 0) + P(0 ≤ Z ≤ 1.5)    ( P(0 ≤ Z ≤ 1.5) = P(-1.5 ≤ Z ≤ 0) by symmetry of Z )
           = 0.5 + 0.4332 = 0.9332

P(1 ≤ Z ≤ 1.5) = P(0 ≤ Z ≤ 1.5) - P(0 ≤ Z ≤ 1)
               = 0.4332 - 0.3413 = 0.0919

Example:

Suppose P(Z > x) = 0.0099. What is x?

[solution:]

P(Z ≤ x) = 1 - P(Z > x) = 1 - 0.0099 = 0.9901.
Since P(Z ≤ x) > 1/2, we must have x > 0 (if x ≤ 0, then P(Z ≤ x) ≤ 1/2).
P(Z ≤ x) = P(Z ≤ 0) + P(0 ≤ Z ≤ x) = 1/2 + P(0 ≤ Z ≤ x) = 0.9901
⟹ P(0 ≤ Z ≤ x) = 0.9901 - 1/2 = 0.4901 ⟹ x = 2.33.

Computing Probabilities for any Normal Random Variable:


Once the probabilities of the standard normal random variable can be
obtained, the probability of any normal random variable (not
standard) can be found via the following important rescaling:

X ~ N(μ, σ^2)  ⟹  Z = (X - μ)/σ ~ N(0, 1).

Example:

Let X ~ N(1, 4). Please find P(1 ≤ X ≤ 3).

[solution:]

Here μ = 1 and σ = 2. Then,

P(1 ≤ X ≤ 3) = P(1 - 1 ≤ X - 1 ≤ 3 - 1) = P( 0/2 ≤ (X - 1)/2 ≤ 2/2 ) = P(0 ≤ Z ≤ 1) = 0.3413.
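
The table lookups above can be reproduced with the following Python sketch (not part of the original notes), assuming scipy.stats is available:

    # Standard normal table values and the rescaling X ~ N(1, 4)
    from scipy.stats import norm

    Z = norm(0, 1)
    print(Z.cdf(1.0) - Z.cdf(0.0))   # P(0 <= Z <= 1)   ~ 0.3413
    print(1 - Z.cdf(1.5))            # P(Z >= 1.5)      ~ 0.0668
    print(Z.ppf(0.9901))             # x with P(Z <= x) = 0.9901, ~ 2.33

    X = norm(loc=1, scale=2)         # N(1, 4): mean 1, standard deviation 2
    print(X.cdf(3) - X.cdf(1))       # P(1 <= X <= 3)   ~ 0.3413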

Online Exercise:
Exercise 8.2.1
Exercise 8.2.2

8.3 The Exponential Density:

The exponential random variable can be used to describe the lifetime of a machine,
an industrial product, or a human being. It can also be used to describe the waiting
time of a customer for some service.

Exponential Probability Density Function:

A random variable X taking values in [0, ∞) has the exponential
probability density function f(x) if

f(x) = (1/μ) e^(-x/μ),  0 ≤ x,

where μ > 0.
The graph of f(x) is a decreasing curve starting at height 1/μ at x = 0.

Properties of Exponential Density Function:


Let X be the random variable with the exponential density function
f(x) and parameter μ. Then
1.
P(X ≤ x0) = 1 - e^(-x0/μ),  for any x0 ≥ 0.
2.
E(X) = ∫[0,∞) x (1/μ) e^(-x/μ) dx = μ
and
Var(X) = ∫[0,∞) (x - μ)^2 (1/μ) e^(-x/μ) dx = μ^2.

[derivation:]

P(X ≤ x0) = ∫[0,x0] (1/μ) e^(-x/μ) dx = ∫[0, x0/μ] e^(-y) dy       (substituting y = x/μ)
          = -e^(-y) |[0, x0/μ] = -e^(-x0/μ) + e^0 = 1 - e^(-x0/μ).

The derivation of 2 is left as an exercise.

Note: S(x0) = P(X > x0) = 1 - P(X ≤ x0) = e^(-x0/μ) is called the survival
function.

Example:

Let X represent the lifetime of a washing machine. Suppose the average lifetime for
this type of washing machine is 15 years. What is the probability that this washing
machine can be used for less than 6 years? Also, what is the probability that this
washing machine can be used for more than 18 years?

[solution:]

X has the exponential density function with μ = 15 (years). Then,

P(X ≤ 6) = 1 - e^(-6/15) ≈ 0.3297   and   P(X > 18) = e^(-18/15) ≈ 0.3012.

Thus, for this washing machine, there is about a 30% chance that it lasts only a short
time (less than 6 years) and also about a 30% chance that it lasts quite a long time
(more than 18 years).
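
A short Python check of the washing-machine example (not part of the original notes); scipy.stats.expon uses the mean as its scale parameter:

    # Exponential lifetime with mean 15 years
    from scipy.stats import expon

    X = expon(scale=15)
    print(X.cdf(6))    # P(X <= 6)  = 1 - e^(-6/15) ~ 0.3297
    print(X.sf(18))    # P(X > 18)  = e^(-18/15)    ~ 0.3012 (survival function)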

Relationship Between Poisson and Exponential Random Variables:

Let Y be a Poisson random variable representing the number of
occurrences in a time interval of length t, with the probability
distribution

P(Y = i) = e^(-μ) μ^i / i!,

where μ is the mean number of occurrences in this time interval.
Then, if X represents the waiting time until one occurrence, X has the
exponential density function with mean E(X) = t/μ.

The intuition of the above result is as follows. Suppose the time interval is [0,1] (in
hours) and μ = 4. Then, on average, there are 4 occurrences during a 1-hour period.
Thus, the mean time for one occurrence is 1/4 (hour). The number of
occurrences can be described by a Poisson random variable (discrete) with mean 4,
while the time until one occurrence can be described by an exponential random variable
(continuous) with mean 1/4.

Example:

Suppose the average number of car accidents on the highway in two days is 8. What is
the probability of no accident for more than 3 days?

[solution:]

The average number of car accidents on the highway in one day is 8/2 = 4. Thus, the
mean waiting time for one occurrence is 1/4 (day).

Let Y be the Poisson random variable with mean 4 representing the number of car
accidents in one day, and let X be the exponential random variable with mean 1/4 (day)
representing the waiting time until one accident occurs. Thus,

P(no accident for more than 3 days) = P(the waiting time for one occurrence is larger than 3)
                                    = P(X > 3) = e^(-3/(1/4)) = e^(-12) ≈ 0.

Online Exercise:
Exercise 8.3.1

Appendix: Poisson and Normal Approximations

(I) Poisson Approximation:

Example:

Let X be the binomial random variable over 250 trials with p = 0.01. Then, it might
not be easy to obtain

P(X = 3) = [250! / (3! 247!)] (0.01)^3 (0.99)^247

directly. However, if we only want an approximation, the Poisson
approximation is a good choice.

Poisson approximation:
Let X be a binomial random variable over n trials and suppose
p ≤ 0.05 and n ≥ 20. Let Y be a Poisson random variable with mean np.
Then, the probability of X taking value i can be approximated by the
probability of Y taking value i. That is,

C(n, i) p^i (1-p)^(n-i) ≈ e^(-np) (np)^i / i!,  i = 0, 1, 2, ..., n.

Example:

In the above example, the Poisson random variable with mean np = 250 × 0.01 = 2.5
can be used for the approximation. Thus,

P(X = 3) = [250! / (3! 247!)] (0.01)^3 (0.99)^247 ≈ e^(-2.5) (2.5)^3 / 3! ≈ 0.2138.

Note that the exact probability is

P(X = 3) = C(250, 3) (0.01)^3 (0.99)^247 ≈ 0.2149.

Therefore, the Poisson approximation is reasonably accurate.
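
A quick numerical comparison (not part of the original notes) of the exact binomial value and its Poisson approximation:

    # Exact binomial vs. Poisson approximation, n = 250, p = 0.01
    from scipy.stats import binom, poisson

    n, p = 250, 0.01
    print(binom.pmf(3, n, p))      # exact         ~ 0.2149
    print(poisson.pmf(3, n * p))   # approximation ~ 0.2138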

(II) Normal Approximation:

Normal approximation:
Let X be a binomial random variable over n trials with probability
of success p. Let Y be the normal random variable with mean np
and variance np(1-p). Then, the probability of X taking value i can be
approximated by the probability of Y taking values in [i - 1/2, i + 1/2].
That is,

C(n, i) p^i (1-p)^(n-i) ≈ P(i - 1/2 ≤ Y ≤ i + 1/2)
                        = ∫[i-1/2, i+1/2] [1 / √(2π np(1-p))] e^( -(x-np)^2 / (2np(1-p)) ) dx.

Note: the probability P(i - 1/2 ≤ Y ≤ i + 1/2) can be obtained by
transforming Y to the standard normal random variable Z.

Example:

Let X be the binomial random variable over 100 trials and let the probability of a
success be 0.1. What is the probability of 12 successes by normal approximation?

[solution:]

The normal random variable with mean np = 100 × 0.1 = 10 and variance
np(1-p) = 100 × 0.1 × (1 - 0.1) = 9 can be used for the approximation. Thus,

P(X = 12) ≈ P(12 - 1/2 ≤ Y ≤ 12 + 1/2) = P( (11.5 - 10)/3 ≤ (Y - 10)/3 ≤ (12.5 - 10)/3 )
          = P(0.5 ≤ Z ≤ 0.83) = P(0 ≤ Z ≤ 0.83) - P(0 ≤ Z ≤ 0.5)
          = 0.2967 - 0.1915 = 0.1052.

Note that the exact probability is

P(X = 12) = C(100, 12) (0.1)^12 (0.9)^88 ≈ 0.0988.

Therefore, the normal approximation is reasonably accurate.

Note:
Let X be a binomial random variable over n trials with probability
of success p, and let Y be the normal random variable with mean
np and variance np(1-p). Then,

P(X = i) ≈ P(i - 1/2 ≤ Y ≤ i + 1/2)
         = P( (i - 1/2 - np)/√(np(1-p)) ≤ (Y - np)/√(np(1-p)) ≤ (i + 1/2 - np)/√(np(1-p)) )
         = P( (i - 1/2 - np)/√(np(1-p)) ≤ Z ≤ (i + 1/2 - np)/√(np(1-p)) ),

where Z is the standard normal random variable. Similarly,

P(X ≤ k) ≈ P(Y ≤ k + 1/2) = P( Z ≤ (k + 1/2 - np)/√(np(1-p)) ).

Example:

In the previous example, what is the probability of at most 13 successes by normal
approximation?

[solution:]

P(X ≤ 13) ≈ P( Z ≤ (13 + 1/2 - 10)/3 ) = P(Z ≤ 1.17) = P(Z ≤ 0) + P(0 ≤ Z ≤ 1.17)
          = 0.5 + 0.3790 = 0.8790.

Note that the exact probability is

P(X ≤ 13) = Σ_{i=0}^{13} C(100, i) (0.1)^i (0.9)^(100-i) ≈ 0.8761.

Therefore, the normal approximation is reasonably accurate.
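
The continuity-corrected normal approximation can be compared with the exact binomial values in a short Python sketch (not part of the original notes):

    # Normal approximation with continuity correction, n = 100, p = 0.1
    from math import sqrt
    from scipy.stats import binom, norm

    n, p = 100, 0.1
    mu, sigma = n * p, sqrt(n * p * (1 - p))   # 10 and 3
    Y = norm(mu, sigma)
    print(Y.cdf(12.5) - Y.cdf(11.5))   # approx P(X = 12)  ~ 0.105
    print(binom.pmf(12, n, p))         # exact  P(X = 12)  ~ 0.0988
    print(Y.cdf(13.5))                 # approx P(X <= 13) ~ 0.879
    print(binom.cdf(13, n, p))         # exact  P(X <= 13) ~ 0.8761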

Review 1
Chapter 1:

1. Elements, Variables, and Observations:

Example:

Table 1.1 (p. 5) in the textbook.

25 elements (25 companies): Advanced Comm. Systems, Ag-Chem
Equipment Co., ..., Webco Industries Inc.
5 variables: Exchange, Ticker Symbol, Annual Sales, Share Price,
Price/Earnings Ratio.
25 observations: (OTC, ACSC, 75.10, 0.32, 39.10), (OTC, AGCH,
321.10, 0.48, 23.40), ..., (AMEX, WEB, 153.50, 0.88, 7.50).

2. Type of Data: Qualitative Data and Quantitative Data

(a) Qualitative data may be nonnumeric or numeric.


(b) Quantitative data are always numeric.
(c) Arithmetic operations are only meaningful with quantitative data.

Example (continue):

Qualitative variables: Exchange, Ticker Symbol.


Quantitative variables: Annual Sales, Share Price, Price/Earnings Ratio.

Chapter 2: Figure 2.22, p. 66.

1. Summarizing qualitative data:

Frequency distribution, relative frequency distribution, and


percent frequency distribution.
Bar plot and Pie plot.
Example:

Below you are given the grades of 20 students.


D C E B B B A D B C
B E C B C B B D B C

Then,
Grades   Frequency   Relative Frequency   Percent Frequency
E        2           2/20 = 0.10          10
D        3           3/20 = 0.15          15
C        5           5/20 = 0.25          25
B        9           9/20 = 0.45          45
A        1           1/20 = 0.05          5
Total    20          1.00                 100

2. Summarizing quantitative data:

Frequency distribution, relative frequency distribution, percent


frequency distribution, cumulative frequency distribution,
cumulative relative frequency distribution, cumulative percent
frequency distribution
Histogram, Ogive, and stem-and-leaf display.

Example:

Suppose we have the following data:


30 79 59 65 40 64 52 53 57
39 61 47 50 60 48 50 58 67
Suppose the number of nonoverlapping classes is determined to be 5.
Please construct the frequency distribution table (including frequency,
percent frequency, cumulative frequency, and cumulative percent
frequency) for the data.

[solution:]

Approximate class width = (79 - 30)/5 = 9.8;
the class width is taken to be 10.
Thus,

Class    Frequency   Percent Frequency   Cumulative Frequency   Cumulative Percent Frequency
30-39    2           (2/18)100 = 11      2                      11
40-49    3           (3/18)100 = 17      5                      28
50-59    7           (7/18)100 = 39      12                     67
60-69    5           (5/18)100 = 28      17                     95
70-79    1           (1/18)100 = 5       18                     100
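
The same table can be built with a few lines of Python (not part of the original notes), using NumPy's histogram with the class edges 30, 40, ..., 80:

    # Frequency table for the 18 observations with width-10 classes
    import numpy as np

    data = [30, 79, 59, 65, 40, 64, 52, 53, 57,
            39, 61, 47, 50, 60, 48, 50, 58, 67]
    freq, edges = np.histogram(data, bins=[30, 40, 50, 60, 70, 80])
    cum = np.cumsum(freq)
    for lo, f, cf in zip(edges[:-1], freq, cum):
        print(int(lo), "-", int(lo) + 9, f, round(100 * f / len(data), 1),
              cf, round(100 * cf / len(data), 1))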

Chapter 3: Key Formulas, pp. 128-129.

Example:

Suppose we have the following data:


Rent 420-439 440-459 460-479 480-499 500-519
Frequency 8 17 12 8 7
Rent 520-539 540-559 560-579 580-599 600-619
Frequency 4 2 4 2 6
What are the mean rent and the sample variance for the rent?

[solution:]

The grouped sample mean is

x̄_g = ( Σ_{i=1}^{10} f_i M_i ) / n,

where f_i is the frequency of class i, M_i is the midpoint of class i, and n is the
sample size (here n = 70). Then,

Rent   420-439   440-459   460-479   480-499   500-519
f_i    8         17        12        8         7
M_i    429.5     449.5     469.5     489.5     509.5
Rent   520-539   540-559   560-579   580-599   600-619
f_i    4         2         4         2         6
M_i    529.5     549.5     569.5     589.5     609.5

Thus,

Σ_{i=1}^{10} f_i M_i = 34525  and  x̄_g = 34525/70 ≈ 493.21.

For the sample variance,

s_g^2 = Σ_{i=1}^{10} f_i (M_i - x̄_g)^2 / (n - 1) = 208234.287/69 ≈ 3017.89.
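
A hedged Python check of the grouped mean and variance (not part of the original notes):

    # Grouped mean and sample variance for the rent data
    import numpy as np

    freq = np.array([8, 17, 12, 8, 7, 4, 2, 4, 2, 6])
    mid  = np.array([429.5, 449.5, 469.5, 489.5, 509.5,
                     529.5, 549.5, 569.5, 589.5, 609.5])
    n = freq.sum()                                          # 70
    mean_g = (freq * mid).sum() / n                         # ~ 493.21
    var_g = (freq * (mid - mean_g) ** 2).sum() / (n - 1)    # ~ 3017.9
    print(n, mean_g, var_g)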

Chapter 4:

Tabular and Graphical Methods: Crosstabulation (qualitative


and quantitative data) and Scatter Diagram (only quantitative
data).
Numerical Method: Covariance and Correlation Coefficient.

Chapter 5:

1. Multiple Step Experiments, Permutations, and Combinations:

Example:

How many committees consisting of 3 female and 5 male students can be


selected from a group of 5 female and 8 male students?

[solution:]

C(5, 3) × C(8, 5) = [5!/(3!2!)] × [8!/(5!3!)] = 10 × 56 = 560.

2. Events, Addition Law, Mutually Exclusive Events, and Independent
Events:

Example:

Assume you are taking two courses this semester (S and C). The
probability that you will pass course S is 0.835, the probability that you
will pass both courses is 0.276. The probability that you will pass at least
one of the courses is 0.981.
(a) What is the probability that you will pass course C?
(b) Are the events of passing the two courses independent?
(c) Are the events of passing the courses mutually exclusive? Explain.

[solution:]

(a)
Let A be the event of passing course S and B be the event of passing
course C. Thus,
P(A) = 0.835,  P(A ∩ B) = 0.276,  P(A ∪ B) = 0.981.
P(A^c ∩ B) = P(A ∪ B) - P(A) = 0.981 - 0.835 = 0.146.
P(B) = P(A ∩ B) + P(A^c ∩ B) = 0.276 + 0.146 = 0.422.

(b)
P(A | B) = P(A ∩ B)/P(B) = 0.276/0.422 ≈ 0.654 ≠ P(A) = 0.835.
Thus, events A and B are not independent. That is, passing the two courses
are not independent events.

(c)
Since P(A ∩ B) = 0.276 ≠ 0, events A and B are not mutually exclusive.

Review 2
Chapter 4

Bayes Theorem:

Example:

In a random sample of Tung Hai University students 50% indicated they


are business majors, 40% engineering majors, and 10% other majors. Of
the business majors, 60% were female; whereas, 30% of engineering
majors were females. Finally, 20% of the other majors were female.
Given that a person is female, what is the probability that she is an
engineering major?

[solution:]

Let
A1: the students are engineering majors,
A2: the students are business majors,
A3: the students are other majors      (A1 ∪ A2 ∪ A3 = S, the sample space),
B: the students are female.

Originally, we know
P(A1) = 0.4, P(A2) = 0.5, P(A3) = 0.1, P(B|A1) = 0.3, P(B|A2) = 0.6, P(B|A3) = 0.2.

Then, by Bayes theorem,

P(A1 | B) = P(A1) P(B|A1) / [ P(A1) P(B|A1) + P(A2) P(B|A2) + P(A3) P(B|A3) ]
          = (0.4 × 0.3) / (0.4 × 0.3 + 0.5 × 0.6 + 0.1 × 0.2)
          = 0.12 / 0.44 ≈ 0.2727.
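
The same computation as a direct Python sketch (not part of the original notes):

    # Bayes theorem for the Tung Hai example
    prior    = {"engineering": 0.4, "business": 0.5, "other": 0.1}
    p_female = {"engineering": 0.3, "business": 0.6, "other": 0.2}

    p_B = sum(prior[m] * p_female[m] for m in prior)              # P(female) = 0.44
    print(prior["engineering"] * p_female["engineering"] / p_B)   # ~ 0.2727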

Chapter 5

1. Random Variables, Discrete Probability Function, Expected Value


and Variance

Example:

The probability distribution function for a discrete random variable X is

f(x) = 2k,  x = 1
     = 3k,  x = 3
     = 4k,  x = 5
     = 0,   otherwise,

where k is some constant. Please find

(a) k,  (b) P(X ≥ 2),  (c) E(X) and Var(X).

[solution:]

(a)
Σ_x f(x) = f(1) + f(3) + f(5) = 2k + 3k + 4k = 9k = 1
⟹ k = 1/9.

(b)
P(X ≥ 2) = P(X = 3 or X = 5) = P(X = 3) + P(X = 5)
         = f(3) + f(5) = 3k + 4k = 7k = 7 × (1/9) = 7/9.

(c)
μ = E(X) = Σ_x x f(x) = 1·f(1) + 3·f(3) + 5·f(5)
  = 1·(2/9) + 3·(3/9) + 5·(4/9) = 31/9

and

Var(X) = Σ_x (x - μ)^2 f(x)
       = (1 - 31/9)^2 f(1) + (3 - 31/9)^2 f(3) + (5 - 31/9)^2 f(5)
       = (22/9)^2 (2/9) + (4/9)^2 (3/9) + (14/9)^2 (4/9)
       = (968 + 48 + 784)/729 = 1800/729 = 200/81.
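
A small exact-arithmetic check of E(X) and Var(X) (not part of the original notes):

    # E(X) and Var(X) for f(1) = 2/9, f(3) = 3/9, f(5) = 4/9
    from fractions import Fraction

    pmf = {1: Fraction(2, 9), 3: Fraction(3, 9), 5: Fraction(4, 9)}
    mean = sum(x * p for x, p in pmf.items())                  # 31/9
    var  = sum((x - mean) ** 2 * p for x, p in pmf.items())    # 200/81
    print(mean, var)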

2. Continuous Probability Density Function, Expected Value and


Variance.

Example:

The probability density function for a continuous random variable X is

f(x) = a + bx^2,  0 ≤ x ≤ 1;
     = 0,          otherwise,

where a, b are some constants. Please find

(a) a and b if E(X) = 3/5,  (b) Var(X).

[solution:]

(a)
∫[0,1] f(x) dx = 1 ⟹ ∫[0,1] (a + bx^2) dx = ( ax + (b/3)x^3 ) |[0,1] = a + b/3 = 1

and

E(X) = ∫[0,1] x f(x) dx = ∫[0,1] x(a + bx^2) dx = ( (a/2)x^2 + (b/4)x^4 ) |[0,1] = a/2 + b/4 = 3/5.

Solving the two equations, we have

a = 3/5,  b = 6/5.

(b)
f(x) = 3/5 + (6/5)x^2,  0 ≤ x ≤ 1;  f(x) = 0, otherwise.

Thus,

Var(X) = E[ (X - E(X))^2 ] = E(X^2) - [E(X)]^2 = E(X^2) - (3/5)^2 = E(X^2) - 9/25,

where

E(X^2) = ∫[0,1] x^2 f(x) dx = ∫[0,1] x^2 ( 3/5 + (6/5)x^2 ) dx
       = ( (1/5)x^3 + (6/25)x^5 ) |[0,1] = 1/5 + 6/25 = 11/25.

Therefore,

Var(X) = 11/25 - 9/25 = 2/25.
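
The constants and the variance can be verified symbolically with SymPy in a short sketch (not part of the original notes):

    # Solve for a, b and compute Var(X) for f(x) = a + b x^2 on [0, 1]
    import sympy as sp

    x, a, b = sp.symbols('x a b')
    f = a + b * x**2
    sol = sp.solve([sp.integrate(f, (x, 0, 1)) - 1,
                    sp.integrate(x * f, (x, 0, 1)) - sp.Rational(3, 5)], [a, b])
    print(sol)                                    # {a: 3/5, b: 6/5}

    f_sol = f.subs(sol)
    EX2 = sp.integrate(x**2 * f_sol, (x, 0, 1))   # 11/25
    print(EX2 - sp.Rational(3, 5)**2)             # Var(X) = 2/25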

Chapter 6

1. Uniform Probability Density Function, Normal Probability


Density Function, and Exponential Probability Density Function:

Example:

The number of customers arriving at Taiwan Bank is Poisson distributed
with a mean of 4 customers per minute.
(a) Within 2 minutes, what is the probability that there are 3 customers?
(b) What is the probability density function for the time until the
arrival of the next customer?

[solution:]

(a)
Let
Y: the number of customers arriving within 2 minutes.
Then,
E(Y) = μ = 2 × 4 = 8  (customers per two minutes)
and
P(Y = i) = e^(-μ) μ^i / i! = e^(-8) 8^i / i!,  i = 0, 1, 2, ....
Thus,
P(Y = 3) = e^(-8) 8^3 / 3! ≈ 0.0286.

(b)
X: the time until the arrival of the next customer.

The average time until the arrival of the next customer is
t/μ = 2/8 = 1/4 (minute),
so X has the exponential density function with mean 1/4:

f(x) = (1/(1/4)) e^(-x/(1/4)) = 4 e^(-4x),  x ≥ 0.

Review 2
Chapters 3, 4
Measures of Location, Dispersion, Exploratory Data Analysis,
Measure of Relative Location, Weighted and Grouped Mean and
Variance, Association between Two Variables

Example:

The flashlight batteries produced by one of the manufacturers are known
to have an average life of 60 hours with a standard deviation of 4 hours.
(a) At least what percentage of batteries will have a life of 54 to 66 hours?
(b) At least what percentage of the batteries will have a life of 52 to 68
hours?
(c) Determine an interval for the battery lives that will hold for at
least 80% of the batteries.

[solution:]

Denote
x̄ = 60, s = 4.

(a)
[54, 66] = 60 ± 6 = x̄ ± 1.5s.
Thus, by Chebyshev's theorem, within 1.5 standard deviations of the mean there is at
least
(1 - 1/1.5^2) × 100% ≈ 55.55%
of the batteries.

(b)
[52, 68] = 60 ± 8 = x̄ ± 2s.
Thus, by Chebyshev's theorem, within 2 standard deviations of the mean there is at
least
(1 - 1/2^2) × 100% = 75%
of the batteries.

(c)
(1 - 1/k^2) × 100% = 80% ⟹ 1 - 1/k^2 = 0.8 ⟹ k^2 = 5 ⟹ k = √5 ≈ 2.236.
Thus, within √5 standard deviations of the mean there is at least 80% of the batteries.
Therefore, the interval is
x̄ ± √5 s = 60 ± 2.236 × 4 = 60 ± 8.94 = [51.06, 68.94].

Chapter 5
Basic Relationships of Probability, Conditional Probability and
Bayes Theorem

Example:

The following are the data on the gender and marital status of 200
customers of a company.

           Male   Female
Single     20     30
Married    100    50

(a) What is the probability of finding a single female customer?


(b) What is the probability of finding a married male customer?
(c) If a customer is female, what is the probability that she is single?
(d) What percentage of customers is male?
(e) If a customer is male, what is the probability that he is married?
(f) Are gender and marital status mutually exclusive? Explain.
(g) Is marital status independent of gender? Explain.

[solution:]

A1: the customers are single


A2: the customers are married
B1: the customers are male.
B2: the customers are female.

(a)
P(A1 ∩ B2) = 30/200 = 0.15.

(b)
P(A2 ∩ B1) = 100/200 = 0.5.

(c)
P(A1 | B2) = P(A1 ∩ B2) / P(B2).
Since
P(B2) = P(A1 ∩ B2) + P(A2 ∩ B2) = 30/200 + 50/200 = 80/200,

P(A1 | B2) = P(A1 ∩ B2) / P(B2) = (30/200) / (80/200) = 30/80 = 0.375.

(d)
P(B1) = P(A1 ∩ B1) + P(A2 ∩ B1) = 20/200 + 100/200 = 120/200 = 0.6.

(e)
P(A2 | B1) = P(A2 ∩ B1) / P(B1) = (100/200) / (120/200) = 100/120 = 5/6.

(f)
Gender and marital status are not mutually exclusive since
P(A1 ∩ B1) = 20/200 ≠ 0.

(g)
Marital status is not independent of gender since
P(A1 | B2) = 30/80 ≠ P(A1) = 50/200.
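
The table probabilities can also be computed directly in Python (not part of the original notes):

    # Joint counts and conditional probabilities for the 200 customers
    counts = {("single", "male"): 20, ("single", "female"): 30,
              ("married", "male"): 100, ("married", "female"): 50}
    n = sum(counts.values())                                        # 200

    p_single_female = counts[("single", "female")] / n              # 0.15
    p_female = sum(v for (s, g), v in counts.items() if g == "female") / n
    print(p_single_female, p_female, p_single_female / p_female)    # 0.15 0.4 0.375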

Example:

In a recent survey in a Statistics class, it was determined that only 60% of


the students attend class on Thursday. From past data it was noted that
98% of those who went to class on Thursday pass the course, while only
20% of those who did not go to class on Thursday passed the course.

(a) What percentage of students is expected to pass the course?


(b) Given that a student passes the course, what is the probability that
he/she attended classes on Thursday?

[solution:]

A1: the students attend class on Thursday,
A2: the students do not attend class on Thursday      (A1 ∪ A2 = S),
B1: the students pass the course,
B2: the students do not pass the course.
P(A1) = 0.6, P(A2) = 1 - P(A1) = 0.4, P(B1|A1) = 0.98, P(B1|A2) = 0.2.

(a)
P(B1) = P(B1 ∩ A1) + P(B1 ∩ A2)
      = P(A1) P(B1|A1) + P(A2) P(B1|A2)
      = 0.6 × 0.98 + 0.4 × 0.2
      = 0.668.

(b)
By Bayes theorem,

P(A1 | B1) = P(A1 ∩ B1) / P(B1) = P(A1) P(B1|A1) / [ P(A1) P(B1|A1) + P(A2) P(B1|A2) ]
           = (0.6 × 0.98) / (0.6 × 0.98 + 0.4 × 0.2)
           = 0.588 / 0.668 ≈ 0.880.

Chapter 6

3. Random Variables, Discrete Probability Function and Continuous


Probability Density

Example:

The probability distribution function for a discrete random variable X is

f(x) = 2k,  x = 1
     = 3k,  x = 3
     = 4k,  x = 5
     = 0,   otherwise,

where k is some constant. Please find

(a) k,  (b) P(X ≥ 2).

[solution:]

(a)
Σ_x f(x) = f(1) + f(3) + f(5) = 2k + 3k + 4k = 9k = 1
⟹ k = 1/9.

(b)
P(X ≥ 2) = P(X = 3 or X = 5) = P(X = 3) + P(X = 5)
         = f(3) + f(5) = 3k + 4k = 7k = 7 × (1/9) = 7/9.

