
Chapter 5 - Hirschey

Demand Estimation
The linchpin of successful agribusiness firms; three main approaches:
Consumer interviews
Market experiments
Regression analysis
Consumer Interviews
Requires questioning customers to estimate the relation
between demand and a variety of underlying factors;
however, most consumers cannot answer such questions
precisely, so survey techniques often have difficulty
estimating demand relations accurately.

Market Experiments
Demand estimation in a controlled environment; with this
technique, firms study one or more markets with specific
prices, packaging, or advertising and then vary controllable
factors over time or between markets.
Shortcomings: (1) expensive; (2) usually undertaken on a
scale too small to allow high levels of confidence in the
results; (3) seldom run for sufficiently long periods.

Regression Analysis
-Simple linear regression
-Multiple regression
-Estimation and interpretation
-Inference
Tintner (1953)
The Study of the Application of Statistical Methods
to the Analysis of Economic Phenomena
Business analysts often need to be in a
position to:


- Interpret the economic or financial landscape

- Identify and assess the influence of several exogenous or
predetermined factors on one or more endogenous
variables

- Provide ex-ante forecasts of one or more endogenous
variables

How does one achieve these objectives?
Why do Business Analysts Wish to
Achieve These Objectives?
-To improve decision-making!

Example: Investigate key determinants of demand for Prego
spaghetti sauce: price of Prego; prices of competitors
(Ragu, Classico, Hunt's, Newman's Own); in-store
displays; coupons; price of spaghetti

Forecast sales of Prego spaghetti sauce one month, one quarter, or
even one year into the future


Course of Action: Development of
Formal Quantitative Models

Regression Analysis
Components of Regression Analysis
Regression Analysis involves four phases:

Specification: the model-building activity
Estimation: fitting the model to data
Verification: testing the model
Prediction: producing ex-ante forecasts and
conducting ex-post forecast evaluations

Components of Regression Analysis:
Specification, Estimation, Verification, Prediction
Getting Started
- Model specification entails the expression of theoretical
constructs in mathematical terms
- This phase of regression analysis constitutes the model building
activity
- In essence, model specification is the translation of theoretical
constructs into mathematical/statistical forms
- Fundamental principles in model building:
* The principle of parsimony (other things the same, simple
models generally are preferable to complex models,
especially in forecasting)
* The shrinkage principle (imposing restrictions either on
estimated parameters or on forecasts often improves
model performance)
* The KISS principle: Keep It Sophisticatedly Simple

Regression Analysis Begins
with Model Specification
$y = \beta_0 + \beta_1 x + u$

y: Dependent Variable (also called Left-Hand-Side Variable,
Explained Variable, Regressand, Response Variable,
Endogenous Variable)

x: Independent Variable (also called Right-Hand-Side Variable,
Explanatory Variable, Regressor, Control Variable)

u: Error Term (also called Disturbance Term, Innovation)

Coefficients:
$\beta_0$: Intercept
$\beta_1$: Slope

The Simple Linear Regression Model
The error term u makes explicit that the
relationship between y and x is not an
identity; u arises for two reasons:
(1) measurement error
(2) the regression inadvertently omits the effects
of other variables besides x that could impact y.

$y = \beta_0 + \beta_1 x + u$

Regression line: $E(y|x) = \beta_0 + \beta_1 x$

[Graphical illustration: Y (dependent variable) plotted against X (independent
variable); the regression line has intercept $\beta_0$ and slope $\beta_1$, so when x
increases by h, $E(y|x)$ changes by $\beta_1 h$.]
How to interpret coefficients?
- Often in regression analysis, analysts have
interest in estimating demand relationships,
particularly for commodities.

- Analysts may wish to estimate the demand
for cosmetic products, automobiles, various
food products, or various beverages.
Example: Estimation of Demand Relationships
Demand Curve for Lipton tea (average price per package against packages sold):

Q = 2,500 − 500P   (equivalently, P = 5 − 0.002Q)

[Figure: demand curve with price from $1 to $5 on the vertical axis and
quantity from 500 to 2,500 packages of Lipton tea on the horizontal axis.]

Regression model: $Q_{LT} = \beta_0 + \beta_1 P_{LT} + u$
Demand Curve for Lipton Tea
The Demand Curve shows the theoretical
relation between price and quantity
demanded, holding all other factors constant.

Axes: price is on y-axis, quantity on the x-axis

Example: Demand curve for Lipton tea,
Q = 2,500 − 500P

Key question: How are these numbers obtained?

Demand Curve

Interpretation: $\beta_0 = 2{,}500$ and $\beta_1 = -500$; each $1 increase in price
reduces the quantity demanded by 500 packages.
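As a quick check that the two forms shown earlier are the same relation, solve the quantity form for price:

$Q = 2{,}500 - 500P \;\Rightarrow\; 500P = 2{,}500 - Q \;\Rightarrow\; P = 5 - 0.002Q$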
Random Sampling
- Randomly sample n observations from a population.
- For each observation, $y_i = \beta_0 + \beta_1 x_i + u_i$; for the demand example,
  $Q_i = \beta_0 + \beta_1 P_i + u_i$.
- Goal: estimate $\beta_0$ and $\beta_1$.

Independent (exogenous) variable x; dependent variable y:

x_1 (Price_1)    y_1 (Quantity_1)
x_2 (Price_2)    y_2 (Quantity_2)
...
x_n (Price_n)    y_n (Quantity_n)
1. Q = a − bP
2. Q = a0 − a1P + a2I + a3A + a4PS

where P is own price, I is income, A is advertising, and PS is the price of a
substitute product.

- The coefficients a0, a1, a2, a3, and a4 are labeled the demand
parameters; we expect certain signs and magnitudes of the
demand parameters according to economic theory:
  * own-price effect (−)
  * income effect (+)
  * advertising effect (+)
  * price-of-substitute effect (+)

- Different versions of the regression model for applied analysis are
possible (a sketch of estimating specification 2 follows below).
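A minimal sketch, in Python, of how a specification like (2) could be estimated by least squares. The data below are simulated and every variable name and number is hypothetical, purely for illustration (the course itself works the Prego example in SAS and Excel):

```python
import numpy as np

# Hypothetical weekly data: own price P, income I, advertising A, substitute price PS
rng = np.random.default_rng(0)
n = 52
P  = rng.uniform(2.0, 4.0, n)          # own price
I  = rng.uniform(30, 60, n)            # income (in $000)
A  = rng.uniform(0, 10, n)             # advertising spend
PS = rng.uniform(2.0, 4.0, n)          # price of a substitute
u  = rng.normal(0, 50, n)              # unobserved demand shock

# "True" parameters used only to simulate quantity demanded
Q = 1000 - 300 * P + 5 * I + 20 * A + 150 * PS + u

# OLS: regress Q on a constant, P, I, A, PS
X = np.column_stack([np.ones(n), P, I, A, PS])
coef, *_ = np.linalg.lstsq(X, Q, rcond=None)
labels = ["intercept (a0)",
          "P  (own price, expect negative)",
          "I  (income, expect positive)",
          "A  (advertising, expect positive)",
          "PS (substitute price, expect positive)"]
for name, b in zip(labels, coef):
    print(f"{name}: {b:.2f}")
```

The estimated coefficient on P should come out negative and the others positive, matching the theoretical sign expectations listed above.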
Translations of the Theoretical
Construct into a Statistical Model
Excel file of Prego spaghetti sauce
Excel file of Keynesian consumption
function

[Overview diagram]
- Sampling: from the population, draw a sample (data).
- Descriptive statistics: measures of central tendency (mean, median, mode) and
  measures of variability or dispersion (range, variance, standard deviation,
  coefficient of variation).
- Regression: OLS (assumptions, properties of OLS estimates, interpretation of
  estimates) yields sample parameters.
- Inference: t tests, F tests, and confidence intervals relate the sample
  parameters to the population parameters.
Measures of central tendency:
- mean
- median
- mode
Measures of dispersion/variability:
- range
- variance
- standard deviation
- coefficient of variation
Descriptive Statistics
Critical Ingredient in all
Regression Models

Critical ingredient: data, a sufficiently large amount of historical data.
Ask not what you can do to the data, but rather
what the data can do for you.

Data Types:
-Time-Series
-Cross-Sectional

- Time-series data
* daily, weekly, monthly, quarterly, annual
DAILY closing stock prices
WEEKLY measures of money supply
MONTHLY housing starts
QUARTERLY GDP figures
ANNUAL salary figures
- Cross-Sectional Data
* Snapshot of activity at a given point in time
* Survey of household expenditure patterns
* Sales figures from a number of supermarkets at a
given point in time.
Data: The Critical Ingredient
I often say that when you can measure what you are speaking
about, and express it in numbers, you know something about
it; but when you cannot measure it, when you cannot express
it in numbers, your knowledge is of a meager and
unsatisfactory kind.
Quote from Lord Kelvin
Get a Feel for the Data
-Plots of key variables
-Scatter plots
-Descriptive statistics
mean
median
minimum
maximum
standard deviation

skewness
kurtosis
distribution

Figure 5.5 Scatter Diagrams of Various Unit Cost-
Output Relations
Figure 5.6 Regression Relation Between Units Sold and
Personal Selling Expenditures for Electronic Data
Processing (EDP), Inc.
Let X correspond to a vector of T observations for the
variable X.

Mean
The mean, a measure of central tendency, corresponds to
the average of the set of observations in a particular data series.
The mean is given by:

$\bar{x} = \frac{1}{T}\sum_{i=1}^{T} x_i$

The units associated with the mean are the same as the units of $x_i$,
i = 1, 2, ..., T.
continued . . .
Median
The median also is a measure of central tendency of a data
series. The median corresponds to the 50th percentile of the data
series. The units associated with the median are the same as the
units of $x_i$, i = 1, 2, ..., T. To find the median, arrange the
observations in increasing order. When sample values are arranged
in this fashion, they often are called the 1st, 2nd, 3rd, ... order
statistics. In general, if T is an odd number, the median is the
order statistic whose number is $(T+1)/2$.

If T is an even number, the median corresponds to the average of
the order statistics whose numbers are $T/2$ and $T/2 + 1$.

continued . . .
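For example (illustrative numbers): with T = 5 ordered observations 1, 3, 4, 7, 9, the median is the (5 + 1)/2 = 3rd order statistic, namely 4; with T = 6 observations, the median is the average of the 3rd and 4th order statistics.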
Variance
The variance is a measure of the spread or dispersion of
a series about the mean.
The variance is expressed as:

$S^2 = \frac{1}{T-1}\sum_{i=1}^{T}(x_i - \bar{x})^2$

Note that $S^2$ serves as an estimate of the population variance $\sigma^2$.
The units associated with the variance are the square of the units of
$x_i$. continued . . .

Standard Deviation
The standard deviation is a measure of the spread or
dispersion of a series about the mean.
The standard deviation is given by $S = \sqrt{S^2}$.

The units associated with the standard deviation are the same as
the units of $x_i$.
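As a quick illustration, a short Python sketch of these measures; the series below is made up, and any small numeric array would do:

```python
import numpy as np

x = np.array([2.1, 3.4, 2.8, 5.0, 4.2, 3.9, 2.5])  # hypothetical data series, T = 7

T = len(x)
mean = x.sum() / T                              # measure of central tendency
median = np.median(x)                           # 50th percentile of the ordered data
variance = ((x - mean) ** 2).sum() / (T - 1)    # spread about the mean (squared units)
std_dev = variance ** 0.5                       # same units as x

print(mean, median, variance, std_dev)
```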
Minimum
The minimum of a series corresponds to the smallest value,
min(x1, x2, ..., xT). The units associated with the minimum are the
same as the units of $x_i$.
Maximum
The maximum of a series corresponds to the largest value,
max(x1, x2, ..., xT). The units associated with the maximum are the
same as the units of $x_i$.
Range
The range of a series is the difference between the
maximum and the minimum values. The range is expressed as
Range(x) = max(x) − min(x). The units associated with the range are
the same as the units of $x_i$.
continued . . .
Skewness
Skewness is a measure of the amount of asymmetry
in the distribution of a series. If a distribution is symmetric,
skewness equals zero. If the skewness coefficient is negative
(positive), then the distribution of the series has a long left (right)
tail. The greater the absolute value of the skewness statistic,
the more asymmetrical is the distribution. The skewness
coefficient is given by:

$m = \frac{1}{T}\sum_{i=1}^{T}\left(\frac{x_i - \bar{x}}{S}\right)^3$

The skewness statistic is a unitless measure.

continued . . .
Kurtosis
Kurtosis is a measure of the flatness or peakedness of the
distribution of a series relative to that of a normal distribution. A
normal random variable has a kurtosis of 3. A kurtosis statistic
greater than 3 indicates a more peaked distribution than the
normal distribution. A kurtosis statistic less than 3 indicates a
more flat distribution than the normal distribution. The kurtosis
coefficient is given by
$k = \frac{1}{T}\sum_{i=1}^{T}\left(\frac{x_i - \bar{x}}{S}\right)^4$

The kurtosis coefficient also is a unitless measure.
Jarque-Bera test statistic (Jarque and Bera, 1980)
The Jarque-Bera (JB) statistic combines the skewness and
kurtosis coefficients to produce another measure of the
departure from normality of a series. This measure is given by:

$JB = \frac{T}{6}\left(m^2 + \frac{1}{4}(k-3)^2\right)$

For a normal distribution, m = 0 and k = 3. Thus, the JB statistic
is zero for normal distributions. Values greater than zero indicate
the degree of departure from normality.

continued . . .
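A short Python sketch that applies the skewness, kurtosis, and JB formulas above to a made-up series (purely illustrative):

```python
import numpy as np

x = np.array([2.1, 3.4, 2.8, 5.0, 4.2, 3.9, 2.5, 3.1, 6.8, 3.3])  # hypothetical series

T = len(x)
xbar = x.mean()
S = np.sqrt(((x - xbar) ** 2).sum() / (T - 1))   # sample standard deviation

z = (x - xbar) / S
m = (z ** 3).sum() / T          # skewness coefficient (unitless)
k = (z ** 4).sum() / T          # kurtosis coefficient (unitless; 3 for a normal)

JB = (T / 6) * (m ** 2 + 0.25 * (k - 3) ** 2)    # Jarque-Bera statistic
print(m, k, JB)
```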
Coefficient of variation
The coefficient of variation is the ratio of the standard
deviation to its mean. This measure typically is converted to a
percentage by multiplying the ratio by 100. This statistic
describes how much dispersion exists in a series relative to its
mean. This measure is given by:
$CV = \frac{S}{\bar{x}} \times 100\%$

The utility of this information is that in most cases the mean and
the standard deviation change together. As well, this statistic is
not dependent on units of measurement.
Correlation Coefficient
The correlation coefficient is a measure of the degree of
linear association between two variables. The statistic, denoted by r,
is given by:

$r = \frac{\sum_{i=1}^{T}(x_i-\bar{x})(y_i-\bar{y})}{\sqrt{\sum_{i=1}^{T}(x_i-\bar{x})^2\sum_{i=1}^{T}(y_i-\bar{y})^2}}$

While r is a pure number without units, r always lies between -1 and
+1. Positive values of r indicate a tendency of x and y to move
together; that is, large values of x are associated with large values
of y, and small values of x are associated with small values of y.
When r is negative, large values of x are associated with small
values of y, and small values of x are associated with large values of
y. The closer r is to +1, the greater the degree of direct linear
relationship between x and y. The closer r is to -1, the greater the
degree of inverse linear relationship between x and y. Finally, when
r = 0, there is no linear association between x and y.
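A minimal Python illustration of r for two made-up series (hypothetical price/quantity pairs):

```python
import numpy as np

# Hypothetical paired observations (e.g., price and quantity)
x = np.array([3.2, 2.9, 3.5, 4.0, 2.7, 3.8])
y = np.array([410, 455, 380, 340, 470, 360])

xbar, ybar = x.mean(), y.mean()
num = ((x - xbar) * (y - ybar)).sum()
den = np.sqrt(((x - xbar) ** 2).sum() * ((y - ybar) ** 2).sum())
r = num / den                 # always lies between -1 and +1
print(r)                      # negative here: higher prices pair with lower quantities
```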
Mode
The mode corresponds to the most frequent
observation in the data series x1, x2, ..., xT. The units
associated with the mode are the same as the units of
$x_i$. In empirical applications, the observations often are
non-repetitive; hence, this measure often is of limited
usefulness.
Data Example
Prices and quantities sold
of Prego Spaghetti Sauce
by week.
Time-Series Plot of the Volume of
Prego Spaghetti Sauce Sold by Week
Descriptive Statistics and the
Histogram Associated with the Volume
of Prego Spaghetti Sauce
Time-Series Plot of the Price of
Prego Spaghetti Sauce by Week
Descriptive Statistics of the Price
of Prego Spaghetti Sauce
PPRG versus QPRG
Weekly Scatter Plot of Prices and Quantities Sold of Prego
Spaghetti Sauce.
Correlation Matrix
The correlation between the price and quantity sold of
Prego Spaghetti Sauce is -0.73.
Another Example: Relationship between Real
Income and Real Consumption
-Question: What is the effect of real per capita income on real
per capita personal consumption expenditures?
-Known information:
-Dependent variable: real per capita consumption
expenditures (c)
-Explanatory variable: real per capita income (I)
Regression: $C = \beta_0 + \beta_1 I + u$
-Interpretation:
-$\beta_1$ measures the change in real consumption resulting from a change in
real income; this is the marginal propensity to consume (MPC).
-$\beta_0$ represents the autonomous level of real per capita consumption
expenditures.
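For instance, with hypothetical estimates $\hat{\beta}_0 = 2{,}000$ and $\hat{\beta}_1 = 0.9$ (illustrative numbers only), the fitted relation $\hat{C} = 2{,}000 + 0.9\,I$ says autonomous real per capita consumption is 2,000 and each additional dollar of real income raises real consumption by 90 cents (an MPC of 0.9).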
Random Sampling
-Randomly sample n observations from a
population (1980:1 to 2010:3):
123 quarterly observations.
-For each observation, $C_t = \beta_0 + \beta_1 I_t + u_t$.
-Goal: estimate $\beta_0$ and $\beta_1$.
-Another goal: forecasts for $C_t$, 2010:4 and beyond.

Explanatory variable I; dependent variable C:

I_1980:1    C_1980:1
I_1980:2    C_1980:2
...
I_2010:3    C_2010:3
Estimation of the Simple
Linear Regression Model
Ordinary Least Squares, Regression
Line, Fitted Values, Residuals
[Scatter diagram: sample points $(x_1, y_1), \ldots, (x_4, y_4)$ plotted around the
fitted line $\hat{y} = \hat{\beta}_0 + \hat{\beta}_1 x$, with braces marking each point's vertical
distance from the line.]

OLS: choose $\hat{\beta}_0$ and $\hat{\beta}_1$ to minimize the sum of these squared
prediction errors.
Intuitive Thinking about OLS
-OLS fits a line through the sample points such that the
sum of squared prediction errors is as small as possible,
hence the term "least squares."

-The residual, $\hat{u}_i$, is an estimate of the error term, u, and is the
difference between the sample point (actual value) and the
fitted line (sample regression line):

$\hat{u}_i = AV_i - FV_i$,  i = 1, 2, . . ., n,

where AV denotes the actual value and FV the fitted value.
Minimizing Residual Sum of Squares

$\min_{\hat{\beta}_0,\hat{\beta}_1}\; \sum_{i=1}^{n}\hat{u}_i^2 \;=\; \min_{\hat{\beta}_0,\hat{\beta}_1}\; \sum_{i=1}^{n}\left(y_i - \hat{\beta}_0 - \hat{\beta}_1 x_i\right)^2$

First order conditions:

$\sum_{i=1}^{n}\left(y_i - \hat{\beta}_0 - \hat{\beta}_1 x_i\right) = 0$

$\sum_{i=1}^{n} x_i\left(y_i - \hat{\beta}_0 - \hat{\beta}_1 x_i\right) = 0$

Solving these two equations yields:

$\hat{\beta}_1 = \frac{\sum_{i=1}^{n}(x_i-\bar{x})(y_i-\bar{y})}{\sum_{i=1}^{n}(x_i-\bar{x})^2}$

$\hat{\beta}_0 = \bar{y} - \hat{\beta}_1\bar{x}$
Interpretation: The slope estimate is the sample
covariance between x and y divided by the sample
variance of x.
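A minimal numerical sketch of these formulas in Python; the price/quantity values are invented purely to show the mechanics:

```python
import numpy as np

# Hypothetical sample of prices (x) and quantities (y)
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2400, 1450, 1050, 520, 30])

xbar, ybar = x.mean(), y.mean()

# Slope: sample covariance of x and y divided by sample variance of x
b1 = ((x - xbar) * (y - ybar)).sum() / ((x - xbar) ** 2).sum()
# Intercept: forces the fitted line through the point of means
b0 = ybar - b1 * xbar

print(b0, b1)   # -> 2791.0 -567.0 for these made-up numbers
```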
Assumptions Behind the Simple
Linear Regression Model

$y_i = \beta_0 + \beta_1 x_i + u_i$

Assumption 1: Zero Mean of u
E(u) = 0: The average value of u, the error term, is 0.

Assumption 2: Independent Error Terms
Each observed $u_i$ is independent of all other $u_j$:
$Corr(u_i, u_j) = 0$ for all $i \neq j$.

Assumption 3: Homoscedasticity
$Var(u|x) = \sigma^2$: the variance of the regression error is constant.
[Illustrations: the conditional distributions f(y|x) at x1, x2, and x3 along the
regression line.]
Assumption 4: Normality
-The error term u is normally distributed with mean zero and
variance $\sigma^2$.
-This assumption is essential for inference and forecasting.
-This assumption is not essential to estimate the parameters of
the regression model.
-We only need Assumptions 1-3 to derive the OLS
(ordinary least squares) estimators.
Properties of OLS Estimators
Unbiasedness: on average, the OLS estimators equal the
true population parameters:

$E(\hat{\beta}_0) = \beta_0$ and $E(\hat{\beta}_1) = \beta_1$
Variance of OLS Estimators
-We know that the sampling distribution of our estimate
is centered around the true parameter (unbiasedness).

-Unbiasedness is a description of the estimator: in a
given sample we may be near or far from the true
parameter, but on average we will cover the population
parameter.

-Question: How spread out is the distribution of the OLS
estimator? The answer to this question leads us to
examining the variance of the OLS estimator.
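One way to see both points is a small Monte Carlo sketch (all parameter values below are invented): draw many samples from a known model, compute the OLS slope in each, and inspect the mean and spread of the estimates.

```python
import numpy as np

rng = np.random.default_rng(1)
beta0, beta1, sigma = 5.0, 2.0, 1.5      # "true" population parameters (hypothetical)
n, reps = 50, 5000

slopes = np.empty(reps)
for r in range(reps):
    x = rng.uniform(0, 10, n)
    u = rng.normal(0, sigma, n)          # error term with zero mean, constant variance
    y = beta0 + beta1 * x + u
    xbar, ybar = x.mean(), y.mean()
    slopes[r] = ((x - xbar) * (y - ybar)).sum() / ((x - xbar) ** 2).sum()

print(slopes.mean())   # close to the true slope 2.0 (unbiasedness)
print(slopes.std())    # spread of the OLS estimator across samples
```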
Estimating the Error Variance
-Population variance vs. sample variance.

-The error variance, $\sigma^2$, is unknown because we don't
observe the errors, $u_i$.

-What we observe are the residuals, $\hat{u}_i$.

-We can use the residuals to form an estimate of the error
variance.
The Residual Variance
Use the residuals to estimate the residual variance. This
variance represents the amount of dispersion about the fitted
model.

$\hat{u}_i = y_i - \hat{\beta}_0 - \hat{\beta}_1 x_i = (\beta_0 + \beta_1 x_i + u_i) - \hat{\beta}_0 - \hat{\beta}_1 x_i = u_i - (\hat{\beta}_0 - \beta_0) - (\hat{\beta}_1 - \beta_1)x_i$

Then, an unbiased estimator of $\sigma^2$ is

$\hat{\sigma}^2 = \frac{1}{n-2}\sum_{i=1}^{n}\hat{u}_i^2 = SSE/(n-2)$

* Note: SSE is the residual or error sum of squares
and (n-2) is the degrees of freedom.
Standard Error of OLS Estimates

$\hat{\sigma} = \sqrt{\hat{\sigma}^2}$: the standard error of the regression

The standard error of $\hat{\beta}_1$ is given by:

$se(\hat{\beta}_1) = \frac{\hat{\sigma}}{\sqrt{\sum_{i=1}^{n}(x_i-\bar{x})^2}}$

The standard error of $\hat{\beta}_0$ is given by:

$se(\hat{\beta}_0) = \hat{\sigma}\,\sqrt{\frac{\sum_{i=1}^{n} x_i^2}{n\sum_{i=1}^{n}(x_i-\bar{x})^2}}$
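Continuing the invented price/quantity sample from the earlier OLS sketch, the error variance and the standard errors could be computed as follows (illustrative only):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])          # same hypothetical sample as before
y = np.array([2400, 1450, 1050, 520, 30])
n = len(x)

xbar, ybar = x.mean(), y.mean()
b1 = ((x - xbar) * (y - ybar)).sum() / ((x - xbar) ** 2).sum()
b0 = ybar - b1 * xbar

resid = y - (b0 + b1 * x)                         # residuals (estimates of the errors)
sse = (resid ** 2).sum()                          # residual (error) sum of squares
sigma2_hat = sse / (n - 2)                        # unbiased estimate of the error variance
sigma_hat = np.sqrt(sigma2_hat)                   # standard error of the regression

se_b1 = sigma_hat / np.sqrt(((x - xbar) ** 2).sum())
se_b0 = sigma_hat * np.sqrt((x ** 2).sum() / (n * ((x - xbar) ** 2).sum()))
print(sigma_hat, se_b0, se_b1)
```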
Gauss-Markov Theorem
Under the following assumptions, the OLS procedure produces unbiased estimates
of the regression model population parameters.

Assumptions:
(1) The model is linear in parameters.
    $y_i = \beta_0 + \beta_1 x_i + u_i$  or  $\ln y_i = c_0 + c_1 \ln x_i + v_i$
(2) $E(u_i) = 0$
(3) $Corr(u_i, u_j) = 0$ for $i \neq j$
(4) $Var(u_i) = \sigma^2$ for all i (homoscedasticity)
(5) The sample outcomes on x ($x_i$, i = 1, 2, ..., n) are not all the
same value.

Under these assumptions, $E(\hat{\beta}_0) = \beta_0$ and $E(\hat{\beta}_1) = \beta_1$.

Also, in the class of linear unbiased estimators, the OLS estimator is best (in the sense of
providing the minimum variance).
OLS Estimators are BLUE! (Best Linear Unbiased Estimators)
Goodness-of-Fit: Some Terminology

Write $y_i = \hat{y}_i + \hat{u}_i$. We then define the following:

$SST = \sum (y_i - \bar{y})^2$  is the total sum of squares (SST)
$SSR = \sum (\hat{y}_i - \bar{y})^2$  is the regression sum of squares (SSR)
$SSE = \sum \hat{u}_i^2$  is the residual or error sum of squares (SSE)

Thus, SST = SSR + SSE.

-Goodness-of-fit: how well does the simple regression line fit the
sample data?
-Calculate $R^2 = SSR/SST = 1 - SSE/SST$

continued . . .
Goodness-of-Fit
-Concept: measures the proportion of the variation in the
dependent variable explained by the regression
equation.
-Formula:

$R^2 = \frac{\text{Explained sample variability}}{\text{Total sample variability}} = \frac{\sum_{i=1}^{n}(\hat{y}_i - \bar{y})^2}{\sum_{i=1}^{n}(y_i - \bar{y})^2} = \frac{SSR}{SST} = 1 - \frac{SSE}{SST}$

-Range: between zero and one.
-Example: $R^2 = 0.78$ means the regression equation explains 78%
of the variation in y.
R² and Adjusted R²

-$R^2 = \frac{\sum_{i=1}^{n}(\hat{y}_i - \bar{y})^2}{\sum_{i=1}^{n}(y_i - \bar{y})^2} = \frac{SSR}{SST} = 1 - \frac{SSE}{SST}$

-Adjusted $R^2$:  $\bar{R}^2 = 1 - \frac{SSE/(n-k-1)}{SST/(n-1)}$, where k is the number of
explanatory variables.

Questions:
(a) Why do we care about the adjusted R²?
(b) Is the adjusted R² always better than R²?
(c) What is the relationship between R² and the adjusted R²?
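Again using the invented simple-regression sample from the earlier sketches (so k = 1 explanatory variable), R² and the adjusted R² could be computed as:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])          # same hypothetical sample as above
y = np.array([2400, 1450, 1050, 520, 30])
n, k = len(x), 1                                  # one explanatory variable

xbar, ybar = x.mean(), y.mean()
b1 = ((x - xbar) * (y - ybar)).sum() / ((x - xbar) ** 2).sum()
b0 = ybar - b1 * xbar
yhat = b0 + b1 * x

sst = ((y - ybar) ** 2).sum()        # total sum of squares
ssr = ((yhat - ybar) ** 2).sum()     # regression sum of squares
sse = ((y - yhat) ** 2).sum()        # residual sum of squares

r2 = 1 - sse / sst
adj_r2 = 1 - (sse / (n - k - 1)) / (sst / (n - 1))
print(r2, adj_r2)
```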
Run SAS programs to demonstrate simple
linear regression
Prego Spaghetti sauce
Keynesian consumption function

What Have We Learned About
Regression Analysis Thus Far?
-Population parameters vs. sample parameters
-Getting a feel for the data
-Ordinary least squares (OLS)
(a) Assumptions
(b) Estimators
(c) Unbiasedness
(d) Interpretation of Estimated Parameters
-Goodness-of-fit
(a) R²
(b) adjusted R²

Coming Attractions
The Multiple Linear Regression Model
Estimation and Inference
Use of SAS to Conduct the Regression Analysis
