
Session 1A: Overview

John Geweke
Bayesian Econometrics and its Applications


Motivation

Motivating examples:
Drug testing and approval
Climate change
Mergers and acquisitions
Oil refining

Important aspects of information bearing on the decision, and the consequences of the decision, are quantitative. The relationship between information and consequences is not deterministic.


Investigators and clients

Investigator: Econometrician who conveys quantitative information so as to facilitate and thereby improve decisions
Client:
Actual decision-maker (known)
Another scientist (known or anonymous)
A reader of a paper (anonymous)

1. ... assumptions.
2. Synthesize, or provide the means to synthesize, different approaches and models.
3. ... to the client.

An example: value at risk

p_t: market price of portfolio, close of day t

Value at risk:
Specify t* = t + s, s fixed
Define v_{t,t*}: P(p_{t*} ≤ p_t − v_{t,t*}) = .05

Return at risk:
y_t = log(1 + r_t) = log(p_t / p_{t−1}), where r_t = (p_t − p_{t−1}) / p_{t−1}

Simple model:
y_t ~ iid N(μ, σ²)
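The value-at-risk definition can be illustrated by simulation under the simple iid normal model. A minimal sketch, assuming hypothetical values for μ, σ, the current price p_t, and the horizon s (none of these numbers appear in the slides):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameter values for the simple model y_t ~ iid N(mu, sigma^2);
# in practice these would come from inference, not assumption.
mu, sigma = 0.0004, 0.01
p_t = 100.0   # market price of the portfolio at the close of day t
s = 5         # horizon: t* = t + s

# Simulate s-day cumulative log returns and the implied closing price p_{t*}.
draws = 100_000
log_ret = rng.normal(mu, sigma, size=(draws, s)).sum(axis=1)
p_star = p_t * np.exp(log_ret)

# Value at risk v_{t,t*} solves P(p_{t*} <= p_t - v_{t,t*}) = .05,
# i.e. v_{t,t*} is p_t minus the 5th percentile of p_{t*}.
v = p_t - np.quantile(p_star, 0.05)
print(f"5-day 5% value at risk: {v:.2f}")
```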


Observables, unobservables and objects of interest

Putting models in context

George Box: All models are wrong; some are useful.
John Geweke: And with inspiration and perspiration they can be improved.
Well-known example: Newtonian physics
Works fine in sending people to the moon.
Doesn't work so fine using an electronic navigation system to drive a few kilometers.


A first pass at models (and notation)

y: a vector of observables
θ: a vector of unobservables (think widely)
Part of the model:
p(y | θ)
This may restrict behavior, but it is typically useless unless you know θ.
Examples: the gravitational constant, and the value at risk simple model


Representing what we know about θ:
p(θ)
Then, formally,

p(y) = ∫ p(θ) p(y | θ) dθ.

This is potentially useful.

Important part of our technical work this week:
How we obtain information about θ
How p(θ) changes in response to new information
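The marginal density p(y) = ∫ p(θ) p(y | θ) dθ can be sampled by composition: draw θ from p(θ), then y from p(y | θ). A sketch with a toy conjugate model chosen for illustration (not from the slides), where the marginal is known in closed form so the simulation can be checked:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model (assumed): theta = mu, with prior mu ~ N(0, 1)
# and observables density y | mu ~ N(mu, 1).
draws = 200_000
mu = rng.normal(0.0, 1.0, draws)   # theta ~ p(theta)
y = rng.normal(mu, 1.0)            # y | theta ~ p(y | theta)

# The pairs (mu, y) are drawn from p(theta) p(y | theta), so y alone is a
# draw from p(y) = ∫ p(theta) p(y | theta) d(theta); here that marginal
# is N(0, 2), which the simulated mean and variance should reproduce.
print(y.mean(), y.var())
```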


Conditioning on a model

We have been implicitly conditioning on a model.
Let's make this explicit:
p(y | θ_A, A)
p(θ_A | A)
θ_A ∈ Θ_A ⊆ R^(k_A)

Different models lead to different conclusions.
This week, we shall see how to avoid conditioning on a particular model.
The overriding principle: use distributions of the things you don't know conditional on the things you do know.


The vector of interest

ω: the vector of interest
Directly affects the consequences of a decision
(We will be more precise in the next session.)

The model must specify
p(ω | y, θ_A, A)
Otherwise, it can't be used for the decision at hand.
Example: ω: 5 × 1, value of the portfolio at the close of the next 5 business days


A complete model A

Three components:
p(θ_A | A)
p(y | θ_A, A)
p(ω | y, θ_A, A)

Implies the joint probability density

p(θ_A, y, ω | A) = p(θ_A | A) p(y | θ_A, A) p(ω | y, θ_A, A).
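A complete model can be simulated forward through its three components, one density at a time. A sketch in the spirit of the value-at-risk example, with a hypothetical prior, sample size, and horizon (all my choices, not the slides'):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical complete model A:
#   p(theta_A | A):           mu ~ N(0, 0.001^2), log(sigma) ~ N(log 0.01, 0.1^2)
#   p(y | theta_A, A):        T iid daily log returns y_t ~ N(mu, sigma^2)
#   p(omega | y, theta_A, A): omega = sum of the next 5 log returns
T, horizon = 250, 5
mu = rng.normal(0.0, 0.001)
sigma = np.exp(rng.normal(np.log(0.01), 0.1))
y = rng.normal(mu, sigma, T)                    # observables
omega = rng.normal(mu, sigma, horizon).sum()    # vector of interest

# One pass through the three densities yields one draw from the joint
# p(theta_A, y, omega | A) = p(theta_A | A) p(y | theta_A, A) p(omega | y, theta_A, A).
print(mu, sigma, omega)
```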

Conditioning and updating

Ex ante and ex post

A critical distinction:
Before we observe the observable, y, it is random.
After we observe the observable, it is fixed.

y: ex ante
y^o: ex post

Implication: the relevant probability density for a decision based on the model A is
p(ω | y^o, A)
This is the single most important principle in Bayesian inference in support of decision making.


Details and notation

Prior density:
p(θ_A | A)
Observables density:
p(y | θ_A, A)
The distribution of the unobservable θ_A, conditional on the observed y^o, has density

p(θ_A | y^o, A) = p(θ_A, y^o | A) / p(y^o | A)
                = p(θ_A | A) p(y^o | θ_A, A) / p(y^o | A)
                ∝ p(θ_A | A) p(y^o | θ_A, A).
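The proportionality p(θ_A | y^o, A) ∝ p(θ_A | A) p(y^o | θ_A, A) has a closed form in conjugate cases. A sketch under an assumed setup not taken from the slides: y_t | μ ~ iid N(μ, s2) with s2 known and prior μ ~ N(m0, v0), so the posterior for μ is again normal, with precisions adding and a precision-weighted mean:

```python
import numpy as np

rng = np.random.default_rng(3)

# Assumed conjugate setup: prior mu ~ N(m0, v0), y_t | mu ~ iid N(mu, s2), s2 known.
m0, v0, s2 = 0.0, 1.0, 0.25
y_obs = rng.normal(0.3, np.sqrt(s2), 50)   # stands in for the observed y^o

# Posterior p(mu | y^o) is N(m_post, v_post):
# posterior precision = prior precision + n / s2,
# posterior mean = precision-weighted average of m0 and the data.
n = len(y_obs)
v_post = 1.0 / (1.0 / v0 + n / s2)
m_post = v_post * (m0 / v0 + y_obs.sum() / s2)
print(m_post, v_post)
```

With fifty observations the data dominate the prior, so m_post lies very close to the sample mean.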


Being explicit about time

For t = 0, . . . , T define
Y_t = (y_1', ..., y_t')'
where Y_0 = {∅}.

Then

p(y | θ_A, A) = ∏_{t=1}^{T} p(y_t | Y_{t−1}, θ_A, A).

This forward recursion is the way we construct dynamic models in economics.
A generalization of time in this context: information
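The forward recursion p(y | θ_A, A) = ∏_t p(y_t | Y_{t−1}, θ_A, A) is also how dynamic likelihoods are evaluated in code: accumulate one conditional density per period. A sketch for a hypothetical Gaussian AR(1), y_t | Y_{t−1} ~ N(φ y_{t−1}, s2), chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)

def log_norm_pdf(x, mean, var):
    # log of the N(mean, var) density evaluated at x
    return -0.5 * (np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

# Hypothetical Gaussian AR(1): y_t | Y_{t-1} ~ N(phi * y_{t-1}, s2), y_0 = 0 given.
phi, s2, T = 0.8, 1.0, 200
y = np.zeros(T + 1)
for t in range(1, T + 1):
    y[t] = phi * y[t - 1] + rng.normal(0.0, np.sqrt(s2))

# Forward recursion: accumulate log p(y_t | Y_{t-1}, theta_A, A) for t = 1, ..., T.
loglik = sum(log_norm_pdf(y[t], phi * y[t - 1], s2) for t in range(1, T + 1))
print(loglik)
```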


Bayesian updating

Suppose Y_t^o = (y_1^o', ..., y_t^o')' is available, but y_{t+1}^o, ..., y_T^o are not. Then

p(θ_A | Y_t^o, A) ∝ p(θ_A | A) p(Y_t^o | θ_A, A)
                  = p(θ_A | A) ∏_{s=1}^{t} p(y_s^o | Y_{s−1}^o, θ_A, A).

When y_{t+1}^o becomes available,

p(θ_A | Y_{t+1}^o, A) ∝ p(θ_A | A) ∏_{s=1}^{t+1} p(y_s^o | Y_{s−1}^o, θ_A, A)
                      ∝ p(θ_A | Y_t^o, A) p(y_{t+1}^o | Y_t^o, θ_A, A).

The concepts of prior (ex ante) and posterior (ex post) are relative, not absolute.
Bayesian updating changes prior into posterior.
Example: August 13, 2013 closing value of the S&P 500 index
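The relativity of prior and posterior can be checked numerically: processing observations one at a time, with each posterior serving as the next prior, reproduces the posterior from the full sample at once. A sketch in an assumed conjugate normal setting (known variance; the model is mine, not the slides'):

```python
import numpy as np

rng = np.random.default_rng(5)

s2 = 1.0                              # known observation variance (assumed)
y_obs = rng.normal(0.5, 1.0, 30)      # stands in for y_1^o, ..., y_t^o

def update(m, v, y):
    # One Bayesian updating step for mu ~ N(m, v), y | mu ~ N(mu, s2):
    # yesterday's posterior becomes today's prior.
    v_new = 1.0 / (1.0 / v + 1.0 / s2)
    m_new = v_new * (m / v + y / s2)
    return m_new, v_new

m, v = 0.0, 10.0                      # initial prior
for y in y_obs:                       # sequential: one observation at a time
    m, v = update(m, v, y)

# Batch posterior from the same prior and the whole sample at once.
v_batch = 1.0 / (1.0 / 10.0 + len(y_obs) / s2)
m_batch = v_batch * y_obs.sum() / s2
print(m, m_batch)
```

The two routes agree to machine precision, which is exactly the content of the recursion above.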


Concluding our first session

The probability density relevant for decision making is

p(ω | y^o, A) = ∫ p(θ_A | y^o, A) p(ω | θ_A, y^o, A) dθ_A.

If you've only seen non-Bayesian econometrics, this is really different.
Likelihood-based non-Bayesian statistics conditions on θ_A and A, and compares the implication p(y | θ_A, A) with y^o. This avoids the need for any statement about the prior density p(θ_A | A), at the cost of conditioning on what is unknown.
Bayesian statistics conditions on y^o, and utilizes the full density p(θ_A, y, ω | A) to build up coherent tools for decision making, but demands specification of p(θ_A | A):
driven by the actual availability of information,
fully integrated with economic dynamic theory
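The concluding integral is typically evaluated by simulation: draw θ_A from p(θ_A | y^o, A), then ω from p(ω | θ_A, y^o, A). A sketch in the same assumed conjugate setting as before, with ω taken to be the next observation (both the model and that choice of ω are mine):

```python
import numpy as np

rng = np.random.default_rng(6)

# Assumed setup: y_t | mu ~ iid N(mu, s2) with s2 = 1 known,
# prior mu ~ N(0, 100), and omega = y_{T+1}, the next (unseen) observation.
s2 = 1.0
y_obs = rng.normal(0.2, 1.0, 100)     # stands in for y^o

# Closed-form posterior for mu.
v_post = 1.0 / (1.0 / 100.0 + len(y_obs) / s2)
m_post = v_post * y_obs.sum() / s2

# Simulate p(omega | y^o, A) = ∫ p(theta_A | y^o, A) p(omega | theta_A, y^o, A) d theta_A
draws = 100_000
mu_draws = rng.normal(m_post, np.sqrt(v_post), draws)    # theta_A | y^o, A
omega_draws = rng.normal(mu_draws, np.sqrt(s2))          # omega | theta_A, y^o, A

# Parameter uncertainty widens the predictive spread beyond sqrt(s2).
print(omega_draws.mean(), omega_draws.std())
```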


1. ...)?
2. Have you used mathematical applications software in econometrics (e.g. R, Stata, SAS, ...)?