We also assume that a random sample can be drawn from the population and that the sample observations are independently and identically distributed (iid).
We drop the fixed-regressors assumption common in undergraduate textbooks because it does not allow us to study cases where one or more explanatory variables may be correlated with the error term; such variables are called endogenous explanatory variables in the parlance of econometrics.
In such cases we may not be able to estimate the structural model, i.e., $E(y \mid w, c)$, directly. But with the help of auxiliary assumptions and algebraic manipulations we may arrive at an estimable model. These assumptions are called identifying assumptions: they help us recover the parameters of the original structural model from the model we actually estimate.
\[
E(y \mid x = c_1) = \sum_{j=1}^{N} y_j \frac{p_{1j}}{p_1}
\]
\[
\vdots
\]
\[
E(y \mid x = c_M) = \sum_{j=1}^{N} y_j \frac{p_{Mj}}{p_M}
\]
Therefore,
\[
E[E(y \mid x)] = \sum_{j=1}^{N} y_j \frac{p_{1j}}{p_1}\, p_1 + \cdots + \sum_{j=1}^{N} y_j \frac{p_{Mj}}{p_M}\, p_M
\]
\[
\begin{aligned}
&= y_1 p_{11} + y_2 p_{12} + \cdots + y_N p_{1N} \\
&\quad + y_1 p_{21} + y_2 p_{22} + \cdots + y_N p_{2N} \\
&\quad \vdots \\
&\quad + y_1 p_{M1} + y_2 p_{M2} + \cdots + y_N p_{MN} \\
&= \sum_{j=1}^{N} y_j\, p_{\cdot j} = E(y),
\end{aligned}
\]
since summing the $j$-th column gives the marginal probability $p_{\cdot j} = P(y = y_j)$.
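The iterated-expectations computation above is easy to check numerically. The sketch below builds a small discrete joint pmf (the probabilities and outcomes are made up purely for illustration) and verifies that averaging the conditional means $E(y \mid x = c_i)$ with weights $p_i$ reproduces $E(y)$:

```python
import numpy as np

# Joint pmf p[i, j] = P(x = c_i, y = y_j); an arbitrary example, assumed for illustration.
p = np.array([[0.10, 0.20, 0.05],
              [0.15, 0.25, 0.25]])
yvals = np.array([1.0, 2.0, 5.0])

p_x = p.sum(axis=1)                        # marginal p_i = P(x = c_i)
cond_mean = (p * yvals).sum(axis=1) / p_x  # E(y | x = c_i) = sum_j y_j p_ij / p_i

lhs = (cond_mean * p_x).sum()              # E[E(y|x)]: conditional means averaged over x
rhs = (p.sum(axis=0) * yvals).sum()        # E(y) computed from the marginal of y
print(lhs, rhs)                            # the two numbers agree
```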
If $x$ is a function of $w$, e.g. $w = (x, z)$, then
\[
E(y \mid x) = E[E(y \mid w) \mid x] = E[E(y \mid x) \mid w].
\]
For a fixed value $x = x_i$, tabulate the joint probabilities $p_{imj} = P(x = x_i, z = z_m, y = y_j)$, with rows $z_1, \ldots, z_M$ and columns $y_1, \ldots, y_N$. Then
\[
E(y \mid z = z_1, x = x_i) = \sum_{j=1}^{N} y_j \frac{p_{i1j}}{p_{i1\cdot}}
\]
\[
\vdots
\]
\[
E(y \mid z = z_M, x = x_i) = \sum_{j=1}^{N} y_j \frac{p_{iMj}}{p_{iM\cdot}}
\]
Therefore,
\[
\begin{aligned}
E[E(y \mid z, x) \mid x = x_i]
&= \sum_{j=1}^{N} y_j \frac{p_{i1j}}{p_{i1\cdot}} \cdot \frac{p_{i1\cdot}}{p_{i\cdot\cdot}} + \cdots + \sum_{j=1}^{N} y_j \frac{p_{iMj}}{p_{iM\cdot}} \cdot \frac{p_{iM\cdot}}{p_{i\cdot\cdot}} \\
&= \sum_{j=1}^{N} y_j \frac{p_{i1j}}{p_{i\cdot\cdot}} + \cdots + \sum_{j=1}^{N} y_j \frac{p_{iMj}}{p_{i\cdot\cdot}} \\
&= y_1 \frac{p_{i\cdot 1}}{p_{i\cdot\cdot}} + \cdots + y_N \frac{p_{i\cdot N}}{p_{i\cdot\cdot}} \\
&= E(y \mid x = x_i).
\end{aligned}
\]
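The same bookkeeping can be verified numerically. In the sketch below the three-way joint pmf is randomly generated (purely for illustration); averaging $E(y \mid x = x_i, z = z_m)$ over $z$ with weights $P(z = z_m \mid x = x_i)$ reproduces $E(y \mid x = x_i)$ for every $i$:

```python
import numpy as np

# Joint pmf p[i, m, j] = P(x = x_i, z = z_m, y = y_j); random values, for illustration only.
rng = np.random.default_rng(0)
p = rng.random((2, 3, 4))
p /= p.sum()                                 # normalize to a valid joint pmf
yvals = np.array([0.0, 1.0, 3.0, 7.0])

p_xz = p.sum(axis=2)                         # p_{im.} = P(x = x_i, z = z_m)
p_x = p.sum(axis=(1, 2))                     # p_{i..} = P(x = x_i)

E_y_given_xz = (p * yvals).sum(axis=2) / p_xz   # E(y | x = x_i, z = z_m)
w = p_xz / p_x[:, None]                         # P(z = z_m | x = x_i)
lhs = (E_y_given_xz * w).sum(axis=1)            # E[E(y|z,x) | x = x_i]
rhs = (p.sum(axis=1) * yvals).sum(axis=1) / p_x  # E(y | x = x_i) computed directly
print(lhs, rhs)                                  # the two vectors agree elementwise
```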
4. If $f(x) \in \mathbb{R}^J$ is a function of $x$ such that $E(y \mid x) = g[f(x)]$ for some scalar function $g(\cdot)$, then $E[y \mid f(x)] = E(y \mid x)$.
Similarly, for a fixed value $v = v_j$, tabulate the joint probabilities $p_{mjk} = P(x = x_m, v = v_j, u = u_k)$, with rows $x_1, \ldots, x_M$ and columns $u_1, \ldots, u_N$. Since $E(u \mid x)$ depends on $x$ only through $v = f(x)$, the conditional expectation is the same for every value of $x$ consistent with $v = v_j$:
\[
E(u \mid x = x_1, v = v_j) = \sum_{k=1}^{N} u_k \frac{p_{1jk}}{p_{1j\cdot}}
\]
\[
\vdots
\]
\[
E(u \mid x = x_M, v = v_j) = \sum_{k=1}^{N} u_k \frac{p_{1jk}}{p_{1j\cdot}}
\]
Therefore,
\[
\begin{aligned}
E[E(u \mid x, v = v_j) \mid v = v_j]
&= \sum_{k=1}^{N} u_k \frac{p_{1jk}}{p_{1j\cdot}} \cdot \frac{1}{M} + \cdots + \sum_{k=1}^{N} u_k \frac{p_{1jk}}{p_{1j\cdot}} \cdot \frac{1}{M} \\
&= \frac{u_1}{M}\left(\frac{p_{1j1}}{p_{1j\cdot}} + \cdots + \frac{p_{1j1}}{p_{1j\cdot}}\right) \quad (M \text{ terms}) \\
&\quad + \cdots \\
&\quad + \frac{u_N}{M}\left(\frac{p_{1jN}}{p_{1j\cdot}} + \cdots + \frac{p_{1jN}}{p_{1j\cdot}}\right) \quad (M \text{ terms}) \\
&= u_1 \frac{p_{1j1}}{p_{1j\cdot}} + \cdots + u_N \frac{p_{1jN}}{p_{1j\cdot}} \\
&= E[u \mid v = v_j] \\
&= E[u \mid x = x_i, v = v_j].
\end{aligned}
\]
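A numerical sketch of this situation: the joint pmf below is constructed so that $u$ is independent of $x$ conditional on $v$ (all distributions are made up for illustration), which makes the conditional mean $E(u \mid x = x_m, v = v_j)$ the same for every $x_m$ and equal to $E(u \mid v = v_j)$:

```python
import numpy as np

rng = np.random.default_rng(1)
M, J, N = 3, 2, 4                             # numbers of x, v, u values (assumed sizes)
q = rng.random(J); q /= q.sum()               # P(v = v_j)
r = rng.random((M, J)); r /= r.sum(axis=0)    # P(x = x_m | v = v_j)
s = rng.random((N, J)); s /= s.sum(axis=0)    # P(u = u_k | v = v_j)
uvals = np.array([-1.0, 0.0, 2.0, 5.0])

# Joint pmf p[m, j, k] with u independent of x given v
p = r[:, :, None] * s.T[None, :, :] * q[None, :, None]

p_xv = p.sum(axis=2)                              # P(x = x_m, v = v_j)
E_u_given_xv = (p * uvals).sum(axis=2) / p_xv     # E(u | x = x_m, v = v_j)
E_u_given_v = (p.sum(axis=0) * uvals).sum(axis=1) / p.sum(axis=(0, 2))  # E(u | v = v_j)
print(E_u_given_xv)   # each column is constant and equals E(u | v = v_j)
```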
Proof:
Now, by property 2,
\[
\begin{aligned}
E[g(x)u] &= E\{E[g(x)u \mid x]\} \\
&= E[g(x)\,E(u \mid x)] \quad \text{by property 1} \\
&= 0, \quad \text{as } E(u \mid x) = 0.
\end{aligned}
\]
For the special cases: if $J = 1$ and $g(x) = 1$, then $E(u) = 0$. Also, if $g(x) = x$,
\[
E[g(x)u] = E[xu] = 0
\;\Rightarrow\; \operatorname{cov}(x, u) = 0
\;\Rightarrow\; \operatorname{cov}(x_j, u) = 0, \quad \forall\, j = 1, \ldots, K.
\]
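The orthogonality result $E[g(x)u] = 0$ is easy to confirm on a discrete example. In the sketch below (the joint pmf of $x$ and $y$ is made up for illustration), $u = y - E(y \mid x)$ is orthogonal to a constant, to $x$ itself, and to an arbitrary nonlinear function of $x$:

```python
import numpy as np

# Joint pmf p[i, j] = P(x = x_i, y = y_j); an assumed example.
p = np.array([[0.1, 0.3],
              [0.2, 0.4]])
xvals = np.array([1.0, 2.0])
yvals = np.array([3.0, 8.0])

E_y_given_x = (p * yvals).sum(axis=1) / p.sum(axis=1)  # mu(x_i) = E(y | x = x_i)

def E_gu(g):
    # E[g(x) u] with u = y - mu(x), summing over the joint pmf
    total = 0.0
    for i, xi in enumerate(xvals):
        for j, yj in enumerate(yvals):
            total += g(xi) * (yj - E_y_given_x[i]) * p[i, j]
    return total

print(E_gu(lambda x: 1.0))   # E(u) = 0
print(E_gu(lambda x: x))     # E(xu) = 0, hence cov(x, u) = 0
print(E_gu(np.exp))          # any function of x is orthogonal to u
```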
7. If $c : \mathbb{R} \to \mathbb{R}$ is a convex function and $E[|y|] < \infty$, then
\[
c[E(y \mid x)] \le E[c(y) \mid x] \quad \text{(conditional Jensen's inequality)}.
\]
For example,
\[
[E(y)]^2 \le E[y^2], \qquad -\log[E(y)] \le E[-\log(y)].
\]
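A quick numerical illustration of the conditional version with the convex function $c(t) = t^2$: on an assumed discrete joint pmf, $[E(y \mid x)]^2 \le E(y^2 \mid x)$ holds at every value of $x$:

```python
import numpy as np

# Conditional Jensen check with c(t) = t**2 on a discrete joint pmf (assumed example).
p = np.array([[0.15, 0.25],
              [0.35, 0.25]])           # p[i, j] = P(x = x_i, y = y_j)
yvals = np.array([-1.0, 4.0])

p_x = p.sum(axis=1)
E_y_given_x = (p * yvals).sum(axis=1) / p_x       # E(y | x = x_i)
E_y2_given_x = (p * yvals**2).sum(axis=1) / p_x   # E(y**2 | x = x_i)

print(E_y_given_x**2 <= E_y2_given_x)   # c[E(y|x)] <= E[c(y)|x] holds for each x_i
```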
where u ≡ y − µ(x).
Properties:
1.
2.
4.