
arXiv:math/9804154v1 [math.LO] 15 Apr 1998
0-1 LAWS
SH550
Saharon Shelah
Institute of Mathematics
The Hebrew University
Jerusalem, Israel
Rutgers University
Department of Mathematics
New Brunswick, NJ USA
University of Wisconsin
Department of Mathematics
Madison, Wisconsin USA
Abstract. We give a framework for dealing with 0-1 laws (for first order logic)
such that expanding by further random structure tends to give us another case of the
framework. From another perspective, we deal with 0-1 laws when the number of
solutions of first order formulas with parameters behaves dichotomically.
We thank Alice Leonhardt for the beautiful typing. First version - Fall 91
Written 94/April/7
Partially supported by the United States-Israel Binational Science Foundation and by the NSF
Grant [Keisler?] NSF Grant 144-EF67
Publ.550
Latest Revision 98/Apr/10
Typeset by AMS-TeX
Annotated Content
0 Introduction
1 The context: probability and model theory
[We define the context: random models M_n for which we may ask if the 0-1
law holds. We have in mind the case of random structures in which some relations
were drawn with constant probability as well as ones decaying with n.
We give a sufficient condition for the satisfaction of the 0-1 laws: semi-nice.]
2 More accurate measure and drawing again
[We restrict ourselves further by fixing, when A <_pr B, the number of copies
of B over a copy of A in M_n, up to some factor. The clearest case is when
this approximate value is |M_n|^α(A,B) (the polynomial case). We also
give the framework for redrawing.]
3 Regaining the context for K^+: reduction
[We deal with K^+; i.e. having a 0-1 context K we expand M_n randomly to
M_n^+. Assuming that in random enough M_n^+ all relevant cases behave close enough
to the expected value, we can see what ≤_a^+, ≤_i^+, ≤_s^+, <_pr^+ should be, so we
define such relations, called for the time being ≤_b^+, ≤_j^+, ≤_t^+, <_qr^+ respectively,
and investigate them, proving they behave similarly enough to what we
hope to prove for K^+. Then we restrict ourselves to the polynomial case
and phrase a sufficient condition for succeeding: ≤_b^+ = ≤_a^+, etc., and for
preservation of semi-niceness. We point out two extreme cases: when the
probabilities are essentially constant and when the probabilities are of the
form |M_n|^α, α ∈ (0, 1)_R. The restriction to the polynomial case is for
simplicity.]
4 Clarifying the probability problem
[We deal more explicitly with what is needed for showing that the 0-1 context
we get by redrawing is again nice enough. We restrict ourselves to what
is needed.]
5 The probability argument
[We replace the content of §4 by a slightly more general one and then prove
the required inequality.]
6 Free amalgamation
[We generalize the edgeless disjoint amalgamation used in earlier works
to an axiomatized free amalgamation.]
7 Variants of niceness
[We consider some variants of semi-nice (and semi-good) and their relations.
In earlier versions §§7, 8 were done inside §§1, 2.]
0 Introduction
Here we continue Shelah Spencer [ShSp 304], Baldwin Shelah [BlSh 528], and (in
the model theory) Shelah [Sh 467] (see earlier [GKLT], [Fa76], Lynch [Ly85], and the
survey [Sp]); we are trying to get results of reasonable generality. In particular,
we want: if our random model M^+ expands a given M, where the class of M's
is in our context, then also the class of M^+'s is in our context. We shall be most
detailed on the nice polynomial case (see definitions in the text).
Let us turn to trying to explain the results.
K is a 0-1 context (see Definition 1.1) giving for each n a probability distribution for
the n-th random model M_n. The paper is self-contained and describes a reasonable
result of drawing (finite) models, where a finite sequence ā has a closure, cℓ^k(ā, M_n),
in the model M_n, presumably random enough relative to k + ℓg(ā), and the number
of elements of cℓ^m(ā, M) has an a priori bound, i.e. depending on ℓg(ā) and m only.
And as far as first order formulas are concerned, this is all there is (see §1). However,
here we allow relations with constant probability; this implies that the limit theories
are not necessarily stable (as proved in [BlSh 528] on the theories from [ShSp 304]
and more), but they are simple ([Sh:93]). Model theoretically our approach allows
non-symmetric relations R(x̄); i.e. ¬(R(..., x_i, ...)_{i=1,...,n} ≡ R(..., x_{σ(i)}, ...)_{i=1,...,n})
for σ a permutation of {1, ..., n}.
An extreme case is when for some m = m_{ℓg(ā)} we have k > m ⇒ cℓ^k(ā, M) = cℓ^m(ā, M)
(when M is random enough relative to ℓg(ā) + k). Still a very nice case is when
for every m and k, for some ℓ, for M random enough and ā ∈ {}^k M we have: if
A_0 = Rang(ā) and A_{i+1} = cℓ^m(A_i, M), then A_ℓ = A_{ℓ+1} can be proved for ℓ = ℓ(k, ℓg(ā)).
But we may have a successor function to begin with, and then this is impossible.
Note that if there are |M_n| uniformly definable sets with √(log|M_n|)
elements and we draw a two-place relation with the probability of each pair being
e.g. 1/2, we get examples of all two-place relations on such definable small sets (up to
isomorphism), not what we want. So it is natural to ask that definable sets φ(x, ā)
with no a priori bound are quite large, say of size in [h^d_{φ,ā}(M_n), h^u_{φ,ā}(M_n)] (with d
for down, u for up). The size of h^d_{φ,ā} is discussed below. Actually, instead of dealing
with such φ's, we deal with B's with A <_s B; for each copy of A in M_n we look
at the number of copies of B above it. In addition, concerning the drawing, we can
weaken the natural demand of independence to: if A ⊆ M_n (|A| small compared
to n), the probability of M^+_n ↾ A = A^+ is in [p^d_{A/≅}[M_n], p^u_{A/≅}[M_n]] even knowing
M^+_n ↾ B for every B ⊆ M_n, B ⊉ A (i.e. the conditional probability). Again, the
probabilities should be such that the phenomena described in the first sentence of
this paragraph do occur.
We can ask for more: the weak independence described in the last paragraph holds even
knowing all other instances of relations. We can also ask for true independence.
We may look at more strict cases:
Case 1 - Polynomial Case.¹
h^d_{A/≅}[M_n] and h^u_{A/≅}[M_n] are ≈ |M_n|^{α(A/≅)} (for
A ∈ K_∞) and p^d_{A/≅}[M_n] and p^u_{A/≅}[M_n] are ≈ |M_n|^{β(A/≅)}
where A ∈ K_∞.
¹ Note: here n^α, α ∈ ℝ, is considered polynomial, 2^{√(log n)} is not; so not only integer powers are
considered polynomial.
In this case the main danger in drawing is that some α(A)'s and β(A)'s are linearly
dependent (looking at ℝ as a vector space over ℚ), similar to [ShSp 304]; i.e. for
having a 0-1 law (or convergence) we need the so-called irrationality condition.
Note that here the resulting framework also falls under Case 1.
The reader can concentrate on Case 1, and we usually do so here.
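To illustrate the irrationality condition, here is a hedged sketch of the standard computation in the style of [ShSp 304] (it is background, not taken verbatim from this paper):

```latex
% Sparse random graph, edge probability n^{-alpha}: a copy of A
% extends to a copy of B with v = |B \ A| new vertices and e new
% edges, so the expected number of extensions over a fixed copy
% of A is of order
\[
  n^{v}\cdot\bigl(n^{-\alpha}\bigr)^{e} \;=\; n^{\,v-\alpha e}.
\]
% If alpha is irrational, then v - alpha e is never 0 for integers
% v, e (not both zero), so each exponent is strictly positive or
% strictly negative; no pair (A, B) sits on the boundary, which is
% exactly the degenerate situation the irrationality condition excludes.
```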
Case 2. The h's are near 1.
Here h^d_{A/≅}[M_n], h^u_{A/≅}[M_n] are constants ∈ (0, 1)_ℝ or at least are in
(1/h(M_n), 1 − 1/h(M_n)), where h goes to infinity more slowly than any n^ε (where
ε ∈ ℝ^{>0}).
Here we may allow the p^d_{A/≅}[M_n], p^u_{A/≅}[M_n] more freedom - just to be large
enough (and/or smaller than 1) compared to log(|M_n|). Here the irrationality
condition includes requiring that the p^d_{A/≅}[M_n], p^u_{A/≅}[M_n] are close enough so
that we get definite answers.
Case 3. The p's are near 1.
Here p^d_{A/≅}[M_n], p^u_{A/≅}[M_n] are constants ∈ (0, 1)_ℝ or at least are in
(1/h(M_n), 1 − 1/h(M_n)), where h goes to infinity more slowly than any n^ε (where
ε ∈ ℝ^{>0}).
Here we may allow the h^d_{A/≅}[M_n], h^u_{A/≅}[M_n] more freedom - just that, as a fraction of
|M_n|^{|A|}, they are far enough from 0 and 1 compared to log(|M_n|). The irrationality
condition will include requirements that the h^d_{A/≅}[M_n], h^u_{A/≅}[M_n] are close enough
to get definite answers.
To carry out the probability argument (and get that the resulting class K^+ is also in
our framework), we have to deal with the following problem.
For M_n large enough it is natural to define a tree T (i.e. a set T and a partial
order ≤ = ≤_T such that for x ∈ T the set {y : y <_T x} is linearly ordered by <_T and has
lev(x) ∈ ℕ elements). Let T_ℓ = {x ∈ T : lev(x) = ℓ}; assume T has m levels,
and for ℓ < m and η ∈ T_ℓ, the number k_η = |Suc_T(η)| is in [k^d_ℓ, k^u_ℓ]. Suppose
further we have A_0 <_pr A_1 <_pr ··· <_pr A_m (see Definition 1.3(2)(d); A_ℓ is a small
model in the original vocabulary, increasing with ℓ) and we have ⟨f_η : η ∈ T⟩
such that f_η embeds A_{ℓg(η)} into M_n, and ν <_T η ∈ T ⇒ f_ν ⊆ f_η. Assume further
that A^+_i is an expansion of A_i to the larger vocabulary, increasing with i, so formally
A^+_i ⊆ A^+_{i+1}, A^+_i ↾ τ = A_i.
Lastly, assume that for each c ∈ M_n the set |{η ∈ Suc_T(ν) : c ∈ Rang(f_η) \ Rang(f_ν)}|
has an a priori bound (i.e. not depending on n). Under the condition that f_{Min(T)}
embeds A^+_0 into M^+_n, what is the number of η ∈ T_m for which f_η embeds A^+_m into
M^+_n (at least approximately)?
We can easily give bounds on the expected value and, working harder, on the variance.
But we want to show that the probability of large deviation from the
expected value is < |M_n|^{−ε} for every ε ∈ ℝ^+. This is enough to really ignore
those cases (not just to say they occur very rarely, but to show that, for M_n random
enough, they do not occur at all). This is easy if m = 1, as then we have, essentially,
many independent events.
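For the m = 1 case, the following back-of-the-envelope bound indicates why such deviation estimates are available; this is a sketch under the simplifying assumption of full independence, and the constant c(δ) is generic:

```latex
% X = number of successors of the root whose embedding extends to
% the expanded vocabulary; under full independence X is a sum of K
% indicator variables, each of probability p, so a Chernoff-type
% bound gives, for fixed delta in (0, 1):
\[
  \operatorname{Prob}\bigl(\,|X-Kp| > \delta K p\,\bigr) \;\le\; 2\,e^{-c(\delta)\,Kp}.
\]
% When Kp grows like a positive power of |M_n|, the right-hand side
% is < |M_n|^{-epsilon} for every epsilon, which is the kind of bound
% needed; the work of Sections 4, 5 is to recover such a bound
% without full independence.
```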
Note the following obstruction: the independence is violated, as possibly ν_1 ≠ ν_2,
η_1 ∈ Suc_T(ν_1), η_2 ∈ Suc_T(ν_2) and Rang(f_{η_1}) \ Rang(f_{ν_1}), Rang(f_{η_2}) \ Rang(f_{ν_2})
are not disjoint. A particularly disturbing case is when for some x ≠ y from
A_{ℓg(η_1)}, A_{ℓg(η_2)} respectively we have f_{η_1}(x) = f_{η_2}(y). However, this chaotic obstruction
can be overcome by shrinking the tree somewhat, so we can get:
⊛ f_{η_1}(a_1) = f_{η_2}(a_2) ⇒ a_1 = a_2.
Still we have two extreme cases which are quite different:
⊛_1 for each a_1 there is ℓ such that a_1 ∈ A_{ℓ+1} and for every a_2, η_1, η_2:
f_{η_1}(a_1) = f_{η_2}(a_2) ⇒ (a_1 = a_2 ∈ A_{ℓ+1} & η_1 ↾ (ℓ + 1) = η_2 ↾ (ℓ + 1))
⊛_2 T = ⋃_{ℓ≤m} ∏_{i<ℓ} m_i and: if a ∈ A_{ℓ+1} \ A_ℓ & η_1, η_2 ∈ T_{ℓ+1} then:
η_1(ℓ) = η_2(ℓ) ⇒ f_{η_1}(a) = f_{η_2}(a).
What we need are good upper bounds on the deviation, uniformly under the
circumstances (including, in particular, those two cases).
We deal with probability only as required: anything with probability
< |M_n|^{−c} (i.e. for every c ∈ ℝ^+, for every random enough M_n) can be discarded;
also we need to eliminate the extreme cases (= large deviation), but we do not care
about the exact distribution in the middle.
Having explained the probability side problems, let us turn to the model theoretic
ones. First, we want to include cases like the successor function, so that possibly
there is no set A ⊆ M_n satisfying A ∉ {∅, M_n} with cℓ^k(A, M_n) = A. Hence
we have to suffice ourselves with semi-niceness (similarly to [Sh 467]) rather than
niceness (as is done in [ShSp 304] and more generally in [BlSh 528]).
Secondly, the free amalgamation is no longer the disjoint amalgamation with no
new edges (or new cases of the relations) as in [ShSp 304], [BlSh 528], [Sh 467].
This change enables us to deal, e.g., with the random graph structure with two
kinds of edges (= two 2-place relations): red with probability n^{−α} (0 < α < 1) and
green with edge probability 1/2. But, as in [ShSp 304], [BlSh 528] (and unlike
[Sh 467]), cℓ^k(A, M_n) has, for random enough M_n, a bound depending on |A|, k
only. We could generalize our framework so that [Sh 467] is included, but decided
not to do it here. But there is a price: the definition of nice (see 1.7) becomes
more complicated; having both is dealt with in [Sh 637].
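To see why the green (constant-probability) relation forces this change, here is a hedged computation (background in the style of [ShSp 304]; v is the number of new vertices and e_red, e_green the numbers of new edges of each kind):

```latex
% Two edge colours: red drawn with probability n^{-alpha}, green
% with probability 1/2. The expected number of extensions of a
% copy of A to a copy of B is of order
\[
  n^{v}\,\bigl(n^{-\alpha}\bigr)^{e_{\mathrm{red}}}\,(1/2)^{e_{\mathrm{green}}}
  \;=\; 2^{-e_{\mathrm{green}}}\, n^{\,v-\alpha e_{\mathrm{red}}}.
\]
% Green edges change only the constant, not the polynomial growth
% rate, so a reasonable "free" amalgam may well contain new green
% edges; edgeless disjoint amalgamation is too restrictive here.
```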
An interesting phenomenon is the dichotomy: either the limit theory is very simple,
analyzable, or it is complicated (and related to [Sh:c]). Lately Tyszkiewicz proved a
related theorem for monadic theories of the classes of groups; I think the right
starting point should be the parallel infinite problem with no probability: what
can be the (monadic) infinitary theory of models of a first order T monadically
expanded; see Baldwin Shelah [BlSh 156], survey [Baldwin, handbook of model
theoretic logics]. I do not know enough to conjecture the dividing line, but can
note that all the complicated limit theories T_∞ are "complex", which we define
as: there are formulas φ_+(x̄, ȳ, z̄, t̄), φ_×(x̄, ȳ, z̄, t̄) such that in T_∞ for some
t̄, φ_+(x̄, ȳ, z̄, t̄), φ_×(x̄, ȳ, z̄, t̄) define a model which satisfies the axioms of PA including
induction but adding "there is a last element", "there are ≥ n elements" (we can
add the induction scheme for formulas of quantifier depth ≤ n). See more in Baldwin
[B][nearly model complete and 0-1]. See Baldwin Shelah [Sh 639].
We can deal with the example of directed graphs with edge probability n^{−α}, both
directions having the same probability.

Baldwin has asked me to explicate here how the theory of ([n], S, R), where S is the
successor relation and R a random graph with edge probability n^{−α}, i.e. how
the fact that every element has an immediate successor, is reflected in the present
treatment (that is, how it got into the axiomatization we get; this is unlike [BlSh 528]).
We can consider three cases: with successor being modulo n, with the usual successor,
and (e.g.) with successor being (i, i + 1) for i not divisible by the square root of n
(rounded). The limit models are: in the first case it has copies of ℤ, in the second
case one copy of ω, one copy of ω* and copies of ℤ, and in the third case many
copies of ω, many copies of ω* and many copies of ℤ. Now in the semi-nice case
(see Definition 1.9) we should look at the set of semi-(k, r)-good quadruples; now
the pairs (A, A^+) which appear in such quadruples (so A ⊆ A^+) give us the
distinctions.
For x ∈ A let
n^+(x, A) = Min{ℓ : there are no x_0, ..., x_ℓ ∈ A such that x = x_0 and
A ⊨ S(x_i, x_{i+1}) for i < ℓ}
n^−(x, A) = Min{ℓ : there are no x_0, ..., x_ℓ ∈ A such that x = x_ℓ and
A ⊨ S(x_i, x_{i+1}) for i < ℓ}.
Let m(k, r) be large enough.
Now in the first case, in any such pair for every x ∈ A we have n^+(x, A^+) ≥
m(k, r) and n^−(x, A^+) ≥ m(k, r); in the third case for no x ∈ A do we have n^+(x, A^+) <
m(k, r) & n^−(x, A^+) < m(k, r); and in the second case there may be at most one
x ∈ A with n^+(x, A) < m(k, r) and at most one x ∈ A with n^−(x, A^+) < m(k, r)
(but they are not the same).
In the strict polynomial (or even less) case we can also deal with properties suggested
by Lynch [Ly90]. He asks for the results in Shelah Spencer [ShSp 304] for
more accurate numerical (asymptotic) results; particularly in the case the probability
is zero he proved:
(∗)_1 for every first order sentence ψ such that Prob(M_n ⊨ ψ) converges to zero,
one of the following occurs:
(α) Prob(M_n ⊨ ψ) = c|M_n|^{−α} + O(n^{−β}) for some c, α, β ∈ ℝ^+,
(β) Prob(M_n ⊨ ψ) = O(|M_n|^{−β}) for every β ∈ ℝ^+.
Confirming his conjecture, in [Sh 551] we prove:
(∗)^+ Prob(M_n ⊨ ψ) = O(e^{−|M_n|^ε}) for every ε ∈ ℝ^+.
We shall explicate this elsewhere.
The starting point of this research was a question of Lynch, communicated to me
by Spencer in Fall 91, on whether we can do [ShSp 304] starting with a successor
function; but I thought the real problem was to have a general framework, and I
lectured on it in Rutgers, Fall 95; see [BlSh 528].
See more in [Sh 467], [Sh 581] and [Baldwin, near model complete and 0-1 laws].
I thank Shmuel Lefschas and John Baldwin for comments and corrections. Earlier
versions used ⊛ and the version of niceness from the beginning of the paper.
0.1 Notation. ℝ is the set of reals, ℝ^{>0} = {α ∈ ℝ : α > 0}, ℝ^{≥0} = ℝ^{>0} ∪ {0}, μ_n the n-th
probability (= distribution). Here n is the index for M_n, which is always used for
the model chosen μ_n-randomly (we do not assume M_n necessarily has exactly n
elements). ℕ is the set of natural numbers. We use k, ℓ, m, n, i, j, r, s for natural
numbers, and we use α, β, γ for reals, ε, ζ for positive reals.
Let A, B, C, D denote finite models, M, N models, and f, g denote embeddings.
Let h denote a function with range ⊆ ℝ.
0.2 Notation. 1) We use τ for vocabularies, consisting of predicates only (for
simplicity); n(R) is the number of places of R (= the arity of R).
2) In the general treatment we can demand that each R ∈ τ will be interpreted as an
irreflexive relation, i.e. ā ∈ R^M ⇒ ā is without repetition (by this demand we do
not lose any generality, as we can add suitable predicates). We call such τ irreflexive,
but we do not require symmetry (so directed graphs are allowed).
We use A, B, C, D for models which are finite, if not explicitly said otherwise, and
M, N for models; we notationally do not strictly distinguish between a model and its
universe. Those are τ-models, and A^+, ..., N^+ are τ^+-models if not explicitly said
otherwise. We use a, b, c, d, e for elements of models; a bar signifies a finite sequence.
3) We call τ locally finite if for every n the set {R ∈ τ : n(R) = n} is finite. Note:
the number of τ-models with a given finite universe A is finite when τ is finite or locally
finite irreflexive.
4) Let f : A → B mean that both are models with the same vocabulary and f is an
embedding, i.e. f is one to one and for any predicate R (in τ, the vocabulary
of A and B) which is k-place we have: a_1, ..., a_k ∈ A ⇒ [⟨a_1, ..., a_k⟩ ∈ R^A ⟺
⟨f(a_1), ..., f(a_k)⟩ ∈ R^B]. Let id_A be the identity map on A. Let A ⊆ B mean
id_A : A → B, and we say: A is a submodel of B.
5) If A, B are submodels of C then A ∪ B means C ↾ (A ∪ B).
6) We say A, C are freely amalgamated over B in M if B ⊆ A ⊆ M, B ⊆ C ⊆ M,
A ∩ C = B and: if R is a predicate of M, for no ā ∈ R^M do we have Rang(ā) ⊈ B ∪ A,
Rang(ā) ⊈ B ∪ C and Rang(ā) ⊆ B ∪ A ∪ C; we also say A, C are free over B inside
M. (But ⊛-free means according to the definition of ⊛; this generalization is
done only in §§6, 7.)

0.3 Notation. 1) If f is an embedding of A into M and A ⊆ B, we say
ḡ = ⟨g_i : i = 1, ..., k⟩ are disjoint extensions of f (for (A, B)) if:
(a) g_i is an extension of f to an embedding of B into M
(b) 1 ≤ i < j ≤ k ⇒ Rang(g_i) ∩ Rang(g_j) = Rang(f).
We say ḡ is a disjoint k-sequence of extensions of f if the above holds; we also
say: ḡ is a sequence disjoint over A, ḡ of length k.
2) ex(f, A, B, M), where A ⊆ B and f is an embedding of A into M, is the set of extensions
g of f to an embedding of B into M.
nu(f, A, B, M) is the number of elements of ex(f, A, B, M).
3) Let ℕ and also ω denote the set of natural numbers.
1 The Context: probability and model theory
We start by defining a 0-1 context (in Definition 1.1), defining the derived A ≤_i B
(B algebraic over A), A ≤_s B (dual), cℓ^k(A, M) (in Definitions 1.3, 1.4) and pointing
out the basic properties (in 1.6). We define "K is weakly nice" and state its main
property (see 1.7, 1.8). Then we define our main version of nice (Definition 1.9,
semi-nice), investigate some variants (1.10, 1.11) and define the 0-1 laws and
variants (1.13, 1.14, 1.15). We prove that K being semi-nice implies elimination of
quantifiers and phrase what this gives for 0-1 laws (in 1.16, see Definitions 1.13,
1.14).
1.1 Definition. 1) K is a 0-1 context if it consists of τ, K_τ and
K̄ = ⟨K_n, μ_n : n ∈ ℕ⟩ satisfying (a)-(c) below, where:
(a) τ is a vocabulary consisting of predicates only (for simplicity, irreflexive, see
0.2), finite or at least locally finite.
(b) K_τ is a family of finite τ-models, closed under isomorphisms and submodels; we
denote members by A, B, C, D (sometimes M, N), and for notational simplicity the
empty model belongs to K_τ.
(c) K_n ⊆ K_τ, μ_n is a probability measure on K_n, M_n varies on K_n; for notational
simplicity assume n_1 ≠ n_2 ⇒ K_{n_1} ∩ K_{n_2} = ∅ and M ∈ K_n ⇒ |M| > 1; also we
sometimes forget the possibility M_{n_1} ∈ K_{n_1} & M_{n_2} ∈ K_{n_2} & n_1 ≠ n_2 &
|M_{n_1}| = |M_{n_2}|, but no confusion should arise.
1.2 Definition. Let "every random enough M_n satisfies ψ" mean
1 = Lim inf_n Prob_{μ_n}(M_n ⊨ ψ). Similarly "almost surely M_n ⊨ ψ" and "a.s. M_n
satisfies ψ".
1.3 Definition. For K as in 1.1(1) we define:
1) K_∞ = {A ∈ K_τ : 0 < Lim sup_n Prob_{μ_n}(A is embeddable into M_n)}.
2) We define some two place relations on K_τ (mostly on K_∞):
(a) A ≤ B if A ⊆ B (being submodels) and B ∈ K_∞ (hence A ∈ K_∞, see
1.6(1))
(b) A ≤_i B if A ≤ B ∈ K_∞ and for some m ∈ ℕ we have 1 = Lim_n Prob_{μ_n}(for
every embedding f of A into M_n, there are at most m extensions of f to
an embedding of B into M_n)
[the intention is: B is algebraic over A]
(c) A ≤_s B if A ≤ B ∈ K_∞ and for no B′ do we have A <_i B′ ≤ B
[the intention is: strongly, in some sense, A is very closed inside B]
(d) A <_pr B if A, B ∈ K_∞ and A <_s B and for no C do we have A <_s C <_s B
(e) Ā is a decomposition of A ≤_s B if Ā = ⟨A_ℓ : ℓ ≤ k⟩ and
A = A_0 <_pr A_1 <_pr ··· <_pr A_k = B
[the intention of pr is primitive: it cannot be decomposed]
(f) A ≤_a B means A ∈ K_∞, B ∈ K_∞, A ⊆ B and: A = B or for some m ∈ ℕ we
have 1 = Lim_n Prob_{μ_n}(for every embedding f of A into M_n, there is no
sequence ⟨g_ℓ : ℓ < m⟩ of embeddings of B into M_n, pairwise disjoint over
A)
[the intention is: B is algebraic in a weak sense over A; more accurately, A
is not strongly a submodel of B].
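For orientation, in the basic example of [ShSp 304] (random graph with edge probability n^{−α}, α ∈ (0, 1) irrational) these relations admit a numerical reading; the following is a hedged restatement of that standard picture, not a claim of the present paper. Write v(B′/A′) for the number of vertices of B′ outside A′ and e(B′/A′) for the number of edges of B′ not inside A′:

```latex
% delta(B'/A') = v(B'/A') - alpha * e(B'/A'); the expected number
% of extensions of a fixed copy of A' to a copy of B' is of order
% n^{delta(B'/A')}. Then, roughly:
\[
  A \le_i B \ \text{iff}\ \delta(B/B') < 0 \ \text{for every } A \subseteq B' \subsetneq B
  \quad(\text{``rigid'': boundedly many copies of } B \text{ over } A),
\]
\[
  A \le_s B \ \text{iff}\ \delta(B'/A) > 0 \ \text{for every } A \subsetneq B' \subseteq B
  \quad(\text{``safe'': many pairwise disjoint copies}).
\]
```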
1.4 Definition. 1) T_∞ = {ψ : ψ a first order sentence such that
1 = Lim_n Prob_{μ_n}(M_n ⊨ ψ)}.
2) K_lim = {M : M a model of T_∞}.
3) If A ⊆ M ∈ K_τ and k ∈ ℕ let
cℓ^k(A, M) = ⋃{B : B ∩ A ≤_i B ⊆ M and |B| ≤ k}.
4) For A ⊆ M ∈ K_τ, A ∈ K_∞ and k, m ∈ ℕ we define cℓ^{k,m}(A, M) by induction on
m as follows: cℓ^{k,0}(A, M) = A and cℓ^{k,m+1}(A, M) = cℓ^k(cℓ^{k,m}(A, M), M). Also let
cℓ^{k,∞}(A, M) = ⋃_m cℓ^{k,m}(A, M) and cℓ^∞(A, M) = ⋃_k cℓ^k(A, M).
5) For any (A, B, D) satisfying A ⊆ D ∈ K_∞, B ⊆ D, and k ∈ ℕ and an embedding
f : A → M let
ex^k(f, A, B, D, M) = {g : g is an embedding of D into M extending f
such that cℓ^k(g(B), M) = g(cℓ^k(B, D))},
nu^k(f, A, B, D, M) = |ex^k(f, A, B, D, M)|
(compare with 0.3(2)).
1.5 Remark. We have chosen the present definition of cℓ^k so as to have more cases
where iterating the operation leads shortly to a fixed point, and to be compatible with
[Sh 467], where the possibility cℓ^k(A, M) ∉ K_∞ exists.
1.6 Claim. 1) K_∞ ⊆ K_τ is closed under isomorphisms and submodels.
[Why? Reread Definition 1.3(1).]
2) ≤_i is a partial order on K_∞; if A ⊆ B ⊆ C and A ≤_i C, then B ≤_i C. Also A ≤_i B
iff B ∈ K_∞, A ⊆ B and for some m ∈ ℕ, for no D ∈ K_∞ with A ⊆ D is there a
sequence of m (distinct) embeddings of B into D over A.
[Why? Reread Definition 1.3(2)(b).]
3) If A_1 ≤_i A_2 ⊆ C ∈ K_∞ and B ⊆ C ∈ K_∞ and A_1 ⊆ B, then B ≤_i B ∪ A_2.
[Why? See Definition 1.3(2)(b).]
4) If A ⊆ C are in K_∞ then for one and only one B ∈ K_∞ we have A ≤_i B ≤_s C.
[Why? Let B ⊆ C be maximal such that A ≤_i B; it exists as A satisfies the demand
and C is finite; now B ≤_s C by 1.6(2) + Definition 1.3(2)(c). Hence at least one
B exists. So suppose A ≤_i B_ℓ ≤_s C for ℓ = 1, 2 and B_1 ≠ B_2, so without loss of
generality B_2 \ B_1 ≠ ∅. Now by 1.6(3), B_1 ≤_i B_1 ∪ B_2, hence B_1 <_i B_1 ∪ B_2 ⊆ C,
but this contradicts B_1 ≤_s C (see Definition 1.3(2)(c)).]
5) If A <_s B then there is a decomposition Ā of A <_s B.
[Why? See Definition 1.3(2)(e) and Definition 1.3(2)(d), remembering B is finite.]
6) If A ≤_s B and C ⊆ B then C ∩ A ≤_s C; (note also
A ≤_s C & A ⊆ B ⊆ C ⇒ A ≤_s B).
[Why? Otherwise for some C′ we have C ∩ A <_i C′ ⊆ C; then by 1.6(3) we have
A <_i A ∪ C′, a contradiction to A ≤_s B. The second phrase holds by Definition
1.3(2)(c).]
7) The relations ≤_i, ≤_s, ≤_pr, ≤_a are preserved by isomorphisms.
[Why? Read Definition 1.3(2).]
8) If A <_pr B then for every b ∈ B \ A we have (A ∪ {b}) ≤_i B; also A < C ⊆ B ⇒
C ≤_i B.
[Why? If not, then by 1.6(4) for some C, (A ∪ {b}) ≤_i C <_s B, but
A <_pr B ⇒ A <_s B ⇒ A <_s C (by Definition 1.3(2)(b) and 1.6(6) respectively), so
A <_s C <_s B, contradicting A <_pr B. The second phrase is proved similarly.]
9) A ≤_i B iff A ⊆ B and for every A′ with A ⊆ A′ < B we have A′ <_a B.
[Why? Trivially A <_i B ⇒ A <_a B; one implication holds by 1.6(2) (second
phrase) + Definition 1.3(2)(b),(f); the other by the Δ-system lemma and
the definitions.]
10) ≤_s is a partial order on K_∞.
[Why? By the definition A ≤_s A; as A ≤_s B ⇒ A ⊆ B, clearly A ≤_s B ≤_s A ⇒
A = B, so the problem is transitivity. So assume A ≤_s B ≤_s C but ¬(A ≤_s C), and
we shall derive a contradiction. As ¬(A ≤_s C), by Definition 1.3(2)(c) there is B_1
such that A <_i B_1 ⊆ C. If B_1 ⊆ B we get a contradiction to A ≤_s B by Definition
1.3(2)(c). If B_1 ⊈ B, then by 1.6(3) we get B ≤_i (B_1 ∪ B); but as B_1 ⊈ B we
have B <_i (B_1 ∪ B), and clearly (B_1 ∪ B) ⊆ C, so we get a contradiction to B ≤_s C
by Definition 1.3(2)(c). So in any case we have gotten the desired contradiction.]
11) A <_s B if for every m, 0 < Lim sup_n Prob_{μ_n}(for some embedding f of A into
M_n there are m disjoint extensions g : B → M_n of the embedding f). If A <_pr B
then the inverse statement holds.
[Why? Read Definition 1.3(2)(b),(c); see details in the proof of 1.8(1).]
12) If A <_pr B ⊆ D and A ⊆ C ⊆ D, then C <_pr B ∪ C or C ≤_i B ∪ C.
[Why? By 1.6(4) for some C_1 we have C ≤_i C_1 ≤_s B ∪ C. If C_1 ∩ B ≠ A, then
A < C_1 ∩ B ⊆ B, hence by 1.6(8) we have C_1 ∩ B ≤_i B, so by 1.6(3) we have
C_1 ≤_i B ∪ C_1 and, of course, B ∪ C_1 = B ∪ C, so C ≤_i C_1 ≤_i C ∪ B, so by 1.6(2)
we have C ≤_i C ∪ B, one of the possible conclusions. So assume B ∩ C_1 = A, hence
C = C_1, so C ≤_s B ∪ C; now if C = B ∪ C clearly C ≤_i B ∪ C. Hence we assume
C ≠ B ∪ C, so C <_s B ∪ C, and if C <_pr B ∪ C we get one of the possible conclusions
as above. So assume ¬(C <_pr B ∪ C); necessarily for some C_2, C <_s C_2 <_s B ∪ C.
By 1.6(6) we have C ∩ B <_s C_2 ∩ B <_s B, so as C ∩ B = C_1 ∩ B = A, clearly
A <_s C_2 ∩ B <_s B, hence we get a contradiction, finishing the proof.]
13) For every ℓ, k ∈ ℕ there is m(k, ℓ) ∈ ℕ such that:
if A ⊆ B ∈ K_∞ and |A| ≤ ℓ then cℓ^k(A, B) has ≤ m(k, ℓ) elements.
[Why? Read the definitions, noting that (even if τ is only locally finite) the number of
pairs (C_1, C_2) with C_1 ⊆ A, C_1 ⊆ C_2, |C_2| ≤ k, up to isomorphism over C_1, has a bound
depending only on C_1.]
14) (a) cℓ^k(A, M) ⊆ M and
cℓ^k(A, M) ∈ K_∞ ⇒ A ≤_i cℓ^k(A, M)
(b) A ⊆ B ⊆ M ⇒ cℓ^k(A, M) ⊆ cℓ^k(B, M)
(c) if cℓ^k(A, M) ⊆ N ⊆ M then cℓ^k(A, N) = cℓ^k(A, M)
(d) if A ⊆ N ⊆ M then cℓ^k(A, N) ⊆ cℓ^k(A, M)
(e) for k ≤ ℓ we have cℓ^k(A, M) ⊆ cℓ^ℓ(A, M)
(f) for every A ∈ K_∞ and k, for every random enough M_n and
embedding f : A → M_n, we have
M_n ↾ cℓ^k(f(A), M_n) ∈ K_∞
(g) for every k, m, for some ℓ, for every A ⊆ M ∈ K_τ we have
cℓ^m(cℓ^k(A, M), M) ⊆ cℓ^ℓ(A, M).
[Why? Just check.]
15) T_∞ is a consistent (first order) theory which has infinite models if
0 < lim sup_n Prob_{μ_n}(|M_n| > k) for every k.
Remark. Note that in 1.6(11) we do not necessarily have "iff". Why? E.g. let
τ = {P_1, P_2} with P_1, P_2 unary predicates, B = {b_1, b_2} ∈ K_∞, A = ∅, P^B_ℓ = {b_ℓ},
and for every M ∈ K_τ, |P^M_1| = 0 ∨ |P^M_2| = 0, and M ∈ K_n & n even ⇒ |P^M_1| ≥ n
and M ∈ K_n & n odd ⇒ |P^M_2| ≥ n.
1.7 Definition. For K as in Definition 1.1, we say K is weakly nice if we have:
(∗)_1 for every A <_pr B (from K_∞) and m ∈ ℕ we have
1 = Lim_n Prob_{μ_n}(for every embedding f of A into M_n there are m
disjoint extensions of f to embeddings of B into M_n).
1.8 Claim. Assume K is weakly nice.
1) If A < B ∈ K_∞, then the following are equivalent:
(a) A <_s B
(b) for every m < ω,
1 = Lim_n Prob_{μ_n}(for every embedding f of A into M_n there are
embeddings g_ℓ : B → M_n extending f for
ℓ < m such that ⟨g_ℓ : ℓ < m⟩ is disjoint over A).
(For (b) ⇒ (a), "K weakly nice" is not needed.)
Proof. 1) The direction (a) ⇒ (b) holds as K is weakly nice; more elaborately, we
prove (b) assuming (a) by induction on m, where ⟨A_ℓ : ℓ ≤ m⟩ is a decomposition of
(A, B) (which exists by 1.6(5)); for m = 0 this is trivial, and for m + 1 it is straight
combinatorics. Next we prove (b) ⇒ (a), even without using "K weakly nice". So
assume (b) & ¬(a), and we shall get a contradiction. As ¬(a), by 1.6(4) for some
A_1 we have A <_i A_1 ⊆ B, hence by Definition 1.3(2)(b) for some m^∗ ∈ ℕ we have:
(∗)_1 1 = Lim_n Prob_{μ_n}(for every embedding f of A into M_n there are
at most m^∗ extensions of f to an embedding of
A_1 into M_n).
As clause (b) holds, the limit there is 1 also for the m^∗ we have just chosen. The
contradiction is immediate. □_{1.8}
1.9 Definition. 1) We say (A^+, A, B, D) is a semi-(k, r)-good quadruple if:
(∗)^{k,r}_{A^+,A,B,D}: A ⊆ A^+ ∈ K_∞ and A ⊆ D, B ⊆ D ∈ K_∞, and for every random enough
M_n we have:
(⊗) for every embedding f : A^+ → M_n satisfying
cℓ^r(f(A), M_n) ⊆ f(A^+) there is an extension g of f ↾ A, embedding
D into M_n, such that
cℓ^k(g(B), M_n) = g(cℓ^k(B, D)).
If r = k we may write k instead of (k, r).
2) We say that K is semi-nice if it is weakly nice and for every A ∈ K_∞ and k, for
some ℓ, m, r, we have:
(⊗) for every random enough M_n, embedding f : A → M_n and b ∈ M_n, we
can find A_0, A^+, B, D such that:
[note that we could allow finitely many possibilities for (ℓ, m, r); it does not
matter]
(α) f(A) ⊆ A_0 ⊆ A^+ ⊆ cℓ^m(f(A), M_n)
(β) f(A) ∪ {b} ⊆ B ⊆ D ⊆ M_n
(γ) |D| ≤ ℓ
(δ) (A^+, A_0, B, D) is semi-(k, r)-good
(ε) cℓ^r(A_0, M_n) ⊆ A^+
(ζ) cℓ^k(B, M_n) ⊆ D.
1.10 Claim. 1) Assume
(a) (A^+, A, B, D) is semi-(k, r)-good
(b) A ⊆ A^+_1 ≤_s A^+
(c) B_1 ⊆ B, A ⊆ D_1 ⊆ D, B_1 ⊆ D_1
(d) cℓ^r(A, A^+) ⊆ A^+_1 (follows from (b))
(e) cℓ^k(B_1, D) ⊆ D_1.
Then (A^+_1, A, B_1, D_1) is semi-(k, r)-good.
2) If (a) of part (1) holds and k′ ≤ k, r′ ≥ r and A^+ ⊆ A^∗ satisfies cℓ^r(A, A^∗) ⊆
A^+, then (A^∗, A, B, D) is semi-(k′, r′)-good. (We can combine parts (1) and (2).)
3) If (a) of part (1) holds and r, r_1, m satisfy the statement (⊛) below and A_1 ⊆ A ⊆
A^+ ⊆ A^+_1 ∈ K_∞, A ⊆ cℓ^m(A_1, A^+_1) and cℓ^r(A, A^+_1) ⊆ A^+, then (A^+_1, A_1, B, D) is
semi-(k, r_1)-good, where
(⊛) = (⊛)^{r_1}_{m,r}: if A′ ⊆ B′ then
cℓ^r(cℓ^m(A′, B′), B′) ⊆ cℓ^{r_1}(A′, B′).
Proof. 1) Let M_n be random enough and f : A^+_1 → M_n be such that
cℓ^r(f(A), M_n) ⊆ f(A^+_1). As A^+_1 ≤_s A^+ and K is semi-nice, there is an embedding
f′ : A^+ → M_n extending f. By monotonicity, cℓ^r(f(A), M_n) ⊆ f′(A^+). As
(A^+, A, B, D) is semi-(k, r)-good, there is g : D → M_n such that g ⊇ f′ ↾ A and
cℓ^k(g(B), M_n) = g(cℓ^k(B, D)) ⊆ g(D).
But by 1.6(14), clause (c), always
(∗) A′ ⊆ B′ ⊆ C′ and cℓ^{k′}(A′, C′) ⊆ B′ ⇒ cℓ^{k′}(A′, C′) = cℓ^{k′}(A′, B′),
so in our case (see assumption (e))
cℓ^k(g(B_1), M_n) = cℓ^k(g(B_1), g(D)).
As g embeds D into M_n, cℓ^k(g(B_1), g(D)) = g(cℓ^k(B_1, D)), and by (∗) above and
assumption (e) we have cℓ^k(B_1, D) = cℓ^k(B_1, D_1). So together with the earlier equality
cℓ^k(g(B_1), M_n) = g(cℓ^k(B_1, D_1))
as required; that is, g ↾ D_1 is as required.
2) Easier (by 1.6(14), clause (e)).
3) Let M_n be random enough and f : A^+_1 → M_n be such that cℓ^{r_1}(f(A_1), M_n) ⊆
f(A^+_1). We know A ⊆ cℓ^m(A_1, A^+_1). Now by the assumption on r_1, m, r, for every
A^∗ with A^+_1 ⊆ A^∗ we have cℓ^{r_1}(A_1, A^∗) ⊇ cℓ^r(A, A^∗), hence cℓ^r(f(A), M_n) ⊆
cℓ^{r_1}(f(A_1), M_n) ⊆ f(A^+_1). So we can apply the property "(A^+, A, B, D) is semi-
(k, r)-good".
□_{1.10}
1.11 Claim. In the definition of semi-nice, 1.9(2), we can equivalently omit ℓ, m
(just r suffices) and replace (⊗) by
(⊗)′ for every random enough M_n and f : A → M_n and b ∈ M_n, letting B =
f(A) ∪ {b}, we have:
(cℓ^r(f(A), M_n), f(A), B, f(A) ∪ cℓ^k(B, M_n)) is semi-(k, r)-good.
Proof. The original definition implies the new definition:
Let ℓ, m, r be as guaranteed by the original definition. Without loss of generality
m ≥ r. For the new definition we choose r_1 > r, m such that A′ ⊆ B′ ⇒
cℓ^r(cℓ^m(A′, B′), B′) ⊆ cℓ^{r_1}(A′, B′); such r_1 exists by 1.6(14)(e),(g). Let us
check (⊗)′ of the new definition; so M_n is random enough and f : A → M_n. So by
the old definition there are A_0, A^+, B, D satisfying (α)-(ζ) of (⊗). In particular
(A^+, A_0, B, D) is semi-(k, r)-good. As A^+ ⊆ cℓ^m(f(A), M_n) and f(A) ⊆ A_0, by
1.10(3) also the quadruple (cℓ^{r_1}(f(A), M_n), f(A), B, D) is semi-(k, r_1)-good. (Remember
r_1 ≥ m, r.) Let B_1 = f(A) ∪ {b}, D_1 = f(A) ∪ cℓ^k(B_1, M_n). Now apply
1.10(1) and get that (cℓ^{r_1}(f(A), M_n), f(A), f(A) ∪ {b}, f(A) ∪ cℓ^k(f(A) ∪ {b}, M_n))
is semi-(k, r_1)-good, as required.
The new definition implies the old definition:
Immediate, letting m = r: in (⊗) of 1.9(2) let A_0 = f(A), A^+ =
cℓ^r(f(A), M_n), B = f(A) ∪ {b} and D = cℓ^k(B, M_n). What about ℓ? It exists
by 1.6(13).
□_{1.11}
1.12 Conclusion. The definition of semi-nice is equivalent to:
for every k and ℓ, for some r, we have
(⊗)″ if A ∈ K_∞, |A| ≤ ℓ and M_n is random enough and f : A → M_n and b ∈ M_n,
then (cℓ^r(f(A), M_n), f(A), f(A) ∪ {b}, cℓ^k(f(A) ∪ {b}, M_n)) is semi-(k, r)-
good.
Proof. Old ⇒ new:
Let ⟨A_i : i < i^∗⟩ list the A ∈ K_∞ with ≤ ℓ elements, up to isomorphism.
For each A_i there is r_i ∈ ℕ as guaranteed by 1.11. Let r = Max{r_i : i < i^∗}.
So let A ∈ K_∞, |A| ≤ ℓ be given; so for some i, A ≅ A_i. If M_n is random enough
and f : A → M_n and b ∈ M_n and B = f(A) ∪ {b}, then (cℓ^{r_i}(f(A), M_n), f(A), B,
cℓ^k(B, M_n)) is semi-(k, r_i)-good.
(Why? By the choice of r_i.) Now by 1.10(2), as r_i ≤ r, we know that (cℓ^r(f(A), M_n),
f(A), B, f(A) ∪ cℓ^k(B, M_n)) is semi-(k, r)-good, as required, because f(A) ⊆ cℓ^k(B, M_n).
New ⇒ old:
Easier (and not used).
□_{1.12}
1.13 Definition. Let K be a 0-1 context.
1) K is complete if for every A ∈ K_τ the sequence
⟨Prob_{μ_n}(A is embeddable into M_n) : n ∈ ℕ⟩
converges to zero or converges to one.
2) K is weakly complete if the sequence above converges.
3) K is very weakly complete if for every A ∈ K_τ the sequence
⟨Prob_{μ_{n+1}}(A embeddable into M_{n+1}) − Prob_{μ_n}(A embeddable into M_n) : n < ω⟩
converges to zero.
So if h(n) = n + 1, we get "very weakly complete" (similarly in 1.14(4)).
4) K is h-very weakly complete if for every A ∈ K_τ the sequence
⟨Max{|Prob_{μ_{n_2}}(A embeddable into M_{n_2}) − Prob_{μ_{n_1}}(A embeddable into M_{n_1})| :
n_1, n_2 ∈ [n, h(n))} : n < ω⟩ converges to zero.
16 SAHARON SHELAH
1.14 Definition. Let K be a 0-1 context.
1) K satisfies the 0-1 law for the logic 𝓛 if for every sentence ψ ∈ 𝓛(τ) (i.e. the logic 𝓛 with vocabulary τ) the sequence

⟨Prob_{μ_n}(M_n ⊨ ψ) : n ∈ ℕ⟩

converges to zero or converges to one.
2) K satisfies the weak 0-1 law, or convergence law, for the logic 𝓛 if for every sentence ψ ∈ 𝓛(τ), the sequence

⟨Prob_{μ_n}(M_n ⊨ ψ) : n ∈ ℕ⟩

converges.
3) K satisfies the very weak 0-1 law for 𝓛 if for every sentence ψ ∈ 𝓛(τ) the sequence

⟨Prob_{μ_{n+1}}(M_{n+1} ⊨ ψ) − Prob_{μ_n}(M_n ⊨ ψ) : n ∈ ℕ⟩

converges to zero.
4) K satisfies the h-very weak 0-1 law for 𝓛 if for every sentence ψ ∈ 𝓛(τ), the sequence

⟨max_{n_1,n_2 ∈ [n,n+h(n)]} |Prob_{μ_{n_1}}(M_{n_1} ⊨ ψ) − Prob_{μ_{n_2}}(M_{n_2} ⊨ ψ)| : n ∈ ℕ⟩

converges to zero.
5) If the logic 𝓛 is first order logic, we may omit it.
1.15 Fact. 1) If K is complete, then it is weakly complete, which implies it is very weakly complete.
2) If h_1, h_2 are functions from ℕ to ℕ and (∀n)(h_1(n) ≤ h_2(n)) and K is h_2-very weakly complete, then K is h_1-very weakly complete.
3) Similarly for 0-1 laws.
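The convergence notions in Definitions 1.13–1.14 are easiest to see in a classical special case outside the paper's general frame: the random graph G(n, 1/2). The following minimal Monte Carlo sketch (the model, the target substructure "triangle", and all parameters are our illustrative choices, not the paper's) estimates Prob_n(A embeddable into M_n) and exhibits the 0-1 behaviour.

```python
import random

def has_triangle(n, p, rng):
    # draw one sample of G(n, p) and check whether a triangle embeds into it
    adj = [[False] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            adj[i][j] = adj[j][i] = rng.random() < p
    return any(adj[i][j] and adj[j][k] and adj[i][k]
               for i in range(n)
               for j in range(i + 1, n)
               for k in range(j + 1, n))

def prob_estimate(n, p=0.5, trials=400, seed=0):
    # Monte Carlo estimate of Prob_n(triangle embeddable into M_n)
    rng = random.Random(seed)
    return sum(has_triangle(n, p, rng) for _ in range(trials)) / trials
```

For A = triangle and p = 1/2 the estimated embedding probability climbs from about 1/8 at n = 3 towards 1 as n grows, matching the "converges to zero or to one" dichotomy of Definition 1.13(1).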
1.16 Lemma. 1) Assume K is semi-nice. Modulo the theory T_∞, every formula of the form φ(x_0, . . . , x_{m−1}) is equivalent to a Boolean combination of formulas of the form (∃x_m, . . . , x_{k−1}) θ(x_0, . . . , x_{m−1}, x_m, . . . , x_{k−1}), where for some A ≤_i B ∈ K_∞ we have A = {a_ℓ : ℓ < m}, B = {a_ℓ : ℓ < k} (so m ≤ k) and

θ = ⋀ { R(. . . , x_ℓ, . . .)_{ℓ<k} : B ⊨ R(. . . , a_ℓ, . . .)_{ℓ<k}, R an atomic or negation of atomic formula }.
1A) Another way of saying it is: there is k computable from φ such that: for every random enough M_n and a_0, . . . , a_{m−1} ∈ M_n, the truth value of M_n ⊨ φ(a_0, . . . , a_{m−1}) is computable from

(M_n ↾ cℓ^k({a_0, . . . , a_{m−1}}, M_n), a_0, . . . , a_{m−1})/≅.

2) If K is semi-nice and weakly complete, then the weak 0-1 law holds (i.e. convergence, see Definition 1.14(2)).
3) If K is semi-nice and complete (see Definition 1.13), then T_∞ is a complete theory, and K satisfies the 0-1 law for first order sentences (see 1.14(1)).
4) If T_∞ is a complete theory, then K is complete.
5) The parallel of 2), 3) holds for the h-very weak versions.
Proof. 1) By (1A).
1A) We prove it by induction on the quantifier depth of φ. For φ atomic, or a conjunction or a disjunction or a negation, this should be clear. So assume φ(x_0, . . . , x_{s−1}) = (∃x_s)ψ(x_0, . . . , x_s); by the induction hypothesis there is a function F_ψ and a number k_ψ such that:
(*)_{ψ,F}: for every random enough M_n, for every a_0, . . . , a_s ∈ M_n we have: the truth value of M_n ⊨ ψ(a_0, . . . , a_s) is

F_ψ((M_n ↾ cℓ^{k_ψ}({a_0, . . . , a_s}, M_n), a_0, . . . , a_s)/≅).
By Definition 1.9(2) (of semi-nice), for any A ∈ K_∞ and k there are ℓ = ℓ(A, k) and m = m(A, k) and r = r(A, k) as there. Let m^* = max{m(A, k_ψ) : A ∈ K_∞ and |A| ≤ s + 1} and ℓ^* = max{ℓ(A, k_ψ) : A ∈ K_∞ and |A| ≤ s + 1} (and see below (*)_5(ii) and (*)_6(ii)), and let r^* = max{r(A, k_ψ) : A ∈ K_∞ and |A| ≤ s + 1}. Let k = k_φ be such that

A ∈ K_∞ & |A| ≤ s ⇒ cℓ^{k_ψ}(A, M_n) ⊆ cℓ^k(cℓ^{m^*}(A, M_n), M_n)

(such k exists by 1.6(14)(g)).
Now for M_n random enough, for any a_0, . . . , a_{s−1} ∈ M_n, we shall prove a sequence of conditions, each implying the next (usually also the converse), then close the circle, thus proving they are all equivalent:

(*)_1: M_n ⊨ φ(a_0, . . . , a_{s−1}).

(*)_2: M_n ⊨ (∃x_s)ψ(a_0, . . . , a_{s−1}, x_s).

[Clearly (*)_1 ⇔ (*)_2.]

(*)_3: for some b ∈ M_n we have M_n ⊨ ψ(a_0, . . . , a_{s−1}, b).

[Clearly (*)_2 ⇔ (*)_3.]

(*)_4: (i) for some b ∈ cℓ^{m^*}({a_0, . . . , a_{s−1}}, M_n) we have M_n ⊨ ψ(a_0, . . . , a_{s−1}, b), or
(ii) for some b ∈ M_n ∖ cℓ^{m^*}({a_0, . . . , a_{s−1}}, M_n) we have M_n ⊨ ψ(a_0, . . . , a_{s−1}, b).

[Clearly (*)_3 ⇔ (*)_4.]
(*)_5(i): letting N = M_n ↾ cℓ^k({a_0, . . . , a_{s−1}}, M_n), the following holds: for some b ∈ cℓ^{m^*}({a_0, . . . , a_{s−1}}, N) = cℓ^{m^*}({a_0, . . . , a_{s−1}}, M_n) we have

truth = F_ψ((N ↾ cℓ^{k_ψ}({a_0, . . . , a_{s−1}, b}, N), a_0, . . . , a_{s−1}, b)/≅).

[Clearly (*)_4(i) ⇔ (*)_5(i) by the choice of k, as: A ⊆ N ⊆ M & cℓ^t(A, M) ⊆ N ⇒ cℓ^t(A, N) = cℓ^t(A, M), by 1.6(13)(c) and the induction hypothesis.]
(*)_5(ii): letting N = cℓ^k({a_0, . . . , a_{s−1}}, M_n) and A = {a_0, . . . , a_{s−1}}, we have: for some A_0, A^+ we have A ⊆ A_0 ⊆ A^+ ⊆ cℓ^{m(A,k)}({a_0, . . . , a_{s−1}}, N), and cℓ^{r(A,k)}(A_0, M_n) ⊆ A^+, and there are B^+, b such that cℓ^{k_ψ}(A_0 ∪ {b}, M_n) ⊆ B^+ ⊆ M_n, |B^+| ≤ ℓ^*, and (A^+, A_0, A ∪ {b}, B^+) is semi-(k_ψ, r(A, k_ψ))-good and M ⊨ ψ(a_0, . . . , a_{s−1}, b).

[Clearly (*)_4(ii) ⇔ (*)_5(ii), again by 1.6(14) and K being semi-nice (and see 1.10(1)), which implies that we can use A ∪ {b} as B.]
Next let

(*)_6(ii): letting N = cℓ^k({a_0, . . . , a_{s−1}}, M_n), A = {a_0, . . . , a_{s−1}}, we have: for some A^+, A_0 we have: A ⊆ A_0 ⊆ A^+ ⊆ cℓ^{m(A,k)}({a_0, . . . , a_{s−1}}, M_n) and cℓ^{r(A,k)}(A_0, M_n) ⊆ A^+, and there are B^+, b such that N ∪ (A ∪ {b}) ⊆ B^+ ∈ K_∞, |B^+| ≤ ℓ^*, and (A^+, A_0, A ∪ {b}, B^+) is semi-(k_ψ, r(A, k_ψ))-good and

truth = F_ψ((B^+ ↾ cℓ^{k_ψ}({a_0, . . . , a_{s−1}, b}, B^+), a_0, . . . , a_{s−1}, b)/≅).

Clearly by the induction hypothesis (*)_5(ii) ⇔ (*)_6(ii).
Lastly, (*)_6(ii) ⇒ (*)_3 by Definition 1.9(1) + the induction hypothesis; thus we have the equivalences. So (*)_1 ⇔ [(*)_5(i) ∨ (*)_6(ii)], but the two latter ones depend just on (M_n ↾ cℓ^k({a_0, . . . , a_{s−1}}, M_n), a_0, . . . , a_{s−1})/≅; thus we have finished.
2) By 1.16(1) it is enough to prove that the sequence

⟨Prob_{μ_n}(A is embeddable into M_n) : n < ω⟩

converges. This holds by weak completeness.
3),4),5) Left to the reader. □_{1.16}
1.17 Remark. Note: if K is complete, then T_∞ has a unique (up to isomorphism) countable model M^* such that for some ⟨A_n : n ∈ ℕ⟩ we have: M^* = ⋃_{n∈ℕ} A_n, A_n <_s A_{n+1} ∈ K_∞, every A ∈ K_∞ can be embedded into some A_n, and if n ∈ ℕ, A ⊆ A_n, A ≤_s B, then for some m there is an embedding f of B into A_m such that f ↾ A = id_A and f(B) ≤_s A_m (see Baldwin, Shelah [BlSh 528]; not used).
1.18 Claim. 1) A sufficient condition for "K is weakly nice" is:

(*) for every A < B, if A ≤_a B, then for some k < ω we have

1 = Lim_n Prob_{μ_n}( for every embedding f of A into M_n there are no embeddings g_ℓ : B → M_n extending f for ℓ < k, disjoint over f ).

Proof. Easy.
§2 More accurate measure and drawing again

We define when a pair of functions h̄ = (h^d, h^u) gives, up to a factor, the values nu(f, A, B, M) for M random enough, A <_pr B and f : A → M. We also define when h̄ obeys h (i.e. h bounds the error factor). From h_{A,B} for A <_pr B we define h_{A,B} also for the case A <_s B, and then define a good case, when the functions are polynomial in ‖M_n‖ (see Definitions 2.1, 2.3).

We then see how large the error factor is for the derived cases and deduce some natural properties (in 2.4). Then we deal with the polynomial case.

Lastly (2.10–2.15), we introduce our framework for adding random relations to random M_n. Reading, you may assume every A ∈ K_∞ is embeddable into every random enough M_n.
2.1 Definition. 1) We say the 0-1 context K obeys h̄ = (h^d, h^u) with error h^e, where d is for "down", u is for "up" and e is for "error", if:

(a) for A <_pr B we have that h^d_{A,B} and h^u_{A,B} and h^e are functions from ⋃_{n<ω} K_n to ℝ^{≥0}

(b) for some ζ ∈ ℝ^{>0}, for every random enough M_n we have (h^e[M_n])^ζ ≤ h^d_{A,B}[M_n] ≤ h^u_{A,B}[M_n] and h^e[M_n] ≥ 1

(c) for every ε ∈ ℝ^{>0} we have

1 = Lim_n Prob_{μ_n}( for every embedding f of A into M_n we have h^d_{A,B}[M_n](h^e[M_n])^{−ε} ≤ nu(f, A, B, M_n) ≤ h^u_{A,B}[M_n](h^e[M_n])^{ε} )

(see 0.3(2)).

1A) If h^e is identically 1 we may omit it. If h^u = h^d, then we may write h^u instead of h̄. If h^e[M_n] = ‖M_n‖ we may say "simply".
2) We say h̄ is uniform if h^x_{A,B}[M] (for x ∈ {u, d}) depends on ‖M‖ (and x, A, B) only but not on M, and then we write h^x_{A,B}(‖M‖) = h^x_{A,B}[M]. Similarly for h^e, h used above and below. We say h goes to infinity if for every m, for every random enough M_n, h[M_n] > m.
3) We say that h̄ is bounded (or bounded^+) by h (for h̄ as above) if:

(a) h is a function from ⋃_{n∈ℕ} K_n to ℝ^+ (remember that the K_n's are pairwise disjoint)

(b) for every random enough M_n we have h[M_n] ≥ 1

(c) for every ε > 0 and A <_pr B, for every random enough M_n we have 1 ≤ h^u_{A,B}[M_n]/h^d_{A,B}[M_n] ≤ (h[M_n])^ε

(d) for every A <_pr B and m ∈ ℕ∖{0}, for some ε > 0, for every random enough M_n we have h^d_{A,B}[M_n] > (h[M_n])^ε ≥ m

(e) for^2 every A ∈ K_∞, for every ε > 0, for every random enough M_n we have Prob_{μ_n}(A is embeddable into M_n) ≥ 1/(h[M_n])^ε.

3A) In the context of (3), let "M_n random enough" mean that for every ε, the probability of failure is ≤ 1/(h[M_n])^ε. We say h is standard if for each m, for every random enough M_n, h[M_n] > m.
3B) We^3 say h̄ is bounded^⊗ by h (for h̄ as above) if: clauses (a), (b) as above, but in clauses (c),(e) we replace "for every ε > 0" by "for some m = m(A, B) ∈ ℕ", and in clause (d) we replace "for some ε > 0" by "for every m ∈ ℕ"; i.e.:

(a) h is a function from ⋃_{n∈ℕ} K_n to ℝ^+ (remember that the K_n's are pairwise disjoint)

(b) for every random enough M_n we have h[M_n] ≥ 1

(c) for every A <_pr B, for some m = m(A, B) ∈ ℕ∖{0}, for every random enough M_n we have 1 ≤ h^u_{A,B}[M_n]/h^d_{A,B}[M_n] ≤ (h[M_n])^m

(d) for every A <_pr B and m ∈ ℕ, for some m(A, B) ∈ ℕ∖{0}, for every random enough M_n we have h^d_{A,B}[M_n] > (h[M_n])^{m(A,B)} ≥ m

(e) for^4 every A ∈ K_∞, for some m(A) ∈ ℕ∖{0}, for every random enough M_n we have Prob_{μ_n}(A is embeddable into M_n) ≥ 1/(h[M_n])^{m(A)}

(part (3B) is an alternative to 2.1(3)).
4) Assume K obeys h̄. For A <_s B and M ∈ ⋃_n K_n we let

h^{+u}_{A,B}[M] := Max{ Π_{ℓ<k} h^u_{A_ℓ,A_{ℓ+1}}[M] : Ā = ⟨A_ℓ : ℓ ≤ k⟩ is a decomposition of (A, B) }

h^{−u}_{A,B}[M] := Min{ Π_{ℓ<k} h^u_{A_ℓ,A_{ℓ+1}}[M] : Ā = ⟨A_ℓ : ℓ ≤ k⟩ is a decomposition of (A, B) }

h^{−d}_{A,B}[M] := Min{ Π_{ℓ<k} h^d_{A_ℓ,A_{ℓ+1}}[M] : Ā = ⟨A_ℓ : ℓ ≤ k⟩ is a decomposition of (A, B) }

h^{+d}_{A,B}[M] := Max{ Π_{ℓ<k} h^d_{A_ℓ,A_{ℓ+1}}[M] : Ā = ⟨A_ℓ : ℓ ≤ k⟩ is a decomposition of (A, B) }

Let h^u_{A,B}[M] = h^{−u}_{A,B}[M] and h^d_{A,B}[M] = h^{+d}_{A,B}[M].

Footnotes:
^2 this, of course, will not suffice for the 0-1 law
^3 this is an alternative to part (3); this does not matter really, so we shall use the one of part (3); the same applies to other cases
^4 this, of course, will not suffice for the 0-1 law, and though more natural, we shall not follow it here
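The four derived functions in 2.1(4) simply take the product of the one-step bounds along a decomposition of (A, B) and then optimize over all decompositions. The following small sketch makes the bookkeeping concrete (the numeric values and the representation of a decomposition as a list of per-step (h^d, h^u) pairs are our illustrative choices, not the paper's):

```python
from math import prod

def derived_bounds(decompositions):
    # each decomposition is a list of (h_d, h_u) pairs,
    # one pair per step A_l <_pr A_{l+1} of the decomposition of (A, B)
    prods_u = [prod(u for _, u in dec) for dec in decompositions]
    prods_d = [prod(d for d, _ in dec) for dec in decompositions]
    return {
        "h+u": max(prods_u),   # h^{+u}_{A,B}[M]
        "h-u": min(prods_u),   # h^{-u}_{A,B}[M]  (used as the extended h^u_{A,B})
        "h-d": min(prods_d),   # h^{-d}_{A,B}[M]
        "h+d": max(prods_d),   # h^{+d}_{A,B}[M]  (used as the extended h^d_{A,B})
    }

# two toy decompositions of the same pair (A, B): two steps vs. one step
decs = [[(2.0, 3.0), (4.0, 5.0)], [(1.5, 8.0)]]
b = derived_bounds(decs)
```

Note that with these toy numbers h^{−d} < h^{+d} and h^{−u} < h^{+u}, which is exactly the gap that clause (δ) of Fact 2.4(2) bounds by a power of h[M_n].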
2.2 Discussion. For the semi-nice case, we may consider it natural to have the functions h below be h^{x,k}_{A^+,A,B,D}, giving information on nu^k(f, A, B, D, M) = |ex^k(f, A, B, D, M)|, where ex^k(f, A, B, D, M) = {g ↾ B : g embeds D into M, it extends f (which embeds A into M) and cℓ^k(g(B), M) ⊆ g(D)}, and we restrict ourselves to the case that there is an embedding f^+ of A^+ into M extending f such that cℓ^r(f(A), M) ⊆ f(A^+). So we may write h^{x,k,r}_{A^+,A,B,D} and ex^{k,r}(f, A, A^+, B, D, M). Note that the variables here of ex, nu are different than in the usual case.
2.3 Definition. 1) We say K obeys the polynomial h̄ over (or modulo) h if h̄ = ⟨h^u, h^d⟩ and h^u = h^d, h are functions from ⋃_n K_n to ℝ^{≥0}, and h^u, h are uniform (see Definition 2.1(2)), and for every A <_pr B a real α(A, B) ∈ ℝ^{>0} is well defined and we have:

(a) h : ⋃_{n<ω} K_n → ℝ^+

(b) for every m, for random enough M_n we have h[M_n] > 1 and ‖M_n‖ ≥ m

(c) for every ε > 0, for random enough M_n we have h[M_n] < ‖M_n‖^ε

(d) if A <_pr B and m ∈ ℕ, then for every M_n random enough^5: h^d_{A,B}[M_n] = h^u_{A,B}[M_n] = ‖M_n‖^{α(A,B)}; and if f embeds A into M_n then

h[M_n]^{−m} h^d_{A,B}[M_n] ≤ nu(f, A, B, M_n) ≤ h[M_n]^{m} h^u_{A,B}[M_n]

(e) for some ε > 0, for every m, for every M_n random enough, ‖M_n‖ > m(h[M_n])^ε

(f) if A ∈ K_∞, then for each k, for some m,

Prob_{μ_n}(A is embeddable into M_n, assuming ‖A‖ = k) ≥ 1/h[M_n]^m.

^5 note this is not as in 2.1(3)(c)
2) We say h̄ is strictly polynomial if:

(a) if A <_pr B then for some c = c(A, B) ∈ ℝ^{>0}, for some ε > 0, for every random enough M_n and every f : A → M_n we have

c(A, B)‖M_n‖^{α(A,B)}(1 − ‖M_n‖^{−ε}) ≤ h^d_{A,B}(M_n) ≤ h^u_{A,B}(M_n) ≤ c(A, B)‖M_n‖^{α(A,B)}(1 + ‖M_n‖^{−ε}).

3) We say h̄ is polynomial if h̄ is polynomial over some h. We say K is polynomial over h (strictly polynomial) if this holds for some h̄.
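For intuition on the polynomial case of Definition 2.3, the standard example (ours, not developed in this section) is the sparse random graph with edge probability n^{−α}: there the expected number of extensions of a copy of A to a copy of B is of order ‖M_n‖^{v − α·e}, where v is the number of new vertices and e the number of new edges, so one may take α(A, B) = v − α·e. A sketch of this exponent bookkeeping:

```python
def alpha_A_B(v_new, e_new, alpha):
    # in G(n, n^-alpha): E[#extensions of A to B] ~ n^(v_new - alpha * e_new),
    # so the exponent of Definition 2.3 is alpha(A, B) = v_new - alpha * e_new
    return v_new - alpha * e_new

# extending an edge {x, y} to a triangle {x, y, z}:
# one new vertex, two new edges
exponent = alpha_A_B(1, 2, 0.6)   # negative: extensions are rare for alpha = 0.6
```

A negative exponent signals that for this α the extension count does not grow polynomially, i.e. the pair falls outside the "dense" side of the dichotomy the section is after.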
2.4 Fact. 1) Assume K obeys h̄ with error h^e. If A_0 <_pr A_1 <_pr · · · <_pr A_k and ε > 0, then every random enough M_n satisfies:

(*) for every embedding f of A_0 into M_n,

Π_{ℓ<k} h^d_{A_ℓ,A_{ℓ+1}}[M_n] · (h^e[M_n])^{−ε} ≤ nu(f, A_0, A_k, M_n) ≤ Π_{ℓ<k} h^u_{A_ℓ,A_{ℓ+1}}[M_n] · (h^e[M_n])^{ε}.
2) Assume K obeys h̄ with error h^e, and for clause (δ) assume also that h̄ is bounded by h and h ≥ h^e.

(α) if A ≤_s B, then for every random enough M_n we have^6

h^{−d}_{A,B}[M_n] ≤ h^{+d}_{A,B}[M_n] = h^d_{A,B}[M_n] ≤ h^{−u}_{A,B}[M_n] = h^u_{A,B}[M_n] ≤ h^{+u}_{A,B}[M_n]

(β) if A <_s B and ε ∈ ℝ^{>0}, then for every random enough M_n and embedding f : A → M_n we have:

h^d_{A,B}[M_n] · (h^e[M_n])^{−ε} ≤ nu(f, A, B, M_n) ≤ h^u_{A,B}[M_n] · (h^e[M_n])^{ε}

(γ) if A <_pr B, then h^{+d}_{A,B}[M] = h^{−d}_{A,B}[M] = h^d_{A,B}[M] and h^{+u}_{A,B}[M] = h^{−u}_{A,B}[M] = h^u_{A,B}[M]

(δ) if A <_s B, then for every ε > 0, for every random enough M_n we have: (h[M_n])^{−ε} ≤ h^u_{A,B}[M_n]/h^d_{A,B}[M_n] ≤ (h[M_n])^{ε}; moreover (h[M_n])^{−ε} ≤ h^{+u}_{A,B}[M_n]/h^{−d}_{A,B}[M_n] ≤ (h[M_n])^{ε}

(ε) if A_0 <_s A_1 <_s A_2, then for any random enough M_n:

h^d_{A_0,A_1}[M_n] · h^d_{A_1,A_2}[M_n] ≤ h^d_{A_0,A_2}[M_n] ≤ h^u_{A_0,A_2}[M_n] ≤ h^u_{A_0,A_1}[M_n] · h^u_{A_1,A_2}[M_n]

^6 if A is embeddable into M_n, of course, as otherwise h^x_{A,B}[M_n] is not actually well defined; we tend to forget to state this
Proof. 1) Easy, by induction on k.
2) Clause (α):
The first and last inequalities hold as Min(X) ≤ Max(X) for X ⊆ ℝ finite non-empty (by 1.6(5)), as in this case. The equalities hold by Definition 2.3(1). The middle inequality holds by clause (β) below.
Clause (β):
By (*) of 2.4(1).
Clause (γ):
As A <_pr B implies (A, B) has a unique decomposition.
Clause (δ):
Let Ā = ⟨A_ℓ : ℓ ≤ k⟩ be a decomposition of (A, B) and ε ∈ ℝ^{>0}; hence for every random enough M_n, for every embedding f of A into M_n,

Π_{ℓ<k} h^d_{A_ℓ,A_{ℓ+1}}[M_n] · (h[M_n])^{−ε} ≤ h^d_{A,B}[M_n] · (h[M_n])^{−ε} ≤ nu(f, A, B, M_n) ≤ h^u_{A,B}[M_n] · (h[M_n])^{ε} ≤ Π_{ℓ<k} h^u_{A_ℓ,A_{ℓ+1}}[M_n] · (h[M_n])^{ε}.

This gives the first part of the inequalities. Let ε > 0 be given. Now for each ℓ < k, for every random enough M_n we have

1 ≤ h^u_{A_ℓ,A_{ℓ+1}}[M_n]/h^d_{A_ℓ,A_{ℓ+1}}[M_n] ≤ (h[M_n])^{ε/k}.

Hence for every random enough M_n we have

h^u_{A,B}[M_n]/h^d_{A,B}[M_n] ≤ (Π_{ℓ<k} h^u_{A_ℓ,A_{ℓ+1}}[M_n]) / (Π_{ℓ<k} h^d_{A_ℓ,A_{ℓ+1}}[M_n]) = Π_{ℓ<k} (h^u_{A_ℓ,A_{ℓ+1}}[M_n]/h^d_{A_ℓ,A_{ℓ+1}}[M_n]) ≤ Π_{ℓ<k} (h[M_n])^{ε/k} = (h[M_n])^{ε}.

For the second phrase of clause (δ) (the "moreover"), note that for every random enough M_n, for every f : A → M_n we have: for some decomposition Ā of (A, B),

1 ≤ h^{+u}_{A,B}[M_n]/nu(f, A, B, M_n) = Π_{ℓ<k} h^u_{A_ℓ,A_{ℓ+1}}[M_n]/nu(f, A, B, M_n) ≤ Π_{ℓ<k} h^u_{A_ℓ,A_{ℓ+1}}[M_n] / Π_{ℓ<k} h^d_{A_ℓ,A_{ℓ+1}}[M_n] ≤ ((h[M_n])^{ε/k})^k = (h[M_n])^{ε}

and for a possibly other decomposition Ā′ of (A, B),

1 ≤ nu(f, A, B, M_n)/h^{−d}_{A,B}[M_n] = nu(f, A, B, M_n) / Π_{ℓ<k} h^d_{A′_ℓ,A′_{ℓ+1}}[M_n] ≤ Π_{ℓ<k} h^u_{A′_ℓ,A′_{ℓ+1}}[M_n] / Π_{ℓ<k} h^d_{A′_ℓ,A′_{ℓ+1}}[M_n] ≤ (h[M_n]^{ε/k})^k = h[M_n]^{ε}.

Together we get the desired inequality (well, for 2ε).
Clause (ε):
Easy (using 2.1(4)). □_{2.4}
2.5 Claim. Assume the 0-1 context K obeys h̄ = (h^d, h^u) with error h^e.
1) A sufficient condition for "K is weakly nice" is:

(*)_1 if A <_pr B and m^* ∈ ℕ and ε ∈ ℝ^{>0} small enough, then for every random enough M_n we have h^d_{A,B}[M_n] · (h^e[M_n])^{−ε} > m^*.

2) If h̄ is bounded by h, or h̄ is polynomial over h, then (*)_1 from above holds, hence K is weakly nice.
Proof. 1) Assume A <_pr B; we show that (*)_1 of Definition 1.7 holds in this case.

If |B∖A| = 1: as A <_pr B, for every m, for every random enough M_n, for any f : A → M_n, by (*)_1 of 2.5(1) there are distinct g_1, . . . , g_m which are embeddings of B into M_n extending f; now they are necessarily pairwise disjoint over A, so the demand in 1.3(1) holds.

So assume |B∖A| > 1 and let m^* be given. For each b ∈ B∖A, by 1.6(8) we know that (A ∪ {b}) ≤_i B, but |A ∪ {b}| ≤ |A| + 1 < |B|, so A ∪ {b} <_i B. Hence by Definition 1.3(2)(b), for some n_b, m_b ∈ ℕ, for every n ≥ n_b we have 1 − ε/(|B| + 1) < Prob_{μ_n}(E^b_n), where E^b_n is the event:

"for every embedding f of (A ∪ {b}) into M_n there are at most m_b extensions of f to an embedding of B into M_n."

Let m^⊗ = |B∖A| · |B∖A| · (max_{b∈B∖A} m_b) · m^*.

Also by (*)_1 above and clause (c) of 2.1(1), for some n^* ∈ ℕ, for every n ≥ n^*, the event E′_n ∩ E″_n has probability ≥ 1 − ε/(|B| + 1), where E′_n = [h^d_{A,B}[M_n] ≤ nu(f, A, B, M_n) for every embedding f : A → M_n] and E″_n = [h^d_{A,B}[M_n] > (m^⊗ + 1)].

Let n^{**} = Max({n_b : b ∈ B∖A} ∪ {n^*}).

Now suppose n ≥ n^{**}; then with probability ≥ 1 − ε all the events E^b_n for b ∈ B∖A and E″_n = [h^d_{A,B}[M_n] > m^⊗] and E′_n occur for M_n. It suffices to show that then (*)_1 of 1.7 occurs. So let f be an embedding of A into M_n; as both E′_n and E″_n occur, necessarily there are distinct extensions g_1, . . . , g_{m^⊗} of f embedding B into M_n. For i ∈ {1, . . . , m^⊗} let u_i = {j : j ∈ {1, . . . , m^⊗} and Rang(g_j) ∩ Rang(g_i) ≠ Rang(f)}, and for b ∈ B∖A and c ∈ M_n∖Rang(f) let v_{b,c} = {i : i ∈ {1, . . . , m^⊗} and g_i(b) = c}. Now clearly |v_{b,c}| ≤ m_b, as the event E^b_n occurs, and

u_i = ⋃_{b∈B∖A} ⋃_{c ∈ Rang(g_i)∖Rang(f)} v_{b,c}

hence

|u_i| ≤ |B∖A| · |B∖A| · max_{b∈B∖A} m_b ≤ m^⊗/m^*.

So easily we can find w ⊆ {1, . . . , m^⊗} such that |w| = m^* and i ∈ w & j ∈ w & i ≠ j ⇒ j ∉ u_i. So ⟨g_i : i ∈ w⟩ is as required.
2) Check (see in particular 2.1(3)(b),(d)). □_{2.5}
2.6 Claim. 1) Assume K obeys h̄ with error h^e and: A <_s B ≤ D, A ≤ C ≤_s D and D = B ∪ C. For every ε ∈ ℝ^{>0}, for every random enough M_n, if C is embeddable into M_n, then

h^u_{A,B}[M_n] ≥ h^d_{C,D}[M_n] · (h^e[M_n])^{−ε}.

2) If, in addition, h̄ is bounded by h and h ≥ h^e, then for every ε > 0, for every random enough M_n and x ∈ {u, d},

h^x_{A,B}[M_n] ≥ h^x_{C,D}[M_n] · (h[M_n])^{−ε}

3) If A_0 ⊆ A_1 ⊆ . . . ⊆ A_k, A_0 ≤_s A_k and ε ∈ ℝ^{>0}, then for every M_n random enough into which A_{k−1} is embeddable,

h^d_{A_0,A_k}[M_n] ≤ Π{ h^u_{A_i,A_{i+1}}[M_n] : A_i <_s A_{i+1} } · (h^e[M_n])^{ε}.
2.7 Definition. 1) We say that (K, h̄, h^e) is semi-nice if:

(a) K is a 0-1 context

(b) K obeys h̄ with error h^e

(c) for every A ∈ K_∞ and k, for some r we have:

(*) for every random enough M_n, and embedding f : A → M_n and b ∈ M_n, the quadruple (cℓ^r(f(A), M_n), f(A), f(A) ∪ {b}, cℓ^k(f(A) ∪ {b}, M_n)) is semi^⊗-(k, r)-good for (K, h̄, h^e), see below

(d) Condition (*)_1 of 2.5(1) holds.

2) (A^*, A, B, D) is semi^⊗-(k, r)-good for (K, h̄, h) if^7 for some A_0 we have:

(α) A ⊆ A_1 ⊆ A^*, A_0 ≤_s D ∈ K_∞, B ⊆ D ∈ K_∞, and

(β) for every ε > 0, for every random enough M_n, for every embedding f^* of A^* into M_n satisfying cℓ^r(f^*(A), M_n) ⊆ f^*(A^*), we have, letting f = f^* ↾ A_1, the inequality

(h^e[M_n])^{−ε} · h^d_{A,D}[M_n] ≤ nu^k(f, f(A_0), B, D, M_n) ≤ h^u_{A,D}[M_n] · (h^e[M_n])^{ε}

(on nu^k see below).

2A) We say (A^*, A, B, D) is semi^⊗-nice-(k, r)-good for (K, h̄, h) if A_1 = A^* in part (2).

3) If A ≤_s D, B ⊆ D, k ∈ ℕ and f : A → M, we let nu^k(f, A, B, D, M) = |ex^k(f, A, B, D, M)| where

ex^k(f, A, B, D, M) = { g : g is an embedding of D into M extending f and satisfying cℓ^k(g(B), M) = g(cℓ^k(B, D)) }.

4) We say that (K, h̄, h) is polynomially semi-nice if (a), (b), (c), (d) of part (1) hold and

(e) h̄ is polynomial over h^e.

^7 note that because of A ≤_s D, this does not copy the definition in §1 even in nice cases
We can list some obvious implications.

2.8 Claim. 1) Assume (K, h̄, h^e) is semi-nice, h^e goes to infinity and (A_1, A, B, D) is semi^⊗-(k, r)-good for (K, h̄, h). Then we can find B^*, D^*, g^* such that:

(a) A ⊆ A_1 ≤_s D^* and B^* ⊆ D^*

(b) g^* is an embedding of D into D^*

(c) B^* = g^*(B), D^* = A_1 ∪ g^*(D) and cℓ^k(g^*(B), D^*) = g^*(cℓ^k(B, D))

(d) for every random enough M_n and f : A_1 → M_n satisfying cℓ^r(f(A), M_n) ⊆ f(A_1) there is g^{**} : D^* → M_n extending f such that cℓ^k(g^{**}(B^*), M_n) = g^{**}(cℓ^k(B^*, D^*)); that is, (A_1, A, B^*, D^*) is semi^⊗-(k, r)-nice

(e) if (K, h̄, h^e) is polynomially semi-nice, then α(A, D) = α(A_1, D^*).

2) Assume K obeys h̄ with error h^e, h^e going to infinity. If (A, A_0, B, D) is semi^⊗-(k, r)-good for (K, h̄, h), then it is semi-(k, r)-good (see Definition 1.9(1)).
3) If (K, h̄, h^e) is semi-nice, then K is semi-nice.

Proof. 1) Straight, by counting.
2) By part (1).
3) By part (2). □_{2.8}
2.9 Claim. 1) Assume K is semi-nice. Then for every A ∈ K_∞ and k and ℓ, for some r we have:

(*) for every random enough M_n, for every f : A → M_n and B ⊆ M_n, |B| ≤ ℓ, we have

(cℓ^r(f(A), M_n), f(A), B, cℓ^k(f(A) ∪ B, M_n)) is semi-(k, r)-good.

2) Similarly for semi^⊗-nice for (K, h̄, h).
Proof. 1) We prove it by induction on ℓ. For ℓ = 0 this is trivial by 1.11, so let us prove it for ℓ + 1 (assuming we have proved it for ℓ). So let A ∈ K_∞ be given. Let r(1) be such that (it exists by 1.12, applied to k′ := k and ℓ′ := ℓ + |A|):

(*)_1 if M_n is random enough, A′ with |A′| ≤ |A| + ℓ, A′ ⊆ M_n and b ∈ M_n, then (cℓ^{r(1)}(A′, M_n), A′, A′ ∪ {b}, cℓ^k(A′ ∪ {b}, M_n)) is semi-(k, r(1))-good.

Similarly by the induction hypothesis, for some r(2):

(*)_2 if M_n is random enough, A′ with |A′| ≤ |A|, B′ ⊆ M_n, |B′| ≤ ℓ, then (cℓ^{r(2)}(A′, M_n), A′, B′, cℓ^{r(1)}(A′ ∪ B′, M_n)) is semi-(r(1), r(2))-good.

We shall show that r(2) is as required. So let M_n be random enough and f : A → M_n and B ⊆ M_n, |B| = ℓ + 1. Let B = B_0 ∪ {b}, |B_0| ≤ ℓ. So by (*)_1, the quadruple

(cℓ^{r(1)}(f(A) ∪ B_0, M_n), f(A) ∪ B_0, B_0 ∪ {b}, cℓ^k(f(A) ∪ B_0 ∪ {b}, M_n))

is semi-(k, r(1))-good.
Similarly by (*)_2 the quadruple

(cℓ^{r(2)}(f(A), M_n), f(A), B_0, cℓ^{r(1)}(f(A) ∪ B_0, M_n))

is semi-(r(1), r(2))-good.
By transitivity of the property, easily the quadruple

(cℓ^{r(2)}(f(A), M_n), f(A), B, cℓ^k(f(A) ∪ B, M_n))

is (k, r(2))-good.
2) Similar to the proof of part (1), using Definition 2.7 instead of 1.12. □_{2.9}
We now turn to redrawing.
2.10 Definition. Assume K, K^+ are 0-1 contexts.
1) We say K^+ expands K if:

(a) τ^+ is a vocabulary extending τ (hence consisting of predicates only, τ^+ locally finite, of course)

(b) K^+ is the family of τ^+-models M^+ satisfying M^+ ↾ τ ∈ K, and K^+_n = {M^+ ∈ K^+ : M^+ ↾ τ ∈ K_n}

(c) we let τ^{[ℓ]} = {R ∈ τ^+ ∖ τ : R has ℓ places} and τ^{<ω} = ⋃_{m<ω} τ^{[m]}

(d) for M ∈ K_n we have μ_n(M) = Σ{ μ^+_n(M^+) : M^+ ∈ K^+_n and M^+ ↾ τ = M }.

For simplicity τ, τ^+ are irreflexive and M ∈ K_n ⇒ μ_n(M)(= μ_n({M})) > 0.

2) For M_n ∈ K_n we define μ^+_{M_n} (= μ^+_n ↾ M_n), a distribution on K^+_n[M_n] = {M^+_n ∈ K^+_n : M^+_n ↾ τ = M_n}, by (μ^+_{M_n})(M^+_n) = μ^+_n(M^+_n)/μ_n(M_n); we write μ^+_{M_n} when n is clear from context (this is even formally clear when the K_n's are pairwise disjoint).

We will be mostly interested in the case where M^+_n is drawn as in Definition 2.12(2) below, but first we define less general cases.
2.11 Definition. 1) We define when the 0-1 context K^+ is independently derived from K by the function p (everything related to K^+ has superscript +; below, τ^+, K^+, K^+_n are as in 2.10(1), and for x ∈ {a, i, s, pr}, ≤^+_x is defined by 1.3(2)). The crux of the matter is defining μ^+_n; it suffices to define each μ^+_{M_n}. We can think of it as choosing a μ^+_n-random model M^+_n by expanding M_n, defining M^+_n ↾ τ^{<ℓ} by induction on ℓ by flipping coins: for ℓ = 0, M^+_n ↾ τ^{<0} is chosen μ_n-randomly from K_n (i.e. is M_n). By induction on ℓ, for each set A ∈ [M_n]^ℓ (i.e. A ⊆ M_n, |A| = ℓ), we choose A^+_ℓ = (M^+_n ↾ τ^{[ℓ]}) ↾ A; each possibility A^+_ℓ has probability p_{A^+_ℓ, M^+_n ↾ τ^{<ℓ}} = p_{A^+_ℓ}[M^+_n ↾ τ^{<ℓ}] depending on M^+_n ↾ τ^{<ℓ} and (M^+_n ↾ A) ↾ τ^{<ℓ} (not just on the isomorphism type); note that the second one, (M^+_n ↾ A) ↾ τ^{<ℓ}, is determined by A^+_ℓ as A^+_ℓ ↾ τ^{<ℓ}. Lastly, the drawings above (in stage ℓ) are done independently for distinct A (for each M^+_n ↾ τ^{<ℓ}).
2) We say K^+ is derived uniformly and independently if in addition p_{A^+_ℓ}[M^+_n ↾ τ^{<ℓ}] depends on (M_n, A^+_ℓ)/≅ only, and is derived very uniformly if it depends on A^+_ℓ/≅ and ‖M_n‖ (and n) only.
2.12 Definition. 1) Suppose the 0-1 context K^+ is independently derived from K by the function p (see Definition 2.11). We say p has uniform bounds p̄ if:

(a) p̄ = (p^d, p^u)

(b) for A^+ ∈ K^+ and ℓ = |A^+| ∈ ℕ, p^d_{A^+ ↾ τ^{[ℓ]}}, p^u_{A^+ ↾ τ^{[ℓ]}} are functions from ⋃ K_n to [0, 1]_ℝ (depending only on A^+ ↾ τ^{[ℓ]} up to isomorphism) such that for every random enough M_n:

(*) for every embedding f of A = A^+ ↾ τ into M^+_n, letting B^+ = f″(A^+) and A^+_ℓ = f(A^+) ↾ τ^{[ℓ]}, we have

p^d_{A^+ ↾ τ^{[ℓ]}}[M_n] ≤ p_{B^+ ↾ τ^{[ℓ]}}[M_n ↾ τ^{<ℓ}] ≤ p^u_{A^+ ↾ τ^{[ℓ]}}[M_n]

and p^x_{A^+ ↾ τ^{[ℓ]}}[M_n] depends on (A^+ ↾ τ^{[ℓ]})/≅ only (and, of course, on M_n and n); independently for the relevant distinct A^+_ℓ's. So we can write p^x_{A^+ ↾ τ^{[ℓ]}} (for M_n) for p^x_{A^+ ↾ τ^{[ℓ]}}[M_n].

(Note that essentially A^+ = A^+ ↾ τ^{[ℓ]}, as |A^+| = ℓ and the relations are assumed to be irreflexive, so we can waive the ↾, abusing notation.)

2) Suppose the 0-1 context K^+ is an expansion of K. We say that the drawing of M^+_n (or for K^+) obeys (the pair of functions) p̄ = (p^d, p^u) with error h^e over K if: for every ε ∈ ℝ^{>0} and A^+ ∈ K^+, letting ℓ = |A^+|, A = A^+ ↾ τ and given M_n ∈ K_n and an embedding f : A → M_n, assuming M^+_n ↾ B was already drawn for every B ∈ [M_n]^{≤ℓ} such that B ≠ f(A), and f is an embedding of A^+ ↾ τ^{<ℓ} into M^+_n, then the probability (by μ^+_{M_n}, modulo the assumptions above) that f embeds A^+ ↾ τ^{[ℓ]} into M^+_n is at least p^d_{A^+ ↾ τ^{[ℓ]}}[M_n](h^e[M_n])^{−ε} and at most p^u_{A^+ ↾ τ^{[ℓ]}}[M_n](h^e[M_n])^{ε} (so we assume always p^d_{A^+ ↾ τ^{[ℓ]}}[M_n] ≤ p^u_{A^+ ↾ τ^{[ℓ]}}[M_n] (h^e[M_n])^{ε}, at least for random enough M_n).

3) If p^x_{A^+ ↾ τ^{[ℓ]}}[M_n] depends only on ‖M_n‖ and n, we may write p^x_{A^+ ↾ τ^{[ℓ]}}[‖M_n‖, n]; if ‖M_n‖ determines n we may omit the latter (and when the intention is clear from context, also in other cases).

4) For A^+ ⊆^+ B^+ ∈ K^+_∞ and x ∈ {d, u} we let:

p^x_{A^+,B^+}[M_n] := Π{ p^x_{B^+ ↾ C}[M_n] : C ⊆ B^+ and C ⊈ A^+ }.

5) We omit h^e if h^e = 1; we say "simply" if h^e[M_n] = ‖M_n‖.

6) Let h_1, h_2, h^e be functions from ⋃_n K_n to ℝ^{>0}, ℝ^{≥1}, respectively. We say h_1 ≈_{h^e} h_2 if for every ε, for every random enough M_n, (h^e[M_n])^{−ε} ≤ h_1[M_n]/h_2[M_n] ≤ (h^e[M_n])^{ε}.
2.13 Remark. 1) "Obeys" (Definition 2.12(2)) means we have independence, but only approximately; so we shall later be able to give other distributions in which the drawings are independent and which give lower and upper bounds to the situation for K.
2) Among those variants we use mainly Definition 2.12(2), and even more the polynomial case.
2.14 Definition. In Definition 2.12 we say that p̄ is polynomial over h if:

(a) h is a function from ⋃_{n<ω} K_n to ℕ converging to infinity

(b) for every ε ∈ ℝ^{>0} and ζ ∈ ℝ^{>0}, for M_n random enough, ε > h[M_n]/‖M_n‖^ζ

(c) for every A^+ ↾ τ^{[ℓ]} = B^+ ↾ τ^{[ℓ]}, ℓ = |B|, B^+ ∈ K^+, for some α(A^+) ∈ ℝ we have:

(*) there are constants c^d_{A^+}, c^u_{A^+} such that:

p^d_{A^+}(M) = (c^d_{A^+})‖M‖^{−α(A^+)}/h[M]
p^u_{A^+}(M) = (c^u_{A^+})‖M‖^{−α(A^+)}/h[M];

so this is not necessarily the very uniform case.

Remark. Of course, we can replace "constant" by any slow enough function.
2.15 Claim. 1) Definition 2.11(1) is a particular case of Definition 2.12(2). Also Definition 2.11(2) is a particular case of Definition 2.12(1) (with p^u = p^d), and all of them are particular cases of Definition 2.10(1). Also, if we have 2.11(1) + 2.12(2), then we have 2.12(1).
2) In 2.10(1) necessarily:

(a) K_∞ ⊆ {A^+ ↾ τ : A^+ ∈ K^+_∞}

(b) if A^+ ⊆ B^+ ∈ K^+_∞ and (A^+ ↾ τ) ≤_i (B^+ ↾ τ), then A^+ ≤^+_i B^+

(c) if A^+ ⊆ B^+ ∈ K^+_∞ and A^+ ≤^+_s B^+, then (A^+ ↾ τ) ≤_s (B^+ ↾ τ).

Proof. Check.
§3 Regaining the context for K^+: reduction

We write down the expected values for nu(f, A^+, B^+, M^+_n) (see Definition 0.3(2) and 3.3 below); then we define ≤^+_b as what will be a variant of ≤^+_a if things are close enough to the expected value, and derive ≤^+_j (the parallel to ≤^+_i), ≤^+_t (the parallel to ≤_s) and ≤^+_{qr} (the parallel to ≤^+_{pr}) (all done in Definition 3.4). We phrase a natural sufficient condition for (the probability-condition part of) A^+ ≤^+_a B^+ (in ?) and show that when it is equivalent to a natural strengthening, then A^+ <^+_b B^+ ⇔ A^+ <^+_a B^+ and moreover ≤^+_j = ≤^+_i, ≤^+_t = ≤^+_s (and some obvious facts; all this in 3.7). We then prove that ≤^+_b has the formal properties of ≤^+_a, . . . (in 3.6). We then concentrate on the polynomial case, ending with sufficient conditions for K^+ being semi-nice (3.12–6.13).

3.1 Context. K, h̄ as in 2.1, and K^+, p̄ as in 2.12(2), and we use 2.12(4).
3.2 Definition. Assume f : A → M_n (an embedding), A_0 <_pr A_1 <_pr · · · <_pr A_k and A = A_0, B = A_k and Ā = ⟨A_ℓ : ℓ = 0, . . . , k⟩. Let for ℓ ≤ k:

T^{[ℓ]} = T^{[ℓ]}(f, A, B, Ā, M_n) := { g : g an embedding of A_ℓ into M_n extending f }

T(f, A, B, Ā, M_n) = ⋃_{ℓ≤k} T^{[ℓ]}(f, A, B, Ā, M_n).

In fact, we can omit A, B as they are determined by Ā.
3.3 Claim. Assume

(α) K obeys h̄ with error h^e_1

(β) the drawing of M^+_n obeys p̄ with error h^e_2 (see 2.12(2))

(γ) Ā = ⟨A_ℓ : ℓ ≤ k⟩ is a decomposition of A <_s B (for K)

(δ) ε ∈ ℝ^{>0} and M_n is random enough

(ε) f : A → M_n ∈ K_n an embedding, and A^+ ↾ τ = A, B^+ ↾ τ = B, A^+ ⊆ B^+ and A^+_ℓ := B^+ ↾ A_ℓ

(ζ) ε ∈ ℝ^{>0}.

Then the expected value of nu(f, A^+, B^+, M^+_n), under the distribution μ^+_{M_n} (see 1.4(1)) and the assumption that f will be an embedding of A^+ into M^+_n and M_n random enough, is:

at least Π_{ℓ<k} ( p^d_{A^+_ℓ,A^+_{ℓ+1}}[M_n] · h^d_{A_ℓ,A_{ℓ+1}}[M_n] ) · (h[M_n])^{−ε}

and at most Π_{ℓ<k} ( p^u_{A^+_ℓ,A^+_{ℓ+1}}[M_n] · h^u_{A_ℓ,A_{ℓ+1}}[M_n] ) · (h[M_n])^{ε}.

Proof. Straight. □_{3.3}
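The bounds in Claim 3.3 arise by multiplying, step by step along the decomposition, the extension count in the reduct (the h-factor) by the probability that the freshly drawn relations come out right on the new points (the p-factor). A toy numeric sketch of this product (all numbers invented; `err` stands for the folded-in error factor (h[M_n])^ε):

```python
def expected_nu_bounds(steps, err=1.0):
    # steps: one tuple (p_d, p_u, h_d, h_u) per step of the decomposition;
    # returns (lower, upper) bounds on E[nu(f, A+, B+, M+_n)] as in Claim 3.3
    lo = hi = 1.0
    for p_d, p_u, h_d, h_u in steps:
        lo *= p_d * h_d
        hi *= p_u * h_u
    return lo / err, hi * err

# one step: drawn relation holds with probability about 1/2,
# and the reduct offers between 10 and 12 extensions
lo, hi = expected_nu_bounds([(0.5, 0.5, 10.0, 12.0)])
```

When lower and upper products stay within a small factor of each other, different decompositions of (A_0, A_k) must give similar values, which is the point of the paragraph following the claim.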
So by Claim 3.3, if the upper and lower bounds are close enough, we can show that other decompositions of (A_0, A_k) give similar results.

In the following, for the interesting case (here), ≤^+_j, ≤^+_t, ≤^+_{qr} will be proved to be equal to ≤^{K^+}_i, ≤^{K^+}_s, ≤^{K^+}_{pr} respectively (but ≤^+_b will not be ≤^{K^+}_b).
3.4 Definition. 1) K^⊕_∞ = {A^+ : A^+ ∈ K^+ and A^+ ↾ τ ∈ K_∞} and, of course, K^+_∞ = {A^+ : A^+ ∈ K^+ and 0 < limsup_n Prob_{μ^+_n}(A^+ embeddable into M^+_n)}.

2) For A^+, B^+ ∈ K^⊕_∞, let A^+ ≤^+_b B^+ hold iff (A^+ ↾ τ) ≤_a (B^+ ↾ τ), or (A^+ ↾ τ) ≤_s (B^+ ↾ τ) and

⊠^0_{A^+,B^+}: for some k ∈ ℕ, for every ε > 0 we have

1 = Lim_n Prob_{μ_n}( ε is larger than the probability that for some embedding f of (A^+ ↾ τ) into M_n, the number of extensions g of f to an embedding of B^+ into M^+_n is ≥ k, by the distribution μ^+_{M_n} under the assumption that f embeds A^+ into M^+_n ).

3) A^+ ≤^+_j B^+ if for every A^+_1 we have: A^+ ⊆^+ A^+_1 <^+ B^+ ⇒ A^+_1 <^+_b B^+.

4) For A^+, B^+ ∈ K^⊕_∞, let A^+ ≤^+_t B^+ if A^+ ⊆^+ B^+ and for no C^+ ∈ K^⊕_∞ do we have A^+ <^+_j C^+ ⊆^+ B^+.

5) For A^+, B^+ ∈ K^⊕_∞, let A^+ ≤^+_{qr} B^+ if A^+ <^+_t B^+ but for no C^+ do we have A^+ <^+_t C^+ <^+_t B^+.

6) We say Ā^+ is a ≤^+_{qr}-decomposition of A^+ <^+_s B^+ if Ā^+ = ⟨A^+_ℓ : ℓ ≤ k⟩, A^+_ℓ <^+_{qr} A^+_{ℓ+1}, A^+_0 = A^+, A^+_k = B^+.

7) K^⊗_∞ = { A^+ : A^+ ∈ K^+ and for some Ā = ⟨A_ℓ : ℓ ≤ k⟩ and Ā^+ = ⟨A^+_ℓ : ℓ ≤ k⟩ we have: A_ℓ = A^+_ℓ ↾ τ, A_ℓ <_{qr} A_{ℓ+1}, ∅ ≤_i A_0 and for some ε ∈ ℝ^{>0} we have

0 < Limsup_n Prob_{μ_n}( ε < Π_{ℓ<k} ( p^u_{A^+_ℓ,A^+_{ℓ+1}}[M_n] · h^u_{A_ℓ,A_{ℓ+1}}[M_n] ) ) }.
3.5 Claim. Assume A^+ ⊆ B^+ ⊆ D^+ and A^+ ⊆ C^+ ⊆ D^+ are in K^⊕_∞ and D^+ = B^+ ∪ C^+.
1) If A^+ ≤^+_b B^+, then C^+ ≤^+_b D^+.
2) If A^+ ≤^+_j B^+, then C^+ ≤^+_j D^+.

Proof. 1) Reflect.
2) Follows from part (1). □_{3.5}
0.1 LAWS SH550 33
3.6 Claim. 1) K
+

K
+
are closed under submodels and
isomorphisms
[why? reread Denition 1.3(1),3.4(1), 3.4(7)].
2) If A
+
1

+
j
A
+
2

+
C
+
K
+

and B
+

+
C
+
K
+

and A
+
1
B
+
then
B
+

+
j
B
+
A
+
2
. If A
+

+
B
+

+
C
+
, A
+

+
j
C
+
then B
+

+
j
C
+
and
B
+

+
b
C
+
[why? by Denition 3.4(3) and 3.5].
3) On K

the relation
+
b
is a partial order and also the relation
+
j
is a partial
order
[why? rst we prove that
+
b
is a partial order so assume A
+
0

+
b
A
+
1

+
b
A
+
2

K
+

. For = 1, 2 let k

N
>0
be such that the condition in
0
A
+

,A
+
+1
of 3.4(2)
holds. Let k =: k
0
+k
1
N. So now check.
Suppose that A
+

+
j
B
+

+
j
C
+
but assume toward contradiction that (A
+

+
j
C
+
), so by the denition of
j
for some D
+
, A
+

+
D
+
<
+
C
+
, (D
+
<
+
b
C
+
).
Now if B
+

+
D
+
we get contradiction to B
+

j
C
+
, so assume (B
+
D
+
).
By monotonicity A
+

+
j
B
+
implies D
+
<
+
j
D
+
B
+
hence by the denition of

+
j
we have D
+

+
b
D
+
B
+
. Also as B
+

+
j
C
+
, necessarily D
+
B
+

+
j
C
+
,
so as
+
b
is transitive clearly D
+

+
b
C
+
as required.]
4) If A⁺ ≤⁺ C⁺ are in K⁺_∞ then for one and only one B⁺ ≤⁺ C⁺ in K⁺_∞ we have A⁺ ≤⁺_j B⁺ ≤⁺_t C⁺.
[Why? Let B⁺ ≤⁺ C⁺ be maximal such that A⁺ ≤⁺_j B⁺; it exists as C⁺ is finite and A⁺ ≤⁺_j A⁺ (because A ≤_i A where A = A⁺↾τ); now B⁺ ≤⁺_t C⁺ by 3.6(3) + Definition 3.4(4). Hence at least one B⁺ exists. So suppose A⁺ ≤⁺_j B⁺_ℓ ≤⁺_t C⁺ for ℓ = 1, 2 and B⁺_1 ≠ B⁺_2, so without loss of generality B⁺_2 \ B⁺_1 ≠ ∅. By 3.6(2), B⁺_1 ≤⁺_j B⁺_1 ∪ B⁺_2, hence B⁺_1 <⁺_j B⁺_1 ∪ B⁺_2 ≤⁺ C⁺, but this contradicts B⁺_1 ≤⁺_t C⁺ (see Definition 3.4(3))].
5) If A⁺ <⁺_t B⁺ then there is a <⁺_qr-decomposition Ā⁺ of A⁺ <⁺_t B⁺ [see Definition 3.4(5), remembering B⁺ is finite].
6) If A⁺ ≤⁺_t B⁺ and C⁺ ≤⁺ B⁺ then C⁺ ∩ A⁺ ≤⁺_t C⁺
[why? otherwise for some C⁺_1 we have C⁺ ∩ A⁺ <⁺_j C⁺_1 ≤⁺ C⁺; then by 3.6(2) we have A⁺ <⁺_j A⁺ ∪ C⁺_1, a contradiction to A⁺ ≤⁺_t B⁺].
7) The relations ≤⁺_b, ≤⁺_j, ≤⁺_t, ≤⁺_qr are preserved by isomorphisms.
8) If A⁺ <⁺_qr B⁺ then for every b ∈ B⁺ \ A⁺ we have (A⁺ + b) ≤⁺_j B⁺
[why? if not, then for some C⁺ we have (A⁺ + b) ≤⁺ C⁺ <⁺_t B⁺, but A⁺ <⁺_qr B⁺ ⇒ A⁺ <⁺_t B⁺ ⇒ A⁺ <⁺_t C⁺ (by Definition 3.4(5), 3.6(6) respectively), so A⁺ <⁺_t C⁺ <⁺_t B⁺, contradicting A⁺ <⁺_qr B⁺].
9) T⁺_∞ is a consistent (first order) theory.  □_{3.6}
3.7 Claim. Assume that K⁺ obeys p̄ (over K), K obeys h̄ and
(∗) if A⁺, B⁺ ∈ K_∞, cℓ(A⁺) <_s cℓ(B⁺), Ā = ⟨A_ℓ : ℓ ≤ n⟩ is a decomposition, so A = A_0 ≤_s … ≤_s A_n = B, A⁺_ℓ = B⁺↾A_ℓ, then ⊛⁰_{A⁺,B⁺} of 3.4(2) and ⊛¹_{A⁺,B⁺,Ā} and ⊛²_{A⁺,B⁺,Ā} below are equivalent, where
⊛¹_{A⁺,B⁺,Ā}: for some ε > 0 we have
1 = lim_n Prob_{μ_n}( ε > ∏_{ℓ<k} ( p^u_{A⁺_ℓ,A⁺_{ℓ+1}}[M_n] · h^u_{A_ℓ,A_{ℓ+1}}[M_n] ) ).

⊛²_{A⁺,B⁺,Ā}: for some ε ∈ ℝ^{>0} we have⁸

1 = lim_n Prob_{μ_n}( |M_n|^{−ε} > ∏_{ℓ<k} ( p^u_{A⁺_ℓ,A⁺_{ℓ+1}}[M_n] · h^u_{A_ℓ,A_{ℓ+1}}[M_n] ) ).
1) If A⁺ <⁺_b B⁺ (in K_∞) then for some m ∈ ℕ we have:

⊠ 1 = lim_n Prob_{μ⁺_n}( there is no f : A⁺ → M⁺_n and g_ℓ : B⁺ → M⁺_n for ℓ < m such that ⟨g_ℓ : ℓ < m⟩ is a disjoint sequence of extensions of f ).

2) If A⁺ <⁺_j B⁺ (in K_∞) then for some k ∈ ℕ we have:

⊠ 1 = lim_n Prob_{μ⁺_n}( there is no f : A⁺ → M⁺_n and g_ℓ : B⁺ → M⁺_n for ℓ < k such that ⟨g_ℓ : ℓ < k⟩ is a sequence of distinct extensions of f ).

3) If A⁺ ≤⁺_b B⁺ ∈ K⁺_∞, then A⁺ ≤⁺_a B⁺.
4) If A⁺ ≤⁺_j B⁺ ∈ K⁺_∞, then A⁺ ≤⁺_i B⁺.
5) If cℓ(A⁺) ≤_a cℓ(B⁺) and A⁺ ≤⁺ B⁺ ∈ K⁺_∞, then A⁺ ≤⁺_b B⁺.
6) If cℓ(A⁺) ≤_i cℓ(B⁺) and A⁺ ≤ B⁺ ∈ K⁺_∞, then A⁺ ≤⁺_j B⁺.
3.8 Remark. 1) Are there cases we may be interested in which are not covered by this claim? If K obeys h̄, and h^u_{A,B}(n) ≤ n^{1/(log n)^{1/2}} · h^d_{A,B}(n).
2) We may rephrase the assumption in 3.7 to cover those cases.
3) Note: if all is polynomial, then ⊛²_{A⁺,B⁺,Ā} is equivalent to ⊛¹_{A⁺,B⁺,Ā}.
4) We can use h̄ as in 2.1(3), 2.1(3A); see later.
Proof. 1) Let ε > 0 be as in ⊛²_{A⁺,B⁺,Ā}, and let m ∈ ℕ be such that mε > |A⁺|, so ζ =: mε − |A⁺| ∈ ℝ^{>0}. Let ⟨A_ℓ : ℓ ≤ k⟩, ⟨A⁺_ℓ : ℓ ≤ k⟩ be as in (∗); considering our aim, without loss of generality A_0 = A (see Definition 3.4(1)).
⁸ Of course, we can think of cases in which there are few copies of A⁺_ℓ; then |M_n| can be replaced by such upper bounds; this has no influence in the polynomial case.
So for every M ∈ K, |M|^{−mε} · |{f : f an embedding of A into M}| is ≤ |M|^{−(mε−|A|)} = |M|^{−ζ}, hence it suffices to prove:

(∗) if M_n is random enough and f is an embedding of A into M_n, then
Prob_{μ_n[M_n]}( there is a sequence of m disjoint extensions of f to an embedding of B⁺ into M⁺_n, under the assumption that f is an embedding of A⁺ into M⁺_n ) ≤ |M_n|^{−mε}.

Now if M_n is random enough then by 7.7(1) we know nu(f, A, B, M_n) ≤ ∏_{ℓ<k} h^u_{A_ℓ,A_{ℓ+1}}[M_n], i.e. ex(f, A, B, M_n) has ≤ ∏_{ℓ<k} h^u_{A_ℓ,A_{ℓ+1}}[M_n] members. Let

F = F_m = F_m[M_n] = { f̄ : f̄ = ⟨f_ℓ : ℓ < m⟩, f_ℓ ∈ ex(f, A, B, M_n) and f̄ is disjoint over A }.

So |F_m| ≤ |ex(f, A, B, M_n)|^m ≤ ( ∏_{ℓ<k} h^u_{A⁺_ℓ,A⁺_{ℓ+1}}[M_n] )^m.

Hence, if we draw M⁺_n (by the distribution μ⁺_n[M_n]) under the assumption that f is an embedding of A⁺ into M⁺_n, then the expected value of
|{ f̄ ∈ F_m[M_n] : for ℓ < m, f_ℓ is an embedding of B⁺ into M⁺_n }| is

∑_{f̄∈F} Prob_{μ⁺_n[M_n]}( for ℓ < m, f_ℓ is an embedding of B⁺ into M⁺_n | f is an embedding of A⁺ into M⁺_n )

≤ ∑_{f̄∈F} ( ∏_{ℓ<k} p^u_{A⁺_ℓ,A⁺_{ℓ+1}}[M_n] )^m = |F| · ( ∏_{ℓ<k} p^u_{A⁺_ℓ,A⁺_{ℓ+1}}[M_n] )^m

≤ ( ∏_{ℓ<k} h^u_{A⁺_ℓ,A⁺_{ℓ+1}}[M_n] )^m · ( ∏_{ℓ<k} p^u_{A⁺_ℓ,A⁺_{ℓ+1}}[M_n] )^m = ( ∏_{ℓ<k} ( p^u_{A⁺_ℓ,A⁺_{ℓ+1}}[M_n] · h^u_{A⁺_ℓ,A⁺_{ℓ+1}}[M_n] ) )^m,

but if M_n is random enough then by ⊛²_{A⁺,B⁺,Ā} (i.e. by the assumption of 3.7 and ⊛⁰_{A⁺,B⁺}, which holds as we are assuming A⁺ <⁺_b B⁺) this last number is ≤ ( |M_n|^{−ε} )^m = |M_n|^{−mε}.
As said above, this suffices.
2) By the Δ-system lemma and 3.7(1) (and the definitions).
3) Follows by 3.7(1).
4) Follows from 3.7(2).
5) Read the definitions.  □_{3.7}
3.9 Claim. 1) A sufficient condition for ≤⁺_i = ≤⁺_j ↾ K⁺_∞ is:
(∗) like (∗) of 3.7, but we add another equivalent condition
⊛^{1,d}_{A⁺,B⁺,Ā}: for some ε > 0 we have

1 = lim_n Prob_{μ_n}( ε > ∏_{ℓ<k} ( p^d_{A⁺_ℓ,A⁺_{ℓ+1}}[M_n] · h^d_{A_ℓ,A_{ℓ+1}}[M_n] ) ).

2) Note that
⊛²_{A⁺,B⁺,Ā} ⇒ ⊛¹_{A⁺,B⁺,Ā} ⇒ ⊛^{1,u}_{A⁺,B⁺,Ā} and ⊛²_{A⁺,B⁺,Ā} ⇒ ⊛⁰_{A⁺,B⁺,Ā} ⇒ ⊛^{1,u}_{A⁺,B⁺,Ā}.

Proof. By 3.7(4), it suffices to prove, assuming A⁺ ≤⁺ B⁺ & ¬(A⁺ ≤⁺_j B⁺), that ¬(A⁺ ≤⁺_i B⁺). As ¬(A⁺ ≤⁺_j B⁺), necessarily there is A⁺_1 such that A⁺ ≤⁺ A⁺_1 <⁺ B⁺ and ¬(A⁺_1 ≤⁺_b B⁺).
The rest is easy, too.  □_{3.9}
3.10 Definition. 1) Assume
(∗) K obeys h̄ with error h^e, and the drawing of K⁺ obeys p̄ with error h^e_1.
We say that (A⁺, A⁺_0, B⁺, D⁺) is a pretender to semi⁻(k, r)-good for (K⁺, K, h̄, h^e_0, p̄, h^e_1) if, for some witness A⁺_1:
(a) A⁺_0 ≤⁺_j A⁺_1 ≤⁺_j A⁺, A⁺_1 ≤⁺_t D⁺, B⁺ ≤⁺ D⁺
(b) (A⁺, A⁺_0, B⁺, D⁺) is semi⁻(k, r)-good for (K, h̄, h), witnessed by A⁺_1↾τ
(c) if D⁺ <⁺ D⁺_1 ∈ K_∞, A⁺_1 <_t D⁺_1 and D⁺ <_i D⁺_1 and C⁺_1 <⁺_j C⁺_2 ⊆ D⁺_1 and C⁺_1 ≤⁺ A⁺_1 and |C⁺_2| ≤ k and h⁺_{A⁺_0,D⁺_1} ≤ h⁺ ≤ h⁺_{A⁺_0,D⁺}, then C⁺_2 ⊆ D⁺.
2) In part (1) we write semi instead of semi⁻ if A⁺_1 = A⁺.
3.11 Claim. 1) Assume K obeys h̄ which is bounded by h, and the drawing of K⁺ obeys p̄ which is bounded by h. Sufficient conditions for (∗)_1 + (∗)_2 + (∗)_3 are ⊗_1 + ⊗_2, where
(∗)_1 (K⁺_∞, ≤⁺_j ↾ K⁺_∞, ≤⁺_t ↾ K_∞, ≤⁺_qr ↾ K_∞) = (K⁺_∞, ≤⁺_i, ≤⁺_s, ≤⁺_pr)
(∗)_2 K⁺ is weakly nice
(∗)_3 K⁺ obeys h̄⁺ with error h^e, where h̄⁺ = (⁺h^u, ⁺h^d) and, if A⁺ <⁺_qr B⁺, then
 (α) ⁺h^u_{A⁺,B⁺}[M⁺_n] = ⁺h^u_{A⁺,B⁺}[M_n] = h^u_{A⁺↾τ,B⁺↾τ}[M⁺_n] · p^u_{A⁺,B⁺}[M⁺_n]
 (β) ⁺h^d_{A⁺,B⁺}[M⁺_n] = ⁺h^d_{A⁺,B⁺}[M_n] = h^d_{A⁺↾τ,B⁺↾τ}[M⁺_n] · p^d_{A⁺,B⁺}[M⁺_n]
 (γ) h^e[M⁺_n] ≤ h_1[M⁺_n] · h_2[M⁺_n], which goes to infinity
⊗_2 for A⁺ <_qr B⁺ and ε ∈ ℝ^{>0}, for random enough M_n we have: if f embeds A⁺↾τ into M_n then, under the assumption that f embeds A⁺ into M⁺_n, the probability that nu(f, A⁺, B⁺, M⁺_n) is not in the interval
( ⁺h^d_{A⁺,B⁺}[M_n] / (h^e[M_n])^ε , ⁺h^d_{A⁺,B⁺}[M⁺_n] · (h^e[M_n])^ε )
is < 1/|M_n|^{1/ε}
⊗_3 for A⁺ <⁺_qr B⁺, ε ∈ ℝ^{>0} and m ∈ ℕ, for random enough M_n and f : A⁺ → M_n we have ⁺h^d_{A⁺,B⁺}[M_n]/(h^e_3[M_n])^ε ≥ m.
2) In part (1), if in addition ⊗⁺_2 holds, then in addition (∗)_4 + (∗)_5 hold, where
(∗)_4 if (A⁺, A⁺_0, B⁺, D⁺) is a pretender to being semi⁻(k, r)-good for (K⁺, K, h̄, h^e_0, p̄, h^e_1), then it is semi⁻(k, r)-good for (K⁺_∞, h̄⁺, h⁺)
(∗)_5 K⁺ is semi-nice, moreover even (K⁺, h̄⁺, h⁺) is semi-nice, and
⊗⁺_2 if (A⁺, A⁺_0, B⁺, D⁺) is a pretender to semi⁻(k, r)-good, M⁺_n is random enough, f : A⁺ → M⁺_n and [C⁺_1 <⁺_j C⁺_2 ⊆ M⁺_n & C⁺_1 ⊆ f(A) & |C⁺_2| ≤ r ⇒ C⁺_2 ⊆ f(A⁺)], then the inequalities in ⊗_2 hold for nu_k(f, A⁺, B⁺, D⁺).

Proof. Straight.  □_{3.11}
Now we deal with the polynomial case.
3.12 Definition. Assume
(∗)_{ap} (a) K is a 0-1 law context
(b) K obeys h̄ = (h^d, h^u) (see Definition 2.1)
(c) h̄ is polynomial over h_1 with the function α(−, −) (see Definition 2.3)
(d) the 0-1 law context K⁺ is an expansion of K obeying the pair of functions p̄ = (p^d, p^u) (see Definition 2.12(2))
(e) p̄ is polynomial over h_2 (see Definition 2.14) by the function β(−); here let A⁺ ≤⁺ B⁺ mean A⁺ ⊆ B⁺ ∈ K_∞
(f) for every ε > 0, for random enough M_n: h_1[M_n], h_2[M_n] ≤ |M_n|^ε.
1) For A⁺ ≤⁺ B⁺ satisfying cℓ(A⁺) <_s cℓ(B⁺) we define
γ(A⁺, B⁺) =: α(A⁺, B⁺) + Σ{ β(C⁺) : C⁺ ⊆ B⁺, C⁺ ⊄ A⁺ }.
2) For A⁺ ≤⁺ B⁺ let A⁺ ≤⁺ᵖ_b B⁺ mean: for some A⁺_1 we have:
(i) A⁺ ≤⁺ A⁺_1 ≤⁺ B⁺
(ii) cℓ(A⁺) ≤_i cℓ(A⁺_1) ≤_s cℓ(B⁺)
(iii) A⁺ ≠ A⁺_1, or for some A⁺_2 we have A⁺_1 <⁺ A⁺_2 ≤⁺ B⁺ and γ(A⁺_1, A⁺_2) < 0.
3) For A⁺ ≤⁺ B⁺ let A⁺ ≤⁺ᵖ_j B⁺ mean: for every A⁺_1 we have A⁺ ≤⁺ A⁺_1 <⁺ B⁺ ⇒ A⁺_1 <⁺ᵖ_b B⁺.
4) For A⁺ ≤⁺ B⁺ let A⁺ ≤⁺ᵖ_t B⁺ mean: for no A⁺_1 do we have A⁺ <⁺ᵖ_j A⁺_1 ≤⁺ B⁺.
5) For A⁺ ≤⁺ B⁺ let A⁺ ≤⁺ᵖ_qr B⁺ mean: A⁺ ≤⁺ᵖ_t B⁺ but for no A⁺_1 do we have A⁺ <⁺ᵖ_t A⁺_1 <⁺ᵖ_t B⁺.
6) K⁺ᵖ_∞ = { A⁺ ∈ K⁺ : letting cℓ(A⁺) ≤_i A_1 <_s A⁺ and A⁺_1 =: A⁺↾A_1, we have: [A⁺_2 ⊆ A⁺_1 ⇒ β(A⁺_2) = 0] and γ(A⁺_1, A⁺) ≥ 0 and
0 < lim inf_n Prob_{μ_n}( we can embed A⁺_1 into M_n ) }.
3.13 Discussion: Note that there can be A⁺ such that the sequence ⟨Prob_{μ_n}(A⁺ embeddable into M⁺_n) : n ∈ ℕ⟩ does not converge, but essentially this occurs only when ¬(cℓ(A⁺) ≤_i A⁺). More exactly, if cℓ(A⁺) ≤_i (A⁺_1↾τ) ≤_s (A⁺↾τ) then, assuming we know the answer for "A⁺_1 embeddable into M_n", we almost surely know whether A⁺ can be embedded into M⁺_n.
We first note:

3.14 Fact. Assume (∗)_{ap} of 3.12 and
⊗_1 the irrationality assumption for (K, h̄, K⁺, p̄), which means: if A⁺ <⁺ᵖ_qr B⁺ and cℓ(A⁺) <_s cℓ(B⁺), then γ(A⁺, B⁺) ≠ 0.
1) If A⁺ ⊆ B⁺ ⊆ C⁺ and cℓ(A⁺) <_s cℓ(B⁺) <_s cℓ(C⁺), then γ(A⁺, B⁺) + γ(B⁺, C⁺) = γ(A⁺, C⁺) (of course, ⊗_1 is not used).
2) If A⁺ ≤⁺ B⁺ ∈ K_∞ and cℓ(A⁺) <_s cℓ(B⁺) and γ(A⁺, B⁺) < 0, then for some m, for every random enough M⁺_n and any embedding f : A⁺ → M⁺_n, there are no m disjoint extensions g : B⁺ → M⁺_n of f.
2A) The four conditions in (∗) of 3.9 (see also (∗) of 3.7) are equivalent (for any A⁺, B⁺, Ā as there).
3) ≤⁺ᵖ_x = ≤⁺_x for x ∈ {b, j, t, qr}, and K⁺_∞ = K⁺ᵖ_∞.
4) Assume A⁺ ⊆ B⁺ ⊆ D⁺, A⁺ ⊆ C⁺ ⊆ D⁺, D⁺ = B⁺ ∪ C⁺, B⁺ ∩ C⁺ = A⁺, A⁺ <⁺ᵖ_t B⁺ and cℓ(C⁺) ≤_s cℓ(D⁺); then γ(A⁺, B⁺) ≤ γ(C⁺, D⁺).
Proof. 1)

γ(A⁺, C⁺) = α(A⁺, C⁺) + Σ{ β(D⁺) : D⁺ ⊆ C⁺ and D⁺ ⊄ A⁺ }
= α(A⁺, B⁺) + α(B⁺, C⁺) + Σ{ β(D⁺) : D⁺ ⊆ C⁺ and D⁺ ⊄ A⁺ }
= α(A⁺, B⁺) + α(B⁺, C⁺) + Σ{ β(D⁺) : D⁺ ⊆ B⁺, D⁺ ⊄ A⁺ } + Σ{ β(D⁺) : D⁺ ⊆ C⁺, D⁺ ⊄ B⁺ }
= γ(A⁺, B⁺) + γ(B⁺, C⁺).

2) For M_n which is random enough and f : (A⁺↾τ) → M_n, the set Y = ex(f, A, B, M_n) has ≤ h^u_{A,B}(M_n) members (see Definition 2.1). Hence the set Y_m = { ḡ : ḡ = ⟨g_ℓ : ℓ < m⟩ is a sequence of members of Y disjoint over f } has ≤ (h^u_{A,B}(M_n))^m members, which is ≤ (h_1[M_n])^{t_1} · |M_n|^{α(A,B)·m}. Now under the assumption f : A⁺ → M⁺_n, for each ḡ ∈ Y_m the probability of ∧_{ℓ<m}(g_ℓ embeds B⁺ into M⁺_n), for appropriate t_2 ∈ ℕ, is:

≤ ( h_2[M_n]^{t_2} · |M_n|^{Σ{β(C⁺) : C⁺ ⊆ B⁺, C⁺ ⊄ A⁺}} )^m.

So the expected value is

≤ h_1[M_n]^{t_1} · h_2[M_n]^{m·t_2} · ( |M_n|^{α(A⁺,B⁺)+Σ{β(C⁺) : C⁺ ⊆ B⁺, C⁺ ⊄ A⁺}} )^m = h_1[M_n]^{t_1} · h_2[M_n]^{m·t_2} · |M_n|^{m·γ(A⁺,B⁺)}.

So as γ(A⁺, B⁺) < 0, and by the assumptions on h_1, h_2, we have: for m large enough this probability is < |M_n|^{−(|A|+1)}, so the conclusion follows.
2A) Straight.
3) Assume A⁺ ≤ B⁺ and we shall prove that A⁺ ≤⁺_b B⁺ ⟺ A⁺ ≤⁺ᵖ_b B⁺. Let A⁺_1 be such that A⁺ ≤⁺ A⁺_1 ≤⁺ B⁺ and cℓ(A⁺) ≤_i cℓ(A⁺_1) ≤_s cℓ(B⁺). Now if A⁺ ≠ A⁺_1 then both A⁺ ≤⁺_b B⁺ and A⁺ ≤⁺ᵖ_b B⁺ hold and we are done, so assume A⁺ = A⁺_1. Now compare condition ⊛⁰_{A⁺,B⁺} of 3.4(2) and "γ(A⁺, B⁺) < 0": (as in the proof of part (2)) they are the same. So ≤⁺ᵖ_b = ≤⁺_b, and the other equalities follow.
4) Let M_n be random enough, but such that some f_1 embeds C⁺ into it (note C⁺ ∈ K⁺ᵖ_∞ and see Definition 3.12(6)). Assume f_1 embeds C⁺ into M⁺_n and let f_0 = f_1 ↾ A⁺; now the expected value of the number of g_0 : B⁺ → M⁺_n is ≈ |M_n|^{γ(C⁺,D⁺)}, so the desired inequality follows.  □_{3.14}
3.15 Claim. 1) Assume (∗)_{ap} of Definition 3.12.
A sufficient condition for (∗)_1 + (∗)_2 + (∗)_3 below is ⊗_1 + ⊗_2 + ⊗_3 below, where:
(∗)_1 (K⁺ᵖ_∞, ≤⁺ᵖ_j ↾ K⁺_∞, ≤⁺ᵖ_t ↾ K_∞, ≤⁺ᵖ_qr ↾ K_∞) = (K⁺_∞, ≤⁺_i, ≤⁺_s, ≤⁺_pr)
(∗)_2 K⁺ is weakly nice
(∗)_3 K⁺ obeys a pair h̄ which is polynomial over some h^e with the function (A⁺, B⁺) ↦ γ(A⁺, B⁺) when A⁺ <⁺_s B⁺
⊗_1 for some function h_3 : K → ℕ: for every ε ∈ ℝ⁺, for random enough M_n we have lim[h_3(M_n)/|M_n|^ε] = 0, and h_3 is somewhat above h_1 · h_2, i.e. h_1[M_n] · h_2[M_n] ≤ h_3(M_n)
⊗_2 the irrationality assumption for (K, h̄, K⁺, p̄): if A⁺ <⁺ᵖ_qr B⁺ and cℓ(A⁺) <_s cℓ(B⁺), then γ(A⁺, B⁺) ≠ 0
⊗_3 if A⁺ <⁺_qr B⁺, then for every random enough M⁺_n, for every f_0 : A⁺ → M⁺_n we have:
|M_n|^{γ(A⁺,B⁺)}/h_3[|M_n|] ≤ |{ f_1 : f_1 is an embedding of B⁺ into M⁺_n extending f_0 }| ≤ |M_n|^{γ(A⁺,B⁺)} · h_3[|M_n|].
2) In part (1) we add (∗)_4, (∗)_5 if we assume also ⊗⁺_3:
(∗)_4 if (A⁺, A⁺_0, B⁺, D⁺) is a pretender to semi⁻(k, r)-good, then it is semi⁻(k, r)-good
(∗)_5 (K⁺, h̄⁺, h⁺) is semi-(k, r)-good
where
⊗⁺_3 if (A⁺, A⁺_0, B⁺, D⁺) is a pretender to semi-(k, r)-good, then for every random enough M_n and f : A⁺ → M_n, letting f_0 = f ↾ A⁺_0, we have
|M_n|^{γ(A⁺_0,D⁺)}/h_3(M_n) ≤ nu_k(f_0, A_0, B, D, M_n) ≤ |M_n|^{γ(A⁺_0,D⁺)} · h_3(M_n).

Proof. Straightforward.  □_{3.15}
3.16 Conclusion. Assume (∗)_{ap} of 3.12 and ⊗_1 of 3.14 (the irrationality) and, for simplicity, let h^e(M) = |M|.
Then
(a) the conditions of 3.15(1), 3.15(2) hold, hence their conclusions;
(b) for A⁺ ⊆ B⁺ ∈ K_∞ we have
(i) A⁺ ⊆ B⁺ ∈ K⁺_∞ and
(ii) A⁺ ≤⁺_b B⁺ ⟺ A⁺ ≤_a B⁺
(iii) A⁺ ≤⁺_j B⁺ ⟺ A⁺ ≤⁺_i B⁺ ⟺ A⁺↾τ ≤_i B⁺↾τ
(iv) A⁺ ≤⁺_t B⁺ ⟺ A⁺ ≤⁺_s B⁺ ⟺ A⁺↾τ ≤_s B⁺↾τ
(v) A⁺ ≤⁺_b B⁺ ⟺ A⁺ ≤⁺_a B⁺ ⟺ A⁺↾τ ≤_a B⁺↾τ
(c) if A⁺ <⁺_qr B⁺, then for every random enough M_n, for every f⁺ : A⁺ → M⁺_n we have, for some constant c:

h^d_{A⁺↾τ,B⁺↾τ}[M_n] · p^d_{A⁺,B⁺}[M_n] − √( h^d_{A⁺↾τ,B⁺↾τ}[M_n] · p^d_{A⁺,B⁺}[M_n] ) · c·(log |M_n|)
≤ nu(f⁺, A⁺, B⁺, M⁺_n)
≤ h^u_{A⁺↾τ,B⁺↾τ}[M_n] · p^u_{A⁺,B⁺}[M_n] + √( h^d_{A⁺↾τ,B⁺↾τ}[M_n] · p^u_{A⁺,B⁺}[M_n] ) · c·(log |M_n|)

(d) Assume B⁺_0 ≤⁺ B⁺, A⁺ ≤⁺ A⁺_0 ≤⁺_b B⁺. Then (A⁺, A⁺_0, B⁺_0, B⁺) is a pretender to semi⁻(k, r)-good for (K⁺, K, h̄, h_0, p̄, h_1) iff (A⁺, A⁺_0, B⁺_0, B⁺) is semi⁻(k, r)-good for (K⁺_∞, h̄⁺, h^e)
(e) inequalities for (d) parallel to those in (c).
Proof. All is reduced to the case of the binomial distribution by §4, §5. In more detail:

Stage A: Clauses (b)(i)-(v) and (d), the first "iff".
The first is by 3.14; the second (i.e. clause (d)) is left to the reader.

Stage B: Clauses (c) + (e).
Let ε > 0 and let M_n be random enough. We need to consider all f ∈ F = { f : f an embedding of A⁺ into M_n }; for each of them the appropriate inequality should hold. So it is enough if the probability of failure is < |M_n|^{−1/ε}/|F|, so |M_n|^{−|A|−1−1/ε} suffices. Success means that, under the hypothesis f : A⁺ → M⁺_n, we should consider the candidates g ∈ G = { g : g an embedding of B⁺ into M_n extending f }: how many of them will be in G⁺ = ex(f, A⁺, B⁺, M_n)? Well, the events "g ∈ G⁺" are not independent (in μ⁺_{M_n}, of course).

Note: We should prove that the probability of deviating from the expected number of extensions of f : A⁺ → M⁺_n is much smaller than the number of f's, which is ≤ |M_n|^{|A|}. By 4.11 we can restrict ourselves to the separated case (see Definition 4.10(3)). By 4.7 we can ignore the case that in G[M⁺_n] (for our f, A, B or f, A, A_0, B_0, B, k as in 4.1(2)) some connectivity component has ≥ c elements, for some c depending on A, B only.

Now our problem is a particular case of the context in 5.1, and by the previous sentence we can deal separately with giving an interval into which the number of components of isomorphism type t falls, for each t with < c elements (see 5.3). Actually each one is another instance of 5.1, only "separated" is replaced by "weakly separated". So clearly it is enough to deal with L_{t∗}[M∗_n] (t∗ the isomorphism type of a singleton). With well known estimates, 5.8(2) gives very low probability for the value L_{t∗}[M∗_n] being too small. The dual estimate is given by 5.10 (remember: in our case having instances of the new relations has low probability, so we can use 5.8(2) to show that this is small even for ζ < 1 very near to 1).

For clause (e), just note that the number of extensions violating the desired conclusion is much smaller (the definition of pretender is made just for this).

Stage C: Rest.
Straight by now.  □_{3.16}
4 Clarifying the probability problem

We are still in the context 3.1. Our aim is to clarify the probability problem to which we reduced our aim in 3.15 (in the polynomial case). So our aim is to get good enough upper bounds and lower bounds for the h_{A⁺,B⁺}(M_n).
4.1 Hypothesis. 1) M_n ∈ K_n, A⁺ ≤_qr B⁺, Ā = ⟨A_ℓ : ℓ ≤ k⟩, A = A⁺↾τ, B = B⁺↾τ, A = A_0 <_pr A_1 <_pr ⋯ <_pr A_k = B, A⁺_ℓ = B⁺↾A_ℓ, and an embedding f∗ : A → M_n.
We try to approximate nu(f∗, A⁺, B⁺, M⁺_n) under the condition "f∗ embeds A⁺ into M⁺_n".
2) (a variant) Assume further that (A⁺, A⁺_0, B⁺_0, B⁺) is a pretender to being semi-(k, r)-good, B∗_1 = B ∪ B_1, f∗_1 : A → M_n. We try to approximate nu_k(f∗, A⁺, B⁺_1, B⁺, M⁺_n) under the assumption cℓ^r(f(A⁺_0), M⁺_n) ⊆ A⁺.

Notation: Lastly, let x ∈ {d, u}.

We shall speak mainly about 4.1(1), and then indicate the changes for 4.1(2).
4.2 Notation. T_ℓ = T_ℓ[f∗, M_n] = ex(f∗, A, A_ℓ, M_n),
T_ℓ[f∗, M⁺_n] = T_ℓ ∩ ex(f∗, A⁺, A⁺_ℓ, M⁺_n),
T = ∪_{ℓ≤k} T_ℓ and T[f∗, M⁺_n] = ∪_{ℓ≤k} T_ℓ[f∗, M⁺_n]. For g ∈ T_ℓ, let lev(g) = ℓ.
For g_1, g_2 ∈ T, let g_1 ∩ g_2 be g_1↾A_ℓ where ℓ ≤ k is maximal such that g_1↾A_ℓ = g_2↾A_ℓ. Let

m^d_ℓ =: h^d_{A_ℓ,A_{ℓ+1}}[M_n]
m^u_ℓ =: h^u_{A_ℓ,A_{ℓ+1}}[M_n]
p^d_ℓ =: p^d_{A⁺_ℓ,A⁺_{ℓ+1}}[M_n]
p^u_ℓ =: p^u_{A⁺_ℓ,A⁺_{ℓ+1}}[M_n].

Lastly, let
K⁺_{M_n,f∗} =: { M⁺_n ∈ K⁺_n : M⁺_n expands M_n and f∗ is an embedding of A⁺ into M⁺_n }.
Let μ⁺_n[f∗, M_n] be the distribution that μ⁺_n[M_n] induces on K⁺_{M_n,f∗}.
4.3 Hypothesis. M_n is random enough, so that: for ℓ < k and f ∈ T_ℓ, the number of g with f ⊆ g ∈ T_{ℓ+1} is in the interval [m^d_ℓ, m^u_ℓ], where m^x_ℓ = h^x_{A_ℓ,A_{ℓ+1}}[M_n].
4.4 Observation. For M⁺_n random enough and f∗ : A⁺ → M⁺_n:
(a) each f ∈ T_ℓ[M_n] with ℓ < k has at least m^d_ℓ immediate successors and at most m^u_ℓ immediate successors
(b) if f ∈ T_ℓ[M_n] and x ∈ M_n then
|{ g ∈ T_{ℓ+1} : f ⊆ g, x ∈ Rang g \ Rang f }| ≤ c^r_j
(where c^r_j is a constant depending on (A_ℓ, A_{ℓ+1}) only)
(c) the set of immediate successors of f can be represented as ∪_{i<c} T_{ℓ,f,i} (c depending on (A_ℓ, A_{ℓ+1}) only) and the sets in ⟨{Rang g \ Rang f : g ∈ T_{ℓ,f,i}}⟩ are pairwise disjoint.
4.5 Observation: So when M_n is random enough, if f ∈ T_ℓ embeds A⁺_ℓ into M⁺_n, then the number of f′ ∈ T_{ℓ+1} extending f which embed A⁺_{ℓ+1} into M⁺_n is, in the expected case, in the interval [p^d_ℓ · m^d_ℓ, p^u_ℓ · m^u_ℓ], except for the case ℓ = k − 1 (then this interval is ⊆ [0, 1)_ℝ, so the number of such f′ has a bound, but its expected value is < 1).
Of course, for ℓ < k − 1 we expect that for various f's the number will deviate (from the expected value), but for M⁺_n random enough none will deviate too much.
4.6 Definition. For M⁺_n ∈ K⁺_{M_n,f∗} we define a graph
G[M⁺_n] = G[f∗, M⁺_n] = G[f∗, M⁺_n, T].
Its set of nodes is G[M⁺_n] = { f ∈ T_k : f embeds B⁺ into M⁺_n }.
Its set of edges is
R[M⁺_n] = { (g_1, g_2) : g_1 ∈ G[M⁺_n], g_2 ∈ G[M⁺_n], and Rang(g_1) ∩ Rang(g_2) ≠ Rang(g_1 ∩ g_2) }.
If c = {f_1, …, f_m} is a component, its domain is ∪^m_{ℓ=1} Rang f_ℓ \ Rang f∗.
4.7 Claim. Assume that the tuple (K, K⁺, h̄, p̄) satisfies (∗) of 3.12 and the irrationality inequality, i.e. ⊗_1 of 3.15.
For any c ∈ ℝ⁺, for some m∗(c) ∈ ℕ: if M⁺_n is random enough, then every component of G[M⁺_n] has ≤ m∗(c) members, with the probability of failure ≤ |M⁺_n|^{−c}.
Proof. Choose ζ > 0 such that (ζ < 1 and)
(∗)_1 if A⁺_0 ⊆ C⁺ < A⁺_k and C⁺ <_s A_k, then γ(C⁺, A⁺_k) ≤ −ζ
(∗)_2 ζ ≤ −p(A⁺_0, A⁺_k)
(as A⁺_0 <_qr A⁺_k, each γ(C⁺, A⁺_k) < 0 by the irrationality condition, and p(A⁺_0, A⁺_k) < 0 as otherwise A⁺_0 <_t A⁺_1 <_t A_2 <_t …; so ζ just has to be below finitely many reals which are > 0).
Next we choose m_0 such that
(∗)_3 γ(cℓ(∅, A_0), A_k) · m_0 < −e
(clearly possible). Next choose m_1 such that
(∗)_4 if B ∈ K and f_ℓ : A_k → B for ℓ < m_0 − 1 are embeddings with f_ℓ↾A_0 = f_0↾A_0 and C = cℓ_{|A_k|}( ∪_ℓ f_ℓ(A_k), B ), then |C|^{|A_k \ A_0|} < m_1
(actually, somewhat less is needed).
So
(∗)_5 if {f_ℓ : ℓ < m_1} ⊆ G[M⁺_n] is connected then, reordering, we have: for every ℓ ∈ (0, m_0] one of the following occurs:
(a) Rang(f_ℓ↾(A_k \ A_0)) ∩ ∪_{m<ℓ} Rang(f_m) ≠ ∅ but Rang(f_ℓ) ⊆ cℓ_{|A_k|}( ∪_{m<ℓ} Rang(f_m), M_n )
(b) Rang(f_ℓ↾(A_k \ A_0)) ∩ ∪_{m<ℓ} Rang(f_m) = ∅ but Rang(f_ℓ) ∩ cℓ_{|A_k|}( ∪_{m<ℓ} Rang(f_m), M_n ) ≠ ∅.
[Why? Suppose this holds for ℓ ∈ (0, m∗), with m∗ maximal, and assume that m∗ < m_0; we shall derive a contradiction. Note that for m∗ = 0 this holds trivially. So if we can find ℓ ∈ (m∗, m_1) satisfying (a) or (b), we get a contradiction to "m∗ maximal", so there is no such ℓ.
Let S =: { ℓ < m∗ : Rang(f_ℓ↾(A_k \ A_0)) ∩ ∪_{m<m∗} Rang(f_m) ≠ ∅ }.
Note that
(α) ℓ ∈ S ⇒ Rang(f_ℓ↾(A_k \ A_0)) ∩ ∪_{m<m∗} Rang(f_m) ≠ ∅ and Rang(f_ℓ↾(A_k \ A_0)) ⊆ cℓ_{|A_k|}( ∪_{m<m∗} Rang(f_m), M_n ) and
(β) if ℓ ∉ S, ℓ > m∗ and ℓ fails clause (a), then Rang(f_ℓ) ⊆ cℓ_{|A_k|}( ∪_{m≤m∗} Rang(f_m), M_n ).
By (∗)_4 we have |S| < m_1, so S ≠ {ℓ : ℓ < m}. Also 0 < m∗, so 0 ∈ S, hence S ≠ ∅ & S ≠ {ℓ : ℓ < m∗}. So by the connectivity of {f_ℓ : ℓ < m}, i.e. of the graph G[M⁺_n], for some ℓ_1 ∈ S and ℓ_2 < m_1, ℓ_2 ∉ S, we have Rang(f_{ℓ_1}↾(A_k \ A_0)) ∩ Rang(f_{ℓ_2}↾(A_k \ A_0)) ≠ ∅. Now by (α) + (β) above, Rang(f_{ℓ_1}) is ⊆ cℓ_{|A_k|}( ∪_{m≤m∗} Rang(f_m), M_n ), hence Rang(f_{ℓ_2}↾(A_k \ A_0)) has an element in this set, but as ℓ_2 ∉ S it has no
element in ∪_{m≤m∗} Rang(f_m), so ℓ_2 satisfies clause (b) above. But this contradicts the maximality of m∗. So we are done.]
Now it is enough to fix the isomorphism type of (M_n↾∪_{ℓ≤m_0} Rang(f_ℓ), f_ℓ(d))_{ℓ<m_0,d∈A_k}, call it t (as their number is fixed, not depending on n). For g : A_0 → M_n let
F_t(M_n, g) = { ⟨f_ℓ : ℓ ≤ m_0⟩ : f_ℓ embeds A_k into M_n, f_ℓ extends g, and t is the isomorphism type of (M_n↾∪_{ℓ≤m_0} Rang(f_ℓ), f_ℓ(d))_{ℓ<m_0,d∈A_k} }.
(Note: the "ℓ ≤ m_0" rather than "ℓ < m_0" is intentional.)
Let F_t[M⁺_n, g] = { f̄ ∈ F[M_n, g] : for ℓ ≤ m_0, f_ℓ embeds A⁺_k into M⁺_n }. We will show that for each t, for random enough M_n, the expected value of |F_t[M⁺_n, g]|, under the assumption that g embeds A⁺_0 into M_n, is ≤ |M_n|^{m_0·γ(cℓ(∅,A_0),A_k)} < |M_n|^{−e}; this clearly suffices.
The rest is straight; still, we first note:

4.8 Observation: 1) Assume A_0 ⊆ A_1 ⊆ B_1, A_0 ⊆ B_0 ⊆ B_1, A_1 ∪ B_0 = B_1. Then
γ(cℓ(A_0, A_1), A_1) ≥ γ(cℓ(B_0, B_1), B_1).
Proof. 1) Let ε > 0 and let M_n be random enough. We can find g : B_1 → M_n, and so let
F_1 = { f : f embeds A_1 into M_n, f↾A_0 ⊆ g }
F_2 = { f : f embeds A_1 into M_n, f↾cℓ(A_0, A_1) ⊆ g }
F_3 = { f : f embeds B_1 into M_n, f↾B_0 ⊆ g }
F_4 = { f : f embeds B_1 into M_n, f↾cℓ(B_0, B_1) ⊆ g }.
Clearly
(a) |F_2| ≤ |M_n|^{γ(cℓ(A_0,A_1),A_1)+ε}
(b) |F_4| ≥ |M_n|^{γ(cℓ(B_0,B_1),B_1)−ε}
(c) |F_3| ≤ |F_1|
(d) |F_1| ≤ c·|F_2|, where c > 0 is a real depending on A_0, A_1 only
(e) |F_4| ≤ |F_3|.
Together (if |M_n|^ε > c) we get γ(cℓ(B_0, B_1), B_1) ≤ γ(cℓ(A_0, A_1), A_1) + 2ε; but ε was any positive real, so we are done.  □_{4.8}
Continuation of the proof of 4.7: Let
B⁺_ℓ = ∪_{m≤ℓ} Rang(f_m) and B_ℓ = B⁺_ℓ↾τ, so B_0 = A = A_0.
Let D_ℓ = cℓ(B_ℓ, B_{ℓ+1}), C⁺_ℓ = { a ∈ B⁺ : f_{ℓ+1}(a) ∈ D_ℓ } and C_ℓ = C⁺_ℓ↾τ.
(⊛) γ(D_ℓ, B⁺_{ℓ+1}) ≤ γ(C_ℓ, B)
[why? apply Observation 4.8 with A_0, A_1, B_0, B_1 there standing for (Rang f_{ℓ+1}) ∩ D_ℓ, Rang f_{ℓ+1}, D_ℓ, B_{ℓ+1} here (note: (Rang f_{ℓ+1}) ∩ D_ℓ ⊆ B_{ℓ+1}, (Rang f_{ℓ+1}) ∩ D_ℓ ⊆ Rang(f_{ℓ+1}) ⊆ B_{ℓ+1} and B_{ℓ+1} = D_ℓ ∪ (Rang f_{ℓ+1}), B_{ℓ+1} ∈ K_∞, so the assumptions of the observation hold). So γ(D_ℓ ∩ Rang f_{ℓ+1}, Rang f_{ℓ+1}) ≥ γ(cℓ(D_ℓ, B_{ℓ+1}), B_{ℓ+1}). But cℓ(D_ℓ, B_{ℓ+1}) = D_ℓ by the definition of D_ℓ, and f_{ℓ+1} is an embedding mapping B onto Rang f_{ℓ+1} and C_ℓ onto D_ℓ ∩ Rang f_{ℓ+1}, so γ(D_ℓ ∩ Rang f_{ℓ+1}, Rang f_{ℓ+1}) = γ(C_ℓ, B). Together we get (⊛).]
Now

γ(A⁺, B⁺_{ℓ+1}) = α(A, B_{ℓ+1}) + Σ{ β(C⁺) : C⁺ ⊆ B⁺_{ℓ+1}, C⁺ ⊄ A⁺ }
= α(A, D_ℓ) + α(D_ℓ, B_{ℓ+1}) + Σ{ β(C⁺) : C⁺ ⊆ B⁺_{ℓ+1} and C⁺ ⊄ A⁺ }
≤ α(A, B_ℓ) + α(D_ℓ, B_{ℓ+1}) + Σ{ β(C⁺) : C⁺ ⊆ B⁺_{ℓ+1} and C⁺ ⊄ A⁺ }
= α(A, B_ℓ) + Σ{ β(C⁺) : C⁺ ⊆ B⁺_ℓ, C⁺ ⊄ A⁺ } + α(D_ℓ, B_{ℓ+1}) + Σ{ β(C⁺) : C⁺ ⊆ B⁺_{ℓ+1}, C⁺ ⊄ B⁺_ℓ }
≤ γ(A⁺, B⁺_ℓ) + γ(D_ℓ, B⁺_{ℓ+1}) + Σ{ β(C⁺) : C⁺ ⊆ B⁺_{ℓ+1}, C⁺ ⊄ B⁺_ℓ }
≤ γ(A⁺, B⁺_ℓ) + γ(C_ℓ, B) + Σ{ β(C⁺) : C⁺ ⊆ B⁺_{ℓ+1}, C⁺ ⊄ B⁺_ℓ }   [by (⊛) above]
≤ γ(A⁺, B⁺_ℓ) + γ(C⁺_ℓ, B⁺) + Σ{ β(C⁺) : C⁺ ⊆ B⁺_{ℓ+1}, C⁺ ⊄ B⁺_ℓ, but C⁺ ⊆ f_ℓ(C⁺_ℓ) ⇒ C⁺ ⊆ f_ℓ(B) }.
Case 1: C⁺_ℓ ≠ B⁺.
Hence γ(A⁺, B⁺_{ℓ+1}) ≤ γ(A⁺, B⁺_ℓ) + γ(C⁺_ℓ, B⁺) ≤ γ(A⁺, B⁺_ℓ) − ζ
(why? the first inequality holds as the third summand above is a sum of reals ≤ 0; the second inequality by the definition of <⁺_qr and the choice of ζ).

Case 2: C⁺_ℓ = B⁺.
So γ(C⁺_ℓ, B⁺) = 0, so we get
γ(A⁺, B⁺_{ℓ+1}) ≤ γ(A, B_ℓ) + Σ{ β(C⁺) : C⁺ ⊆ B⁺_{ℓ+1}, C⁺ ⊄ B⁺_ℓ, but C⁺ ⊆ f_ℓ(C⁺_ℓ) ⇒ C⁺ ⊆ f_ℓ(B⁺) } ≤ γ(A⁺, B⁺_ℓ).  □_{4.7}
The following definition points to the fact that there may be quite different situations, in spite of our treating them together, as they are similar enough for our aims.
4.9 Definition. 1) We say T is simple of the first kind if:
g_1, g_2 ∈ T ⇒ Rang(g_1) ∩ Rang(g_2) = Rang(g_1 ∩ g_2).
2) We say T is simple of the second kind if for every g_1, g_2 ∈ T_ℓ we have
{ g′_1↾(A_{ℓ+1} \ A_ℓ) : g_1 ⊆ g′_1 ∈ T_{ℓ+1} } = { g′_2↾(A_{ℓ+1} \ A_ℓ) : g_2 ⊆ g′_2 ∈ T_{ℓ+1} }.
3) T is separated when:
g_1, g_2 ∈ T, y_1, y_2 ∈ A_k, g_1(y_1) = g_2(y_2) ⇒ y_1 = y_2.
4) T is locally disjoint if for f ∈ T_ℓ, f ⊆ g_1 ∈ T_{ℓ+1}, f ⊆ g_2 ∈ T_{ℓ+1}, g_1 ≠ g_2, we have
Rang(g_1) ∩ Rang(g_2) = Rang(f).
4.10 Remark. Note that the separated case assumption helps, as it gives monotonicity in the probability.

The following indicates that we can assume T is separated.
4.11 Claim. For each problem (i.e. from 4.1) we can split T_k[f∗, M_n] into ≤ (log_2 |M_n|)^{|B⁺|+1} sets, each of them separative. (So if our estimates can absorb the inaccuracy involved, we have reduced our problem to a separative one.)
Proof. Just choose for each c ∈ M_n a sequence η_c of zeroes and ones of length ⌈log_2 |M_n|⌉ such that c ≠ d ⇒ η_c ≠ η_d. For each g ∈ T_k[f∗, M_n], let
w_g = { Min{ i : η_c(i) ≠ η_d(i) } : c ≠ d ∈ Rang g \ Rang f∗ }, and let
v_g = { ⟨c, η_{g(c)}(i)⟩ : c ∈ B⁺ \ A⁺, i ∈ w_g }. Define an equivalence relation e on T_k[f∗, M_n] as follows: g_1 e g_2 iff w_{g_1} = w_{g_2} & v_{g_1} = v_{g_2}. Now e gives a division as required.  □_{4.11}
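The coding in the proof of 4.11 is concretely computable; the following small Python sketch (our own illustration, not part of the paper; the function names are hypothetical) assigns each element of the universe a binary string, records the first-difference positions and the corresponding bits for each function, and groups functions by these invariants, which forces each group to be separated.

```python
import math
from collections import defaultdict

def split_into_separated(functions, universe_size):
    """Split a family of one-to-one functions (tuples with values in
    range(universe_size)) into classes in which g1[i] == g2[j] forces
    i == j, following the eta-coding of Claim 4.11."""
    bits = max(1, math.ceil(math.log2(max(universe_size, 2))))
    eta = {c: format(c, "0{}b".format(bits)) for c in range(universe_size)}

    def first_diff(c, d):
        # first position where the binary labels of c and d differ
        return next(i for i in range(bits) if eta[c][i] != eta[d][i])

    classes = defaultdict(list)
    for g in functions:
        rng = sorted(set(g))
        w = frozenset(first_diff(c, d) for c in rng for d in rng if c < d)
        # bits of each value's label at the positions in w
        v = tuple(tuple(eta[x][i] for i in sorted(w)) for x in g)
        classes[(w, v)].append(g)
    return list(classes.values())

def is_separated(cls):
    # within one class, equal values may occur only at equal positions
    return all(g1[i] != g2[j]
               for g1 in cls for g2 in cls
               for i in range(len(g1)) for j in range(len(g2)) if i != j)
```

The number of classes is bounded by a power of log_2 of the universe size, matching the claim's bound up to the exponent.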
4.12 Discussion. Now 4.11 is sufficient for our aims, but we can do better: division into a constant number of sets, preserving the order of magnitude of the splitting of the tree.
4.13 Claim. Consider the probability space of all c, where c is a function from ∪{ Rang(f) : f ∈ T_k } \ Rang(f∗) to A_k \ A_0, all c of the same probability. Let the measure of this space be called μ_{sep}. Let T^{[c]} = { f ∈ T : c ∘ f is the identity on A_{lev(f)} \ A_0 } and T^{[c]}_ℓ = T^{[c]} ∩ T_ℓ.
Then
(1) the probability that all the splittings in T^{[c]} are near the expected value (meaning: if the expected value is v, the error is ≤ v^{1/2+ε}) is very near to 1, assuming the largeness condition (e.g. in the polynomial case)
(2) let e∗ = ((|A_k \ A_0|)!) · |A_k \ A_0|^{−|A_k\A_0|} ∈ (0, 1)_ℝ and let a_{dn} (or a_{up}) be the natural lower (or upper) bound on the expected value of |T[M⁺_n]|; then for any ε > 0 we have: μ_{sep}-almost surely the chosen c satisfies

Prob_{μ_n}( |T_k ∩ T[M⁺_n]| ≤ a_{dn} − a_{dn}^{1/2+ε} ) ≤ Prob_{μ_n}( |T^{[c]}_k ∩ T[M⁺_n]| ≤ e∗·a_{dn} − a_{dn}^{1/2+ε} ) + |M_n|^{−ε}.

(3) Prob_{μ_n}( |T_k ∩ T[M⁺_n]| ≥ a_{up} + a_{up}^{1/2+ε} ) ≤ Prob_{μ_n}( |T^{[c]}_k ∩ T[M⁺_n]| ≥ e∗·a_{up} + a_{up}^{1/2+ε} ) + |M_n|^{−ε}.
Proof. If we first draw M⁺_n then, ignoring an event with probability ≤ |M_n|^{−e}, the components of T[M⁺_n] are all of size ≤ m∗(e) (∈ ℕ, from 4.7). So the number we get after drawing a μ_{sep}-random c behaves by a multinomial distribution, so almost surely for c we get the expected number with small error. By commutativity of probability, this implies the conclusion.  □_{4.13}
5 The probability arguments

We relax our framework, forget about the tree (from §4), and just have a family F of one-to-one functions from [m] to [n] (thinking of n, |F| as much larger than m), F separative for simplicity (i.e. f_1(ℓ_1) = f_2(ℓ_2) ⇒ ℓ_1 = ℓ_2), and A a model with set of elements [m] and vocabulary τ⁺. Now we draw some relations on [n] to get M∗_n independently, and want to know enough about the number L of f ∈ F such that all appropriate relations were chosen. The easiest case is when f_1 ≠ f_2 ∈ F ⇒ Rang(f_1) ∩ Rang(f_2) = ∅; then we get a binomial distribution.
Still, we are interested in the case that for every successful f ∈ F, the number of successful f′ ∈ F \ {f} not disjoint to f is small; i.e. the expected number is ≤ 1. So we define components of the set of successful f, and look at what their number is. We first show that, for L larger than the expected value, the probability that the number of successes is ≥ L decreases with L as in the binomial distributions. Then we get a slightly worse statement on what occurs for L smaller than the expected value; the error term comes from the number of f ∈ F such that on f([m]) there is no relation, but if we change M∗_n so that f is successful, it is not a singleton. But by the first argument (or direct checking in the cases from §4) this is small. Of course, we have larger components, but for each isomorphism type the problem of the distribution of their number is like the original one, except being only weakly separative. Clearly this framework is wide enough to include what is needed in §4.

Note: Clearly the higher components contribute little, but we do not elaborate as there is no need: we may restrict ourselves to finitely many components.
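The disjoint-range baseline mentioned above is easy to see in a simulation; the following Python sketch (our own illustration, with made-up parameter names, not part of the paper) draws each function's relations independently, so the success count is Binomial(|F|, q) with q the product of the drawing probabilities.

```python
import random

def draw_success_count(num_funcs, p_u_list, trials, seed=0):
    """Disjoint-range case: each of num_funcs functions succeeds
    independently iff every one of its drawn relations (one per u in P,
    with probability p_u) comes out positive; returns the per-trial
    success counts and the single-function success probability q."""
    rng = random.Random(seed)
    q = 1.0
    for p in p_u_list:
        q *= p
    counts = []
    for _ in range(trials):
        L = sum(1 for _ in range(num_funcs)
                if all(rng.random() < p for p in p_u_list))
        counts.append(L)
    return counts, q

counts, q = draw_success_count(num_funcs=50, p_u_list=[0.5, 0.5], trials=2000)
mean = sum(counts) / len(counts)   # should be close to 50 * q = 12.5
```

The non-disjoint case treated in this section deviates from this binomial picture exactly through the components defined below.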
5.1 Definition. 1) We say y = (m, p̄_n, n, F_n) = (m, p̄, n, F) = (m_y, p̄_y, n_y, F_y) is a system (or m-system or (m, n)-system) if:
(a) F is a family of one-to-one functions with domain [m] = {1, …, m} into the set [n] = {1, …, n}
(b) P ⊆ { u : u ⊆ [m], u ≠ ∅ }, e an equivalence relation on P
(c) p̄_n = ⟨p_{u/e} : u/e ∈ P/e⟩, where p_{u/e} is a probability, and we let p_u = p_{u/e}. Let
R⁰_u = { f(u) : f ∈ F }, R⁰_{u/e} = ∪{ R⁰_{u′} : u′ ∈ u/e }
(we can look at them as symmetric relations)
(d) we choose for each u ∈ P a relation R_{u/e} ⊆ R⁰_{u/e} by drawing, for each v ∈ R⁰_{u/e}, a decision on "v ∈ R_{u/e}", independently (for distinct (u, v)) with probability p_{u/e}. The distribution (on K∗_n, see below) is called μ∗_n.
2) We call M∗_n = ([n], …, R⁰_{u/e}, R_{u/e}, …)_{u∈P} a μ_n[y]-random model; we may omit y. Let K∗_n = K∗_n[y] = K_y be the set of all possible M∗_n. Note that P, e can be defined from p̄_n, so we write P = P_y, e = e_y or P_n, e_n. Note that from M_n ∈ K∗_y we can reconstruct m_y, n_y and F_y (though not p̄_y), and from F_y we can reconstruct m_y, n_y if [m] = ∪{ u : u ∈ P }.
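The drawing in clause (d) of Definition 5.1 can be sketched directly; the Python below is our own illustration (the names `class_of`, `draw_model` and the toy data are hypothetical, not from the paper): every candidate tuple f(u) is kept independently with the probability attached to the e-class of u.

```python
import random
from collections import defaultdict

def draw_model(F, P, class_of, p, seed=1):
    """F: one-to-one functions as tuples over [n]; P: index sets u
    (tuples of positions); class_of: u -> its e-class; p: e-class ->
    drawing probability.  Returns the candidate relations R0_{u/e} and
    the independently drawn sub-relations R_{u/e}."""
    rng = random.Random(seed)
    candidates = defaultdict(set)
    for f in F:
        for u in P:
            candidates[class_of[u]].add(tuple(f[i] for i in u))
    chosen = {cls: {v for v in sorted(vs) if rng.random() < p[cls]}
              for cls, vs in candidates.items()}
    return candidates, chosen

F = [(0, 1), (1, 2), (3, 4)]
P = [(0,), (0, 1)]
class_of = {(0,): "unary", (0, 1): "pair"}
candidates, chosen = draw_model(F, P, class_of, {"unary": 1.0, "pair": 0.0})
```

With probability 1.0 every unary candidate is kept and with probability 0.0 no pair is, which makes the two extreme cases easy to check by hand.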
It is natural to demand that F be separative (see Definition 5.2(1) below), as we can reduce the general case to this one (though increasing the error term; see 4.11, 4.13). But why do we consider "weakly separative" (see Definition 5.2(2) below)? The main arguments here give reasonable estimates if we have estimated the number of occurrences of non-trivial components, so we need to estimate them. In order to bound the error gotten by making the estimation of the number of occurrences of a component, we weaken the definition to include this (we could somewhat further weaken "weakly separative", but do not, as it gives no reasonable gain now).
5.2 Definition. 1) We say F = F_n (or y = (m, p̄_n, n, F_n)) is separative if:
f_1, f_2 ∈ F_n & ℓ_1, ℓ_2 ∈ [m] & f_1(ℓ_1) = f_2(ℓ_2) ⇒ ℓ_1 = ℓ_2.
2) We call y semi-separative if there is an equivalence relation e∗ on [m] such that
(i) u ∈ P ⇒ (∀ℓ, k)(ℓ ∈ u & k ∈ u & ℓ ≠ k ⇒ ¬(ℓ e∗ k)) and
(ii) (∀u_1, u_2 ∈ P)((∃k)[u_1 ∩ (k/e∗) ≠ u_2 ∩ (k/e∗)] ⇒ ¬(u_1 e u_2));
i.e. e refines e⊗ =: { (u_1, u_2) : u_1, u_2 ∈ P and (∀k)[u_1 ∩ (k/e∗) = u_2 ∩ (k/e∗)] }
(iii) there is an equivalence relation e∗∗ on [n] such that: if f_1, f_2 ∈ F and m_1, m_2 ∈ [m], then m_1 e∗ m_2 ⟺ f_1(m_1) e∗∗ f_2(m_2)
(iv) if f_1, f_2 ∈ F_n, u_1, u_2 ∈ P and f_1(u_1) = f_2(u_2), then u_1 e u_2 & f_1↾u_1 = f_2↾u_2.
3) We say y is weakly separative if (i), (ii), (iii) of part (2) hold.
4) For X ⊆ P let
q_X = ∏_{u∈X} p_u · ∏_{u∈P\X} (1 − p_u).
Remark. Note: we are thinking of the cases where the p_u's are small. If some are essentially constant, we treat them separately.
5.3 Definition. 1) For a system y = (m, p̄, n, F) and M∗_n ∈ K∗_y, we define a graph
G[M∗_n] = G_y[M∗_n] = (F[M∗_n], E[M∗_n]).
Its set of nodes is
F[M∗_n] = { f ∈ F : f(u) ∈ R^{M∗_n}_{u/e} for every u ∈ P }
and its set of edges is
E[M∗_n] = E[F] ↾ F[M∗_n],
where E[F] = { (f_1, f_2) : f_1 ∈ F, f_2 ∈ F, f_1 ≠ f_2 and Rang(f_1) ∩ Rang(f_2) ≠ ∅ }.
For a component C of G[M∗_n] let V[C] = ∪_{f∈C} Rang(f).
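The graph of Definition 5.3(1) is simple to build; the Python sketch below (our own illustration, names hypothetical) collects the successful functions and joins two of them whenever their ranges intersect, yielding the components the section counts.

```python
def successful(F, P, chosen_in):
    """Nodes of G: functions f all of whose value tuples f(u) were
    drawn positively; chosen_in(u, value_tuple) decides membership."""
    return [f for f in F
            if all(chosen_in(u, tuple(f[i] for i in u)) for u in P)]

def components(nodes):
    """Connected components of G: edge iff the ranges intersect."""
    comps = []
    for f in nodes:
        touching = [c for c in comps if any(set(f) & set(g) for g in c)]
        merged = {f}
        for c in touching:
            merged |= c
            comps.remove(c)
        comps.append(merged)
    return comps
```

For instance, with nodes (0,1), (1,2), (5,6), the first two share the value 1 and form one component while (5,6) stays a singleton.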
2) We say that two components $C_1, C_2$ of $G[M^*_n]$ are isomorphic if there is a one-to-one mapping $f$ from $C_1$ onto $C_2$ such that for any $a_1, a_2 \in [m]$ and $f_1, f_2 \in C_1$ we have:

$$(f(f_1))(a_1) = (f(f_2))(a_2) \ \Leftrightarrow\ f_1(a_1) = f_2(a_2).$$

2A) We say that $f$ is an embedding of a possible component $C_1$ into a possible component $C_2$ (possible means: it is a component in some $M^*_n$) if $f$ is a one-to-one function from $C_1$ into $C_2$ satisfying the demand in (2). The isomorphism type $C/\cong$ of a component $C$ is naturally defined.
3) Let $T_k = T^m_k = \{C/\cong :$ for some $\mathbf{y}$ an $m$-system and some $M^*_n \in K^*_{\mathbf{y}}$, $C \subseteq F_{\mathbf{y}}$ is a connected component in the graph $G[M^*_n]$ and $|C| = k\}$.
4) Assume $\mathbf{y}$ is semi-separative, $f_\ell : [m] \to [m^*]$ for $\ell = 1, \ldots, k$. Let $T = T^m = \bigcup_k T^m_k$. Let $t^*$ be the isomorphism type of a singleton. Normally $m$ is constant, so we may omit it.
5) Let $L_t[M^*_n]$ be $|\mathbb{L}_t[M^*_n]|$, where $\mathbb{L}_t[M^*_n] = \{C : C$ a component of $G[M^*_n]$ such that the isomorphism type of $C$ is $t\}$.
6) $\bar L = \bar L[M^*_n] = \langle L_t[M^*_n] : t \in \bigcup_k T_k\rangle$.
7) $K^*_n[F, \bar L] = \{M_n \in K^*_n[F] : \bar L[M^*_n] = \bar L\}$.
8) $F_X[M_n] = \{f \in F : u \in P \Rightarrow [f(u) \in R_u \Leftrightarrow u \in X]\}$ for $X \subseteq P$.
9) $F^\circ[M^*_n] = F^\circ_P[M^*_n]$, where for $X \subseteq P$ we let:

$$F^\circ_X[M^*_n] = \big\{f \in F : f \in F_X[M^*_n]\ \text{and for no}\ f' \in F \setminus \{f\}\ \text{do we have:}\ u \in P \ \&\ f'(u) \subseteq f([m]) \Rightarrow f'(u) \in R_{u/e}[M^*_n]\big\}.$$

10) $F^\otimes[M^*_n] = F^\otimes_P[M^*_n]$, where for $X \subseteq P$ we let $F^\otimes_X[M^*_n] =: \{f \in F : \{f\} \in \mathbb{L}_{t^*}[M^*_n]\}$.
5.4 Claim. 1) Separative implies semi-separative, which implies weakly separative.
2) If $\mathbf{y}$ is weakly separative, $f_\ell : [m] \to [m^*]$ for $\ell = 1, \ldots, k$, $\langle f_\ell : \ell \in [k]\rangle$ is weakly separative, $\bigcup_{\ell \in [k]} \mathrm{Rang}(f_\ell) = [m^*]$, and

$$F^* = \{g : g : [m^*] \to [n]\ \text{is one to one},\ \ell \in [k] \Rightarrow g \circ f_\ell \in F_{\mathbf{y}}\},\qquad p^*_{f_\ell(u)} = p_u,$$
$$P^* = \{f_\ell(u) : u \in P\},$$

then $\mathbf{y}^* = (m^*, \bar p^*, n, F^*)$ is weakly separative.
3) Similarly with semi-separative instead of weakly separative.

Proof. Straightforward.
Remark. The following claim says that, above the expected value, the probability goes down fast enough.
5.5 Claim. Assume $F$ is weakly separative and $\bar L^\ell = \langle L^\ell_t : t \in T\rangle$ for $\ell = 1, 2$, with

$$L^2_t = \begin{cases} L^1_t + 1 & \text{if } t = t^* \text{ (the singleton type)}\\ L^1_t & \text{otherwise.} \end{cases}$$

Then ($q_\emptyset, q_P$ are defined in 5.2(4)):

$$\mathrm{Prob}_{\mu_n}\big(\bar L[M^*_n] = \bar L^1\big) \ \geq\ \frac{q_\emptyset\, L^2_{t^*}}{q_P\, |F|}\ \mathrm{Prob}_{\mu_n}\big(\bar L[M^*_n] = \bar L^2\big).$$
5.6 Remark. 1) On $F_P$ see Definition 5.3(7).
2) Note: $\mathrm{Exp}(|F_P[M^*_n]|) = q_P\,|F|$ and $|F_P[M^*_n]| = \sum_{t \in T} |t|\, L_t[M^*_n]$, where $|t|$ is the number of $f \in C$ for any $C$ of isomorphism type $t$.
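A heuristic reading of Claim 5.5 (our gloss, not part of the proof): the ratio $q_\emptyset L^2_{t^*}/(q_P|F|)$ is exactly what a Poisson approximation would predict. If the number of singleton components were exactly Poisson with mean $\lambda = q_P|F|/q_\emptyset$ (cf. Remark 5.6(2)), then

```latex
\frac{\mathrm{Prob}(L_{t^*} = L^1)}{\mathrm{Prob}(L_{t^*} = L^1 + 1)}
  = \frac{e^{-\lambda}\lambda^{L^1}/L^1!}{e^{-\lambda}\lambda^{L^1+1}/(L^1+1)!}
  = \frac{L^1+1}{\lambda}
  = \frac{q_\emptyset\, L^2_{t^*}}{q_P\,|F|}.
```

So 5.5 asserts that the true ratio of consecutive probabilities is at least as large as this Poisson heuristic predicts.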
Proof. Let us consider, for $X \subseteq P$,

$W_X =: \big\{(M_1, M_2) : M_1 \in K^*_n[F, \bar L^1]$, $M_2 \in K^*_n[F, \bar L^2]$; for $t \in T \setminus \{t^*\}$, $\mathbb{L}_t[M_1] = \mathbb{L}_t[M_2]$, and $\mathbb{L}_{t^*}[M_1] \subseteq \mathbb{L}_{t^*}[M_2]$; and if $f$ is such that $\{f\} \in \mathbb{L}_{t^*}[M_2] \setminus \mathbb{L}_{t^*}[M_1]$ (note that necessarily there is one and only one such $f$), then $f(u) \in R^{M_1}_{u/e} \Leftrightarrow u \in X$ for each $u \in P$; and: if $f' \in F$, $u \in P$, $f'(u) \subseteq \mathrm{Rang}(f)$, then $f'(u) \in R^{M_1}_{u/e} \Leftrightarrow f'(u) \in R^{M_2}_{u/e}\big\}$

$$W =: \bigcup_{X \subseteq P} W_X.$$
Clearly:

$(*)_1$ for every $X \subseteq P$ and $M_2 \in K^*_n[F, \bar L^2]$, the set $\{M_1 : (M_1, M_2) \in W_X\}$ has exactly $L^2_{t^*}$ members (here we use: $F$ is weakly separative), and

$(*)_2$ for every $M_1 \in K^*_n[F, \bar L^1]$, the set $\{M_2 : (M_1, M_2) \in W\}$ has at most $|F| - \sum_{t \in T} |t|\, L^1_t \leq |F|$ members
[note: this is a quite crude bound, as it does not take into account that not only each $f \in C \in \mathbb{L}^1_t$ is disqualified as the possible member of $\mathbb{L}^2_{t^*} \setminus \mathbb{L}^1_{t^*}$, but also each $f' \in F$ such that $(\exists u \in P)(f(u) = f'(u))$; but at present this effect does not disturb us].

$(*)_3$ for every $(M_1, M_2) \in W_X$ and $X \subseteq P$, we have (see Definition 5.2(4))

$$\mathrm{Prob}_{\mu_n}(M^*_n = M_1)/q_X = \mathrm{Prob}_{\mu_n}(M^*_n = M_2)/q_P.$$
Now

$|F| \cdot \mathrm{Prob}_{\mu_n}(\bar L[M^*_n] = \bar L^1)/q_\emptyset$
$= \sum_{M_1 \in K_n[F,\bar L^1]} |F| \cdot \mathrm{Prob}_{\mu_n}(M^*_n = M_1)/q_\emptyset$
[by the definition of $K^*_n[F, \bar L^1]$]
$\geq \sum_{M_1 \in K_n[F,\bar L^1]}\ \sum_{M_2 \text{ satisfies } (M_1,M_2)\in W} \mathrm{Prob}_{\mu_n}(M^*_n = M_1)/q_\emptyset$
[by $(*)_2$]
$= \sum_{M_2 \in K_n[F,\bar L^2]}\ \sum_{M_1 \text{ satisfies } (M_1,M_2)\in W} \mathrm{Prob}_{\mu_n}(M^*_n = M_1)/q_\emptyset$
[by interchanging sums]
$\geq \sum_{M_2 \in K_n[F,\bar L^2]}\ \sum_{M_1 \text{ satisfies } (M_1,M_2)\in W} \mathrm{Prob}_{\mu_n}(M^*_n = M_2)/q_P$
[by $(*)_3$]
$= \sum_{M_2 \in K_n[F,\bar L^2]} L^2_{t^*} \cdot \mathrm{Prob}_{\mu_n}(M^*_n = M_2)/q_P$
[by $(*)_1$]
$= L^2_{t^*} \cdot \mathrm{Prob}_{\mu_n}(\bar L[M^*_n] = \bar L^2)/q_P$
[by the definition of $K_n[F, \bar L^2]$].

Now the conclusion follows. $\square_{5.5}$
5.7 Claim. Assume $F$ is weakly separative. If $L^1 + 1 = L^2$, then

$$\mathrm{Prob}_{\mu_n}\big(L_{t^*}[M^*_n] = L^1\big) \ \geq\ \frac{q_\emptyset\, L^2}{q_P\, |F|}\ \mathrm{Prob}_{\mu_n}\big(L_{t^*}[M^*_n] = L^2\big).$$

Proof. By 5.5, dividing the event into cases. $\square_{5.7}$
5.8 Conclusion. Assume $F$ is weakly separative.
1) If $L^* \geq 2(q_P\,|F|/q_\emptyset)$ then $\mathrm{Prob}_{\mu_n}(L_{t^*}[M^*_n] > L^*) \leq \mathrm{Prob}_{\mu_n}(L_{t^*}[M^*_n] = L^*)$.
2) If $L^* \geq [q_P\,|F|/q_\emptyset]$ then

$$\mathrm{Prob}_{\mu_n}\big(L_{t^*}[M^*_n] = L^*\big) \ \leq\ \prod_{L = [q_P|F|/q_\emptyset]}^{L^*} \frac{[q_P\,|F|/q_\emptyset]}{L}.$$

Proof. Iterate 5.5, i.e. by induction on $L^*$. $\square_{5.8}$
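A toy numerical check of the shape of Conclusion 5.8(2) (our own illustration, under the simplifying assumption that the singleton-component count is binomial: $N$ independent candidates, each realized with probability $p$, so $q_P|F|/q_\emptyset$ corresponds to $Np/(1-p)$):

```python
from math import comb, ceil, prod

def binom_pmf(N, p, L):
    # exact binomial point probability: L successes out of N trials
    return comb(N, L) * p**L * (1 - p) ** (N - L)

# N independent "candidate singleton components", each realized with
# probability p; lam plays the role of [q_P |F| / q_emptyset] in 5.8(2).
N, p = 100, 0.02
lam = ceil(N * p / (1 - p))

def bound(L_star):
    # the iterated-5.5 product bound of 5.8(2)
    return prod(lam / L for L in range(lam, L_star + 1))

# above lam, the point probabilities are dominated by the product bound:
# the ratio pmf(L)/pmf(L-1) = ((N-L+1)/L)(p/(1-p)) is at most lam/L
for L_star in range(lam, 30):
    assert binom_pmf(N, p, L_star) <= bound(L_star)
```

The successive ratios of the binomial point probabilities are at most $\lambda/L$, so iterating (exactly as in the proof of 5.8) yields the product bound; in this toy model the assertion holds for every $L^*$ above $\lambda$.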
Remark. 1) In the cases we are interested in, $q_\emptyset$ is near to one, or at least above a constant $> 0$, and $q_P$ is small, so $(q_P\,|F|)/q_\emptyset$ is near the expected value of $|F[M^*_n]|$.
2) This is enough for the bounds in the case "above the expected value" for §4 modulo separability, as we do it for each component.
5.9 Claim. Let $F$ be weakly separative. Assume $\bar L^\ell$ for $\ell = 1, 2$ are as in 5.5 and (see Definition 5.3(8)):

$\circledast$ $\delta \in (0,1)_{\mathbb{R}}$ and $\varepsilon \in (0,1)_{\mathbb{R}}$ and

$$\mathrm{Prob}_{\mu_n}\big(\bar L[M^*_n] = \bar L^1\ \text{and}\ |F^\otimes[M^*_n]| < \delta\,|F|\big) \leq \varepsilon$$

(on $F^\otimes[M^*_n]$ see Definition 5.3(8)). Then

$$\mathrm{Prob}_{\mu_n}\big(\bar L[M^*_n] = \bar L^1\big) \ \leq\ \frac{q_\emptyset\, L^2_{t^*}}{\delta\, q_P\, |F|}\ \mathrm{Prob}_{\mu_n}\big(\bar L[M^*_n] = \bar L^2\big) + \varepsilon.$$
Remark. In the cases we have in mind, as $n$ goes to infinity, $\delta$ goes to $1$ and $\varepsilon$ goes to $0$, very fast indeed.
Proof. Start as in the proof of 5.5, getting $(*)_1$, $(*)_2$, $(*)_3$, but then:
$\delta\,|F| \cdot \mathrm{Prob}_{\mu_n}(\bar L[M^*_n] = \bar L^1)/q_\emptyset$
$\leq \varepsilon\,|F| + \sum_{M_1 \in K_n[F,\bar L^1],\ |F^\otimes[M^*_n]| \geq \delta|F|} \delta\,|F| \cdot \mathrm{Prob}_{\mu_n}(M^*_n = M_1)/q_\emptyset$
[by $\circledast$ in the assumption and by the definition of $K^*_n[F, \bar L^1]$]
$\leq \varepsilon\,|F| + \sum_{M_1}\ \sum_{M_2 \text{ satisfies } (M_1,M_2)\in W} \mathrm{Prob}_{\mu_n}(M^*_n = M_1)/q_\emptyset$
[by $(*)_2$]
$= \varepsilon\,|F| + \sum_{M_2 \in K_n[F,\bar L^2]}\ \sum_{M_1 \text{ satisfies } (M_1,M_2)\in W} \mathrm{Prob}_{\mu_n}(M^*_n = M_1)/q_\emptyset$
[by interchanging sums]
$\leq \varepsilon\,|F| + \sum_{M_2 \in K_n[F,\bar L^2]}\ \sum_{M_1 \text{ satisfies } (M_1,M_2)\in W} \mathrm{Prob}_{\mu_n}(M^*_n = M_2)/q_P$
[by $(*)_3$]
$= \varepsilon\,|F| + \sum_{M_2 \in K_n[F,\bar L^2]} L^2_{t^*}\,\mathrm{Prob}_{\mu_n}(M^*_n = M_2)/q_P$
[by $(*)_1$]
$= \varepsilon\,|F| + L^2_{t^*}\,\mathrm{Prob}_{\mu_n}(\bar L[M^*_n] = \bar L^2)/q_P$
[by the definition of $K_n[F, \bar L^2]$].

Dividing by $\delta\,|F|/q_\emptyset$ we get the desired conclusion. $\square_{5.9}$
5.10 Claim. Assume $F$ is weakly separative. Assume $L^2 = L^1 + 1$ are given and

$(*)$ $\delta \in (0,1)_{\mathbb{R}}$ and $\varepsilon \in (0,1)_{\mathbb{R}}$ and

$$\mathrm{Prob}_{\mu_n}\big(L_{t^*}[M^*_n] = L^1\ \text{and}\ |F^\otimes[M^*_n]| < \delta\,|F|\big) \leq \varepsilon.$$

Then

$$\mathrm{Prob}_{\mu_n}\big(L_{t^*}[M^*_n] = L^1\big) \ \leq\ \varepsilon + \frac{q_\emptyset\, L^2}{\delta\, q_P\, |F|}\ \mathrm{Prob}_{\mu_n}\big(L_{t^*}[M^*_n] = L^2\big).$$
Proof. By 5.9, dividing the event into cases, noting that:

$$\mathrm{Prob}_{\mu_n}\big(L_{t^*}[M^*_n] = L^1\ \text{and}\ |F^\otimes[M^*_n]| < \delta|F|\big) = \sum\Big\{\mathrm{Prob}_{\mu_n}\big(\bar L[M^*_n] = \bar L\ \text{and}\ |F^\otimes[M^*_n]| < \delta|F|\big) : \bar L = \langle L_t : t\rangle,\ L_{t^*} = L^1\Big\}.$$

$\square_{5.10}$
6 Free Amalgamation
We would like to axiomatize free amalgamation in its connection to 0-1 laws (in the previous cases the edgeless disjoint amalgamation serves).
We first define a context having free amalgamation. The idea is that it is not necessarily a disjoint amalgamation with no additional relations, as we may allow, say, a two-place relation with probability $\frac{1}{2}$, so instances of this relation have no influence on the amalgamation being free.
6.1 Definition. 1) We say $(K, \bigcup)$ is a 0-1 context [or weakly 0-1 context] (no contradiction to 1.1(1)) if $K$ satisfies (a),(b),(c) as in 1.1(1) and ($K_\infty$, $<_s$ are defined as in 1.3(1), 1.3(2)(c) above and):

(d) $\bigcup$ is a four-place relation on $K_\infty$, written as $B \bigcup_A^D C$ or $\bigcup(A,B,C,D)$. This relation is preserved under isomorphism, and we say: $B, C$ are $\bigcup$-freely amalgamated over $A$ inside $D$ (and omit $D$ if clear from the context).

(e) $B \bigcup_A^D C$ implies $A \leq_s B \subseteq D$, $A \subseteq C \subseteq D$, $B \cap C = A$.

(f) (base increasing): if $B \bigcup_A^D C$ and $A \subseteq C_1 \subseteq C$ then $B \cup C_1 \bigcup_{C_1}^D C$.

(g) monotonicity: $A \subseteq B' \leq_s B$, $A \subseteq C' \subseteq C$ and $B \bigcup_A^D C$ implies $B' \bigcup_A C'$.

(h) monotonicity: if $B \subseteq D' \leq D$, $C \subseteq D' \leq D$ then $B \bigcup_A^D C \Rightarrow B \bigcup_A^{D'} C$; but in the weak version we add the assumption $\circledast_{D',D}$, where $\circledast_{A_1,A_2}$ is defined by: $A_1 \subseteq A_2 \in K_\infty$ and if $A \leq_i A' \leq_s A_2$ then$^9$ $A' \subseteq A_1$.

(i) existence: if $A \leq_s B$ and $A \subseteq C$ then for some $D$ and $f$ we have: $C \subseteq D$, $f$ is an embedding of $B$ into $D$ over $A$ and $\bigcup(A, f(B), C, D)$ (but in the weak version this is omitted).

(j) Right Transitivity: if $B_0 \bigcup_{A_0}^{B_2} A_1$ and $B_1 \bigcup_{A_1}^{B_2} A_2$ and $B_0 \subseteq B_1$ then $B_0 \bigcup_{A_0}^{B_2} A_2$.

(k) Left Transitivity: if $A_1 \bigcup_{A_0}^{B_2} B_0$ and $A_2 \bigcup_{A_1}^{B_2} B_1$ and $B_0 \subseteq B_1$ then $A_2 \bigcup_{A_0}^{B_2} B_0$.

(l) $(K,\bigcup)$ has symmetry, which means:
($\alpha$) if $B \bigcup_A^D C$ and $A \leq_s C$ then $C \bigcup_A^D B$.

(m) Smoothness: $B \bigcup_A^D C$ implies $C \leq_s B \cup C$ (of course, (d) + (f) implies this).

2) We say $(K,\bigcup)$ has the strong finite basis property when:

(n) for every $\ell$ for some $m$:
($\alpha$) if $B \subseteq D$, $C \subseteq D$, $|B| \leq \ell$, then for some $A \subseteq C$, $|A| \leq m$ and $B \cup A \bigcup_A^D C$;

or at least

(n)$^-$ for every $\ell, r$ for some $m$:
($\alpha$) if $|B| \leq \ell$, $B \subseteq D \in K_\infty$, $A_i \subseteq D$ for $i \leq m$ such that $A_i \subseteq A_{i+1}$, then for some $i$ and $C \subseteq A_i$ we have:
$B \cap A_{i+1} \subseteq C \subseteq A_i$ and $B \cup C \bigcup_C^D c\ell_r(C, A_{i+1})$.

3) We say $\bigcup$ (or $(K,\bigcup)$) has uniqueness if the $\bigcup$-free amalgamation is unique, that is: suppose that for $\ell = 1, 2$ we have $B_\ell \bigcup_{A_\ell}^{D_\ell} C_\ell$ and $D_\ell = B_\ell \cup C_\ell$, $f$ is an isomorphism from $B_1$ onto $B_2$, $g$ is an isomorphism from $C_1$ onto $C_2$, $f(A_1) = A_2 = g(A_1)$ and $f \restriction A_1 = g \restriction A_1$; then $f \cup g$ is an isomorphism from $D_1$ onto $D_2$.

4) We say $\bigcup$ (or $(K,\bigcup)$) has dual transitivity when:
($\alpha$) $A_1 \bigcup_{A_0}^{C_1} C_0$ and $A_2 \bigcup_{A_0}^{C_2} C_1$ imply $A_2 \bigcup_{A_0}^{C_2} C_0$; but in the weak case assume $\circledast_{C_1,C_2}$.

$^9$ of course, if $A \in K_\infty \Rightarrow A \leq_s A$, this condition holds trivially. We expect that in reasonable cases such an assumption can be removed (in clauses (g) and (h)).
6.2 Fact. 1) If $K$ is a 0-1 context (see 1.1, with $<_s$ as in Definition 1.3(2)(c)) and $\bigcup = \{(A,B,C,D) : A \leq_s B \subseteq D$, $A \subseteq C \subseteq D$, $B \cap C = A$ and the quadruple is freely amalgamated in the sense of 0.2(6) (no new instances of the relations)$\}$, then $(K,\bigcup)$ is a 0-1 context, except possibly (f) (base increasing), (i) (existence), (m) (smoothness); with uniqueness (see 6.1(3)).
2) Assume $(K,\bigcup)$ is a 0-1 context.
(a) if $B \bigcup_A^D C$, $D = B \cup C$, $B' \subseteq B$, $A \subseteq A^+ \subseteq C$, $r = k + |A|$, $c\ell_r(A, C) \subseteq A^+$, then $c\ell_k(B', D) \subseteq B \cup A^+$;
(b) if $B \bigcup_A^D C$, $A \leq_i D' \leq D$, $D' \subseteq B \cup C$, then $A \leq_i D' \cap C$.
Proof. 1) Straightforward.

clauses (a)-(e): By the definitions.

clause (g): Check the definitions, and by 1.6(6) we have $A \leq_s B'$.

clauses (h),(j): Check.

clause (k): The point is that $A_0 \leq_s A_1 \leq_s A_2 \Rightarrow A_0 \leq_s A_2$, by 1.6(10).

clause (l): Read the definitions.

uniqueness: Reflect on the meaning of $\bigcup$.

2) clause (a): By monotonicity of $\bigcup$ (see Definition 6.1(1)(g)) we may assume that $B = B'$. Assume toward contradiction that the conclusion fails. So there are $C' \subseteq D$, $|C'| \leq k$, $d \in C' \setminus (B \cup A^+)$, $C' \cap A^+$ and $C' \cap B <_i C'$, hence (see 1.6(3)) $B <_i B \cup C'$. Let $C_1 = A \cup (C' \cap C)$, and let $C_0$ be such that $A \leq_i C_0 \leq_s C_1$ (exists by 1.6(4)). By clauses (f) + (g) of Definition 6.1(1) we have $B \cup C_0 \bigcup_{C_0}^D C_1$; by symmetry (i.e. 6.1(1)(l)) we have $C_1 \bigcup_{C_0}^D B \cup C_0$, hence by smoothness 6.1(1)(m), $B \cup C_0 \leq_s B \cup C_1$. So as $C' \cap B <_i C' \subseteq B \cup C_1$ (as $C_1 = A \cup (C' \cap C)$ and $D = B \cup C$), by 1.6(3) we necessarily have $C' \subseteq B \cup C_0$; so as $d \in C' \setminus B$, necessarily $d \in C_0$, but $|C_0| \leq |A| + |C' \cap C \setminus A| \leq |A| + k = r$. Remember $A \leq_i C_0$, so $C_0 \subseteq c\ell_r(A, C) \subseteq A^+$, hence $d \in A^+$, a contradiction.

clause (b): If the conclusion fails, then for some $C_1$, $A \subseteq C_1 <_s D' \cap C$. By monotonicity (= clause (g) of Definition 6.1(1)) we have $B \bigcup_A^D D' \cap C$, so by smoothness (= clause (m) of Definition 6.1(1)) we have $D' \cap C \leq_s B \cup (D' \cap C)$; but $\leq_s$ is transitive (by 1.6(10)), so $C_1 <_s B \cup (D' \cap C)$; but $C_1 \subseteq (B \cap C) \cup D' \subseteq B \cup (D' \cap C)$ (as $C_1 \subseteq D' \cap C$), so $C_1 <_s (B \cup C) \cap D' = D'$ (as $D' \subseteq B \cup C$), but this contradicts $A \leq_i D'$, $A \subseteq C_1$. $\square_{6.2}$
6.3 Definition. Let $(K,\bigcup)$ be a 0-1 context (or weakly 0-1 context). If $A \subseteq B_\ell \subseteq D$ for $\ell < m$, we say $\langle B_\ell : \ell < m\rangle$ is free over $A$ inside $D$ (or: the $B_\ell$'s are free over $A$ inside $D$) if for every $\ell$, $A \leq_s B_\ell$ and $B_\ell \bigcup_A^D \bigcup_{k<\ell} B_k$.
The justification is:

6.4 Claim. The order (in Definition 6.3) is immaterial.

Proof. If $\ell + 1 < m$, then (letting $B'_i = \bigcup_{j<i} B_j \cup A$) $B_\ell \bigcup_{B'_\ell}^{B'_{\ell+2}} B_{\ell+1}$ (by clause (f) of Definition 6.1), hence $B_{\ell+1} \bigcup_{B'_\ell}^{B'_{\ell+2}} B_\ell$ (by clause (l), symmetry, of Definition 6.1). We have $B_\ell \bigcup_A B'_\ell$, so $B'_{\ell+2} \bigcup_A B'_\ell$ (by clause (g) of Definition 6.1(1)), and similarly $B'_{\ell+2} \bigcup B_{\ell+1}$; hence we get $B'_{\ell+2} \bigcup_A B'_\ell \cup B_{\ell+1}$ (using clause (j), right transitivity, from Definition 6.1, with $A$, $B'_\ell$, $B'_\ell$, $B_\ell$, $B'_\ell \cup B_{\ell+1}$, $B'_{\ell+2}$ here standing for $A_0, B_0, A_1, B_1, A_2, B_2$ there).
We get that we can permute, in $\langle B_k : k < m\rangle$, the order of $\ell$ and $\ell + 1$; but such transpositions generate the permutation group on $\{0, \ldots, m-1\}$, so we are done. $\square_{6.4}$
Now concerning 1.8 we can say more.

6.5 Claim. Assume $(K,\bigcup)$ is a 0-1 context which is weakly nice.
1) If $A < B \in K_\infty$ then the following are equivalent:
(a) $A <_s B$;
(b) for every $m < \omega$,
$1 = \lim_n \mathrm{Prob}_{\mu_n}$(for every embedding $f$ of $A$ into $M_n$ there are embeddings $g_\ell : B \to M_n$ extending $f$ for $\ell < m$ such that $\langle g_\ell : \ell < m\rangle$ is disjoint over $A$);
(c) for every $m < \omega$, for every $k < \omega$,
$1 = \lim_n \mathrm{Prob}_{\mu_n}$(for every embedding $f$ of $A$ into $M_n$ there are embeddings $g_\ell : B \to M_n$ extending $f$, for $\ell < m$, such that $\langle g_\ell : \ell < m\rangle$ is disjoint over $A$, and for $\ell < m$ we have $g_\ell(B) \cap c\ell_k(f(A), M_n) \subseteq f(A)$ and the $g_\ell(B)$'s are $\bigcup$-free over $f(A)$ inside $M_n$).
2) If $A_0 \subseteq A_1 \in K_\infty$ and $A_0 \leq_s B$, then there are $C \in K_\infty$ and $f$ such that $A_1 \subseteq C$, $f$ is an embedding of $B$ into $C$ over $A_0$, say $B_1 = f(B)$, and $B_1 \cap A_1 = A_0$ and $A_1 \leq_s B_1 \cup A_1$.
Proof. 1) By 1.8, clearly (a) $\Leftrightarrow$ (b). Trivially (c) $\Rightarrow$ (b). Let us prove (a) $\Rightarrow$ (c).
Let $m, k \in \mathbb{N}$. We choose by induction on $\ell \leq m$, $D_\ell$, and if $\ell < m$ also $g^0_\ell$, such that: $D_0 = A$, $D_\ell \subseteq D_{\ell+1}$, $g^0_\ell$ is an embedding of $B$ over $A$ into $D_{\ell+1}$ such that $g^0_\ell(B) \bigcup_A^{D_{\ell+1}} D_\ell$; the induction step is done by clause (i) (existence) of Definition 6.1(1). By clause (h) of Definition 6.1(1), without loss of generality $D_{\ell+1} = D_\ell \cup g^0_\ell(B)$. By clause (m) (smoothness) of Definition 6.1(1) we know that $D_\ell \leq_s D_{\ell+1}$, and by clause (e) we know that $g^0_\ell(B) \cap D_\ell = A$. So $A \leq_s D_m$; so, as $K$ is weakly nice, if $M_n$ is random enough and $f : A \to M_n$ an embedding, then we can find an embedding $g : D_m \to M_n$ extending $f$. We let $g_\ell = g \circ g^0_\ell$ for $\ell < m$.
Now for $\ell_1 < \ell_2 < m$,

$$g_{\ell_1}(B) \cap g_{\ell_2}(B) = g\big(g^0_{\ell_1}(B) \cap g^0_{\ell_2}(B)\big) \subseteq g\big(D_{\ell_2} \cap g^0_{\ell_2}(B)\big) = g(A) = f(A)$$

(as $g^0_{\ell_1}(B) \subseteq D_{\ell_2}$), so the disjointness demand holds. The freeness holds by the construction and 6.3(1).
2) Without loss of generality (not being embedded yet in one model!) $B \cap A_1 = A_0$. Let $r^*$ be the number of structures $C \in K_\infty$ with set of elements $B \cup A_1$ such that $B \leq C \ \&\ A_1 \leq C$. As $A_1 \in K_\infty$, clearly for some $\varepsilon^* > 0$:

$(*)_2$ for arbitrarily large natural numbers $n$:
$\varepsilon^* \leq \mathrm{Prob}_{\mu_n}$(there is an embedding $f$ of $A_1$ into $M_n$).

Let (the function) $n_1(-)$ be such that:

$(*)_3$ for every $\varepsilon \in \mathbb{R}^+$, if $n \geq n_1(\varepsilon)$ then:
$1 - \varepsilon/4 \leq \mathrm{Prob}_{\mu_n}$(if $C$ is embeddable into $M_n$ and $C$ has at most $|B| + |A_1|$ elements, then $C \in K_\infty$).

As $A_0 \leq_s B$, by part (1) for some function $n_2(-,-)$ we have:

$(*)_4$ for every $\varepsilon \in \mathbb{R}^+$ and $m \in \mathbb{N}$, for every $n \geq n_2(\varepsilon, m)$ we have:
$1 - \varepsilon/4 \leq \mathrm{Prob}_{\mu_n}$(every embedding $f$ of $A_0$ into $M_n$ has $\geq m$ extensions $g$ to embeddings of $B$ into $M_n$, pairwise disjoint over $A_0$).

Now by 1.6(11):

$(*)_5$ for $C \in K$ with the set of elements of $C$ being $B \cup A_1$, such that $(A_1 \leq C) \ \&\ \neg(A_1 \leq_s C)$, there is $\ell_3 = \ell_3(C)$ such that for each $\varepsilon \in \mathbb{R}^+$, for some $n_3(C, \varepsilon)$ we have: for every $n \geq n_3(C, \varepsilon)$,
$1 - \varepsilon/4 \leq \mathrm{Prob}_{\mu_n}$(there is no sequence $\langle g_\ell : \ell < \ell_3\rangle$ of embeddings of $C$ into $M_n$ pairwise disjoint over $A_1$).

Let $\varepsilon \in \mathbb{R}^+$ be given.
Let $\ell^* = \max\{\ell_3(C) : C \in K$ has set of elements $B \cup A_1$ and extends $B$ and $A_1$ but $\neg(A_1 \leq_s C)\}$.
Let $m^* = r^* \cdot \ell^* + |A_1 \setminus A_0| + 1$.
Let $n^*(\varepsilon) = \max\{n_1(\varepsilon), n_2(\varepsilon, m^*), n_3(C, \varepsilon) : C \in K$ has universe $B \cup A_1$ and extends $B$ and $A_1$ but $\neg(A_1 \leq_s C)\}$.
Let $\varepsilon < \varepsilon^*$ and let $n \in Y =: \{n : $ the statement of $(*)_2$ holds for $n\}$.
For each $M \in K_n$ choose, if possible, an embedding $f_M$ of $A_1$ into $M$.
Now, if $f_M$ is defined, choose if possible a sequence $\langle g^M_m : m < m^*\rangle$ of embeddings of $B$ into $M$ extending $f_M \restriction A_0$ and pairwise disjoint over $A_0$. As $\langle g^M_m(B \setminus A_0) : m < m^*\rangle$ is a sequence of pairwise disjoint subsets of $M$, the set $u =: \{m < m^* : g^M_m(B \setminus A_0)$ is disjoint to $f_M(A_1 \setminus A_0)$, equivalently $g^M_m(B) \cap f_M(A_1) = f_M(A_0)\}$ has at least $m^* - |A_1 \setminus A_0|$ members, which is $\geq r^* \cdot \ell^* + 1$.
For $m \in u$ let $C_m$ be the model with universe $B \cup A_1$ such that $g^M_m \cup f_M$ is an isomorphism from $C_m$ onto $M \restriction (g^M_m(B) \cup f_M(A_1))$ (note: this function is one to one as $m \in u$ and by the definition of $u$). By the choice of $r^*$, for some model $C$ with set of elements $B \cup A_1$ we have $|\{m \in u : C = C_m\}| \geq |u|/r^*$. But $|u| \geq m^* - |A_1 \setminus A_0| = r^* \cdot \ell^* + 1$, hence $u' =: \{m \in u : C = C_m\}$ has $> \ell^*$ members.
Now, with probability $\geq \varepsilon^* - \varepsilon > 0$, $M_n$ satisfies the demands $(*)_2$-$(*)_5$ above, hence $f_{M_n}$ is well defined (by $(*)_2$) and $\langle g^{M_n}_m : m < m^*\rangle$ is well defined (by $(*)_4$ and the choice of $n^*(\varepsilon)$). By $(*)_3$ we know $C \in K_\infty$; obviously $A_1 \leq C$, $B \leq C$ (as each $C_m$ satisfies this). Lastly, by $(*)_5$ the sequence $\langle g^{M_n}_m : m \in u'\rangle$ witnesses $A_1 \leq_s C$, so we have finished. $\square_{6.5}$
6.6 Comment. Our real interest is in instances of $K$, but $\bigcup$ is central in the following way:
(a) it is a way to express assumptions on $K$ helping to analyze the limit behaviour (for which having a 0-1 law is a reasonable criterion);
(b) assuming the random enough $M_n$ satisfies $(*)$ below, we can define $\bigcup$ and prove $(K,\bigcup)$ is a 0-1 context, where
$(*)$ for quantifier free $\varphi(\bar x, \bar y)$, the numbers $|\varphi(M_n, \bar b)|$ for $\bar b \in {}^{\ell g(\bar y)}(M_n)$ behave regularly enough; where $\varphi(M_n, \bar b) = \{\bar a \in {}^{\ell g(\bar x)}(M_n) : M_n \models \varphi[\bar a, \bar b]\}$.
6.7 Claim. 1) If $K_n = \{([n])\}$ (so $\mu_n$ trivial) and $B \bigcup_A^D C$ means $B \cap C = A \ \&\ B \cup C \subseteq D$, then $(K,\bigcup)$ is an explicitly nice 0-1 context. Also $A \leq_s B$ iff $A \subseteq B$, and $K_\infty$ is the family of finite models.
2) Let $K_n = \{([n], S, c^f, c^\ell)\}$ with $c^f = 1$, $c^\ell = n$ and $S$ the successor relation (so $\mu_n$-trivial). Then $A <_s B$ means $A \subseteq B \ \&\ \neg(\exists x \in A)(\exists y \in B \setminus A)[x S^B y \vee y S^B x]$, and $A \in K_\infty$ iff $A$ is isomorphic to $([n], S, c^f, c^\ell) \restriction X$ for some $X \subseteq [n]$.
3) Let, in 2), $B \bigcup_A^D C$ mean $A \leq_s B \subseteq D \in K_\infty$, $A \subseteq C \subseteq D$, $B \cap C = A$ and $\neg(\exists x \in C)(\exists y \in B \setminus A)[x S^D y \vee y S^D x]$. Then $(K,\bigcup)$ is a 0-1 context, and $K$ is explicitly almost nice.
Proof. Straightforward; e.g. why 6.1(f)?
3) Let us check clause (f) of Definition 6.1(1) (base increasing): so we have $B \bigcup_A^D C$ and $A \subseteq C_1 \subseteq C$, and we have to prove $B \cup C_1 \bigcup_{C_1}^D C$. Now $C_1 \subseteq B \cup C_1$ trivially, and for $C_1 \leq_s B \cup C_1$ see the definition of $\bigcup$; $B \cup C_1 \subseteq D$ as $B \subseteq D$, $C_1 \subseteq C \subseteq D$; and $C_1 \subseteq C$ holds by assumption, with $C \subseteq D$ as $B \bigcup_A^D C$. Lastly, assume $x \in C$, $y \in (B \cup C_1) \setminus C_1$; then $x \in C$, $y \in B \setminus A$, so by the definition of $B \bigcup_A^D C$ we have $\neg x S^D y \ \&\ \neg y S^D x$, as required. $\square_{6.7}$
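A toy formalization of the reading of 6.7(3) above (our own sketch): in $([n], S)$ with $S$ the successor relation, $B$ and $C$ with $B \cap C = A$ are freely amalgamated inside $D = B \cup C$ iff no successor edge of $D$ joins a point of $C$ to a point of $B \setminus A$:

```python
def succ_edges(xs):
    # successor pairs present in the induced substructure on xs
    s = set(xs)
    return {(x, x + 1) for x in s if x + 1 in s}

def freely_amalgamated(A, B, C):
    # the condition of 6.7(3): no S-edge of D = B | C crosses
    # between C and B \ A
    D = B | C
    return not any((x in C and y in B - A) or (x in B - A and y in C)
                   for (x, y) in succ_edges(D))

assert freely_amalgamated({5}, {5, 9}, {5, 6, 7})      # 9 not adjacent to C
assert not freely_amalgamated({5}, {5, 8}, {5, 6, 7})  # 7 S 8 links C to B \ A
```

The point of the example: unlike the edgeless case of 6.7(1), here the amalgam may fail to be free even though $B \cap C = A$, because the successor relation can create links across the amalgamation.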

We connect the free amalgamation with 2, i.e. context K which obeys

h which
is bound by h
1
, this gives a natural denition for free amalgamation.
Continuing Denition 2.1 let
6.8 Definition. 1) For a 0-1 context $K$ obeying $\bar h$ and a function $h$ from $\bigcup_n K_n$ to $\mathbb{R}^+$, we define a four-place relation $\bigcup^h = \bigcup[h] = \bigcup[\bar h, h]$ on $K_\infty$: $\bigcup(A,B,C,D)$ iff
(a) $A \leq_s B \subseteq D$, $A \subseteq C \subseteq D$, $A = B \cap C$, and
(b) letting $D_1 = D \restriction (B \cup C)$ we have $C \leq_s D_1$, and
(c) for every $\varepsilon > 0$, for every random enough $M_n$ into which $D$ is embeddable, we have$^{10}$

$$h^u_{C,D_1}[M_n] \ \leq\ h[M_n]^{\varepsilon} \cdot h^u_{A,B}[M_n]$$

(note that (b) follows from (a), (c)).
We define $\bigcup^{\bar h} = \bigcup'[\bar h] = \bigcup[\bar h, \bar h]$ similarly: $\bigcup'(A,B,C,D)$ iff
(a) $A \leq_s B \subseteq D$, $A \subseteq C \subseteq D$, $A = B \cap C$, and
(b) letting $D_1 = D \restriction (B \cup C)$ we have $C \leq_s D_1$, and
(c) for some $m = m(A,B,C,D) \in \mathbb{N} \setminus \{0\}$, for every random enough $M_n$, we have

$$h^u_{C,D_1}[M_n] \ \leq\ h[M_n]^{m} \cdot h^u_{A,B}[M_n].$$

2) Let $\bigcup^*[\bar h] = \bigcup[\bar h, h^*]$ for $h^*(M) = \|M\|$ (see 7.8(3) below).

$^{10}$ the inequality in the other direction follows from the definitions, as proved in clause ($\gamma$) of 2.4(2)
Continuing 2.4, we note:

6.9 Claim. In 2.4(2) we can add: if $B \bigcup_A^D C$ (so $A <_s B \subseteq D$, $A \subseteq C \leq_s D$) and $D = B \cup C$, then for every $\varepsilon > 0$, for every random enough $M_n$, we have

$$h[M_n]^{-\varepsilon} \ \leq\ h^u_{A,B}[M_n]/h^u_{C,D}[M_n] \ \leq\ h[M_n]^{\varepsilon}$$

(when $C$ is embeddable into $M_n$, of course).
6.10 Claim. Assume $K$ obeys $\bar h$ which is bounded by $h$, and $\bigcup = \bigcup[\bar h, h]$.
1) $(K,\bigcup)$ is a weak 0-1 context (see Definition 6.1(1), second version).
2) If in addition $(*)$ below holds, then $(K,\bigcup)$ is a 0-1 context:
$(*)$ if $A <_{pr} B \leq D$, $A < C <_{pr} D$, $B \cup C = D$ and $B, C$ are not $\bigcup$-freely amalgamated over $A$ inside $D$, then for some $\varepsilon > 0$, for every random enough $M_n$, we have $h^d_{A,B}[M_n]/h^u_{C,D}[M_n] \geq (h[M_n])^{\varepsilon}$.
3) If condition $(*)$ from part (2) holds, then also:
$(*)^+$ if $A <_s B \leq D$, $A \subseteq C \leq_s D$, $D = B \cup C$ and $B, C$ are not $\bigcup$-freely amalgamated over $A$ inside $D$, then for some $\varepsilon > 0$, for every random enough $M_n$, we have

$$h^d_{A,B}[M_n]/h^u_{C,D}[M_n] \ \geq\ (h[M_n])^{\varepsilon}.$$

Remark. 1) Note that $(*)$ of 6.10(2) just says a dichotomy, i.e. the condition (c) in Definition 2.1(5) either holds for every random enough $M_n$, or fails for every random enough $M_n$.
2) Note that $(*)$ of 6.10 excludes the case of having a successor relation.
Proof. The order is: first we prove 6.10(3) (and inside it a restricted version of transitivity (clause (j) of Definition 6.1(1))), and only then we prove 6.10(1)+(2), by going over all the clauses of Definition 6.1(1); in some clauses there is a difference between 6.10(1) and 6.10(2), and in the second case we may use $(*)$ of 6.10(2) (and so 6.10(3)).

3) Let $\bar A = \langle A_i : i \leq k\rangle$ be a decomposition of $A <_s B$, so $A_i <_{pr} A_{i+1}$ for $i < k$. Let $C_i = A_i \cup C$; hence for every $i < k$, $C_i \leq_i C_{i+1}$ or $C_i <_{pr} C_{i+1}$.
[Why? By 1.6(4) + 1.6(8).]
For each $i < k$ let $\varepsilon(i) \in \mathbb{R}^{\geq 0}$ be such that:

Case 1: If $C_i \leq_i C_{i+1}$, then $\varepsilon(i) > 0$ and $h^u_{A_i,A_{i+1}}[M_n] \geq (h[M_n])^{\varepsilon(i)}$ for every random enough $M_n$.

Case 2: If $\neg(C_i \leq_i C_{i+1})$, hence $C_i <_{pr} C_{i+1}$, but $\neg\big(A_{i+1} \bigcup_{A_i}^{C_{i+1}} C_i\big)$, then $\varepsilon(i) > 0$ and

$$h^u_{A_i,A_{i+1}}[M_n]/h^d_{C_i,C_{i+1}}[M_n] \ \geq\ (h[M_n])^{\varepsilon(i)}.$$

Case 3: If neither Case 1 nor Case 2, then $\varepsilon(i) = 0$. Note that by 2.6(2), for $M_n$ random enough, $h^u_{A_i,A_{i+1}}[M_n] \geq h^d_{C_i,C_{i+1}}[M_n]$ (if $D$ is embeddable into $M_n$).

Let $w_\ell = \{i < k : $ case $\ell$ occurs$\}$.

First assume $\sum_{i<k} \varepsilon(i) > 0$, and let $\zeta \in \mathbb{R}^+$ be $< \sum_{i<k} \varepsilon(i)/2$.
So let $M_n$ be random enough such that $C$ is embeddable into $M_n$ (hence $D$ is embeddable into $M_n$, hence also the $C_i$'s are).

(a) $h^u_{A_i,A_{i+1}}[M_n] \geq h^d_{C_i,C_{i+1}}[M_n]$ when $C_i <_{pr} C_{i+1}$.
[Why? See 2.6(2).]

(b) $1 \leq \prod_{i<k} h^u_{A_i,A_{i+1}}[M_n]\big/h^u_{A,B}[M_n] \leq (h[M_n])^{\zeta}$.
[Why? The first inequality by the definition in 2.1(4); the second inequality by the second inequality in 2.4(2)($\gamma$).]

So

$h^d_{C,D}[M_n] \leq \prod_{i \in w_2 \cup w_3} h^d_{C_i,C_{i+1}}[M_n]$
[by 2.4(2)($\gamma$)]
$\leq \Big(\prod_{i \in w_1} 1\Big)\Big(\prod_{i \in w_2} h^d_{C_i,C_{i+1}}[M_n]\Big)\Big(\prod_{i \in w_3} h^d_{C_i,C_{i+1}}[M_n]\Big)$
[trivially]
$\leq \Big(\prod_{i \in w_1} h^u_{A_i,A_{i+1}}[M_n]/(h[M_n])^{\varepsilon(i)}\Big)\Big(\prod_{i \in w_2} h^u_{A_i,A_{i+1}}[M_n]/(h[M_n])^{\varepsilon(i)}\Big)\Big(\prod_{i \in w_3} h^u_{A_i,A_{i+1}}[M_n]\Big)$
[by the statements in the cases]
$= \Big(\prod_{i<k} h^u_{A_i,A_{i+1}}[M_n]\Big) \cdot (h[M_n])^{-\sum_i \varepsilon(i)} \leq h^u_{A,B}[M_n] \cdot (h[M_n])^{\zeta} \cdot (h[M_n])^{-\sum_i \varepsilon(i)}$
[by (b)].

So $\sum_{i<k} \varepsilon(i) - \zeta \in \mathbb{R}^+$ can serve as the desired $\varepsilon$.

Second, assume $\sum_{i<k} \varepsilon(i) = 0$; then, as $C_k = D$, we have
(c) $1 \leq \prod_{i<k} h^u_{C_i,C_{i+1}}[M_n]\big/h^u_{C,D}[M_n] \leq (h[M_n])^{\zeta}$,
and by Definition 2.1(5)
(d) for every $\zeta \in \mathbb{R}^+$, for every random enough $M_n$ into which $C_{i+1}$ is embeddable,

$$(h[M_n])^{-\zeta} \ \leq\ h^u_{A_i,A_{i+1}}[M_n]/h^u_{C_i,C_{i+1}}[M_n] \ \leq\ (h[M_n])^{\zeta}.$$

Using (b)+(c)+(d) we can get that $B, C$ are $\bigcup$-free over $A$ inside $D$, so an assumption of $(*)^+$ fails.
Proof of (1)+(2):
The proof is split according to the clauses in Definition 6.1(1), i.e. (d), ..., (m).

Clause (d): Trivial.

Clause (e): Trivial.

Clause (f): Monotonicity in $C$. The point to notice is that "$D'$ is embeddable into $M_n$" does not imply "$D$ is embeddable into $M_n$"; but read 2.1(3), 2.1(3A).

Clause (h): (Reading Definition 2.1(5).) Here there is a difference between 6.10(1) and 6.10(2). We have to restrict ourselves to models into which $D$ can be embedded; but there may be many into which $D'$ is embeddable while $D$ is not. As $D \in K_\infty$, for some $\varepsilon \in \mathbb{R}^+$, for arbitrarily large $n$, the probability that $M_n$ satisfies the desired inequalities is $\geq \varepsilon$; so for 6.10(2), the assumption $(*)^+$ there guarantees they hold for every random enough $M_n$.
For the weak version, in Definition 6.1(1) clause (h) we have restricted ourselves to the case $\circledast_{D',D}$ (see Definition 6.1(1)).

Symmetry: So assume $B \bigcup_A^D C$, $A <_s C$, and we should prove $C \bigcup_A^D B$; without loss of generality $A \neq B$, $A \neq C$. So we have $A <_s B$, $B \cap C = A$, $A <_s C$. We should prove $B <_s B \cup C$, and the inequality. Let $m^* \in \mathbb{N}$ and let $\varepsilon \in \mathbb{R}^+$ be small enough.
So let $M_n$ be random enough and $f_0$ an embedding of $A$ into $M_n$. Let
$F_1 = \{f_1 : f_1$ is an embedding of $C$ into $M_n$ extending $f_0\}$.
Clearly $|F_1| \geq h^d_{A,C}[M_n]$, and for each $f_1 \in F_1$,
$F^2_{f_1} = \{f_2 : f_2$ is an embedding of $C \cup B$ into $M_n$ extending $f_1\}$
has $\geq h^d_{C,C \cup B}[M_n]$ members.
So $F_2 = \bigcup\{F^2_{f_1} : f_1 \in F_1\}$ has $\geq h^d_{A,C}[M_n] \cdot h^d_{C,B \cup C}[M_n]$ members. Consider the mapping $G_2 : F_2 \to F'_1 =: \{f_1 : f_1$ is an embedding of $B$ into $M_n$ extending $f_0\}$, $G_2(f_2) = f_2 \restriction B$. So $G_2$ is a mapping from $F_2$ into $F'_1$. Also $F'_1$ has at most $h^u_{A,B}[M_n]$ members, which is $\leq h^u_{C,B \cup C}[M_n] \cdot (h[M_n])^{\varepsilon}$ as $\bigcup(A,B,C,D)$. So the number $|\{f_2 \in F_2 : f_2 \restriction B = f_1\}|$, averaging over all $f_1 \in F'_1$, is $\geq h^d_{A,C}[M_n]/(h[M_n])^{2\varepsilon}$, which is $> m^*$ (as $M_n$ is random enough). So for some $f_1 \in F'_1$ the actual number is $> m^*$, hence $B <_s B \cup C$. Similarly we get the inequality.

Clause (g): I.e. we assume $A \subseteq B' \leq_s B$, $A \subseteq C' \subseteq C$ and $B \bigcup_A^D C$ (and we should prove $B' \bigcup_A C'$). Now $B \bigcup_A^D C$ means that $A \leq_s B \subseteq D$, $A \subseteq C \subseteq D$, $B \cap C = A$ (this is clause (a) of 2.1(5)), and, letting $D_1 = B \cup C$, also $C \leq_s D_1$ (this is clause (b) of 2.1(5)), and similarly $C' \leq_s B' \cup C'$; and for every $\varepsilon_1 > 0$, for every random enough $M_n$, we have $1 \leq h^u_{A,B}[M_n]/h^u_{C,D_1}[M_n] < (h[M_n])^{\varepsilon_1}$ (this is clause (c) of Definition 2.1(5)). Looking at the desired conclusion (in particular the definition of $\bigcup$), we can restrict ourselves to $M_n$ into which $D$ is embeddable; hence $C', C$ are embeddable.
Without loss of generality $B' = B$ or $C' = C$.
In the first case, we note that, for random enough $M_n$ into which $D$ is embeddable, we have by 2.6(1) above:

$$h^u_{A,B}[M_n] \ \geq\ h^u_{C',B \cup C'}[M_n]/h(M_n)^{\varepsilon} \ \geq\ h^u_{C,D_1}[M_n]/h[M_n]^{2\varepsilon},$$

and by the definition of $\bigcup$ also $h^u_{A,B}(M_n) \leq h^u_{C,D_1}(M_n) \cdot (h(M_n))^{\varepsilon}$; together we have the desired inequality.
In the second case ($C' = C$) we have $A <_s B' <_s B$, and similar inequalities, using clause ($\gamma$) of 2.4(2), give the result.

Clause (i): Note that the number of members of $K_\infty$ with $|B| + |C| - |A|$ elements and, for simplicity, set of elements $\{1, \ldots, |B| + |C| - |A|\}$, is finite.
So let $D_j, f_j$ for $j < j^*$ be such that:
(a) $C \leq D_j \in K_\infty$,
(b) $f_j$ is an embedding of $B$ into $D_j$ over $A$,
(c) $D_j = C \cup f_j(B)$,
(d) for $j_1 \neq j_2$, $(f_{j_2} \circ f^{-1}_{j_1} \cup \mathrm{id}_C) : D_{j_1} \to D_{j_2}$ is not an isomorphism,
(e) under (a)-(d), $j^*$ is maximal.
So, as said above, $j^* \in \mathbb{N}$. Now let $M \in K$ be any model into which $C$ is embeddable, say by $g_M : C \to M$, such that there are distinct embeddings $f_i$ of $B$ into $M$ for $i < h^d_{A,B}(M)$ extending $g_M \restriction A$ (remembering $A \subseteq C$); i.e. $M$ is random enough.
So for each $i < h^d_{A,B}(M)$, for some unique $j_i < j^*$, $(f_i \circ f^{-1}_{j_i}) \cup g_M$ is an isomorphism from $D_{j_i}$ onto $M \restriction (\mathrm{Rang}(f_i) \cup \mathrm{Rang}(g_M))$. So for some $j = j_M < j^*$ the set

$$w =: \{i < h^d_{A,B}(M) : (f_i \circ f^{-1}_j) \cup g_M\ \text{embeds}\ D_j\ \text{into}\ M,\ \text{equivalently}\ j_i = j_M\}$$

has $\geq h^d_{A,B}[M]/j^*$ members. So for some $j$, not for every random enough $M_n$ do we have $j_{M_n} \neq j$. Hence by 6.10(3) we are done.

Clause (j): Restricting ourselves to models into which $B_2$ can be embedded, by (g) + (h) we can deal with $B_\ell$ ($\ell \leq 2$), $B_\ell = A_\ell \cup B_0$; and for this look at the proof of 6.10(3) above.
What about the models into which $B_2$ is not embeddable? Look at the proof of clause (h) above.

Clause (k): Like the proof of (j).

Clause (l): Straight, by computing $h_{A,D_1}(M_n)$ (approximately) in two ways, where $D_1 = B \cup C$.

Clause (m): smoothness. Follows from (d) and (f). $\square_{6.10}$
We deal in 7.8-7.11 with the polynomial case; with the general case we deal in 7.7.
Concerning 3.12, we add:
6.11 Definition. We define a four-place relation $\bigcup[p]$ on $K^{+p}$: $\bigcup(A^+, B^+, C^+, D^+)$ iff $A^+ \leq^+ B^+ \leq^+ D^+$, $A^+ \leq^+ C^+ \subseteq D^+$, $B^+ \cap C^+ = A^+$ and $(C^+, D^+) = (A^+, B^+)$. We normally write $\bigcup$ when no confusion arises, and may write this $B^+ \bigcup_{A^+}^{D^+} C^+$.
Concerning 3.14, we add:

6.12 Fact. Using $<^{+p}_x$ for $x = b, j, t, qr$ instead of $<^{K^+}_x$ for $x = a, i, s, pr$ respectively, $(K^+, \bigcup[p])$ satisfies Definition 6.1 and satisfies $(*)$ of 6.10(2). Moreover, it satisfies $(*)$ of 7.8(1) if $\bar h$ is polynomial, and $\circledast_3 + \circledast_4$ of 7.9 in general.
6.13 Claim. Assume $(*)_{ap}$ of 3.12 and $\circledast_1 + \circledast_2$ of 3.15.
1) A sufficient condition for "$(K^+, \bigcup)$ is nice" is:
$\circledast_3$ $(K,\bigcup)$ is nice.
2) A sufficient condition for "$(K^+, \bigcup)$ obeys $\bar h$ very nicely" (see Definition 7.12) is:
$\circledast_3$ $(K,\bigcup)$ obeys some $\bar h$ very nicely.
3) A sufficient condition for "$K^+$ is polynomially nice" is:
$\circledast_3$ $(K,\bigcup)$ is polynomially nice.

Proof. 1) We use 7.9(4). Now $(*)_4$ there holds by 3.14(6), or use ?(2).
2) Straightforward.
3) Define the conditions for the non-polynomial cases.
$$\prod\big\{p^d_{C^+}(M_n) : C^+ \subseteq B^+,\ C^+ \not\subseteq A^+\big\}^m \ \leq\ (h_2(M_n))^{t_2}\,\|M_n\|^{-\Sigma\{\psi(C^+)\,:\,C^+ \subseteq B^+,\ C^+ \not\subseteq A^+\}}.$$

So the expected value of the number of such $g$'s is

$$\leq (h_1(M_n)\, h_2(M_n))^{t_1+t_2}\,\big(\|M_n\|^{\psi(A,B)}\big)^m\,\big(\|M_n\|^{-\Sigma\{\psi(C^+)\,:\,C^+ \subseteq B^+,\ C^+ \not\subseteq A^+\}}\big)^m = (h_1(M_n))^{t_1+t_2}\,\|M_n\|^{-m\,\zeta(A,B)}.$$

So the expected number of such $g$ for some $f$ is

$$\leq \big|\{f : f\ \text{embeds}\ A\ \text{into}\ M_n\}\big| \cdot (h_1(M_n)\, h_2(M_n))^{t_1+t_2}\,\|M_n\|^{-m\,\zeta(A,B)} \leq (h_1(M_n)\, h_2(M_n))^{t_1+t_2}\,\|M_n\|^{-m\,\zeta(A,B)+|A|}.$$

This converges to zero if $m\,\zeta(A,B) > |A|$, which holds for $m$ large enough.
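The computation above is an instance of the first-moment method (our gloss, not from the text): if $X$ counts the maps $g$ in question, then by Markov's inequality

```latex
\mathrm{Prob}\big(\exists g\big) \;=\; \mathrm{Prob}\big(X \geq 1\big) \;\leq\; \mathbb{E}[X] \;\longrightarrow\; 0,
```

so once the expected number of such $g$ converges to zero — i.e. once the negative power of $\|M_n\|$ dominates, which happens for $m$ large enough — with probability tending to $1$ no such $g$ exists.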
7 Variants of Nice
Below we define some properties of $(K,\bigcup)$, variants of "nice". We will use semi-nice to prove elimination of quantifiers (hence 0-1 laws); it is essentially the weakest among those discussed below, but it will be natural in various contexts to verify stronger ones.
The reader may e.g. ignore Definition 7.1(1),(2),(4),(5),(7)-(10), 7.3(7),(8) and the versions of 7.3(5),(6) with "explicitly"; in [Sh 637] we present only one variant.
7.1 Definition. 1) We say $(K,\bigcup)$ is explicitly nice (the plain "nice" appears in 7.1(8) below) if we have:
$(*)_2$ for every $A <_{pr} B$ (in $K_\infty$) and $k \in \mathbb{N}$, for some $r = r(A,B,k) \in \mathbb{N}$, for every $C, D$ such that $B \bigcup_A^D C$ and $D = B \cup C$, we have:
$1 = \lim_n \mathrm{Prob}_{\mu_n}$(for every embedding $f$ of $C$ into $M_n$ satisfying $c\ell_r(f(A), M_n) \subseteq f(C)$ there is $g : D \to M_n$ extending $f$ such that $c\ell_k(g(B), M_n) \subseteq g(D)$).
If $r(A,B,k) = k + |A|$ or $k$, we say $(K,\bigcup)$ is explicitly$^+$ nice or explicitly$^{++}$ nice, respectively.
[Note: to deal with e.g. successor functions we need slightly more.]
2) We say $(A, A_0, B, D)$ is an almost $(k,r)$-good (quadruple) if:
$\circledast^{k,r}_{A,A_0,B,D}$ $A_0 \subseteq A \subseteq D \in K_\infty$ and $B \subseteq D$, and for every random enough $M_n$ we have:
($\alpha$) every embedding $f : A \to M_n$ satisfying $c\ell_r(f(A_0), M_n) \subseteq f(A)$ has an extension $g : D \to M_n$ satisfying $c\ell_k(g(B), M_n) = g(c\ell_k(B, D))$.
If $r = k$ we may write $k$ instead of $(k, r)$.
If r = k we may write k instead of (k, r).
3) We say (A
+
, A, B, D) is a semi (k, r)-good quadruple if:
()
k,r
A
+
,A,B,D
A A
+
K

and A D, B D K

and for every random enough M


n
we have:
() for every embedding f : A
+
M
n
satisfying
c
r
(f(A), M
n
) f(A
+
) there is an extension g of f A, embedding
D into M
n
such that
c
k
(g(B), M
n
) = g(c
k
(B, D)).
11
the nice appears in 7.1(8) below
0.1 LAWS SH550 73
If r = k we may write k instead of (k, r).
4) We say $(A, B, D)$ is semi $(k,r)$-good if: $A \subseteq A^+ \in K_\infty \Rightarrow (A^+, A, B, D)$ is semi-$(k,r)$-good.
5) We say $K$ is almost nice if it is weakly nice and, for every $A \in K_\infty$ and $k$, for some $\ell, m, r$ we have:
$(*)$ for every random enough $M_n$, for every $f : A \to M_n$, we have:
($\alpha$) for every $b \in M_n \setminus c\ell_m(f(A), M_n)$ we can find $A_0, A^+, B^+$ such that:
($\beta$) $f(A) \subseteq A_0 \subseteq A^+ \subseteq c\ell_m(f(A), M_n)$,
($\gamma$) $A^+ \cup \{b\} \subseteq B^+ \subseteq M_n$,
($\delta$) $|B^+| \leq \ell$,
($\varepsilon$) $(A^+, A_0, A_0 \cup \{b\}, B^+)$ is almost $(k,r)$-good,
($\zeta$) $c\ell_r(A_0, M_n) \subseteq A^+$,
($\eta$) $c\ell_k(f(A) \cup \{b\}, M_n) \subseteq B^+$.
6) We say that $K$ is semi-nice if it is weakly nice and for every $A \in K_\infty$ and $k$, for some $\ell, m, r$, we have:
$(*)$ for every random enough $M_n$, and embedding $f : A \to M_n$ and $b \in M_n$, we can find $A_0, A^+, B, D$ such that:
($\alpha$) $f(A) \subseteq A_0 \subseteq A^+ \subseteq c\ell_m(f(A), M_n)$
[note that we can have finitely many possibilities for $(\ell, m, r)$; it does not matter],
($\beta$) $f(A) \cup \{b\} \subseteq B \subseteq D \subseteq M_n$,
($\gamma$) $|D| \leq \ell$,
($\delta$) $(A^+, A_0, B, D)$ is semi $(k,r)$-good,
($\varepsilon$) $c\ell_r(A_0, M_n) \subseteq A^+$,
($\zeta$) $c\ell_k(B, M_n) \subseteq D$.
7) We say the pair $(A, B)$ is $(k,r)$-good when $A \leq_s B$ and: if $B \subseteq D$, $A \leq_s D$, then $(A, B, D)$ is semi-$(k,r)$-good. We say $(A, B)$ is $k$-good if it is $(k,k)$-good; good if it is $(k,k)$-good for every $k$; and $\circledast$-good if for every $k$ for some $r$ it is $(k,r)$-good.
8) We say $K$ is nice if $A <_s B$ implies $(A, B)$ is $\circledast$-good.
9) We say $K$ is explicitly almost nice when it is weakly nice and for every $\ell, k$, for some $r, m$, for some random enough $M_n$ (i.e. $0 < \limsup \mathrm{Prob}_{\mu_n}$): if $A_0 \subseteq A <_s D$, $A_0 \subseteq B \subseteq D \subseteq M_n$ and $|D| \leq \ell$ are such that $c\ell_m(B, M_n) = c\ell_m(B, D)$ and $c\ell_r(A_0, M_n) \subseteq A$, then $(A, A_0, B, D)$ is almost $(k,r)$-good.
10) We say $K$ is explicitly semi-nice when it is weakly nice and for every $\ell$ and $k$, for some $r$, for some random enough $M_n$, we have:
$(*)$ if $A <_s D$, $B \subseteq D \subseteq M_n$, $|D| \leq \ell$, $A^+ = c\ell_r(A, M_n)$ and $c\ell_k(B, M_n) \subseteq D$, then $(A^+, A, B, D)$ is semi $(k,r)$-good.
7.2 Remark. 1) We may consider other candidates for 7.1(7), 7.1(8).
2) We may consider, in the definition of semi-good (or explicitly semi-good/nice), splitting $k$ into two: one in the assumption and one in the conclusion.
3) Also, there, to demand $b \notin c\ell_m(A, M_n)$.
7.3 Fact. 1) In Definition 7.1(1) (of explicitly nice) we can replace $A <_{pr} B$ by $A <_s B$ and/or replace $c\ell_k(g(B), M_n) \subseteq g(D)$ by $c\ell_k(g(B), M_n) = g(c\ell_k(B, D))$. If $(K,\bigcup)$ is explicitly$^{++}$ nice then it is explicitly$^+$ nice, which implies it is explicitly nice.
2) If $(A, A_0, B, D)$ is almost $(k,r)$-good, then $(A, A_0, B, D)$ is semi $(k,r)$-good.
3) $(K,\bigcup)$ being explicitly nice implies $K$ is weakly nice.
4) If $(A, A_0, B, D)$ is semi $(k,r)$-good and $A \cup c\ell_k(B, D) \subseteq D' \subseteq D$, then $(A, A_0, B, D')$ is semi $(k,r)$-good.
5) In the definition of semi-nice we can demand $B = f(A) \cup \{b\}$.
Proof. 1) For the first phrase, clearly the new version implies the old, as A <_pr B implies A <_s B. So assume the old version and let A <_s B; by 1.6 we can find n and A_0 <_pr A_1 <_pr . . . <_pr A_n with A_0 = A, A_n = B. By 6.1, A_{ℓ+1} ⊆ C implies A_{ℓ+1} ≤ C ∪ A_ℓ. Define k(ℓ) for ℓ ≤ n by downward induction on ℓ: for ℓ = 0 let k(0) = k, and otherwise let k(ℓ) be the r(k(ℓ+1), A_ℓ, A_{ℓ+1}) guaranteed by 7.1(1). Now for random enough M_n and an embedding f : C → M_n such that cℓ^k(f(A), M_n) ⊆ f(C), we choose by induction on ℓ ≤ n embeddings f_ℓ : C ∪ A_ℓ → M_n, increasing with ℓ, such that cℓ(f_ℓ(C ∪ A_ℓ)) ⊆ C ∪ A_{ℓ+1}. For ℓ = 0 this is given; for ℓ + 1 use the choice of r_ℓ.
For the second phrase, clearly cℓ^k(g(B), M_n) = g(cℓ^k(B, D)) implies cℓ^k(g(B), M_n) ⊆ g(D), and for the other direction remember that cℓ^k(A, N_2) ⊆ N_1 ⊆ N_2 implies cℓ^k(A, N_1) = cℓ^k(A, N_2).
In the second sentence, the first implication holds as cℓ^m(A', B') increases with m; the second implication holds by the definition.
2) Read the definitions.
3) (K, cℓ) explicitly nice implies K is weakly nice:
Let A <_pr B, m ∈ N and ε > 0 be given. Let r be as guaranteed by 7.1(1), and let m' be such that A ⊆ A' implies |cℓ^r(A, A')| ≤ m' (it exists by 1.6(13)). Let ⟨(C_i, D_i) : i < i*⟩ list, with no repetitions (up to isomorphism over B), the pairs (C, D) such that the quadruple (A, B, C, D) is as in (∗)_2 of Definition 7.1(1) with |D| ≤ |B| · m + m', and let n* ∈ N be large enough such that for every n ≥ n* the probability of the event
E^i_n = "for every embedding f : C_i → M_n such that cℓ^r(f(A), M_n) ⊆ f(C_i) there is an embedding g : D_i → M_n extending f"
is ≥ 1 − ε/i*. So for n ≥ n* the probability that all the events E^0_n, . . . , E^{i*−1}_n occur is ≥ 1 − ε, and it suffices to prove that for such M_n, for every embedding f : A → M_n, there are m pairwise disjoint extensions to g : B → M_n. Choose by induction on ℓ an embedding g_ℓ : B → M_n extending f with Rang(g_ℓ) ∖ Rang(f) disjoint from ⋃_{i<ℓ} Rang(g_i). If we succeed in getting g_0, . . . , g_{m−1}, we are done, so assume we are stuck at some ℓ < m. By Definition 6.1(2)(i) (existence for ⌣) we can find D ∈ K_∞ such that C = M_n ↾ (⋃_{j<ℓ} Rang(g_j) ∪ cℓ^r(f(A), M_n)) ⊆ D, and there is an embedding f^+ : B → D extending f such that f^+(B) ∖ f(A) is disjoint from C, D = C ∪ f^+(B) and f^+(B) ⌣^D_{f(A)} C. Now there is i < i* such that (D, C) ≅ (D_i, C_i); more exactly, there is an isomorphism h from D onto D_i such that h(C) = C_i and h ∘ f^+ embeds B into D_i. Applying the fact that the event E^i_n occurs in M_n, we get a contradiction.
4) Read the definitions. □_{7.3}
7.4 Claim. 1) Assume (K, cℓ) is an explicitly nice 0-1 context and r(A, B, k) is as in (∗)_2 of Definition 7.1(1).
(a) If (A ∪ B) ⌣^D_A C and r = r(A, A ∪ B, k) (of Definition 7.1(1)(∗)_2), then (C, A, B, D) is almost (k, r)-good.
(b) If A ⊆ A^+ ∈ K_∞ and A ≤_s D and B ⊆ D ∈ K_∞ and k ≤ r, r(A, D, k) ≤ r, where r(A, D, k) is as guaranteed by 7.3(1), then (A^+, A, B, D) is semi (k, r)-good.
(c) If A ≤_s D, B ⊆ D ∈ K_∞, then (A, B, D) is semi (k, r)-good.
2) In Definition 7.1(5), in (⊛) we can allow any b ∈ M_n.
3) In Definition 7.1(6), in (⊛) we can restrict ourselves to b ∈ M_n ∖ cℓ^m(f(A), M_n) and/or replace in clause (γ) the demand |D| ≤ ℓ by |A^+ ∪ D| ≤ ℓ.
Proof. 1)
(a) Reread Definition 7.1(1), particularly (∗)_2, and Definition 7.1(2).
(b) By clause (i) of Definition 6.1(1) (= existence), without loss of generality D ⌣^{D^+}_A A^+ for some D^+ ∈ K_∞. By clause (a) we know that (A^+, A, D, D^+) is almost (k, r)-good. By 7.3(2) we get the desired conclusion.
(c) Left to the reader.
2) Follows by part (2) (and Definition 7.1(4)).
3) Left to the reader. □_{7.4}
7.5 Claim. 1) If (K, cℓ) is explicitly nice, then K is explicitly semi-nice, and K is semi-nice.
2) If (K, cℓ) is explicitly nice, then K is explicitly almost nice and, if in addition it has the strong finite basis property, it is almost nice.
3) If K is explicitly semi-nice, then K is semi-nice.
4) If K is explicitly almost nice and (K, cℓ) has the strong finite basis property, then K is almost nice.
5) If K is almost nice then K is semi-nice.
Proof. 1) So assume that (K, cℓ) is explicitly nice; by 7.3(1), even if just A <_s B and k ∈ N, for some r = r(A, B, k) ≥ k we have (∗)_2 of 7.1(1). Let us prove that K is explicitly semi-nice, i.e. Definition 7.1(10); so let ℓ and k be given, and we should provide r as required there. Let r = Max{r(A, B, k) : A <_s D ∈ K_∞, |D| ≤ ℓ}.
We should verify (⊛) of Definition 7.1(10), so let M_n be random enough, A <_s D, B ⊆ D ⊆ M_n, |D| ≤ ℓ, A^+ = cℓ^r(A, M_n), and assume that cℓ^k(B, M_n) ⊆ D. We should prove that (A^+, A, B, D) is semi-(k, r)-good. For this it suffices to verify the assumptions of 7.4(1)(b), but they are obvious. To finish the proof, note that by 7.5(3) below, K being explicitly semi-nice (see Definition 7.1(10)) implies that K is semi-nice.
2) Similar to part (1), using this time 7.4(1)(a) above and 7.5(4) below.
3) So let A ∈ K_∞ and k ∈ N be given; we have to find ℓ, m, r as required in Definition 7.1(6).
Let f* : N × N → N be such that: for any i, j, if M_n is random enough and A' ⊆ M_n, |A'| ≤ j, then cℓ^i(A', M_n) has at most f*(i, j) elements.
Let ℓ* = f*(k, |A| + 1); now let r* be the r guaranteed to exist in Definition 7.1(10) for k and ℓ*. Now define by induction on i ≤ ℓ* + 1 a number m_i as follows: m_0 = |A|, m_{i+1} = f*(r*, ℓ*) · (m_i)^{ℓ*}, and lastly let m =: m_{ℓ*+1}.
So we have chosen ℓ, m, r and have to show that they are as required in Definition 7.1(6).
So let M_n be random enough, f : A → M_n and b ∈ M_n. We define by induction on i ≤ ℓ* + 1 a set A_i, increasing with i, as follows:
A_0 = f(A), A_{i+1} = A_i ∪ ⋃{cℓ^{r*}(A', M_n) : A' ⊆ A_i and |A'| ≤ ℓ*}.
Clearly we can prove by induction on i that A_0 ≤_i A_i and |A_i| ≤ m_i, hence A_i ⊆ cℓ^{m_i}(f(A), M_n).
As cℓ^k(A_0 ∪ {b}, M_n) ∖ A_0 has ≤ ℓ* members, necessarily for some i < ℓ* + 1 we have: cℓ^k(A_0 ∪ {b}, M_n) is disjoint from A_{i+1} ∖ A_i; choose the minimal such i. So cℓ^k(A_0 ∪ {b}, M_n) ∩ A_i has at most |A| + i members, so by the definition of A_{i+1} there is no A' such that A_i ∩ cℓ^k(A_0 ∪ {b}, M_n) <_i A' ⊆ cℓ^k(A ∪ {b}, M_n); hence, letting A' = cℓ^k(A_0 ∪ {b}, M_n) ∩ A_i, we have A' ≤_s cℓ^k(A_0 ∪ {b}, M_n). Let D =: cℓ^k(A_0 ∪ {b}, M_n), B = A_0 ∪ {b} and A^+ = cℓ^{r*}(A', M_n); hence A^+ ⊆ A_{i+1}.
Now we use Definition 7.1(10) (i.e. the choice of r*) with (A^+, A', B, D), k, ℓ, r here standing for A^+, A, B, D, k, ℓ, r there; clearly the assumption of 7.1(10)(⊛) holds (i.e. A' ⊆ A^+ ⊆ M_n, A' ≤_s D ⊆ M_n, B ⊆ D, |D| ≤ ℓ, f(A) = A_0 ⊆ A', A^+ = cℓ^{r*}(A', M_n) ⊆ A_{ℓ*+1} ⊆ cℓ^{m}(f(A), M_n), and r, m_{ℓ*} are as required for k, ℓ). Hence we get that (A^+, A', B, D) is semi (k, r)-good.
Now let us check requirements (α)–(ζ) in (⊛) of Definition 7.1(6). Clauses (α), (β), (γ), (ε) hold by the suitable choices, clause (δ) holds by the previous sentence, and clause (ζ) holds as D = cℓ^k(B, M_n).
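The cardinality bookkeeping in the proof above can be sketched concretely. Below is a small Python illustration (my sketch, not from the paper): `cl_r` stands in for an abstract closure operation cℓ^r(·, M_n) whose output size is bounded by a function f(r, j) of r and the input size j, and we check that the sets A_i built as in the proof obey the recursion m_{i+1} = f(r, ℓ)·(m_i)^ℓ (plus the set itself), the bound used to define m. The concrete closure chosen here is a hypothetical toy on integers.

```python
from itertools import combinations

def iterate_closures(a0, cl_r, ell, steps):
    """Mimic the proof's construction:
    A_0 = f(A), A_{i+1} = A_i ∪ ⋃{cl_r(A') : A' ⊆ A_i, |A'| ≤ ell}."""
    levels = [frozenset(a0)]
    for _ in range(steps):
        cur = levels[-1]
        nxt = set(cur)
        for size in range(1, ell + 1):
            for sub in combinations(sorted(cur), size):
                nxt |= cl_r(frozenset(sub))
        levels.append(frozenset(nxt))
    return levels

# Toy closure on integers: cl_r(S) adds s, s+1, ..., s+r for each s in S,
# so its output size is at most f(r, j) = j * (r + 1) for |S| = j.
def make_cl(r):
    return lambda s: frozenset(x + d for x in s for d in range(r + 1))

def f(r, j):
    return j * (r + 1)

r, ell = 2, 2
levels = iterate_closures({0}, make_cl(r), ell, steps=3)

# The bound m_{i+1} = f(r, ell) * m_i^ell from the proof (keeping A_i itself):
for cur, nxt in zip(levels, levels[1:]):
    assert len(nxt) <= f(r, ell) * len(cur) ** ell + len(cur)
```

The point of the recursion is only that each level stays finite with a bound computed in advance from r and ℓ, independently of M_n.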
4) Similar to the proof of part (3).
So assume K is explicitly almost nice and (K, cℓ) has the strong finite basis property, and we should prove that K is almost nice. So we have to check Definition 7.1(5); clearly K is weakly nice, so we are given A ∈ K_∞ and k ∈ N, and we should find ℓ, m, r satisfying (⊛) from Definition 7.1(5).
Choose r*, m* as in 7.1(9), with (k, |A| + 1) here standing for (k, ℓ) there. Let i(*) be such that:
(⊗) if D ∈ K_∞, A_i ⊆ D for i ≤ i(*), A_i ⊆ A_{i+1}, B ⊆ D, |B| ≤ f*(m*, |A| + 1), then for some i < i(*) there is B' such that A_{i+1} ∩ B ⊆ B' ⊆ A_i and B ⌣^D_{B'} cℓ^{r*}(B', A_{i+1})
[why does i(*) exist? as (K, cℓ) has the finite basis property.]
Define inductively m_i for i ≤ i(*) by m_0 = |A| and m_{i+1} = f*(r*, m_i). Lastly, let m = m_{i(*)}, r = r* and ℓ = m + f*(m, |A| + 1).
So let M_n be random enough, f : A → M_n and b ∈ M_n ∖ cℓ^m(f(A), M_n), and we should find A_0, A^+, B satisfying (α)–(ζ) of (⊛) of 7.1(5). Let B_1 = cℓ^m(A ∪ {b}, M_n).
We define by induction on i a set A'_i as follows:
A'_0 = f(A), A'_{i+1} = cℓ^{r*}(A'_i, M_n).
Clearly |A'_i| ≤ m_i. As (K, cℓ) has the finite basis property (and by the choice of ℓ above), we can find i such that A'_i ∩ B_1 = A'_{i+1} ∩ B_1 ≤_i A'_{i+1}.
Now choose A_0 = A'_i, A^+ = A'_{i+1}, B^+ = B_1 ∪ A'_{i+1}, and let us check clauses (α)–(ζ) of (⊛) of 7.1(5): now (α), (β) hold by the choices of A_0, A^+, B^+, clause (ε) holds by the choice of A'_{i+1} = A^+, and clause (γ) holds as
|B^+| ≤ |cℓ^m(A ∪ {b}, M_n)| + |A'_{i+1}| ≤ f*(m, |A| + 1) + m_{i+1} ≤ f*(m, |A| + 1) + m = ℓ.
5) So assume that K is almost nice (i.e. Definition 7.1(5)) and let us check Definition 7.1(6). Obviously K is weakly nice. So let A ∈ K_∞ and k ∈ N be given; we should find ℓ, m, r such that (⊛) of Definition 7.1(6) holds. We just choose them as in Definition 7.1(5). Now let M_n be random enough, f : A → M_n and b ∈ M_n, and we should find A_0, A^+, B, D as in (⊛) of Definition 7.1(6). Let A_0, A^+, B^+ be as required in (⊛) of Definition 7.1(5). Let us choose B = A_0 ∪ {b}, D = B^+, and we have to check clauses (α)–(ζ) of Definition 7.1(6). Now they hold by the respective clauses in Definition 7.1(5)(⊛), but for (δ) we have to use 7.3(2). □_{7.5}
7.6 Remark. Let "K is explicitly′ semi-nice" mean:
⊛ K is weakly nice, and for every ℓ_0, k there is m such that for every ℓ_1 there is r such that:
(⊛) if M_n is random enough, A <_s D, B ⊆ D ⊆ M_n, |A ∪ B| ≤ ℓ_0, |D| ≤ ℓ_1 and A^+ = cℓ^r(A, M_n), cℓ^m(B, M_n) ⊆ D, then (A^+, A, B, D) is semi (k, r)-good.
Now explicitly semi-nice ⇒ explicitly′ semi-nice ⇒ semi-nice.
7.7 Claim. Assume that K is a 0-1 context obeying h̄ = (h^d, h^u) which is bounded by h*, with cℓ = cℓ_{h̄,h*}, and assume (∗) of 6.10(2).
1) K is weakly nice.
2) A sufficient condition for "(K, cℓ) is explicitly^{+} nice" is:
⊗_2 if A <_s B <_i D and A ≤_s D, then for some ζ ∈ R^+, for every random enough M_n, we have (h*[M_n])^{ζ} > h^u_{A,D}[M_n]/h^d_{A,B}[M_n].
3) A sufficient condition for "(A^+, A, B, D) is semi (k, r)-good" is:
⊗_3 A ⊆ A^+, B ⊆ D ∈ K_∞ and A ≤_s D and r ≥ k + |A| and:
⊠^{k,r}_{A,B,D} if D ⊆ D' = D ∪ C, |C| ≤ k, D' ≠ D ⊆ cℓ(A, D') and C ∩ B <_i C, A ≤_i A' ≤_s D', then for every ε ∈ R^+, for every random enough M_n, we have ε > h^u_{A,D}[M_n]/h^d_{A',D'}[M_n].
4) A sufficient condition for K being explicitly semi-nice (see Definition 7.1(10), hence semi-nice, by 7.5(3)) is: [Saharon copied]
⊗_4 if B ⊆ D ∈ K_∞, A ≤_s D, k ∈ N and r ≥ |A| + k, then ⊠^{k,r}_{A,B,D} of part (3) holds.
5) A sufficient condition for K being explicitly semi-nice (hence semi-nice by 7.5(3)) is:
⊗_5 if k, ℓ ∈ N, M_n is random enough, B ⊆ D ⊆ M_n, |D| ≤ ℓ, cℓ^k(B, M_n) ⊆ D, A <_s D and r = k + |A|, then ⊠^{k,r}_{A,B,D} of part (3) holds.
6) A sufficient condition for K being explicitly′ semi-nice (see 7.6) is:
⊗_6 for any ℓ, k ∈ N there are m, r ∈ N such that: if M_n is random enough, A ⊆ B ⊆ M_n, |B| ≤ ℓ, D = cℓ^m(B, M_n) and D ≤_i D' ⊆ D ∪ C, |C| ≤ k, B ∩ C <_i C, A ≤_i A' ≤_s D', then ⊠^{k,r}_{A',B,D} holds.
Remark. 1) When cℓ^k(A, M) has no bound in terms of k, |A|, we may consider instead cℓ^{k,ℓ+1}(A, M), the repeated closure under cℓ^k, where ℓ is the length of the iteration. What about the bound on the size of cℓ^{k,ℓ}(A, M)? Consider log(‖M_n‖).
2) We may consider assigning to A <_i B a possibly negative α(A, B), to measure how few such cases arise; this may help make the additive formulas meaningful.
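The remark's concern about iteration length can be illustrated with a toy Python sketch (my illustration, not the paper's definition of cℓ^{k,ℓ}): even when a single application of a closure at most triples the size of its input, ℓ iterations can grow a set like 2^ℓ, so already ℓ ≈ log‖M_n‖ iterations may reach a set of size comparable to ‖M_n‖. The "doubling" closure below is hypothetical.

```python
def iterated_closure(a, cl, ell):
    """cl^{k, ell}: apply the one-step closure cl ell times."""
    cur = frozenset(a)
    for _ in range(ell):
        cur = frozenset(cl(cur))
    return cur

# Hypothetical one-step closure inside a universe of size n:
# close under x -> 2x and x -> 2x+1, so one step at most triples a set.
n = 1 << 10

def cl(s):
    out = set(s)
    for x in s:
        if 2 * x < n:
            out.add(2 * x)
        if 2 * x + 1 < n:
            out.add(2 * x + 1)
    return out

sizes = [len(iterated_closure({1}, cl, ell)) for ell in range(6)]
# Each step is linearly bounded, yet the iteration grows exponentially in ell.
for e in range(6):
    level = iterated_closure({1}, cl, e)
    assert len(cl(level)) <= 3 * len(level)
```

This is why a bound such as log‖M_n‖ on the iteration length is the natural candidate if cℓ^{k,ℓ}(A, M) is to stay meaningfully smaller than M.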
Proof. 1) By 2.5(2).
2) In Definition 7.1(1), "(K, cℓ) is a 0-1 context" holds by 6.10(2), as we are assuming (∗)_2 of 6.10(2). Let r(A, B, k) = |A| + k. So assume B ⌣^D_A C, r = |A| + k and k are given, and without loss of generality D = B ∪ C. Let M_n be random enough and f : C → M_n. Now we know the order of magnitude of
G = {g : g is an embedding of D into M_n extending f}.
Also, by the definition of cℓ (in 2.1(5)),
G_1 = {g ∈ G : (g(D) ∖ f(C)) ∩ cℓ^r(f(C), M_n) ≠ ∅}
has smaller order of magnitude, and also, by the assumption ⊗_2, so has
G_2 = {g ∈ G : cℓ^r(g(D), M_n) ⊈ g(D) ∪ cℓ^r(f(C), M_n)}.
Now every g ∈ G ∖ G_1 ∖ G_2 is as required, by 6.7(1). [Shmuel?]
3) Similar proof.
4) So let ℓ, k be given; we have to choose r as required in Definition 7.1(10). Let r = ℓ + k and let M_n be random enough; we have just to check (⊛) from Definition 7.1(10). So assume M_n is random enough, A <_s D, B ⊆ D ⊆ M_n, |D| ≤ ℓ, cℓ^k(B, M_n) ⊆ D and A^+ = cℓ^r(A, M_n); it suffices to prove that the quadruple (A^+, A, B, D) is semi (k, r)-good. For this we use the criterion from part (3), which holds by (the assumptions above and) the assumption ⊗_4.
5), 6) [Shmuel - details.]
7.8 Claim. Assume that K obeys h̄ and h̄ is polynomial over h.
1) If A <_s B, then for some α(A, B) ∈ R^+ and k = k(A, B) (see Definition 2.3), every random enough M_n satisfies:
(∗) for every embedding f of A into M_n,
‖M_n‖^{α(A,B)}/h[M_n]^k ≤ nu(f, A, B, M_n) ≤ ‖M_n‖^{α(A,B)} · h[M_n]^k.
2) In fact, if A = A_0 <_pr A_1 <_pr . . . <_pr A_k = B, then we can let k(A, B) = Σ_{i<k} k(A_i, A_{i+1}) and α(A, B) = Σ_{ℓ<k} α(A_ℓ, A_{ℓ+1}).
So the sum α(A, B) does not depend on the choice of the decomposition, i.e. of ⟨A_ℓ : ℓ ≤ k⟩, but only on (A, B) (and K).
3) Some h* bounds h̄ (see Definition 2.1(3)); we can choose h*(n) = n, but we can even demand that it satisfies: for every ε > 0, for every random enough M_n, we have h*(M_n) < ‖M_n‖^{ε}.
4) If A <_s B <_s C, then (see 7.8(2) above):
α(A, C) = α(A, B) + α(B, C).
5) If A <_s B ≤_i D, A <_s D, then α(A, B) ≥ α(A, D).
6) If A <_s B, D = B ∪ C and A ⊆ C <_s D, then
α(A, B) ≥ α(C, D).
Proof. Straightforward.
Saharon: about almost nice?
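In the motivating polynomial example, the sparse random graph G(n, n^{−α}) of [ShSp 304], one may take α(A, B) = v(B/A) − α·e(B/A): the number of new vertices minus α times the number of new edges, since the expected number of extensions of a fixed copy of A to a copy of B has order n^{α(A,B)}. The additivity in 7.8(2),(4) is then a telescoping sum. A small sketch checking this (my illustration; in [ShSp 304] α is irrational, while the rational α = 1/3 here is only to keep exact arithmetic):

```python
from fractions import Fraction

ALPHA = Fraction(1, 3)  # edge-probability exponent: p = n^(-1/3), for illustration

def ext_exponent(base, ext):
    """alpha(A, B) = v(B/A) - ALPHA * e(B/A) for A <= B,
    where e(B/A) counts the edges of B not already present in A."""
    (va, ea), (vb, eb) = base, ext
    assert va <= vb and ea <= eb  # A must be a substructure of B
    return (len(vb) - len(va)) - ALPHA * len(eb - ea)

# A chain A ⊆ B ⊆ C: a vertex, an edge, a triangle.
A = (frozenset({1}), frozenset())
B = (frozenset({1, 2}), frozenset({(1, 2)}))
C = (frozenset({1, 2, 3}), frozenset({(1, 2), (1, 3), (2, 3)}))

# 7.8(4): alpha(A, C) = alpha(A, B) + alpha(B, C) -- a telescoping sum.
assert ext_exponent(A, C) == ext_exponent(A, B) + ext_exponent(B, C)
```

The independence from the decomposition claimed in 7.8(2) is visible here: both v(·/·) and e(·/·) telescope along any chain from A to B, so their combination does too.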
7.9 Claim. Assume that K is a 0-1 context obeying h̄, and h̄ is polynomial over h, so h̄ is bounded by some h* (see 2.5); let cℓ = cℓ_{h̄,h*}.
1) K is weakly nice.
2) B ⌣^D_A C iff A ≤_s B ⊆ D, A ⊆ C ≤_s D, B ∩ C = A and α(A, B) = α(C, C ∪ B).
3) If condition ⊗_3 below holds, then condition (∗)_2 of 6.10(2) holds, hence (K, cℓ) is a 0-1 context, where:
⊗_3 if A ⊆ B ⊆ D, A ⊆ C ⊆ D, A <_s B, ¬(C <_s D) or ¬(B ⌣^D_A C), and D = B ∪ C, then α(A, B) > α(C, D).
4) A sufficient condition for "(K, cℓ) is explicitly^{+} nice" is:
⊗_4 if A <_s B <_i D and A ≤_s D, then α(A, B) > α(A, D).
5) A sufficient^{12} condition for "K is semi-nice" is:
⊗_5 for every ℓ ∈ N, for some r ∈ N, we have: for every random enough M_n and every A ⊆ B ⊆ M_n, |B| ≤ ℓ, letting D = cℓ^r(B, M_n), A ≤_i A_0 ≤_s D, we have:
⊠^{k,ℓ}_{A_0,B,D} if D ⊆ D_1 = D ∪ C, |C| ≤ k, C ∩ B <_i C, A_0 ≤_i A_1 <_s D, A_1 embeddable into M_n over A, then α(A_0, D) > α(A_1, D_1).
6) A sufficient condition for ⊗_3 of 7.9(3) above is:
⊗_6 if A <_pr B ⊆ D, A ⊆ C <_pr D = B ∪ C and ¬(B ⌣^D_A C), then α(A, B) > α(C, D).
Proof. 1) Should be clear.
2) Should be clear.
3) Read (∗)_2 of 6.10(2).
4) Read the definition.
5) Easy.
6) Easy.
^{12} Still it does not cover all.
7.10 Definition. K is polynomially nice if it obeys some h̄ which is polynomial over h and satisfies ⊗_3 + ⊗_4 of 7.9.
7.11 Definition. We say K is polynomially semi-nice if it obeys some h̄ which is polynomial over h and satisfies ⊗_3, ⊗_5 of 7.9.
7.12 Definition. 1) We say K, a 0-1 law context, obeys h̄ very nicely if it obeys h̄ and satisfies (∗)_2 of 6.10(2) and ⊗_2 of 7.7(2) (hence it is explicitly nice), with the cℓ of Definition 2.1(5), of course.
2) We say K, a 0-1 law context, obeys a polynomial h̄ very nicely if it satisfies ⊗_3 of 7.9(3) and ⊗_4 of 7.9(4).
3) By 2.5(2), clearly K is weakly nice. As for (∗) of 6.10(2), reread the definitions.
7.13 Definition. We say (K, cℓ, ⌣) is polynomially almost nice if:
(∗)_4 for every k, for some m, t, s, for every A, B ⊆ D with A ≤_s D, we have: for every random enough M_n, if f : D → M_n, A ≤_s D, B ⊆ D, cℓ^s(f(B), M_n) ⊆ f(D) and D ∩ cℓ^m(f(A), M_n) ⊆ f(A), then (A, B, D) is k-good.
REFERENCES.
[BlSh 528] John Baldwin and Saharon Shelah. Randomness and semigenericity. Transactions of the American Mathematical Society, 349:1359–1376, 1997.
[BlSh 156] John T. Baldwin and Saharon Shelah. Second-order quantifiers and the complexity of theories. Notre Dame Journal of Formal Logic, 26:229–303, 1985. Proceedings of the 1980/1 Jerusalem Model Theory year.
[B] James E. Baumgartner. Decomposition of embedding of trees. Notices Amer. Math. Soc., 17:967, 1970.
[Fa76] Ronald Fagin. Probabilities in finite models. Journal of Symbolic Logic, 45:129–141, 1976.
[GKLT] Y.V. Glebskii, D.I. Kogan, M.I. Liagonkii, and V.A. Talanov. Range and degree of reliability of formulas in restricted predicate calculus. Kibernetica, 5:17–27, 1969. Translation of Cybernetics, vol. 5, pp. 142–154.
[Ly85] James F. Lynch. Probabilities of first-order sentences about unary functions. Transactions of the American Mathematical Society, 287:543–568, 1985.
[Ly90] James F. Lynch. Probabilities of sentences about very sparse random graphs. In 31st Annual Symposium on Foundations of Computer Science, IEEE Comput. Soc. Press, Los Alamitos, California, 1990.
[Sh 637] Saharon Shelah. 0-1 laws: putting together two contexts randomly. In preparation.
[Sh 639] Saharon Shelah. On quantification with a finite universe. Journal of Symbolic Logic, submitted.
[Sh 581] Saharon Shelah. When 0-1 law holds for G_{n,p}, p monotonic. In preparation.
[Sh 467] Saharon Shelah. Zero one laws for graphs with edge probabilities decaying with distance. Fundamenta Mathematicae, submitted.
[Sh:93] Saharon Shelah. Simple unstable theories. Annals of Mathematical Logic, 19:177–203, 1980.
[Sh:c] Saharon Shelah. Classification theory and the number of nonisomorphic models, volume 92 of Studies in Logic and the Foundations of Mathematics. North-Holland Publishing Co., Amsterdam, xxxiv+705 pp., 1990.
[Sh 551] Saharon Shelah. In the random graph G(n, p), p = n^{−a}: if ψ has probability O(n^{−ε}) for every ε > 0 then it has probability O(e^{−n^{ε}}) for some ε > 0. Annals of Pure and Applied Logic, 82:97–102, 1996.
[ShSp 304] Saharon Shelah and Joel Spencer. Zero-one laws for sparse random graphs. Journal of the American Mathematical Society, 1:97–115, 1988.
[Sp] Joel Spencer. Survey/expository paper: zero one laws with variable probabilities. Journal of Symbolic Logic, 58:1–14, 1993.