
Paul Garrett: (January 14, 2009)

[19.1] For distinct primes p, q, compute

Z/p ⊗_{Z/pq} Z/q

where for a divisor d of an integer n the abelian group Z/d is given the Z/n-module structure by

(r + nZ) · (x + dZ) = rx + dZ

We claim that this tensor product is 0. To prove this, it suffices to prove that every m ⊗ n (the image of
m × n in the tensor product) is 0, since we have shown that these monomial tensors always generate the
tensor product.

Since p and q are relatively prime, there exist integers a, b such that 1 = ap + bq. Then for all m ∈ Z/p and n ∈ Z/q,

m ⊗ n = 1 · (m ⊗ n) = (ap + bq)(m ⊗ n) = a(pm ⊗ n) + b(m ⊗ qn) = a · 0 + b · 0 = 0

An auxiliary point is to recognize that, indeed, Z/p and Z/q really are Z/pq-modules, and that the equation 1 = ap + bq still does make sense inside Z/pq. ///
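The Bézout relation driving this computation can be checked numerically. The following Python sketch (the `bezout` helper is mine, not from the text) verifies 1 = ap + bq and that each half of a simple tensor is killed:

```python
from math import gcd

def bezout(p, q):
    """Extended Euclid: return (a, b) with a*p + b*q == gcd(p, q)."""
    old_r, r = p, q
    old_a, a = 1, 0
    old_b, b = 0, 1
    while r:
        k = old_r // r
        old_r, r = r, old_r - k * r
        old_a, a = a, old_a - k * a
        old_b, b = b, old_b - k * b
    return old_a, old_b

# For distinct primes p, q: gcd(p, q) = 1 = a*p + b*q, so
# m ⊗ n = (ap + bq)(m ⊗ n) = a(pm ⊗ n) + b(m ⊗ qn) = 0,
# since pm ≡ 0 (mod p) and qn ≡ 0 (mod q).
p, q = 5, 7
a, b = bezout(p, q)
assert gcd(p, q) == 1
assert a * p + b * q == 1
assert all((p * m) % p == 0 for m in range(p))   # pm ⊗ n has first slot 0
assert all((q * n) % q == 0 for n in range(q))   # m ⊗ qn has second slot 0
```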

[19.2] Compute Z/n ⊗Z Q with 0 < n ∈ Z.


We claim that the tensor product is 0. It suffices to show that every simple tensor x ⊗ y is 0, since these monomials generate the tensor product. For any x ∈ Z/n and y ∈ Q,

x ⊗ y = x ⊗ (n · (y/n)) = (nx) ⊗ (y/n) = 0 ⊗ (y/n) = 0

as claimed. ///

[19.3] Compute Z/n ⊗Z Q/Z with 0 < n ∈ Z.


We claim that the tensor product is 0. It suffices to show that every simple tensor x ⊗ y is 0, since these monomials generate the tensor product. For any x ∈ Z/n and y ∈ Q/Z, choose z ∈ Q/Z with n · z = y (possible since Q/Z is divisible); then

x ⊗ y = x ⊗ (n · z) = (nx) ⊗ z = 0 ⊗ z = 0

as claimed. ///

[19.4] Compute HomZ (Z/n, Q/Z) for 0 < n ∈ Z.


Let q : Z → Z/n be the natural quotient map. Given ϕ ∈ HomZ(Z/n, Q/Z), the composite ϕ ◦ q is a Z-homomorphism from the free Z-module Z (on one generator 1) to Q/Z. A homomorphism Φ ∈ HomZ(Z, Q/Z) is completely determined by the image of 1 (since Φ(ℓ) = Φ(ℓ · 1) = ℓ · Φ(1)), and since Z is free this image can be anything in the target Q/Z.
Such a homomorphism Φ ∈ HomZ(Z, Q/Z) factors through Z/n if and only if Φ(n) = 0, that is, n · Φ(1) = 0.
A complete list of representatives for the equivalence classes in Q/Z annihilated by n is 0, 1/n, 2/n, . . . , (n−1)/n.

Thus, HomZ(Z/n, Q/Z) is in bijection with this set, by

ϕ_{i/n}(x + nZ) = ix/n + Z

In fact, we see that HomZ(Z/n, Q/Z) is an abelian group isomorphic to Z/n, with

ϕ_{1/n}(x + nZ) = x/n + Z


as a generator. ///
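This description is easy to check concretely, using `fractions.Fraction` to represent classes in Q/Z by representatives in [0, 1); the helper names below are mine, not from the text:

```python
from fractions import Fraction

def n_torsion_of_Q_mod_Z(n):
    """Representatives in [0, 1) of the classes in Q/Z annihilated by n.

    x + Z is annihilated by n iff n*x is an integer, i.e. x = i/n mod Z.
    """
    return [Fraction(i, n) for i in range(n)]

def phi(i, n):
    """The homomorphism phi_{i/n} : Z/n -> Q/Z, x + nZ -> ix/n + Z."""
    return lambda x: Fraction(i * x, n) % 1

n = 6
tors = n_torsion_of_Q_mod_Z(n)
assert len(tors) == n                      # exactly n classes killed by n
# phi_{1/n} matches the generator described in the text:
f = phi(1, n)
assert [f(x) for x in range(n)] == [Fraction(x, n) % 1 for x in range(n)]
# well-definedness on Z/n: x and x + n give the same value mod Z
assert all(phi(i, n)(x) == phi(i, n)(x + n) for i in range(n) for x in range(n))
```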

[19.5] Compute Q ⊗Z Q.
Q
We claim that this tensor product is isomorphic to Q, via the Z-linear map β induced from the Z-bilinear map B : Q × Q → Q given by

B : x × y → xy

First, observe that the monomials x ⊗ 1 generate the tensor product. Indeed, given a/b ∈ Q (with a, b integers, b ≠ 0) we have

x ⊗ (a/b) = ((x/b) · b) ⊗ (a/b) = (x/b) ⊗ (b · (a/b)) = (x/b) ⊗ a = (x/b) ⊗ (a · 1) = (a · (x/b)) ⊗ 1 = (ax/b) ⊗ 1

proving the claim. Further, any finite Z-linear combination of such elements can be rewritten as a single one: letting n_i ∈ Z and x_i ∈ Q, we have

Σ_i n_i · (x_i ⊗ 1) = (Σ_i n_i x_i) ⊗ 1

This gives an outer bound for the size of the tensor product. Now we need an inner bound, to know that there is no further collapsing in the tensor product.
From the defining property of the tensor product there exists a (unique) Z-linear map from the tensor product to Q, through which B factors. We have B(x, 1) = x, so the induced Z-linear map β is a bijection on {x ⊗ 1 : x ∈ Q}, so it is an isomorphism. ///
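The rewriting x ⊗ (a/b) = (ax/b) ⊗ 1 can be sanity-checked under β, representing a finite sum of simple tensors as a list of pairs. This is an illustrative sketch, not part of the proof; the representation is mine:

```python
from fractions import Fraction as F

def beta(simple_tensors):
    """beta on a finite sum of simple tensors x ⊗ y, given as (x, y) pairs:
    each x ⊗ y maps to the product x*y, and the results are summed."""
    return sum((x * y for x, y in simple_tensors), F(0))

x, a, b = F(3, 2), 5, 7
# x ⊗ (a/b) and (ax/b) ⊗ 1 have the same image under beta,
# matching the rewriting in the text:
assert beta([(x, F(a, b))]) == beta([(a * x / b, F(1))])
# beta turns Z-linear combinations of monomials x_i ⊗ 1 into sums in Q:
assert beta([(F(1, 2), F(1)), (F(1, 3), F(1))]) == F(5, 6)
```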

[19.6] Compute (Q/Z) ⊗Z Q.


We claim that the tensor product is 0. It suffices to show that every simple tensor x ⊗ y is 0, since these monomials generate the tensor product. Given x ∈ Q/Z, let 0 < n ∈ Z such that nx = 0. For any y ∈ Q,

x ⊗ y = x ⊗ (n · (y/n)) = (nx) ⊗ (y/n) = 0 ⊗ (y/n) = 0
as claimed. ///

[19.7] Compute (Q/Z) ⊗Z (Q/Z).


We claim that the tensor product is 0. It suffices to show that every simple tensor x ⊗ y is 0, since these monomials generate the tensor product. Given x ∈ Q/Z, let 0 < n ∈ Z such that nx = 0. For any y ∈ Q/Z, choose z ∈ Q/Z with n · z = y; then

x ⊗ y = x ⊗ (n · z) = (nx) ⊗ z = 0 ⊗ z = 0

as claimed. Note that we do not claim that Q/Z is a Q-module (which it is not), but only that for given y ∈ Q/Z there is another element z ∈ Q/Z such that nz = y. That is, Q/Z is a divisible Z-module.
///
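The divisibility used here is easy to make concrete with `fractions.Fraction`, representing classes in Q/Z by representatives in [0, 1); the helper name is mine:

```python
from fractions import Fraction as F

def divide_in_Q_mod_Z(y, n):
    """Given y = p/q mod Z and 0 < n, return z = p/(q*n) mod Z with n*z = y in Q/Z."""
    return F(y.numerator, y.denominator * n) % 1

y, n = F(3, 4), 5
z = divide_in_Q_mod_Z(y, n)
assert (n * z) % 1 == y % 1   # n·z = y as classes in Q/Z
```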

[19.8] Prove that for a subring R of a commutative ring S, with 1R = 1S , polynomial rings R[x] behave
well with respect to tensor products, namely that (as rings)

R[x] ⊗R S ≈ S[x]

Given an R-algebra homomorphism ϕ : R → A and a ∈ A, let Φ : R[x] → A be the unique R-algebra homomorphism R[x] → A which is ϕ on R and such that Φ(x) = a. In particular, this works for A an

S-algebra and ϕ the restriction to R of an S-algebra homomorphism ϕ : S → A. By the defining property


of the tensor product, the bilinear map B : R[x] × S → A given by

B(P (x) × s) = s · Φ(P (x))

gives a unique R-module map β : R[x] ⊗R S → A. Thus, the tensor product has most of the properties
necessary for it to be the free S-algebra on one generator x ⊗ 1.

[0.0.1] Remark: However, we might be concerned about verification that each such β is an S-algebra
map, rather than just an R-module map. We can certainly write an expression that appears to describe the
multiplication, by
(P (x) ⊗ s) · (Q(x) ⊗ t) = P (x)Q(x) ⊗ st
for polynomials P, Q and s, t ∈ S. If it is well-defined, then it is visibly associative, distributive, etc., as
required.

[0.0.2] Remark: The S-module structure itself is more straightforward: for any R-module M the tensor
product M ⊗R S has a natural S-module structure given by

s · (m ⊗ t) = m ⊗ st

for s, t ∈ S and m ∈ M . But one could object that this structure is chosen at random. To argue that this
is a good way to convert M into an S-module, we claim that for any other S-module N we have a natural
isomorphism of abelian groups
HomS (M ⊗R S, N ) ≈ HomR (M, N )
(where on the right-hand side we simply forget that N had more structure than that of R-module). The
map is given by
Φ → ϕΦ where ϕΦ (m) = Φ(m ⊗ 1)
and has inverse
Φϕ ←− ϕ where Φϕ (m ⊗ s) = s · ϕ(m)
One might further carefully verify that these two maps are inverses.

[0.0.3] Remark: The definition of the tensor product does give an R-linear map
β : R[x] ⊗R S → S[x]

associated to the R-bilinear B : R[x] × S → S[x] by

B(P (x) × s) = s · P (x)

for P (x) ∈ R[x] and s ∈ S. But it does not seem trivial to prove that this gives an isomorphism. Instead, it
may be better to use the universal mapping property of a free algebra. In any case, there would still remain
the issue of proving that the induced maps are S-algebra maps.
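For a concrete instance (R = Z, S = Q), the map P(x) ⊗ s → s · P(x) can be sketched on coefficient lists; the representation and names are mine, and this illustrates only the module-level map, not the algebra-structure concerns raised above:

```python
from fractions import Fraction as F

def beta(tensors):
    """beta : R[x] ⊗_R S -> S[x] on a sum of simple tensors P(x) ⊗ s.

    Each P is a tuple of R-coefficients (low degree first); the result is the
    list of S-coefficients of s1*P1 + s2*P2 + ...  Here R = Z, S = Q."""
    deg = max(len(P) for P, _ in tensors)
    out = [F(0)] * deg
    for P, s in tensors:
        for i, c in enumerate(P):
            out[i] += s * c
    return out

# (x^2 + 1) ⊗ 1/2  +  x ⊗ 1/3   |-->   (1/2)x^2 + (1/3)x + 1/2
assert beta([((1, 0, 1), F(1, 2)), ((0, 1), F(1, 3))]) == [F(1, 2), F(1, 3), F(1, 2)]
# compatibility with the S-module structure s·(P ⊗ t) = P ⊗ st:
assert beta([((1, 2), F(3) * F(1, 2))]) == [F(3) * c for c in beta([((1, 2), F(1, 2))])]
```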

[19.9] Let K be a field extension of a field k. Let f (x) ∈ k[x]. Show that
k[x]/f ⊗k K ≈ K[x]/f

where the indicated quotients are by the ideals generated by f in k[x] and K[x], respectively.

Upon reflection, one should realize that we want to prove isomorphism as K[x]-modules. Thus, we implicitly use the facts that k[x]/f is a k[x]-module, that k[x] ⊗k K ≈ K[x] as K-algebras, and that M ⊗k K gives a k[x]-module M a K[x]-module structure by

(Σ_i s_i x^i) · (m ⊗ 1) = Σ_i (x^i · m) ⊗ s_i


The map
k[x] ⊗k K ≈_ring K[x] → K[x]/f
has kernel (in K[x]) exactly the multiples Q(x) · f(x) of f(x) by polynomials Q(x) = Σ_i s_i x^i in K[x]. The inverse image of such a polynomial via the isomorphism is

Σ_i x^i f(x) ⊗ s_i

Let I be the ideal generated in k[x] by f, and Ĩ the ideal generated by f in K[x]. The k-bilinear map

k[x]/f × K → K[x]/f

by
B : (P(x) + I) × s → s · P(x) + Ĩ
gives a map β : k[x]/f ⊗k K → K[x]/f. The map β is surjective, since

β(Σ_i (x^i + I) ⊗ s_i) = Σ_i s_i x^i + Ĩ

hits every polynomial Σ_i s_i x^i mod Ĩ. On the other hand, if

β(Σ_i (x^i + I) ⊗ s_i) ∈ Ĩ

then Σ_i s_i x^i = F(x) · f(x) for some F(x) ∈ K[x]. Let F(x) = Σ_j t_j x^j. With f(x) = Σ_ℓ c_ℓ x^ℓ, we have

s_i = Σ_{j+ℓ=i} t_j c_ℓ

Then, using k-linearity,

Σ_i (x^i + I) ⊗ s_i = Σ_i (x^i + I) ⊗ (Σ_{j+ℓ=i} t_j c_ℓ) = Σ_{j,ℓ} (x^{j+ℓ} + I) ⊗ t_j c_ℓ

= Σ_{j,ℓ} (c_ℓ x^{j+ℓ} + I) ⊗ t_j = Σ_j ((Σ_ℓ c_ℓ x^{j+ℓ}) + I) ⊗ t_j = Σ_j (f(x) x^j + I) ⊗ t_j = Σ_j 0 ⊗ t_j = 0

So the map is a bijection, so it is an isomorphism. ///
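A small computational illustration with f(x) = x² + 1: reduction mod f realizes the quotient K[x]/f, whose K-basis of monomial classes 1, x, ..., x^{deg f − 1} matches the K-basis (xⁱ ⊗ 1) of k[x]/f ⊗_k K. The `poly_mod` helper and coefficient convention are mine, and K is taken to be Q for simplicity:

```python
from fractions import Fraction as F

def poly_mod(P, f):
    """Remainder of P modulo f in K[x]; coefficients low-degree-first, K = Q here."""
    P = [F(c) for c in P]
    f = [F(c) for c in f]
    while len(P) >= len(f):
        if P[-1] == 0:
            P.pop()
            continue
        q = P[-1] / f[-1]               # leading-coefficient quotient
        shift = len(P) - len(f)
        for i, c in enumerate(f):
            P[shift + i] -= q * c       # subtract q * x^shift * f
        P.pop()
    while P and P[-1] == 0:             # strip trailing zeros
        P.pop()
    return P

f = [1, 0, 1]                           # f(x) = x^2 + 1
assert poly_mod([0, 0, 0, 1], f) == [F(0), F(-1)]   # x^3 ≡ -x mod f
assert poly_mod([0, 0, 1], f) == [F(-1)]            # x^2 ≡ -1 mod f
```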

[19.10] Let K be a field extension of a field k. Let V be a finite-dimensional k-vectorspace. Show that
V ⊗k K is a good definition of the extension of scalars of V from k to K, in the sense that for any
K-vectorspace W
HomK (V ⊗k K, W ) ≈ Homk (V, W )
where in Homk (V, W ) we forget that W was a K-vectorspace, and only think of it as a k-vectorspace.
This is a special case of a general phenomenon regarding extension of scalars. For any k-vectorspace V the
tensor product V ⊗k K has a natural K-module structure given by

s · (v ⊗ t) = v ⊗ st


for s, t ∈ K and v ∈ V . To argue that this is a good way to convert k-vectorspaces V into K-vectorspaces, we claim that for any other K-module W we have a natural isomorphism of abelian groups

HomK (V ⊗k K, W ) ≈ Homk (V, W )

On the right-hand side we forget that W had more structure than that of k-vectorspace. The map is

Φ → ϕΦ where ϕΦ (v) = Φ(v ⊗ 1)

and has inverse


Φϕ ←− ϕ where Φϕ (v ⊗ s) = s · ϕ(v)
To verify that these are mutual inverses, compute

ϕΦϕ (v) = Φϕ (v ⊗ 1) = 1 · ϕ(v) = ϕ(v)

and
ΦϕΦ (v ⊗ 1) = 1 · ϕΦ (v) = Φ(v ⊗ 1)
which proves that the maps are inverses. ///

[0.0.4] Remark: In fact, the two spaces of homomorphisms in the isomorphism can be given natural
structures of K-vectorspaces, and the isomorphism just constructed can be verified to respect this additional
structure. The K-vectorspace structure on the left is clear, namely

(s · Φ)(m ⊗ t) = Φ(m ⊗ st) = s · Φ(m ⊗ t)

The structure on the right is


(s · ϕ)(m) = s · ϕ(m)
The latter has only the one presentation, since only W is a K-vectorspace.

[19.11] Let M and N be free R-modules, where R is a commutative ring with identity. Prove that M ⊗R N
is free and
rank(M ⊗R N ) = rank M · rank N

Let M and N be free on generators i : X → M and j : Y → N . We claim that M ⊗R N is free on the set map

ℓ : X × Y → M ⊗R N given by ℓ(x, y) = i(x) ⊗ j(y)

To verify this, let ϕ : X × Y → Z be a set map. For each fixed y ∈ Y , the map x → ϕ(x, y) factors through
a unique R-module map By : M → Z. For each m ∈ M , the map y → By (m) gives rise to a unique R-linear
map n → B(m, n) such that
B(m, j(y)) = By (m)
The linearity in the second argument assures that we still have the linearity in the first, since for n = Σ_t r_t j(y_t) we have

B(m, n) = B(m, Σ_t r_t j(y_t)) = Σ_t r_t B_{y_t}(m)

which is a linear combination of linear functions. Thus, there is a unique map to Z induced on the tensor product, showing that the tensor product, with the set map ℓ : X × Y → M ⊗R N , is free. ///
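The two ingredients here — a basis indexed by X × Y, and the extension of a set map on X × Y to a bilinear map — can be illustrated in coordinates. `induced_bilinear` and the sample `phi` are mine; elements of the free modules are written as dicts from generators to coefficients:

```python
from itertools import product

def induced_bilinear(phi):
    """Extend a set map phi on X x Y to the bilinear map
    B(m, n) = sum_{x,y} m[x] * n[y] * phi(x, y), as in the text."""
    def B(m, n):
        return sum(m[x] * n[y] * phi(x, y) for x, y in product(m, n))
    return B

phi = lambda x, y: 10 * x + y          # an arbitrary set map X x Y -> Z, for illustration
B = induced_bilinear(phi)
m = {0: 2, 1: 1}                       # 2·x0 + 1·x1 in M
n = {0: 1, 1: 3}                       # 1·y0 + 3·y1 in N
m2 = {0: 4, 1: 2}                      # 2·m
assert B(m2, n) == 2 * B(m, n)         # linearity in the first argument

# rank multiplicativity: the basis of M ⊗ N is indexed by X × Y
X, Y = range(3), range(2)
assert len(list(product(X, Y))) == len(X) * len(Y)
```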

[19.12] Let M be a free R-module of rank r, where R is a commutative ring with identity. Let S be a
commutative ring with identity containing R, such that 1R = 1S . Prove that as an S-module M ⊗R S is free
of rank r.


We prove a bit more. First, instead of simply an inclusion R ⊂ S, we can consider any ring homomorphism
ψ : R → S such that ψ(1R ) = 1S .
Also, we can consider arbitrary sets of generators, and give more details. Let M be free on generators
i : X → M , where X is a set. Let τ : M × S → M ⊗R S be the canonical map. We claim that M ⊗R S is
free on j : X → M ⊗R S defined by
j(x) = τ (i(x) × 1S )
Given an S-module N , we can be a little forgetful and consider N as an R-module via ψ, by r · n = ψ(r)n.
Then, given a set map ϕ : X → N , since M is free, there is a unique R-module map Φ : M → N such that ϕ = Φ ◦ i. Then the map

B : M × S → N

by
B(m × s) = s · Φ(m)

induces (by the defining property of M ⊗R S) a unique Ψ : M ⊗R S → N with Ψ ◦ τ = B. In particular, for x ∈ X,

Ψ(j(x)) = Ψ(τ (i(x) × 1S )) = 1S · Φ(i(x)) = ϕ(x)

so Ψ ◦ j = ϕ. Since the elements j(x) = i(x) ⊗ 1S generate M ⊗R S as an S-module, Ψ is the unique such S-module map. Thus, M ⊗R S is free on j : X → M ⊗R S.
This argument does not depend upon finiteness of the generating set. ///
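A minimal sketch of this construction for R = Z, S = Q (so ψ is the inclusion), with elements of M ⊗_Z Q written as dicts of Q-coordinates on the generators j(x) = i(x) ⊗ 1; all names are mine:

```python
from fractions import Fraction as F

def Psi(element, phi):
    """The induced map Psi : M ⊗_Z Q -> N for a set map phi : X -> N.

    An element of M ⊗_Z Q is a finite sum  Σ_x  s_x · j(x)  with s_x in Q,
    represented as a dict x -> s_x; Psi sends it to Σ_x s_x · phi(x)."""
    return sum((s * phi(x) for x, s in element.items()), F(0))

phi = lambda x: F(x * x)               # a sample set map X -> N, N = Q, for illustration
elt = {1: F(1, 2), 3: F(1, 3)}         # (x1 ⊗ 1/2) + (x3 ⊗ 1/3)
assert Psi(elt, phi) == F(1, 2) * 1 + F(1, 3) * 9
assert Psi({5: F(1)}, phi) == phi(5)   # Psi ∘ j = phi, as in the text
```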

[19.13] For finite-dimensional vectorspaces V, W over a field k, prove that there is a natural isomorphism
(V ⊗k W )∗ ≈ V ∗ ⊗ W ∗

where X ∗ = Homk (X, k) for a k-vectorspace X.


For finite-dimensional V and W , since V ⊗k W is free on the cartesian product of the generators for V and
W , the dimensions of the two sides match. We make an isomorphism from right to left. Create a bilinear
map
V ∗ × W ∗ → (V ⊗k W )∗
as follows. Given λ ∈ V ∗ and µ ∈ W ∗ , as usual make Λλ,µ ∈ (V ⊗k W )∗ from the bilinear map

Bλ,µ : V × W → k

defined by
Bλ,µ (v, w) = λ(v) · µ(w)


This induces a unique functional Λλ,µ on the tensor product. This induces a unique linear map

V ∗ ⊗ W ∗ → (V ⊗k W )∗

as desired.
Since everything is finite-dimensional, bijectivity will follow from injectivity. Let e1 , . . . , em be a basis for V , f1 , . . . , fn a basis for W , and λ1 , . . . , λm and µ1 , . . . , µn the corresponding dual bases. We have shown that a tensor product of free modules is free on the cartesian product of the generators. Suppose that Σ_{ij} c_ij λ_i ⊗ µ_j gives the 0 functional on V ⊗ W , for some scalars c_ij . Then, for every pair of indices s, t, the functional is 0 on e_s ⊗ f_t . That is,

0 = Σ_{ij} c_ij λ_i (e_s ) µ_j (f_t ) = c_st

Thus, all constants cij are 0, proving that the map is injective. Then a dimension count proves the
isomorphism. ///
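In coordinates, identifying V ⊗ W with k^{mn} via the Kronecker product, the functional Λ_{λ,µ} is kron(λ, µ), and the dual-basis computation above becomes immediate. A numpy sketch; the coordinate identification is an assumption of the sketch, not part of the proof:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 3, 4
lam, mu = rng.integers(-5, 5, m), rng.integers(-5, 5, n)
v, w = rng.integers(-5, 5, m), rng.integers(-5, 5, n)

# Lambda_{lam,mu}(v ⊗ w) = lam(v)·mu(w), with v ⊗ w modeled by kron(v, w):
assert np.kron(lam, mu) @ np.kron(v, w) == (lam @ v) * (mu @ w)

# On dual bases: kron of standard basis vectors gives the standard basis of
# k^{mn}, so the coefficients c_st are read off directly, as in the proof.
E, Fb = np.eye(m), np.eye(n)
s, t = 1, 2
assert np.array_equal(np.kron(E[s], Fb[t]), np.eye(m * n)[s * n + t])
```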

[19.14] For a finite-dimensional k-vectorspace V , prove that the bilinear map


B : V × V ∗ → Endk (V )

by
B(v × λ)(x) = λ(x) · v
gives an isomorphism V ⊗k V ∗ → Endk (V ). Further, show that the composition of endomorphisms is the
same as the map induced from the map on

(V ⊗ V ∗ ) × (V ⊗ V ∗ ) → V ⊗ V ∗

given by
(v ⊗ λ) × (w ⊗ µ) → λ(w)v ⊗ µ

The bilinear map v × λ → Tv,λ given by

Tv,λ (w) = λ(w) · v

induces a unique linear map j : V ⊗ V ∗ → Endk (V ).

To prove that j is injective, we may use the fact that a basis of a tensor product of free modules is free on
the cartesian product of the generators. Thus, let e1 , . . . , en be a basis for V , and λ1 , . . . , λn a dual basis for
V ∗ . Suppose that

Σ_{i,j=1}^n c_ij e_i ⊗ λ_j → 0 ∈ Endk (V )

That is, for every e_ℓ ,

Σ_{ij} c_ij λ_j (e_ℓ ) e_i = 0 ∈ V

This is

Σ_i c_iℓ e_i = 0 (for all ℓ)

Since the ei s are linearly independent, all the cij s are 0. Thus, the map j is injective. Then counting
k-dimensions shows that this j is a k-linear isomorphism.
Composition of endomorphisms is a bilinear map

Endk (V ) × Endk (V ) −→ Endk (V )


by
S×T →S◦T
Denote by
c : (v ⊗ λ) × (w ⊗ µ) → λ(w) v ⊗ µ
the allegedly corresponding map on the tensor products. The induced map on (V ⊗ V ∗ ) ⊗ (V ⊗ V ∗ ) is an example of a contraction map on tensors. We want to show that the diagram

Endk (V ) × Endk (V ) --∘--> Endk (V )
        ↑ j × j                  ↑ j
(V ⊗k V ∗ ) × (V ⊗k V ∗ ) --c--> V ⊗k V ∗
commutes. It suffices to check this starting with (v ⊗ λ) × (w ⊗ µ) in the lower left corner. Let x ∈ V . Going
up, then to the right, we obtain the endomorphism which maps x to

j(v ⊗ λ) ◦ j(w ⊗ µ) (x) = j(v ⊗ λ)(j(w ⊗ µ)(x)) = j(v ⊗ λ)(µ(x) w)

= µ(x) j(v ⊗ λ)(w) = µ(x) λ(w) v


Going the other way around, to the right then up, we obtain the endomorphism which maps x to

j( c((v ⊗ λ) × (w ⊗ µ))) (x) = j( λ(w)(v ⊗ µ) ) (x) = λ(w) µ(x) v

These two outcomes are the same. ///
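In coordinates, j(v ⊗ λ) is the rank-one matrix vλᵀ (numpy's `outer`), and the correspondence just verified is a one-line matrix identity; a sketch under that coordinate identification:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
v, lam, w, mu = (rng.integers(-3, 4, n) for _ in range(4))

# j(v ⊗ lam) ∘ j(w ⊗ mu) versus j(c((v ⊗ lam) × (w ⊗ mu))) = j(lam(w)·(v ⊗ mu)):
left = np.outer(v, lam) @ np.outer(w, mu)     # composition of endomorphisms
right = (lam @ w) * np.outer(v, mu)           # contraction, then j
assert np.array_equal(left, right)
```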

[19.15] Under the isomorphism of the previous problem, show that the linear map
tr : Endk (V ) → k

is the linear map

V ⊗ V ∗ → k

induced by the bilinear map v × λ → λ(v).
Note that the induced map
V ⊗k V ∗ → k by v ⊗ λ → λ(v)
is another contraction map on tensors. Part of the issue is to compare the coordinate-bound trace with
the induced (contraction) map t(v ⊗ λ) = λ(v) determined uniquely from the bilinear map v × λ → λ(v). To
this end, let e1 , . . . , en be a basis for V , with dual basis λ1 , . . . , λn . The corresponding matrix coefficients
Tij ∈ k of a k-linear endomorphism T of V are

Tij = λi (T ej )

(Always there is the worry about interchange of the indices.) Thus, in these coordinates,
X
tr T = λi (T ei )
i

Let T = j(e_s ⊗ λ_t ). Then, since λ_t (e_i ) = 0 unless i = t,

tr T = Σ_i λ_i (T e_i ) = Σ_i λ_i (j(e_s ⊗ λ_t ) e_i ) = Σ_i λ_i (λ_t (e_i ) · e_s ) = λ_t (λ_t (e_t ) · e_s ) = { 1 (s = t), 0 (s ≠ t) }


On the other hand,

t(e_s ⊗ λ_t ) = λ_t (e_s ) = { 1 (s = t), 0 (s ≠ t) }
Thus, these two k-linear functionals agree on the monomials, which span, so they are equal. ///
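In the same coordinates as the previous problem, the contraction t(v ⊗ λ) = λ(v) is literally the matrix trace of outer(v, λ); a quick numpy check of both the general identity and its value on monomials:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
v, lam = rng.integers(-4, 5, n), rng.integers(-4, 5, n)
# trace of the rank-one endomorphism x -> lam(x)·v equals lam(v):
assert np.trace(np.outer(v, lam)) == lam @ v

# on monomials e_s ⊗ lam_t the trace is 1 if s == t, else 0:
E = np.eye(n)
assert np.trace(np.outer(E[2], E[2])) == 1
assert np.trace(np.outer(E[2], E[3])) == 0
```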

[19.16] Prove that tr (AB) = tr (BA) for two endomorphisms of a finite-dimensional vector space V over
a field k, with trace defined as just above.
Since the maps
Endk (V ) × Endk (V ) → k
by
A × B → tr (AB) and/or A × B → tr (BA)
are bilinear, it suffices to prove the equality on (images of) monomials v ⊗ λ, since these span the endomorphisms over k. Previous examples have converted the issue to one concerning V ⊗k V ∗ . (We have already shown that the isomorphism V ⊗k V ∗ ≈ Endk (V ) converts a contraction map on tensors to composition of endomorphisms, and that the trace on tensors defined as another contraction corresponds to the trace of matrices.) Let tr now denote the contraction-map trace on tensors, and (temporarily) write

(v ⊗ λ) ◦ (w ⊗ µ) = λ(w) v ⊗ µ

for the contraction-map composition of endomorphisms. Thus, we must show that

tr (v ⊗ λ) ◦ (w ⊗ µ) = tr (w ⊗ µ) ◦ (v ⊗ λ)

The left-hand side is

tr (v ⊗ λ) ◦ (w ⊗ µ) = tr ( λ(w) v ⊗ µ) = λ(w) tr (v ⊗ µ) = λ(w) µ(v)

The right-hand side is

tr (w ⊗ µ) ◦ (v ⊗ λ) = tr ( µ(v) w ⊗ λ) = µ(v) tr (w ⊗ λ) = µ(v) λ(w)

These elements of k are the same. ///
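A numerical spot-check of both the matrix identity and its rank-one (monomial) form, in the coordinates of the previous problems:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.integers(-3, 4, (4, 4))
B = rng.integers(-3, 4, (4, 4))
assert np.trace(A @ B) == np.trace(B @ A)

# on rank-one monomials: tr((v ⊗ lam) ∘ (w ⊗ mu)) = lam(w)·mu(v),
# visibly symmetric under swapping the two factors:
v, lam, w, mu = (rng.integers(-3, 4, 4) for _ in range(4))
assert np.trace(np.outer(v, lam) @ np.outer(w, mu)) == (lam @ w) * (mu @ v)
```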

[19.17] Prove that tensor products are associative, in the sense that, for R-modules A, B, C, we have a
natural isomorphism
A ⊗R (B ⊗R C) ≈ (A ⊗R B) ⊗R C
In particular, do prove the naturality, at least the one-third part of it which asserts that, for every R-module homomorphism f : A → A′, the diagram

A ⊗R (B ⊗R C) --≈--> (A ⊗R B) ⊗R C
       |                       |
 f ⊗ (1B ⊗ 1C )        (f ⊗ 1B ) ⊗ 1C
       ↓                       ↓
A′ ⊗R (B ⊗R C) --≈--> (A′ ⊗R B) ⊗R C
commutes, where the two horizontal isomorphisms are those determined in the first part of the problem.
(One might also consider maps g : B → B′ and h : C → C′, but these behave similarly, so there’s no real compulsion to worry about them, apart from awareness of the issue.)
Since all tensor products are over R, we drop the subscript, to lighten the notation. As usual, to make a
(linear) map from a tensor product M ⊗ N , we induce uniquely from a bilinear map on M × N . We have
done this enough times that we will suppress this part now.


The thing that is slightly less trivial is construction of maps to tensor products M ⊗ N . These are always
obtained by composition with the canonical bilinear map

M ×N →M ⊗N

Important at present is that we can create n-fold tensor products, as well. Thus, we prove the indicated isomorphism by proving that both the indicated iterated tensor products are (naturally) isomorphic to the un-parenthesis’d tensor product A ⊗ B ⊗ C, with canonical map τ : A × B × C → A ⊗ B ⊗ C, such that for every trilinear map ϕ : A × B × C → X there is a unique linear Φ : A ⊗ B ⊗ C → X such that ϕ = Φ ◦ τ .

The set map

A × B × C ≈ (A × B) × C → (A ⊗ B) ⊗ C

by
a × b × c → (a × b) × c → (a ⊗ b) ⊗ c

is linear in each single argument (for fixed values of the others). Thus, we are assured that there is a unique induced linear map

i : A ⊗ B ⊗ C → (A ⊗ B) ⊗ C

compatible with the canonical map from A × B × C, that is, with i(a ⊗ b ⊗ c) = (a ⊗ b) ⊗ c.
Similarly, the set map

(A × B) × C ≈ A × B × C → A ⊗ B ⊗ C

by
(a × b) × c → a × b × c → a ⊗ b ⊗ c

is linear in each single argument (for fixed values of the others). Thus, we are assured that there is a unique induced linear map

j : (A ⊗ B) ⊗ C → A ⊗ B ⊗ C

compatible with the canonical maps, that is, with j((a ⊗ b) ⊗ c) = a ⊗ b ⊗ c.
Then j ◦ i is a map of A ⊗ B ⊗ C to itself compatible with the canonical map A × B × C → A ⊗ B ⊗ C. By
uniqueness, j ◦ i is the identity on A ⊗ B ⊗ C. Similarly (just very slightly more complicatedly), i ◦ j must
be the identity on the iterated tensor product. Thus, these two maps are mutual inverses.


To prove naturality in one of the arguments A, B, C, consider f : C → C′. Let jABC be the isomorphism for a fixed triple A, B, C, as above. The diagram of maps of cartesian products (of sets, at least)

(A × B) × C --jABC--> A × B × C
      |                    |
(1A × 1B ) × f        1A × 1B × f
      ↓                    ↓
(A × B) × C′ --jABC′--> A × B × C′

does commute: going down, then right, is

jABC′ (((1A × 1B ) × f )((a × b) × c)) = jABC′ ((a × b) × f (c)) = a × b × f (c)

Going right, then down, gives

(1A × 1B × f )(jABC ((a × b) × c)) = (1A × 1B × f )(a × b × c) = a × b × f (c)

These are the same. ///
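Modeling tensor products of coordinates by Kronecker products, both the associativity isomorphism and the naturality square can be spot-checked numerically; the identification with `kron` is an assumption of this sketch, not part of the proof:

```python
import numpy as np

a = np.array([1, 2])
b = np.array([3, 0, 1])
c = np.array([2, 5])
# the two iterated tensor products agree on monomials:
assert np.array_equal(np.kron(a, np.kron(b, c)), np.kron(np.kron(a, b), c))

# naturality for f : C -> C' on the last factor:
f = np.array([[0, 1], [1, 1]])
lhs = np.kron(a, np.kron(b, f @ c))                       # apply f, then tensor
one = np.kron(np.eye(2, dtype=int), np.kron(np.eye(3, dtype=int), f))
rhs = one @ np.kron(a, np.kron(b, c))                     # tensor, then 1 ⊗ 1 ⊗ f
assert np.array_equal(lhs, rhs)
```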

11