# 14. Chain homotopy

We next need to build a connection between the concept of homotopy in topology, and the behaviour of chain complexes and homology in algebra.

Video (Definition 14.1 to Proposition 14.7)

###### Definition 14.1.

Let $A_{*}$ and $A^{\prime}_{*}$ be chain complexes, and let $f,g\colon A_{*}\to A^{\prime}_{*}$ be chain maps (so $df=fd$ and $dg=gd$). A chain homotopy between $f$ and $g$ is a system of maps $s_{r}\colon A_{r}\to A^{\prime}_{r+1}$ such that $g_{r}-f_{r}=d^{A^{\prime}}_{r+1}s_{r}+s_{r-1}d^{A}_{r}$ for all $r$ (or more briefly, $g-f=ds+sd$). In the case $r=0$, we should interpret $s_{-1}$ as $0$, so the condition is $g_{0}-f_{0}=d_{1}s_{0}$. We say that $f$ and $g$ are chain homotopic if there is a chain homotopy between them. If so, we write $f\cong g$.

###### Remark 14.2.

In Section 2 we introduced a crude intuitive version of homology involving chains as subsets of a space $X$. For such a chain $u\subseteq X$ we can define $\sigma(u)=[0,1]\times u$, which is a chain in $[0,1]\times X$; we call this the thickening of $u$. If $u$ is a filled triangle in $X$, then $\sigma(u)$ is a triangular prism. Now $\partial\sigma(u)$ is the boundary of this triangular prism, which consists of the top, the bottom and the sides. On the other hand, $\sigma\partial(u)$ is what we get by thickening the boundary of $u$, which is just the sides of the prism. After adjusting the $\pm$-signs to account for orientations, we end up with the relation

 $\partial\sigma(u)+\sigma\partial(u)=\text{top}-\text{bottom}.$

This is a relation between chains in $[0,1]\times X$, but if we have two maps $f,g\colon X\to Y$ and a homotopy $h\colon[0,1]\times X\to Y$ between them, then we can apply $h_{*}$ to get a relation between chains in $Y$. The main point of this section is to provide a rigorous and general version of this picture.

###### Example 14.3.

In Example 13.7 we introduced a chain complex $A$ with

 $A_{k}=\begin{cases}{\mathbb{Z}}\{e_{i}\;|\;i\in{\mathbb{Z}}/n\}&\text{ if }k=1\\ {\mathbb{Z}}\{v_{i}\;|\;i\in{\mathbb{Z}}/n\}&\text{ if }k=0\\ 0&\text{ otherwise.}\end{cases}$

The differential is given by $d(e_{i})=v_{i+1}-v_{i}$ and $d(v_{i})=0$. Define $f\colon A_{*}\to A_{*}$ by $f(e_{i})=e_{i+1}$ and $f(v_{i})=v_{i+1}$. This is a chain map because $d(f(e_{i}))=v_{i+2}-v_{i+1}=f(d(e_{i}))$. We claim that $f$ is chain homotopic to the identity. Indeed, we can define $s_{0}\colon A_{0}\to A_{1}$ by $s_{0}(v_{i})=e_{i}$, and we define $s_{i}\colon A_{i}\to A_{i+1}$ to be zero for all $i\neq 0$. We then find that

 $(ds+sd)(e_{i})=d(0)+s(v_{i+1}-v_{i})=e_{i+1}-e_{i}=f(e_{i})-\operatorname{id}(e_{i})$

 $(ds+sd)(v_{i})=d(e_{i})+s(0)=v_{i+1}-v_{i}=f(v_{i})-\operatorname{id}(v_{i}).$

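The computation above can also be checked mechanically by representing everything as integer matrices. The following is only an illustrative sketch, not part of the text: we fix $n=5$, write chains as column vectors in the bases $(e_{0},\dotsc,e_{4})$ and $(v_{0},\dotsc,v_{4})$, and verify $ds+sd=f-\operatorname{id}$ in both degrees.

```python
# Illustrative check of Example 14.3 with n = 5 (names are our own choices).
n = 5

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# d : A_1 -> A_0, with d(e_j) = v_{j+1} - v_j (column j of d is d(e_j))
d = [[(1 if i == (j + 1) % n else 0) - (1 if i == j else 0)
      for j in range(n)] for i in range(n)]

# f shifts indices by one in each degree: f(e_j) = e_{j+1}, f(v_j) = v_{j+1}
f = [[1 if i == (j + 1) % n else 0 for j in range(n)] for i in range(n)]

# s_0 : A_0 -> A_1 sends v_j to e_j, so its matrix is the identity; s_1 = 0
s0 = [[1 if i == j else 0 for j in range(n)] for i in range(n)]
ident = s0   # identity matrix, also representing id : A_* -> A_*

# Degree 1: (ds + sd)(e_j) reduces to s_0(d(e_j)), since s_1 = 0.
lhs1 = matmul(s0, d)
# Degree 0: (ds + sd)(v_j) reduces to d(s_0(v_j)), since d = 0 on A_0.
lhs0 = matmul(d, s0)

diff = [[f[i][j] - ident[i][j] for j in range(n)] for i in range(n)]
assert lhs1 == diff and lhs0 == diff   # ds + sd = f - id in both degrees
```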
###### Proposition 14.4.

The relation of being chain homotopic is an equivalence relation.

###### Proof.

The zero map is a chain homotopy from $f$ to itself. If $s$ is a chain homotopy from $f$ to $g$, then $-s$ is a chain homotopy from $g$ to $f$. If $t$ is also a chain homotopy from $g$ to $h$, then $s+t$ is a chain homotopy from $f$ to $h$. ∎

###### Proposition 14.5.

Suppose we have chain maps $f_{0},f_{1}\colon A_{*}\to B_{*}$ and $g_{0},g_{1}\colon B_{*}\to C_{*}$. Suppose that $f_{0}$ is chain homotopic to $f_{1}$, and that $g_{0}$ is chain homotopic to $g_{1}$. Then $g_{1}\circ f_{1}$ is chain homotopic to $g_{0}\circ f_{0}$.

###### Proof.

We are assuming that $f_{0}$ is chain homotopic to $f_{1}$, which means that there is a chain homotopy $s$ with $f_{1}-f_{0}=ds+sd$. Similarly, there is a chain homotopy $t$ with $g_{1}-g_{0}=dt+td$. Put $u=g_{1}s+tf_{0}$. Using $dg_{1}=g_{1}d$ we get $du=g_{1}ds+dtf_{0}$. Using $f_{0}d=df_{0}$ we get $ud=g_{1}sd+tdf_{0}$. By adding these, we get

 $du+ud=g_{1}(ds+sd)+(td+dt)f_{0}=g_{1}(f_{1}-f_{0})+(g_{1}-g_{0})f_{0}=g_{1}f_{1}-g_{1}f_{0}+g_{1}f_{0}-g_{0}f_{0}=g_{1}f_{1}-g_{0}f_{0},$

as required. ∎
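To illustrate the proof, we can test the formula $u=g_{1}s+tf_{0}$ on the circle complex of Example 14.3, taking $f_{0}=g_{0}=\operatorname{id}$ and $f_{1}=g_{1}=f$ (the rotation), with $s=t=s_{0}$. Then $u$ should be a chain homotopy from $\operatorname{id}$ to $f\circ f$. A minimal sketch, with $n=5$ and our own matrix encoding:

```python
# Hypothetical check of Proposition 14.5 on the circle complex of Example 14.3.
n = 5

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def matadd(A, B):
    return [[A[i][j] + B[i][j] for j in range(n)] for i in range(n)]

d  = [[(1 if i == (j + 1) % n else 0) - (1 if i == j else 0)
       for j in range(n)] for i in range(n)]              # d(e_j) = v_{j+1} - v_j
R  = [[1 if i == (j + 1) % n else 0 for j in range(n)] for i in range(n)]  # f
s0 = [[1 if i == j else 0 for j in range(n)] for i in range(n)]            # s = t

# u_0 = g_1 s_0 + t_0 f_0 = R s_0 + s0 (all other components of u vanish here)
u0 = matadd(matmul(R, s0), s0)

R2 = matmul(R, R)        # g_1 f_1 = f o f
ident = s0               # g_0 f_0 = id (identity matrix)
target = [[R2[i][j] - ident[i][j] for j in range(n)] for i in range(n)]

# (du + ud) should equal g_1 f_1 - g_0 f_0 in each degree:
assert matmul(u0, d) == target   # degree 1: (du+ud)(e_j) = u_0(d(e_j))
assert matmul(d, u0) == target   # degree 0: (du+ud)(v_j) = d(u_0(v_j))
```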

###### Definition 14.6.

We write $\operatorname{hChain}(A_{*},A^{\prime}_{*})$ for the set of chain homotopy classes of chain maps from $A_{*}$ to $A^{\prime}_{*}$, or in other words equivalence classes under the equivalence relation defined above. Using Proposition 14.5, we see that these are the morphism sets of a well-defined category $\operatorname{hChain}$, whose objects are chain complexes. This is analogous to the category $\operatorname{hTop}$ introduced in Definition 9.9.

###### Proposition 14.7.

Let $A_{*}$ and $A^{\prime}_{*}$ be chain complexes, and let $f,g\colon A_{*}\to A^{\prime}_{*}$ be chain maps that are chain homotopic. Then the induced maps $f_{*},g_{*}\colon H_{*}(A)\to H_{*}(A^{\prime})$ are the same.

###### Proof.

Let $s$ be a chain homotopy from $f$ to $g$, so $g-f=ds+sd$. Consider an element $[z]\in H_{r}(A)$, so $z\in A_{r}$ with $dz=0$. Recall from Proposition 13.11 that $f(z),g(z)\in Z_{r}(A^{\prime})$ so that the expressions $[f(z)]$ and $[g(z)]$ are meaningful and refer to elements of $H_{r}(A^{\prime})$. By definition we have $f_{*}[z]=[f(z)]$ and $g_{*}[z]=[g(z)]$, so we need to check that these are the same. We have $g(z)-f(z)=d(s(z))+s(d(z))$ but $d(z)=0$ so $g(z)=f(z)+d(s(z))\in f(z)+B_{r}(A^{\prime})$. It follows that $[g(z)]=[f(z)]$ in $H_{r}(A^{\prime})$, or in other words $f_{*}[z]=g_{*}[z]$. This means that $f_{*}=g_{*}$ as claimed. ∎

###### Proposition 14.8.

Let $X$ and $Y$ be topological spaces, and let $f,g\colon X\to Y$ be continuous maps that are homotopic to each other. Then the chain maps $f_{\#},g_{\#}\colon C_{*}(X)\to C_{*}(Y)$ are chain homotopic to each other, so the induced maps $f_{*},g_{*}\colon H_{*}(X)\to H_{*}(Y)$ are the same.

For the proof, we will first choose a homotopy $h$ from $f$ to $g$, so $h$ is a continuous map $[0,1]\times X\to Y$ with $h(0,x)=f(x)$ and $h(1,x)=g(x)$ for all $x\in X$. We need to use this to construct a chain homotopy between $f_{\#}$ and $g_{\#}$, or equivalently a system of maps $\sigma_{k}\colon C_{k}(X)\to C_{k+1}(Y)$ with $\partial\sigma+\sigma\partial=g_{\#}-f_{\#}$. Before giving the general proof, we will discuss the cases $k=0$ and $k=1$.

Consider a point $a\in S_{0}(X)=X$. We can define a continuous map $v\colon\Delta_{1}\to Y$ by $v(1-t,t)=h(t,a)$. This can be regarded as an element of $S_{1}(Y)\subset C_{1}(Y)$, and it satisfies

 $\partial(v)=v(e_{1})-v(e_{0})=h(1,a)-h(0,a)=g(a)-f(a)=g_{\#}(a)-f_{\#}(a).$

We define $\sigma_{0}(a)=v$ and extend linearly to get a homomorphism $\sigma_{0}\colon C_{0}(X)\to C_{1}(Y)$ with $\partial(\sigma_{0}(u))=g_{\#}(u)-f_{\#}(u)$ for all $u\in C_{0}(X)$.

Now consider instead an element $u\in S_{1}(X)$, or in other words, a continuous map $u\colon\Delta_{1}\to X$. We want to define $\sigma_{1}(u)\in C_{2}(Y)$, so $\sigma_{1}(u)$ should be a ${\mathbb{Z}}$-linear combination of continuous maps from the triangle $\Delta_{2}$ to $Y$. We have a map $m\colon[0,1]\times\Delta_{1}\to Y$ given by $m(t,s)=h(t,u(s))$. Here $\Delta_{1}$ is homeomorphic to $[0,1]$ so $[0,1]\times\Delta_{1}$ is a square, with corners $(0,e_{0})$, $(0,e_{1})$, $(1,e_{0})$ and $(1,e_{1})$. We can divide this square into two triangles and restrict $m$ to these triangles, giving two different maps $\Delta_{2}\to Y$, which will be the terms in $\sigma_{1}(u)$. In detail, we define maps $\zeta_{0},\zeta_{1}\colon\Delta_{2}\to[0,1]\times\Delta_{1}$ by

 $\zeta_{0}(t_{0},t_{1},t_{2})=t_{0}(0,e_{0})+t_{1}(1,e_{0})+t_{2}(1,e_{1})$

 $\zeta_{1}(t_{0},t_{1},t_{2})=t_{0}(0,e_{0})+t_{1}(0,e_{1})+t_{2}(1,e_{1}).$

The composites $(\Delta_{2}\xrightarrow{\zeta_{i}}[0,1]\times\Delta_{1}\xrightarrow{m}Y)$ (for $i=0,1$) can be regarded as elements of $S_{2}(Y)\subset C_{2}(Y)$. We define $\sigma_{1}(u)=m\zeta_{0}-m\zeta_{1}\in C_{2}(Y)$. This can be extended linearly to give a homomorphism $\sigma_{1}\colon C_{1}(X)\to C_{2}(Y)$. We claim that $\partial\sigma_{1}(u)+\sigma_{0}\partial(u)=g_{\#}(u)-f_{\#}(u)$ for all $u\in C_{1}(X)$. Indeed, it will be enough to prove this when $u\in S_{1}(X)$. We then have

 $\partial\sigma_{1}(u)=\partial(m\zeta_{0})-\partial(m\zeta_{1})=m\zeta_{0}\delta_{0}-m\zeta_{0}\delta_{1}+m\zeta_{0}\delta_{2}-m\zeta_{1}\delta_{0}+m\zeta_{1}\delta_{1}-m\zeta_{1}\delta_{2}$

(where $m(t,s)=h(t,u(s))$ as before). Here, for $t=(t_{0},t_{1})=(1-t_{1},t_{1})\in\Delta_{1}$ we have

 $(\zeta_{0}\delta_{0})(t)=\zeta_{0}(0,t_{0},t_{1})=t_{0}(1,e_{0})+t_{1}(1,e_{1})=(1,t)$

 $(\zeta_{0}\delta_{1})(t)=\zeta_{0}(t_{0},0,t_{1})=t_{0}(0,e_{0})+t_{1}(1,e_{1})=(t_{1},t)$

 $(\zeta_{0}\delta_{2})(t)=\zeta_{0}(t_{0},t_{1},0)=t_{0}(0,e_{0})+t_{1}(1,e_{0})=(t_{1},e_{0})$

 $(\zeta_{1}\delta_{0})(t)=\zeta_{1}(0,t_{0},t_{1})=t_{0}(0,e_{1})+t_{1}(1,e_{1})=(t_{1},e_{1})$

 $(\zeta_{1}\delta_{1})(t)=\zeta_{1}(t_{0},0,t_{1})=t_{0}(0,e_{0})+t_{1}(1,e_{1})=(t_{1},t)$

 $(\zeta_{1}\delta_{2})(t)=\zeta_{1}(t_{0},t_{1},0)=t_{0}(0,e_{0})+t_{1}(0,e_{1})=(0,t).$
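These six identities can be spot-checked numerically, writing points of $\Delta_{1}$ as pairs $(t_{0},t_{1})$ with $t_{0}+t_{1}=1$. The coordinate formulas for $\zeta_{0}$ and $\zeta_{1}$ below are read off from their definitions above; the function names are our own.

```python
# Numerical spot-check of the six composites zeta_i delta_j at a sample point.
from fractions import Fraction

def delta(j, pt):                 # face map Delta_1 -> Delta_2: insert 0 at slot j
    pt = list(pt); pt.insert(j, Fraction(0)); return tuple(pt)

def zeta0(t0, t1, t2):            # t0(0,e0) + t1(1,e0) + t2(1,e1)
    return (t1 + t2, (t0 + t1, t2))

def zeta1(t0, t1, t2):            # t0(0,e0) + t1(0,e1) + t2(1,e1)
    return (t2, (t0, t1 + t2))

t1 = Fraction(1, 3)
t = (1 - t1, t1)                  # a point of Delta_1
e0, e1 = (Fraction(1), Fraction(0)), (Fraction(0), Fraction(1))

assert zeta0(*delta(0, t)) == (1, t)
assert zeta0(*delta(1, t)) == (t1, t)
assert zeta0(*delta(2, t)) == (t1, e0)
assert zeta1(*delta(0, t)) == (t1, e1)
assert zeta1(*delta(1, t)) == (t1, t)
assert zeta1(*delta(2, t)) == (0, t)
```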


It follows that

 $m\zeta_{0}\delta_{0}(t)=h(1,u(t))=g(u(t))$

 $m\zeta_{0}\delta_{1}(t)=h(t_{1},u(t))=m\zeta_{1}\delta_{1}(t)$

 $m\zeta_{0}\delta_{2}(t)=h(t_{1},u(e_{0}))=\sigma_{0}(u(e_{0}))(t)$

 $m\zeta_{1}\delta_{0}(t)=h(t_{1},u(e_{1}))=\sigma_{0}(u(e_{1}))(t)$

 $m\zeta_{1}\delta_{2}(t)=h(0,u(t))=f(u(t)).$

Thus, in our formula for $\partial(\sigma_{1}(u))$ we see that the first and last terms give $g_{\#}(u)-f_{\#}(u)$, the second and fifth terms cancel out, and the third and fourth terms give $\sigma_{0}(u(e_{0})-u(e_{1}))=-\sigma_{0}(\partial(u))$. Putting this together, we get $\partial\sigma_{1}(u)+\sigma_{0}\partial(u)=g_{\#}(u)-f_{\#}(u)$ as required.

We now extend the above discussion to cover $k>1$.

Video (Definition 14.9 to Lemma 14.12)

###### Definition 14.9.

For $0\leq i\leq k$ we define $\zeta_{i}\colon\Delta_{k+1}\to[0,1]\times\Delta_{k}$ by

 $\zeta_{i}(t_{0},\dotsc,t_{k+1})=\sum_{j=0}^{i}t_{j}\,(0,e_{j})+\sum_{j=i+1}^{k+1}t_{j}\,(1,e_{j-1})=t_{0}(0,e_{0})+\dotsb+t_{i}(0,e_{i})+t_{i+1}(1,e_{i})+\dotsb+t_{k+1}(1,e_{k}),$

 which works out to

 $\zeta_{i}(t_{0},\dotsc,t_{k+1})=\left(t_{i+1}+\dotsb+t_{k+1},\,(t_{0},\dotsc,t_{i-1},t_{i}+t_{i+1},t_{i+2},\dotsc,t_{k+1})\right).$

If it is necessary to specify $k$, we will write $\zeta_{k,i}$ instead of $\zeta_{i}$.
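As a consistency check, one can implement $\zeta_{i}$ directly from the defining sum and compare it with the closed formula at the end of the definition. The sketch below assumes points of $\Delta_{k}$ are tuples of fractions summing to $1$, and points of $[0,1]\times\Delta_{k}$ are pairs $(t,\text{point})$; the names `zeta_sum` and `zeta_closed` are ours, not the text's.

```python
# Two implementations of zeta_{k,i}, compared on a sample point of Delta_{k+1}.
from fractions import Fraction

def zeta_sum(k, i, t):
    """sum_{j<=i} t_j (0, e_j) + sum_{j>i} t_j (1, e_{j-1}), computed affinely."""
    first = sum(t[j] for j in range(i + 1, k + 2))
    second = [Fraction(0)] * (k + 1)
    for j in range(i + 1):
        second[j] += t[j]            # contribution t_j . e_j
    for j in range(i + 1, k + 2):
        second[j - 1] += t[j]        # contribution t_j . e_{j-1}
    return (first, tuple(second))

def zeta_closed(k, i, t):
    """(t_{i+1}+...+t_{k+1}, (t_0,...,t_{i-1}, t_i+t_{i+1}, t_{i+2},...,t_{k+1}))."""
    head = sum(t[i + 1:])
    coords = t[:i] + (t[i] + t[i + 1],) + t[i + 2:]
    return (head, coords)

k = 3
t = tuple(Fraction(j + 1, 15) for j in range(k + 2))   # (1+2+3+4+5)/15 = 1
for i in range(k + 1):
    assert zeta_sum(k, i, t) == zeta_closed(k, i, t)
```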

###### Example 14.10.

When $k=2$ the maps $\zeta_{i}\colon\Delta_{3}\to[0,1]\times\Delta_{2}$ are given by

 $\zeta_{0}(x_{0},x_{1},x_{2},x_{3})=(x_{1}+x_{2}+x_{3},(x_{0}+x_{1},x_{2},x_{3}))$

 $\zeta_{1}(x_{0},x_{1},x_{2},x_{3})=(x_{2}+x_{3},(x_{0},x_{1}+x_{2},x_{3}))$

 $\zeta_{2}(x_{0},x_{1},x_{2},x_{3})=(x_{3},(x_{0},x_{1},x_{2}+x_{3})).$

We saw above that $[0,1]\times\Delta_{1}$ is the union of the triangles $\zeta_{0}(\Delta_{2})$ and $\zeta_{1}(\Delta_{2})$, which fit together nicely along one edge. In the same way, it can be shown that $[0,1]\times\Delta_{2}$ is the union of the images of the maps $\zeta_{i}\colon\Delta_{3}\to[0,1]\times\Delta_{2}$, and the intersection of any two of these images is another simplex of lower dimension. However, we will not need this so we omit the proof.

When $k=3$ we have

 $\zeta_{0}(x_{0},x_{1},x_{2},x_{3},x_{4})=(x_{1}+x_{2}+x_{3}+x_{4},(x_{0}+x_{1},x_{2},x_{3},x_{4}))$

 $\zeta_{1}(x_{0},x_{1},x_{2},x_{3},x_{4})=(x_{2}+x_{3}+x_{4},(x_{0},x_{1}+x_{2},x_{3},x_{4}))$

 $\zeta_{2}(x_{0},x_{1},x_{2},x_{3},x_{4})=(x_{3}+x_{4},(x_{0},x_{1},x_{2}+x_{3},x_{4}))$

 $\zeta_{3}(x_{0},x_{1},x_{2},x_{3},x_{4})=(x_{4},(x_{0},x_{1},x_{2},x_{3}+x_{4})).$
###### Definition 14.11.

Given $h\colon[0,1]\times X\to Y$ and $u\colon\Delta_{k}\to X$ as before, we put

 $\sigma_{k}(u)=\sum_{i=0}^{k}(-1)^{i}(h\circ(\operatorname{id}\times u)\circ\zeta_{i})\in C_{k+1}(Y).$

We extend this linearly to define $\sigma_{k}\colon C_{k}(X)\to C_{k+1}(Y)$.

###### Lemma 14.12.
• (a)

Suppose that $0\leq i\leq k$ and $0\leq j\leq k+1$, so we can form the composite

 $\Delta_{k}\xrightarrow{\delta_{j}}\Delta_{k+1}\xrightarrow{\zeta_{i}}[0,1]% \times\Delta_{k}.$

If $j<i$ then $\zeta_{i}\delta_{j}$ is the same as the composite

 $\Delta_{k}\xrightarrow{\zeta_{i-1}}[0,1]\times\Delta_{k-1}\xrightarrow{% \operatorname{id}\times\delta_{j}}[0,1]\times\Delta_{k},$

or in other words $\zeta_{i}\delta_{j}=(\operatorname{id}\times\delta_{j})\zeta_{i-1}$.

• (b)

On the other hand, if $j\geq i+2$ then $\zeta_{i}\delta_{j}=(\operatorname{id}\times\delta_{j-1})\zeta_{i}$.

• (c)

For $1\leq i\leq k$ we also have $\zeta_{i}\delta_{i}=\zeta_{i-1}\delta_{i}$.

• (d)

Finally, we have $\zeta_{0}\delta_{0}(x)=(1,x)$ and $\zeta_{k}\delta_{k+1}(x)=(0,x)$.

###### Proof.

All the maps under discussion are affine maps from $\Delta_{k}$ to $[0,1]\times\Delta_{k}$. To check that two such maps agree, it suffices to check that they have the same effect on the vertices of $\Delta_{k}$. The map $\delta_{j}$ sends $e_{p}$ to $e_{p}$ (if $p<j$) or $e_{p+1}$ (if $p\geq j$). The map $\zeta_{i}$ sends $e_{p}$ to $(0,e_{p})$ (if $p\leq i$) or $(1,e_{p-1})$ (if $p>i$).

• (a)

When $j we find that both $\zeta_{i}\delta_{j}$ and $(\operatorname{id}\times\delta_{j})\zeta_{i-1}$ have the following effect:

 $e_{p}\mapsto\begin{cases}(0,e_{p})&\text{ if }p<j\\ (0,e_{p+1})&\text{ if }j\leq p<i\\ (1,e_{p})&\text{ if }p\geq i.\end{cases}$
• (b)

When $j\geq i+2$ we find that both $\zeta_{i}\delta_{j}$ and $(\operatorname{id}\times\delta_{j-1})\zeta_{i}$ have the following effect:

 $e_{p}\mapsto\begin{cases}(0,e_{p})&\text{ if }p\leq i\\ (1,e_{p-1})&\text{ if }i<p<j\\ (1,e_{p})&\text{ if }p\geq j.\end{cases}$
• (c)

When $1\leq i\leq k$ we find that both $\zeta_{i}\delta_{i}$ and $\zeta_{i-1}\delta_{i}$ have the following effect:

 $e_{p}\mapsto\begin{cases}(0,e_{p})&\text{ if }p<i\\ (1,e_{p})&\text{ if }p\geq i.\end{cases}$
• (d)

We also have $\zeta_{0}\delta_{0}(e_{p})=\zeta_{0}(e_{p+1})=(1,e_{p})$ and $\zeta_{k}\delta_{k+1}(e_{p})=\zeta_{k}(e_{p})=(0,e_{p})$. ∎
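Since all the maps in the lemma are determined by their effect on vertices, the whole lemma can be verified by brute-force enumeration. The sketch below uses our own vertex-level encoding (a vertex $e_{p}$ of $\Delta_{k}$ is the integer $p$, and a point of $[0,1]\times\Delta_{k}$ over a vertex is a pair) and checks parts (a)-(d) for small $k$.

```python
# Brute-force check of Lemma 14.12 on vertices (illustrative encoding).
def delta(j, p):            # delta_j on vertices: e_p -> e_p or e_{p+1}
    return p if p < j else p + 1

def zeta(i, p):             # zeta_i on vertices: e_p -> (0,e_p) or (1,e_{p-1})
    return (0, p) if p <= i else (1, p - 1)

def id_x_delta(j, v):       # id x delta_j on [0,1] x Delta_*
    t, p = v
    return (t, delta(j, p))

for k in range(1, 6):
    for i in range(k + 1):
        for j in range(k + 2):
            for p in range(k + 1):          # vertices e_0, ..., e_k of Delta_k
                lhs = zeta(i, delta(j, p))
                if j < i:                                   # part (a)
                    assert lhs == id_x_delta(j, zeta(i - 1, p))
                elif j >= i + 2:                            # part (b)
                    assert lhs == id_x_delta(j - 1, zeta(i, p))
                elif j == i and i >= 1:                     # part (c)
                    assert lhs == zeta(i - 1, delta(i, p))
                elif (i, j) == (0, 0):                      # part (d)
                    assert lhs == (1, p)
                elif (i, j) == (k, k + 1):                  # part (d)
                    assert lhs == (0, p)
```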

Video (Proposition 14.13 to Proposition 14.17)

###### Proposition 14.13.

For all $u\in C_{k}(X)$ we have

 $\partial(\sigma_{k}(u))+\sigma_{k-1}(\partial(u))=g_{\#}(u)-f_{\#}(u).$
###### Proof.

It will be enough to prove this when $u\in S_{k}(X)\subset C_{k}(X)$, so $u\colon\Delta_{k}\to X$. We can then define $m=h\circ(\operatorname{id}\times u)\colon[0,1]\times\Delta_{k}\to Y$, so $\sigma_{k}(u)=\sum_{i=0}^{k}(-1)^{i}m\zeta_{i}$. This gives

 $\partial(\sigma_{k}(u))=\sum_{i=0}^{k}\sum_{j=0}^{k+1}(-1)^{i+j}m\zeta_{i}% \delta_{j}.$

We can divide this sum into five parts:

• $A$ is the sum of the terms where $j<i$

• $B$ is the sum of the terms where $i+2\leq j$

• $C$ is the sum of the terms where $j=i$ with $1\leq i\leq k$

• $D$ is the sum of the terms where $j=i+1$ with $0\leq i<k$

• $E$ consists of the terms with $(i,j)=(0,0)$ or $(i,j)=(k,k+1)$.

We thus have $\partial(\sigma_{k}(u))=A+B+C+D+E$.

Now note that $\partial(u)=\sum_{q=0}^{k}(-1)^{q}u\delta_{q}$, and

 $\sigma_{k-1}(u\delta_{q})=\sum_{p=0}^{k-1}(-1)^{p}(h\circ(\operatorname{id}\times u\delta_{q})\circ\zeta_{p})=\sum_{p=0}^{k-1}(-1)^{p}m(\operatorname{id}\times\delta_{q})\zeta_{p}.$

This gives

 $\sigma_{k-1}(\partial(u))=\sum_{q=0}^{k}\sum_{p=0}^{k-1}(-1)^{p+q}m(\operatorname{id}\times\delta_{q})\zeta_{p}.$

We let $A^{\prime}$ be the sum of the terms where $q\leq p$, and we let $B^{\prime}$ be the sum of the terms where $p<q$.

Each term $(-1)^{i+j}m\zeta_{i}\delta_{j}$ in $A$ can be rewritten (using part (a) of the lemma) as $(-1)^{i+j}m(\operatorname{id}\times\delta_{j})\zeta_{i-1}$. This can then be written as $-(-1)^{p+q}m(\operatorname{id}\times\delta_{q})\zeta_{p}$, where $p=i-1$ and $q=j$. Because $j<i$ for terms in $A$, we see that $q\leq p$, so the rewritten term is the negative of a term in $A^{\prime}$. Similarly, each term $(-1)^{i+j}m\zeta_{i}\delta_{j}$ in $B$ can be rewritten (using part (b) of the lemma) as $(-1)^{i+j}m(\operatorname{id}\times\delta_{j-1})\zeta_{i}$. This can then be written as $-(-1)^{p+q}m(\operatorname{id}\times\delta_{q})\zeta_{p}$, where $p=i$ and $q=j-1$. Because $j\geq i+2$ for terms in $B$, we see that $q>p$, so the rewritten term is the negative of a term in $B^{\prime}$. Using this we see that $A^{\prime}=-A$ and $B^{\prime}=-B$. A similar argument with part (c) of the lemma shows that $D=-C$. We now have

 $\partial(\sigma_{k}(u))+\sigma_{k-1}(\partial(u))=(A+B+C+D+E)+(A^{\prime}+B^{\prime})=(A+B+C-C+E)+(-A-B)=E.$

Also, part (d) of the lemma gives

 $m\zeta_{0}\delta_{0}(x)=m(1,x)=h(1,u(x))=g(u(x))$

 $m\zeta_{k}\delta_{k+1}(x)=m(0,x)=h(0,u(x))=f(u(x)).$

The first of these terms has sign $(-1)^{0+0}=+1$, and the second has sign $(-1)^{k+(k+1)}=-1$. We therefore have $E=g_{\#}(u)-f_{\#}(u)$ as required. ∎

We can gain some insight into the above proof by considering a simple special case. Suppose that $X={\mathbb{R}}^{N}$ and $Y={\mathbb{R}}^{M}$. Suppose that $f\colon X\to Y$ is affine, i.e. $f(x)=Ax+b$ for some matrix $A$ and vector $b$. Suppose that $g\colon X\to Y$ is also affine, and that $h$ is just the linear homotopy $h(t,x)=(1-t)f(x)+tg(x)$. Consider a linear $3$-simplex $u=\langle a_{0},a_{1},a_{2},a_{3}\rangle\in C_{3}(X)$. As all the maps involved are affine, we see that

 $f_{*}(u)=\langle f(a_{0}),f(a_{1}),f(a_{2}),f(a_{3})\rangle$

 $g_{*}(u)=\langle g(a_{0}),g(a_{1}),g(a_{2}),g(a_{3})\rangle.$

As in Example 10.17, we will use abbreviated notation, writing $i$ for $a_{i}$ or $f(a_{i})$, and $\overline{i}$ for $g(a_{i})$, so the above equations become $f_{*}(0123)=0123$ and $g_{*}(0123)=\overline{0123}$. It is then not hard to check that

 $h\circ(\operatorname{id}\times u)\circ\zeta_{2}=\langle f(a_{0}),f(a_{1}),f(a_{2}),g(a_{2}),g(a_{3})\rangle=012\overline{23},$

and similarly for the other terms in $\sigma(u)$. The terms in $\partial(u)$, $\sigma(u)$, $\partial\sigma(u)$ and $\sigma\partial(u)$ can now be laid out as follows:

Most terms cancel in the indicated groups, which correspond to the expressions $A,\dotsc,E$ in the proof of Proposition 14.13, leaving $\partial\sigma(u)+\sigma\partial(u)=\overline{0123}-0123=g_{*}(u)-f_{*}(u)$ as expected. Here we have just displayed the case $k=3$, but the pattern generalises in an obvious way to other values of $k$. This presentation is only directly relevant for linear simplices and affine maps. However, in the general case, most of the work involves linear simplices in the space $[0,1]\times\Delta_{k}\subseteq{\mathbb{R}}^{k+2}$, and then we finish up by applying the map $h\circ(\operatorname{id}\times u)\colon[0,1]\times\Delta_{k}\to Y$. Because of this, it is possible to deduce the general case from the linear case, although we will not spell out the details here.
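For linear simplices the cancellation can be carried out purely combinatorially: encode $f(a_{p})$ and $g(a_{p})$ as formal labels, chains as dictionaries of integer coefficients, and use that $h\circ(\operatorname{id}\times u)\circ\zeta_{i}$ has vertex list $\langle f(a_{0}),\dotsc,f(a_{i}),g(a_{i}),\dotsc,g(a_{k})\rangle$. The following sketch (our own encoding, not from the text) checks $\partial\sigma(u)+\sigma\partial(u)=g_{*}(u)-f_{*}(u)$ for $u=\langle a_{0},a_{1},a_{2},a_{3}\rangle$.

```python
# Combinatorial check of the prism cancellation for a linear 3-simplex.
# A chain is a dict {vertex tuple: coefficient}; f(a_p) is ('f', p), g(a_p) is ('g', p).
def add(x, y):
    out = dict(x)
    for s, c in y.items():
        out[s] = out.get(s, 0) + c
    return {s: c for s, c in out.items() if c != 0}

def boundary(chain):
    out = {}
    for s, c in chain.items():
        for j in range(len(s)):                      # alternating sum of faces
            face = s[:j] + s[j + 1:]
            out[face] = out.get(face, 0) + (-1) ** j * c
    return {s: c for s, c in out.items() if c != 0}

def sigma(chain):
    # sigma<a_0,...,a_k> = sum_i (-1)^i <f(a_0),...,f(a_i), g(a_i),...,g(a_k)>
    out = {}
    for s, c in chain.items():
        k = len(s) - 1
        for i in range(k + 1):
            prism = tuple(('f', p) for p in s[:i + 1]) + \
                    tuple(('g', p) for p in s[i:])
            out[prism] = out.get(prism, 0) + (-1) ** i * c
    return {s: c for s, c in out.items() if c != 0}

k = 3
u  = {tuple(range(k + 1)): 1}                      # u = <a_0, a_1, a_2, a_3>
fu = {tuple(('f', p) for p in range(k + 1)): 1}    # f_*(u), i.e. 0123
gu = {tuple(('g', p) for p in range(k + 1)): 1}    # g_*(u), i.e. 0123 barred

lhs = add(boundary(sigma(u)), sigma(boundary(u)))
rhs = add(gu, {s: -c for s, c in fu.items()})
assert lhs == rhs                                  # everything else cancels
```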

###### Corollary 14.14.

If $f\colon X\to Y$ is a homotopy equivalence, then $f_{*}\colon H_{*}(X)\to H_{*}(Y)$ is an isomorphism.

###### Proof.

Choose a map $g\colon Y\to X$ which is homotopy inverse to $f$, so $g\circ f$ is homotopic to $\operatorname{id}_{X}$ and $f\circ g$ is homotopic to $\operatorname{id}_{Y}$. As homology is a functor, the composite $g_{*}\circ f_{*}\colon H_{*}(X)\to H_{*}(X)$ is the same as $(g\circ f)_{*}$. As $g\circ f$ is homotopic to $\operatorname{id}_{X}$, Proposition 14.8 tells us that $(g\circ f)_{*}=(\operatorname{id}_{X})_{*}$. Using functoriality again, we have $(\operatorname{id}_{X})_{*}=\operatorname{id}_{H_{*}(X)}$. Putting this together, we see that $g_{*}\circ f_{*}=\operatorname{id}\colon H_{*}(X)\to H_{*}(X)$, and essentially the same argument shows that $f_{*}\circ g_{*}=\operatorname{id}\colon H_{*}(Y)\to H_{*}(Y)$. Thus, $f_{*}$ and $g_{*}$ are mutually inverse isomorphisms. ∎

###### Remark 14.15.

Another way to organise the above argument is as follows. Propositions 14.7 and 14.8 tell us that $H_{n}$ can be regarded as a functor $\operatorname{hTop}\to\operatorname{Ab}$. Any homotopy equivalence $f\colon X\to Y$ becomes an isomorphism in $\operatorname{hTop}$, and Corollary 6.18 tells us that functors send isomorphisms to isomorphisms, so $H_{n}(f)$ must be an isomorphism.

###### Proposition 14.16.

If $f\colon X\to Y$ is homotopic to a constant map, then the map $f_{*}\colon H_{n}(X)\to H_{n}(Y)$ is zero for all $n>0$.

###### Proof.

Let $g\colon X\to Y$ be a constant map that is homotopic to $f$. Then $f_{*}=g_{*}$ by Proposition 14.8, but $g_{*}=0$ by Remark 13.14. ∎

###### Proposition 14.17.

Suppose that $X$ is contractible. Then $H_{0}(X)={\mathbb{Z}}$ but $H_{n}(X)=0$ for all $n\neq 0$. In particular, this applies if $X$ is a convex subset of ${\mathbb{R}}^{N}$ for some $N$.

###### Proof.

Proposition 9.17 tells us that $X$ is homotopy equivalent to a point, so it has the same homology as a point, which is given by Proposition 10.23. ∎