# 18. Subdivision

Video (Introduction to subdivision)

Consider the following pictures:

On the left we have the $1$-simplex $\Delta_{1}$, divided into two pieces. In Lemma 10.27 we showed that if $u$ and $v$ are joinable paths in $X$, then $u*v=u+v\pmod{B_{1}(X)}$. Equivalently, if we start with a path $w$ and split it in the middle to get two paths $u$ and $v$, then $w=u+v\pmod{B_{1}(X)}$. Thus, there is a sense in which subdivision of paths acts as the identity in homology.

In the middle picture we have divided the simplex $\Delta_{2}$ into $6$ pieces. In the right hand picture, and the interactive demonstration, we have divided $\Delta_{3}$ into $24$ pieces. It will again turn out that this kind of subdivision acts as the identity in homology. To prove this, we need to study the combinatorics of the subdivision process.

We subdivide $\Delta_{1}$ by introducing a new vertex in the middle (which we call the barycentre), giving two copies of $\Delta_{1}$. To subdivide $\Delta_{2}$, we first subdivide each of the $3$ edges in the same way as $\Delta_{1}$, giving $3\times 2=6$ edges altogether. We then take each of these subdivided edges and connect it to the barycentre of $\Delta_{2}$; this divides $\Delta_{2}$ into $3\times 2=6$ copies of $\Delta_{2}$. To subdivide $\Delta_{3}$, we divide all $4$ of its faces in the same way as $\Delta_{2}$, giving $4\times 3\times 2=24$ triangles on the surface of $\Delta_{3}$. We connect all of these to the barycentre of $\Delta_{3}$; this divides $\Delta_{3}$ into $4\times 3\times 2=24$ copies of $\Delta_{3}$. Continuing in the same way, we divide $\Delta_{n}$ into $(n+1)!$ copies of $\Delta_{n}$. We now start to make this more formal.
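As a sanity check on this count, the following Python sketch (my own encoding, not part of the notes) builds the pieces recursively, representing each point of $\Delta_{n}$ by exact barycentric coordinates, and confirms that $\Delta_{n}$ is divided into $(n+1)!$ pieces for small $n$:

```python
from fractions import Fraction as F
from math import factorial

def vertex(i, n):
    """Vertex e_i of Delta_n, written in barycentric coordinates."""
    return tuple(F(1) if j == i else F(0) for j in range(n + 1))

def subdivide(n):
    """List the small n-simplices (tuples of vertices) of the barycentric
    subdivision of Delta_n, following the recursive construction."""
    if n == 0:
        return [(vertex(0, 0),)]
    b = tuple(F(1, n + 1) for _ in range(n + 1))  # barycentre of Delta_n
    pieces = []
    for i in range(n + 1):          # face map delta_i: Delta_{n-1} -> Delta_n
        for s in subdivide(n - 1):  # subdivide the i-th face...
            face = tuple(p[:i] + (F(0),) + p[i:] for p in s)
            pieces.append((b,) + face)  # ...and cone it on the barycentre
    return pieces

for n in range(4):
    assert len(subdivide(n)) == factorial(n + 1)
```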

Video (Definition 18.1 to Proposition 18.10)

###### Definition 18.1.
• (a)

The barycentre of $\Delta_{n}$ is the point $b_{n}=(1,\dotsc,1)/(n+1)\in\Delta_{n}$ (so $b_{3}=(\frac{1}{4},\frac{1}{4},\frac{1}{4},\frac{1}{4})$, for example). We will write $b$ instead of $b_{n}$ if there is no danger of confusion.

• (b)

Given any linear $k$-simplex $u=\langle a_{0},\dotsc,a_{k}\rangle\in C_{k}(\Delta_{n})$, we define

 $\beta\langle a_{0},\dotsc,a_{k}\rangle=\langle b,a_{0},\dotsc,a_{k}\rangle\in C_{k+1}(\Delta_{n}).$

More generally, if $u=n_{1}u_{1}+\dotsb+n_{r}u_{r}$ with $n_{1},\dotsc,n_{r}\in{\mathbb{Z}}$ and each $u_{i}$ being a linear $k$-simplex, we define $\beta(u)=n_{1}\beta(u_{1})+\dotsb+n_{r}\beta(u_{r})$.

###### Remark 18.2.

It is possible to define $\beta$ for nonlinear $k$-simplices, but a little work is required to check that the resulting maps $\Delta_{k+1}\to\Delta_{n}$ are always continuous. We do not need the general case so we omit it.

###### Lemma 18.3.

Let $u$ be a linear combination of linear $k$-simplices in $\Delta_{n}$ with $k>0$. Then $\partial\beta(u)+\beta\partial(u)=u$.

###### Proof.

We can easily reduce to the case where $u$ is a single linear $k$-simplex, say $u=\langle a_{0},\dotsc,a_{k}\rangle$. Let $u_{i}$ be the same as $u$ except that $a_{i}$ is omitted, so that $\partial(u)=\sum_{i}(-1)^{i}u_{i}$ and thus $\beta\partial(u)=\sum_{i}(-1)^{i}\beta(u_{i})$. On the other hand, we have $\beta(u)=\langle b,a_{0},a_{1},\dotsc,a_{k}\rangle$. For the initial term in $\partial\beta(u)$ we omit the $b$ and we have a sign $(-1)^{0}$; this just gives us $u$. For each of the remaining terms in $\partial\beta(u)$ we omit the $a_{i}$ appearing in position $i+1$ of $\beta(u)$, and multiply by $(-1)^{i+1}$; this gives $-(-1)^{i}\beta(u_{i})$, which cancels with a term in $\beta\partial(u)$. Putting everything together gives $\partial\beta(u)+\beta\partial(u)=u$ as claimed. ∎
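The lemma can also be checked by machine in small cases. In the hypothetical Python encoding below (not code from the notes), a chain is a dictionary sending each linear simplex (a tuple of vertices) to its integer coefficient; the final assertion verifies $\partial\beta(u)+\beta\partial(u)=u$ for a linear $2$-simplex:

```python
from fractions import Fraction as F

def boundary(ch):
    """Boundary of a chain {simplex: coefficient}, omitting each vertex in turn."""
    out = {}
    for s, c in ch.items():
        for i in range(len(s)):
            face = s[:i] + s[i + 1:]
            out[face] = out.get(face, 0) + (-1) ** i * c
    return {s: c for s, c in out.items() if c}

def beta(ch, b):
    """The operator of Definition 18.1: prepend the barycentre b to each simplex."""
    return {(b,) + s: c for s, c in ch.items()}

def add_chains(x, y):
    out = dict(x)
    for s, c in y.items():
        out[s] = out.get(s, 0) + c
    return {s: c for s, c in out.items() if c}

e0, e1, e2 = (F(1), F(0), F(0)), (F(0), F(1), F(0)), (F(0), F(0), F(1))
b = (F(1, 3), F(1, 3), F(1, 3))  # barycentre of Delta_2
u = {(e0, e1, e2): 1}            # a single linear 2-simplex

assert add_chains(boundary(beta(u, b)), beta(boundary(u), b)) == u
```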

We now want to define certain elements $\theta_{n}\in C_{n}(\Delta_{n})$ for all $n$. The idea is that we subdivide $\Delta_{n}$ into smaller copies of $\Delta_{n}$ as sketched previously, and take $\theta_{n}$ to be the sum of these smaller copies with suitable $\pm$-signs to make the orientations match up correctly. We can mark some points in $\Delta_{1}$ and $\Delta_{2}$ as follows:

It will work out that

 $\displaystyle\theta_{0}$ $\displaystyle=\langle e_{0}\rangle$ $\displaystyle\theta_{1}$ $\displaystyle=\langle e_{01},e_{1}\rangle-\langle e_{01},e_{0}\rangle$ $\displaystyle\theta_{2}$ $\displaystyle=\langle e_{012},e_{12},e_{2}\rangle-\langle e_{012},e_{12},e_{1}\rangle-\langle e_{012},e_{02},e_{2}\rangle+\langle e_{012},e_{02},e_{0}\rangle+\langle e_{012},e_{01},e_{1}\rangle-\langle e_{012},e_{01},e_{0}\rangle$

The general picture is as follows.

###### Definition 18.4.

We start with $\theta_{0}=\langle e_{0}\rangle\in C_{0}(\Delta_{0})$. Now suppose that we have $n>0$ and we have already defined an element $\theta_{n-1}\in C_{n-1}(\Delta_{n-1})$ which is a ${\mathbb{Z}}$-linear combination of linear simplices. For $i=0,\dotsc,n$ we have an affine map $\delta_{i}\colon\Delta_{n-1}\to\Delta_{n}$ and thus a chain $(\delta_{i})_{\#}(\theta_{n-1})\in C_{n-1}(\Delta_{n})$, which is again a ${\mathbb{Z}}$-linear combination of linear simplices. We put

 $\displaystyle\theta^{\prime}_{n}$ $\displaystyle=\sum_{i=0}^{n}(-1)^{i}(\delta_{i})_{\#}(\theta_{n-1})\in C_{n-1}(\Delta_{n})$ $\displaystyle\theta_{n}$ $\displaystyle=\beta(\theta^{\prime}_{n})=\sum_{i=0}^{n}(-1)^{i}\beta((\delta_{i})_{\#}(\theta_{n-1}))\in C_{n}(\Delta_{n}).$

This defines $\theta_{n}$ for all $n$ by recursion. Next, suppose we have a space $X$ and a map $u\colon\Delta_{n}\to X$, so $u\in S_{n}(X)\subset C_{n}(X)$. The map $u\colon\Delta_{n}\to X$ then gives a map $u_{\#}\colon C_{n}(\Delta_{n})\to C_{n}(X)$, and we define $\operatorname{sd}(u)=u_{\#}(\theta_{n})$.
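The recursion is easy to implement. In the Python sketch below (an illustration under my own conventions, not code from the notes), a point of $\Delta_{n}$ is a tuple of exact barycentric coordinates, so $(\delta_{i})_{\#}$ just inserts a zero coordinate at position $i$; the assertions confirm the formula for $\theta_{1}$ and that $\theta_{n}$ has $(n+1)!$ terms:

```python
from fractions import Fraction as F
from math import factorial

def vertex(i, n):
    """Vertex e_i of Delta_n, in barycentric coordinates."""
    return tuple(F(1) if j == i else F(0) for j in range(n + 1))

def add_to(ch, s, c):
    """Add c times the simplex s to the chain ch, discarding zero terms."""
    ch[s] = ch.get(s, 0) + c
    if ch[s] == 0:
        del ch[s]

def delta_push(i, ch):
    """(delta_i)_#: insert a zero barycentric coordinate at position i."""
    return {tuple(p[:i] + (F(0),) + p[i:] for p in s): c for s, c in ch.items()}

def theta(n):
    """The chain theta_n of Definition 18.4, computed by the recursion."""
    if n == 0:
        return {(vertex(0, 0),): 1}
    b = tuple(F(1, n + 1) for _ in range(n + 1))  # barycentre of Delta_n
    out = {}
    for i in range(n + 1):
        for s, c in delta_push(i, theta(n - 1)).items():
            add_to(out, (b,) + s, (-1) ** i * c)  # theta_n = beta(theta'_n)
    return out

e01 = (F(1, 2), F(1, 2))  # barycentre of Delta_1
assert theta(1) == {(e01, vertex(1, 1)): 1, (e01, vertex(0, 1)): -1}
assert all(len(theta(n)) == factorial(n + 1) for n in range(4))
```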

###### Example 18.5.

The map $\delta_{0}\colon\Delta_{1}\to\Delta_{2}$ sends $e_{0}$, $e_{1}$ and $e_{01}$ to $e_{1}$, $e_{2}$ and $e_{12}$ respectively. It follows that

 $\displaystyle(\delta_{0})_{\#}(\theta_{1})$ $\displaystyle=(\delta_{0})_{\#}(\langle e_{01},e_{1}\rangle-\langle e_{01},e_{0}\rangle)=\langle e_{12},e_{2}\rangle-\langle e_{12},e_{1}\rangle$ $\displaystyle\beta((\delta_{0})_{\#}(\theta_{1}))$ $\displaystyle=\langle e_{012},e_{12},e_{2}\rangle-\langle e_{012},e_{12},e_{1}\rangle.$

After expressing $\beta((\delta_{1})_{\#}(\theta_{1}))$ and $\beta((\delta_{2})_{\#}(\theta_{1}))$ in the same way, we obtain the advertised formula for $\theta_{2}$:

 $\theta_{2}=\langle e_{012},e_{12},e_{2}\rangle-\langle e_{012},e_{12},e_{1}\rangle-\langle e_{012},e_{02},e_{2}\rangle+\langle e_{012},e_{02},e_{0}\rangle+\langle e_{012},e_{01},e_{1}\rangle-\langle e_{012},e_{01},e_{0}\rangle.$
###### Example 18.6.

Suppose we have a path $u\colon\Delta_{1}\to X$. We identify $\Delta_{1}$ with $[0,1]$ as usual, so the points $e_{0}$, $e_{1}$ and $e_{01}$ become $0$, $1$ and $\frac{1}{2}$ respectively. The map $\langle e_{01},e_{1}\rangle\colon\Delta_{1}\to\Delta_{1}$ is thus $t\mapsto(1+t)/2$, and the map $\langle e_{01},e_{0}\rangle\colon\Delta_{1}\to\Delta_{1}$ is $t\mapsto(1-t)/2$. This means that $\operatorname{sd}(u)=v-w$, where $v(t)=u((1+t)/2)$ and $w(t)=u((1-t)/2)$. In other words, $v$ is the second half of $u$ and $w$ is the reverse of the first half of $u$, so $u=\overline{w}*v$.

###### Remark 18.7.

An alternative approach is as follows. Let $\pi$ be a permutation of $\{0,\dotsc,n\}$. For $0\leq i\leq n$ we put $e^{\pi}_{i}=(n-i+1)^{-1}\sum_{j=i}^{n}e_{\pi(j)}\in\Delta_{n}$. This gives a linear $n$-simplex $u_{\pi}=\langle e^{\pi}_{0},\dotsc,e^{\pi}_{n}\rangle\in S_{n}\Delta_{n}$. It can be shown that $\theta_{n}=\sum_{\pi}\text{sgn}(\pi)u_{\pi}\in C_{n}(\Delta_{n})$. We could instead have taken this formula as the definition of $\theta_{n}$; that would make some things easier and some other things harder.
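This claim can be tested computationally. The sketch below (hypothetical names of my own; it repeats the recursive construction of Definition 18.4 with points of $\Delta_{n}$ as tuples of exact barycentric coordinates) checks that the permutation formula agrees with the recursion for $n\leq 3$:

```python
from fractions import Fraction as F
from itertools import permutations

def vertex(i, n):
    return tuple(F(1) if j == i else F(0) for j in range(n + 1))

def add_to(ch, s, c):
    ch[s] = ch.get(s, 0) + c
    if ch[s] == 0:
        del ch[s]

def delta_push(i, ch):
    """(delta_i)_#: insert a zero barycentric coordinate at position i."""
    return {tuple(p[:i] + (F(0),) + p[i:] for p in s): c for s, c in ch.items()}

def theta(n):
    """theta_n via the recursion of Definition 18.4."""
    if n == 0:
        return {(vertex(0, 0),): 1}
    b = tuple(F(1, n + 1) for _ in range(n + 1))
    out = {}
    for i in range(n + 1):
        for s, c in delta_push(i, theta(n - 1)).items():
            add_to(out, (b,) + s, (-1) ** i * c)
    return out

def sgn(pi):
    """Sign of a permutation, by counting inversions."""
    s = 1
    for a in range(len(pi)):
        for b in range(a + 1, len(pi)):
            if pi[a] > pi[b]:
                s = -s
    return s

def theta_by_permutations(n):
    """theta_n = sum over pi of sgn(pi) * <e^pi_0, ..., e^pi_n> (Remark 18.7)."""
    out = {}
    for pi in permutations(range(n + 1)):
        verts = []
        for i in range(n + 1):
            # e^pi_i is the average of e_{pi(j)} for j = i, ..., n
            pts = [vertex(pi[j], n) for j in range(i, n + 1)]
            verts.append(tuple(sum(col, F(0)) / len(pts) for col in zip(*pts)))
        add_to(out, tuple(verts), sgn(pi))
    return out

assert all(theta_by_permutations(n) == theta(n) for n in range(4))
```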

###### Lemma 18.8.

For any $f\colon X\to Y$ and any $u\in C_{n}(X)$ we have $f_{\#}(\operatorname{sd}(u))=\operatorname{sd}(f_{\#}(u))$ in $C_{n}(Y)$.

###### Proof.

We can easily reduce to the case where $u\in S_{n}(X)$, or in other words $u\colon\Delta_{n}\to X$. We then have

 $f_{\#}(\operatorname{sd}(u))=f_{\#}(u_{\#}(\theta_{n}))=(f\circ u)_{\#}(\theta_{n})=\operatorname{sd}(f_{\#}(u)).$ ∎

###### Lemma 18.9.

If we let $\iota_{n}$ denote the identity map $\Delta_{n}\to\Delta_{n}$ considered as an element of $C_{n}(\Delta_{n})$, then we have $\theta^{\prime}_{n}=\operatorname{sd}(\partial(\iota_{n}))$, and therefore $\theta_{n}=\beta(\operatorname{sd}(\partial(\iota_{n})))$.

###### Proof.

We have $\partial(\iota_{n})=\sum_{i=0}^{n}(-1)^{i}(\iota_{n}\circ\delta_{i})=\sum_{i=0}^{n}(-1)^{i}\delta_{i}$. By definition $\operatorname{sd}$ is linear and has $\operatorname{sd}(\delta_{i})=(\delta_{i})_{\#}(\theta_{n-1})$. It follows that $\operatorname{sd}(\partial(\iota_{n}))=\sum_{i=0}^{n}(-1)^{i}\operatorname{sd}(\delta_{i})=\sum_{i=0}^{n}(-1)^{i}(\delta_{i})_{\#}(\theta_{n-1})=\theta^{\prime}_{n}$ as claimed. ∎

###### Proposition 18.10.

The map $\operatorname{sd}\colon C_{*}(X)\to C_{*}(X)$ is a chain map.

###### Proof.

We must show that for all $n\geq 0$ and all $u\in C_{n}(X)$ we have $\partial(\operatorname{sd}(u))=\operatorname{sd}(\partial(u))$ in $C_{n-1}(X)$. If $n=0$ then $C_{n-1}(X)=0$ and so the claim is clear. For $n>0$ we will argue by induction. We can easily reduce to the case where $u\in S_{n}(X)\subset C_{n}(X)$, or in other words $u\colon\Delta_{n}\to X$. We then have $\operatorname{sd}(u)=u_{\#}(\theta_{n})$, and $u_{\#}$ is a chain map, so $\partial(\operatorname{sd}(u))=\partial(u_{\#}(\theta_{n}))=u_{\#}(\partial(\theta_{n}))$. On the other hand, we have

 $\operatorname{sd}(\partial(u))=\sum_{i=0}^{n}(-1)^{i}\operatorname{sd}(u\circ\delta_{i})=\sum_{i=0}^{n}(-1)^{i}(u\circ\delta_{i})_{\#}(\theta_{n-1})=u_{\#}(\theta^{\prime}_{n}).$

It will therefore be enough to prove that $\partial(\theta_{n})=\theta^{\prime}_{n}$.

We are assuming inductively that $\partial(\operatorname{sd}(v))=\operatorname{sd}(\partial(v))$ for all $v\in C_{n-1}(X)$. We can take $v=\partial(\iota_{n})$, so $\partial(v)=\partial^{2}(\iota_{n})=0$; it follows that $\partial(\operatorname{sd}(\partial(\iota_{n})))=0$, or in other words $\partial(\theta^{\prime}_{n})=0$. We also know from Lemma 18.3 that $\partial(\beta(\theta^{\prime}_{n}))+\beta(\partial(\theta^{\prime}_{n}))=\theta^{\prime}_{n}$. As $\beta(\theta^{\prime}_{n})=\theta_{n}$ and $\partial(\theta^{\prime}_{n})=0$ this can be rewritten as $\partial(\theta_{n})=\theta^{\prime}_{n}$, as required. ∎
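The identity $\partial(\theta_{n})=\theta^{\prime}_{n}$ can also be confirmed by direct computation in small degrees. The Python sketch below (a hypothetical encoding of my own: chains are dictionaries sending vertex-tuples to integer coefficients, with vertices in exact barycentric coordinates) checks it for $n\leq 3$:

```python
from fractions import Fraction as F

def vertex(i, n):
    return tuple(F(1) if j == i else F(0) for j in range(n + 1))

def add_to(ch, s, c):
    ch[s] = ch.get(s, 0) + c
    if ch[s] == 0:
        del ch[s]

def delta_push(i, ch):
    """(delta_i)_#: insert a zero barycentric coordinate at position i."""
    return {tuple(p[:i] + (F(0),) + p[i:] for p in s): c for s, c in ch.items()}

def boundary(ch):
    """Boundary of a chain, omitting each vertex in turn with alternating signs."""
    out = {}
    for s, c in ch.items():
        for i in range(len(s)):
            add_to(out, s[:i] + s[i + 1:], (-1) ** i * c)
    return out

def theta(n):
    """theta_n via the recursion of Definition 18.4."""
    if n == 0:
        return {(vertex(0, 0),): 1}
    b = tuple(F(1, n + 1) for _ in range(n + 1))
    out = {}
    for i in range(n + 1):
        for s, c in delta_push(i, theta(n - 1)).items():
            add_to(out, (b,) + s, (-1) ** i * c)
    return out

def theta_prime(n):
    """theta'_n = sum of (-1)^i (delta_i)_#(theta_{n-1})."""
    out = {}
    for i in range(n + 1):
        for s, c in delta_push(i, theta(n - 1)).items():
            add_to(out, s, (-1) ** i * c)
    return out

for n in (1, 2, 3):
    assert boundary(theta(n)) == theta_prime(n)
```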

###### Proposition 18.11.

The chain map $\operatorname{sd}\colon C_{*}(X)\to C_{*}(X)$ is chain-homotopic to the identity.

###### Proof.

We define chains $\kappa_{n}\in C_{n+1}(\Delta_{n})$ recursively as follows. We start with $\kappa_{0}=0$. Now suppose that $n>0$ and we have already defined $\kappa_{n-1}\in C_{n}(\Delta_{n-1})$. For $0\leq i\leq n$ we have a face inclusion $\delta_{i}\colon\Delta_{n-1}\to\Delta_{n}$, using which we can form $(\delta_{i})_{\#}(\kappa_{n-1})\in C_{n}(\Delta_{n})$. We put

 $\kappa^{\prime}_{n}=\iota_{n}-\theta_{n}-\sum_{i=0}^{n}(-1)^{i}(\delta_{i})_{\#}(\kappa_{n-1})\in C_{n}(\Delta_{n}),$

and $\kappa_{n}=\beta(\kappa^{\prime}_{n})\in C_{n+1}(\Delta_{n})$; this completes the recursion step.

Next, given $u\in S_{n}(X)$ we note that $u\colon\Delta_{n}\to X$ and $\kappa_{n}\in C_{n+1}(\Delta_{n})$ so $u_{\#}(\kappa_{n})\in C_{n+1}(X)$. We define $\sigma(u)=u_{\#}(\kappa_{n})$, and extend this linearly to get $\sigma\colon C_{n}(X)\to C_{n+1}(X)$. We will prove that this gives the required chain homotopy.

As a first step, we will reformulate the definition of $\kappa^{\prime}_{n}$. We have $(\delta_{i})_{\#}(\kappa_{n-1})=\sigma(\delta_{i})$ so

 $\sum_{i=0}^{n}(-1)^{i}(\delta_{i})_{\#}(\kappa_{n-1})=\sum_{i=0}^{n}(-1)^{i}\sigma(\delta_{i})=\sigma\left(\sum_{i=0}^{n}(-1)^{i}\delta_{i}\right)=\sigma(\partial(\iota_{n})).$

It follows that

 $\kappa^{\prime}_{n}=\iota_{n}-\theta_{n}-\sigma(\partial(\iota_{n}))=\iota_{n}-\operatorname{sd}(\iota_{n})-\sigma(\partial(\iota_{n})).$

We will now prove by induction that $\partial(\sigma(u))+\sigma(\partial(u))=u-\operatorname{sd}(u)$ for all spaces $X$ and all $u\in C_{n}(X)$. When $n=0$ the claim is just that $0=0$, which is true. Suppose we have proved the claim for $n-1$. We can then apply it to the element $\partial(\iota_{n})\in C_{n-1}(\Delta_{n})$; we find that

 $\partial(\sigma(\partial(\iota_{n})))+\sigma(\partial(\partial(\iota_{n})))=\partial(\iota_{n})-\operatorname{sd}(\partial(\iota_{n})).$

Using $\partial^{2}=0$ and $\partial\operatorname{sd}=\operatorname{sd}\partial$ we can rewrite this as

 $\partial\left(\iota_{n}-\operatorname{sd}(\iota_{n})-\sigma(\partial(\iota_{n}))\right)=0,$

or in other words $\partial(\kappa^{\prime}_{n})=0$. We can therefore take $u=\kappa^{\prime}_{n}$ in Lemma 18.3 to get $\partial(\beta(\kappa^{\prime}_{n}))=\kappa^{\prime}_{n}$. After recalling our formula above for $\kappa^{\prime}_{n}$ and the definition $\sigma(\iota_{n})=\kappa_{n}=\beta(\kappa^{\prime}_{n})$ we get

 $\partial(\sigma(\iota_{n}))=\iota_{n}-\operatorname{sd}(\iota_{n})-\sigma(\partial(\iota_{n}))\in C_{n}(\Delta_{n}).$

Now suppose we have a map $u\colon\Delta_{n}\to X$. We apply $u_{\#}$ to the above equation, noting that $u_{\#}\partial=\partial u_{\#}$ and $u_{\#}\operatorname{sd}=\operatorname{sd}u_{\#}$ and $u_{\#}\sigma=\sigma u_{\#}$ and $u_{\#}(\iota_{n})=u$. We get

 $\partial(\sigma(u))=u-\operatorname{sd}(u)-\sigma(\partial(u)),$

or equivalently $\partial(\sigma(u))+\sigma(\partial(u))=u-\operatorname{sd}(u)$. We have proved this for $u\in S_{n}(X)$, but it follows by linearity for all $u\in C_{n}(X)$, as required. ∎
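The recursion for $\kappa_{n}$ and the homotopy identity $\partial(\sigma(\iota_{n}))+\sigma(\partial(\iota_{n}))=\iota_{n}-\operatorname{sd}(\iota_{n})$ can be verified by machine in low degrees. The sketch below (my own hypothetical encoding: chains as dictionaries of vertex-tuples, vertices in exact barycentric coordinates; it recomputes $\theta_{n}$ from Definition 18.4) checks the identity for $n=1,2$:

```python
from fractions import Fraction as F

def vertex(i, n):
    return tuple(F(1) if j == i else F(0) for j in range(n + 1))

def add_to(ch, s, c):
    ch[s] = ch.get(s, 0) + c
    if ch[s] == 0:
        del ch[s]

def delta_push(i, ch):
    """(delta_i)_#: insert a zero barycentric coordinate at position i."""
    return {tuple(p[:i] + (F(0),) + p[i:] for p in s): c for s, c in ch.items()}

def boundary(ch):
    out = {}
    for s, c in ch.items():
        for i in range(len(s)):
            add_to(out, s[:i] + s[i + 1:], (-1) ** i * c)
    return out

def theta(n):
    """theta_n via the recursion of Definition 18.4."""
    if n == 0:
        return {(vertex(0, 0),): 1}
    b = tuple(F(1, n + 1) for _ in range(n + 1))
    out = {}
    for i in range(n + 1):
        for s, c in delta_push(i, theta(n - 1)).items():
            add_to(out, (b,) + s, (-1) ** i * c)
    return out

def iota(n):
    """The identity of Delta_n as a single linear simplex <e_0, ..., e_n>."""
    return {tuple(vertex(i, n) for i in range(n + 1)): 1}

def kappa(n):
    """kappa_n = beta(kappa'_n), with kappa'_n as in Proposition 18.11."""
    if n == 0:
        return {}
    kp = dict(iota(n))  # kappa'_n = iota_n - theta_n - sum (-1)^i (delta_i)_# kappa_{n-1}
    for s, c in theta(n).items():
        add_to(kp, s, -c)
    for i in range(n + 1):
        for s, c in delta_push(i, kappa(n - 1)).items():
            add_to(kp, s, -((-1) ** i) * c)
    b = tuple(F(1, n + 1) for _ in range(n + 1))
    return {(b,) + s: c for s, c in kp.items()}

# check d(kappa_n) + sigma(d(iota_n)) == iota_n - sd(iota_n) for small n
for n in (1, 2):
    lhs = boundary(kappa(n))
    for i in range(n + 1):
        for s, c in delta_push(i, kappa(n - 1)).items():
            add_to(lhs, s, (-1) ** i * c)
    rhs = dict(iota(n))
    for s, c in theta(n).items():
        add_to(rhs, s, -c)
    assert lhs == rhs
```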

Video (Definition 18.12 and Lemma 18.13)

###### Definition 18.12.

Let $u\colon\Delta_{n}\to{\mathbb{R}}^{N}$ be a linear simplex. We define

 $\operatorname{diam}(u)=\max\{\|u(s)-u(t)\|\;|\;s,t\in\Delta_{n}\}=\max\{\|u(e_{i})-u(e_{j})\|\;|\;0\leq i,j\leq n\},$

and we call this the diameter of $u$. More generally, given a chain $u=m_{1}u_{1}+\dotsb+m_{r}u_{r}\in C_{n}({\mathbb{R}}^{N})$ we put $\operatorname{diam}(u)=\max(\operatorname{diam}(u_{1}),\dotsc,\operatorname{diam}(u_{r}))$.
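Since the maximum is attained at a pair of vertices, a program can compute diameters exactly by working with squared distances (avoiding square roots). A small hypothetical illustration in Python:

```python
from fractions import Fraction as F

def diam_sq(s):
    """Squared diameter of a linear simplex s = (a_0, ..., a_k); by the second
    formula in Definition 18.12 it suffices to compare pairs of vertices."""
    return max(sum((x - y) ** 2 for x, y in zip(a, b)) for a in s for b in s)

u = ((F(0), F(0)), (F(3), F(0)), (F(3), F(4)))  # a linear 2-simplex in R^2
assert diam_sq(u) == 25  # diameter 5, attained between (0,0) and (3,4)
```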

###### Lemma 18.13.

If $u\in C_{n}({\mathbb{R}}^{N})$ is a ${\mathbb{Z}}$-linear combination of linear simplices then we have $\operatorname{diam}(\operatorname{sd}(u))\leq\frac{n}{n+1}\operatorname{diam}(u)$.

###### Proof.

In the case $n=0$ all diameters are zero so the claim is clear. We can therefore assume that $n>0$ and argue by induction. The claim involves the number $c_{n}=n/(n+1)=1-(n+1)^{-1}$; from the second form it is clear that $0\leq c_{n-1}\leq c_{n}<1$.

We can easily reduce to the case where $u$ is a single linear simplex, say $u=\langle a_{0},\dotsc,a_{n}\rangle$. Put $d=\operatorname{diam}(u)$, so $\|a_{i}-a_{j}\|\leq d$ for all $i,j$. Put $b=(a_{0}+\dotsb+a_{n})/(n+1)$, which is the barycentre of $u$. For any $i$ we can write $a_{i}$ as $(n+1)^{-1}\sum_{j=0}^{n}a_{i}$. Using this, we get $a_{i}-b=(n+1)^{-1}\sum_{j=0}^{n}(a_{i}-a_{j})$. In the sum on the right hand side, the term for $j=i$ is zero and the other $n$ terms have norm at most $d$; it follows that $\|a_{i}-b\|\leq\frac{n}{n+1}d=c_{n}d$. More generally, consider a point $x\in u(\Delta_{n})$, say $x=\sum_{i=0}^{n}t_{i}a_{i}$ with $t_{i}\geq 0$ and $\sum_{i}t_{i}=1$. We can write $b$ as $\sum_{i}t_{i}b$, so

 $\|x-b\|=\|\sum_{i}t_{i}(a_{i}-b)\|\leq\sum_{i}t_{i}\|a_{i}-b\|\leq\sum_{i}t_{i}c_{n}d=c_{n}d.$

Now let $v$ be a simplex occurring in $\operatorname{sd}(u)$. Then $v=\beta(w)$ for some $w$ occurring in $\operatorname{sd}(u\circ\delta_{i})$ for some $i$, so the vertices of $v$ are the vertices of $w$ together with $b$. It is clear that $\operatorname{diam}(u\circ\delta_{i})\leq d$, so by induction we have $\operatorname{diam}(w)\leq c_{n-1}d\leq c_{n}d$. Also, from the discussion above, any vertex of $w$ has distance at most $c_{n}d$ from $b$. It follows that $v$ has diameter at most $c_{n}d$, as required. ∎
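The inequality can be tested exactly on examples, comparing squared diameters to avoid square roots. The sketch below (my own encoding; it recomputes $\theta_{n}$ from Definition 18.4 and pushes it forward along a linear simplex) checks the bound $\operatorname{diam}(\operatorname{sd}(u))\leq\frac{n}{n+1}\operatorname{diam}(u)$ for a sample linear $2$-simplex in ${\mathbb{R}}^{2}$:

```python
from fractions import Fraction as F

def vertex(i, n):
    return tuple(F(1) if j == i else F(0) for j in range(n + 1))

def add_to(ch, s, c):
    ch[s] = ch.get(s, 0) + c
    if ch[s] == 0:
        del ch[s]

def delta_push(i, ch):
    """(delta_i)_#: insert a zero barycentric coordinate at position i."""
    return {tuple(p[:i] + (F(0),) + p[i:] for p in s): c for s, c in ch.items()}

def theta(n):
    """theta_n via the recursion of Definition 18.4."""
    if n == 0:
        return {(vertex(0, 0),): 1}
    b = tuple(F(1, n + 1) for _ in range(n + 1))
    out = {}
    for i in range(n + 1):
        for s, c in delta_push(i, theta(n - 1)).items():
            add_to(out, (b,) + s, (-1) ** i * c)
    return out

def image_point(a, p):
    """Image of the barycentric point p under the linear simplex a = <a_0,...,a_n>."""
    return tuple(sum(p[i] * a[i][j] for i in range(len(p))) for j in range(len(a[0])))

def diam_sq(s):
    """Squared diameter of a linear simplex, attained at a pair of vertices."""
    return max(sum((x - y) ** 2 for x, y in zip(u, v)) for u in s for v in s)

n = 2
a = ((F(0), F(0)), (F(4), F(0)), (F(1), F(3)))  # a linear 2-simplex in R^2
sd_a = [tuple(image_point(a, p) for p in s) for s in theta(n)]  # pieces of sd(a)
assert max(diam_sq(s) for s in sd_a) <= F(n, n + 1) ** 2 * diam_sq(a)
```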