MAS61015 Algebraic Topology

10. Homology

Video (Definition 10.1 to Example 10.9)

Definition 10.1.

Let X be a space. A singular k-simplex in X is a continuous map u : Δk → X. We write Sk(X) for the set of singular k-simplices in X.

Example 10.2.

Recall that Δ0 is the set {e0} with just one point. To give a function u : Δ0 → X is the same as to give a point u(e0) ∈ X; thus, we can identify S0(X) with X.

Example 10.3.

As usual, we identify Δ1 with [0,1], with the point (1-t,t) ∈ Δ1 corresponding to the point t ∈ [0,1]. Thus, a singular 1-simplex in X is the same as a continuous map u : [0,1] → X, or in other words a path in X. This means that S1(X) is the set of all possible paths in X.

Example 10.4.

Suppose that a0,…,ak ∈ ℝN. We can then define a map

a = ⟨a0,…,ak⟩ : Δk → ℝN

(or in other words an element a ∈ Sk(ℝN)) by

a(t0,…,tk) = t0a0 + ⋯ + tkak.

We call maps of this type linear simplices.

In the case k=0 we have S0(X)=X and the map ⟨a0⟩ just corresponds to the point a0. In the case k=1, the map ⟨a0,a1⟩ corresponds to the straight line path from a0 to a1. In the case k=2, the image of the map ⟨a0,a1,a2⟩ : Δ2 → ℝN is the triangle with vertices a0, a1 and a2.

Now suppose that X ⊆ ℝN and that a0,…,ak ∈ X. It may or may not happen that the image of the map a : Δk → ℝN actually lies in X; this must be checked carefully in any context where we want to use this construction. If so, we can regard a as an element of Sk(X).
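The linear-simplex construction is easy to model in code. Below is a minimal sketch (the helper name `linear_simplex` is our own, not from the notes) that turns a list of points a0,…,ak in ℝN into the map a : Δk → ℝN evaluated on barycentric coordinates.

```python
def linear_simplex(points):
    """Return the linear simplex <a_0,...,a_k> as a function on barycentric
    coordinates: a(t_0,...,t_k) = t_0*a_0 + ... + t_k*a_k."""
    def a(t):
        # t should lie in Delta^k: one coordinate per vertex, summing to 1.
        assert len(t) == len(points) and abs(sum(t) - 1.0) < 1e-9
        dim = len(points[0])
        return tuple(sum(ti * p[j] for ti, p in zip(t, points)) for j in range(dim))
    return a

# The 1-simplex <a0, a1> is the straight-line path from a0 to a1:
path = linear_simplex([(0.0, 0.0), (2.0, 4.0)])
print(path((1.0, 0.0)))  # (0.0, 0.0), the vertex a0
print(path((0.5, 0.5)))  # (1.0, 2.0), the midpoint of the segment
```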

Example 10.5.

This picture shows a space X ⊆ ℝ2, together with:

  • A linear 2-simplex ⟨a0,a1,a2⟩ ∈ S2(ℝ2), which is not an element of S2(X).

  • Another linear 2-simplex ⟨b0,b1,b2⟩ ∈ S2(X) ⊆ S2(ℝ2).

  • A nonlinear 1-simplex u ∈ S1(X).

Definition 10.6.

Let P be a set. We write ℤ{P} for the set of formal ℤ-linear combinations of elements of P. Thus, if p,q,r ∈ P then 5p - 9q + 7r ∈ ℤ{P}, for example. We call ℤ{P} the free abelian group generated by P. (It is clearly an abelian group under addition.)

Remark 10.7.

Suppose that P is finite, say P = {p1,…,pn}. We then have an isomorphism ϕ : ℤ^n → ℤ{P} given by

ϕ(a1,…,an) = a1p1 + ⋯ + anpn.

However, we will most often be considering cases where P is infinite.

Definition 10.8.

A singular k-chain in X is a formal ℤ-linear combination of singular k-simplices, or in other words, an element of ℤ{Sk(X)}. We write Ck(X) = ℤ{Sk(X)} for the group of singular k-chains. For convenience, we also define Ck(X) = 0 for k < 0.
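For explicit computations it can help to model elements of ℤ{P} concretely. One common representation (a sketch; the helper names `add_chains` and `negate` are ours) stores a formal sum as a dictionary from generators to nonzero integer coefficients:

```python
def add_chains(u, v):
    """Add two formal Z-linear combinations, dropping zero coefficients."""
    w = dict(u)
    for p, n in v.items():
        w[p] = w.get(p, 0) + n
        if w[p] == 0:
            del w[p]
    return w

def negate(u):
    """The additive inverse of a formal sum."""
    return {p: -n for p, n in u.items()}

# (5p - 9q + 7r) + (4q - 7r) = 5p - 5q in Z{P}:
u = {"p": 5, "q": -9, "r": 7}
v = {"q": 4, "r": -7}
print(add_chains(u, v))          # {'p': 5, 'q': -5}
print(add_chains(u, negate(u)))  # {}: the zero element of the group
```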

Example 10.9.

Consider again the picture in Example 10.5:

  • The expression 6a1 - 4b2 + 7c1 ∈ C0(X) is a singular 0-chain.

  • The expression 3⟨a0,a2⟩ - ⟨b0,b1⟩ + u ∈ C1(X) is a singular 1-chain.

  • The expression ⟨b0,b1,b2⟩ ∈ S2(X) ⊆ C2(X) is a singular 2-chain.

  • No expression involving ⟨a0,a1⟩ gives a singular chain in X, because the straight line from a0 to a1 is not contained in X.

Remark 10.10.

Suppose we have paths u : a → b and v : b → c in X. We can reverse u to get a path ū : b → a, or we can join u and v to get a path u*v : a → c.

We can regard u*v and u+v as elements of C1(X), but they are not the same. Similarly, we can regard ū and -u as elements of C1(X), but they are not the same. There is clearly an important relationship between u*v and u+v, and between ū and -u, but it will take a little work to formulate this mathematically.

Video (Predefinition 10.11 to Example 10.15)

We next need to define the algebraic boundary ∂u ∈ Ck-1(X) for a k-chain u ∈ Ck(X). We start by considering the cases k=0, k=1 and k=2.

Predefinition 10.11.
  • For u ∈ C0(X) = ℤ{X} we just define ∂u = 0.

  • Now consider a singular 1-simplex u : Δ1 → X. This is a path with endpoints u(e0) and u(e1). These endpoints are elements of the set X, which we identify with S0(X), so the difference u(e1) - u(e0) can be regarded as an element of C0(X). We define ∂(u) = u(e1) - u(e0). More generally, suppose we have a 1-chain u = n1u1 + ⋯ + nrur, with ui : Δ1 → X and ni ∈ ℤ. We then put

    ∂(u) = n1∂(u1) + ⋯ + nr∂(ur) = ∑_{i=1}^{r} ni(ui(e1) - ui(e0)).

    This defines a homomorphism ∂ : C1(X) → C0(X).

    Note that for a linear 1-simplex ⟨a0,a1⟩, we just have ∂⟨a0,a1⟩ = a1 - a0. Thus, in the picture below we have a 1-chain

    u = ⟨a0,a1⟩ + ⟨a1,a2⟩ + ⟨a2,a3⟩ + ⟨a3,a4⟩ + ⟨a4,a5⟩ + ⟨a5,a6⟩

    with

    ∂(u) = (a1-a0) + (a2-a1) + (a3-a2) + (a4-a3) + (a5-a4) + (a6-a5) = a6 - a0.
  • We now consider 2-chains. For the simplest case, suppose that X ⊆ ℝN and u = ⟨a0,a1,a2⟩ is a linear 2-simplex. In this case, we define

    ∂⟨a0,a1,a2⟩ = ⟨a1,a2⟩ - ⟨a0,a2⟩ + ⟨a0,a1⟩.

    The rule for nonlinear singular 2-simplices is essentially a straightforward adaptation of the linear case, but it will rely on some auxiliary definitions given below. Once we have defined ∂(u) for all u ∈ S2(X), we will then define ∂(u) for all u ∈ C2(X) by the rule

    ∂(n1u1 + ⋯ + nrur) = n1∂(u1) + ⋯ + nr∂(ur),

    just as we did for singular 1-chains. This gives a homomorphism ∂ : C2(X) → C1(X).

  • For a linear 3-simplex u = ⟨a0,a1,a2,a3⟩, we will have

    ∂(u) = ⟨a1,a2,a3⟩ - ⟨a0,a2,a3⟩ + ⟨a0,a1,a3⟩ - ⟨a0,a1,a2⟩.

    For a general linear k-simplex u = a = ⟨a0,…,ak⟩, we will have

    ∂(u) = ∑_{i=0}^{k} (-1)^i ⟨a with ai omitted⟩.

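The boundary of a 1-chain is easy to mechanise. In the sketch below (our own representation, not from the notes), a linear 1-simplex ⟨ai,aj⟩ is recorded just by its endpoint labels, and ∂ sends it to aj - ai; the telescoping chain from the picture above then collapses to a6 - a0.

```python
def boundary_1(chain):
    """Boundary of a 1-chain {(start, end): coefficient} as a 0-chain dict."""
    out = {}
    for (a, b), n in chain.items():
        for point, m in ((b, n), (a, -n)):  # <a,b> contributes b - a
            out[point] = out.get(point, 0) + m
            if out[point] == 0:
                del out[point]
    return out

# u = <a0,a1> + <a1,a2> + ... + <a5,a6>; the interior endpoints cancel.
u = {(i, i + 1): 1 for i in range(6)}
print(boundary_1(u))  # {0: -1, 6: 1}, i.e. a6 - a0
```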
Definition 10.12.

For 0 ≤ i ≤ n with n > 0 we define δi : Δn-1 → Δn by

δi(t0,…,tn-1) = (t0,…,ti-1, 0, ti,…,tn-1).

Equivalently, we have

δi(t)j = tj if j < i;  0 if j = i;  tj-1 if j > i.

Thus, the coordinates of δi(t) are the same as the coordinates of t, except that we insert a zero in position i.

Example 10.13.

In the case n=1 we have maps δ0, δ1 : Δ0 = {e0} = {1} → Δ1. These are given by δ0(e0) = δ0(1) = (0,1) = e1 and δ1(e0) = δ1(1) = (1,0) = e0.

In the case n=2, we have

δ0(t0,t1)=(0,t0,t1)  δ1(t0,t1)=(t0,0,t1)  δ2(t0,t1)=(t0,t1,0).

Thus, the image of δi : Δ1 → Δ2 is the edge of Δ2 opposite the vertex ei.

Similarly, in the case n=3, we have a map δi from the triangle Δ2 to the tetrahedron Δ3, and the image δi(Δ2) is the face of the tetrahedron that is opposite the vertex ei. The case i=0 is shown below.

Even more generally, we see that the map δi : Δn-1 → Δn gives a homeomorphism from Δn-1 to {t ∈ Δn | ti = 0}.
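Since δi just inserts a zero in position i, it is one line of code. The sketch below (the function name `delta` is our own) reproduces the n=1 and n=2 computations of Example 10.13.

```python
def delta(i, t):
    """The face map delta_i: insert a zero in position i of the tuple t."""
    t = list(t)
    return tuple(t[:i] + [0] + t[i:])

# n = 1: Delta^0 = {(1,)}, and the two faces are the vertices of Delta^1.
print(delta(0, (1,)))  # (0, 1) = e1
print(delta(1, (1,)))  # (1, 0) = e0

# n = 2: the three edges of Delta^2, opposite e0, e1 and e2 respectively.
print(delta(0, (0.3, 0.7)))  # (0, 0.3, 0.7)
print(delta(1, (0.3, 0.7)))  # (0.3, 0, 0.7)
print(delta(2, (0.3, 0.7)))  # (0.3, 0.7, 0)
```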

Definition 10.14.

Consider an element u ∈ Sk(X) (with k > 0), or equivalently a continuous map u : Δk → X. For each i with 0 ≤ i ≤ k we have a map δi : Δk-1 → Δk, and we can compose this with u to get a map u∘δi : Δk-1 → X, or in other words an element u∘δi ∈ Sk-1(X) ⊆ Ck-1(X). We put

∂(u) = ∑_{i=0}^{k} (-1)^i (u∘δi) ∈ Ck-1(X).

More generally, given an element u = ∑_{p=1}^{r} ap up ∈ Ck(X), we define ∂(u) = ∑_{p=1}^{r} ap ∂(up) ∈ Ck-1(X).

Example 10.15.
  • For a singular 1-simplex u : Δ1 → X we have ∂(u) = (u∘δ0) - (u∘δ1). Here δ0 sends the unique point of Δ0 to e1, so the map u∘δ0 : Δ0 → X corresponds to the point u(e1) ∈ X. Similarly, δ1 sends the unique point of Δ0 to e0, so the map u∘δ1 : Δ0 → X corresponds to the point u(e0) ∈ X. We therefore have ∂(u) = u(e1) - u(e0), just as in Predefinition 10.11.

  • Now consider a linear 2-simplex u = ⟨a0,a1,a2⟩, so

    u(t0,t1,t2) = t0a0 + t1a1 + t2a2.

    We find that

    (u∘δ0)(t0,t1) = u(0,t0,t1) = t0a1 + t1a2 = ⟨a1,a2⟩(t0,t1)
    (u∘δ1)(t0,t1) = u(t0,0,t1) = t0a0 + t1a2 = ⟨a0,a2⟩(t0,t1)
    (u∘δ2)(t0,t1) = u(t0,t1,0) = t0a0 + t1a1 = ⟨a0,a1⟩(t0,t1),

    so u∘δ0 = ⟨a1,a2⟩ and u∘δ1 = ⟨a0,a2⟩ and u∘δ2 = ⟨a0,a1⟩. This gives

    ∂(u) = ⟨a1,a2⟩ - ⟨a0,a2⟩ + ⟨a0,a1⟩,

    just as in Predefinition 10.11. It should be clear that the same pattern works for all k, giving

    ∂⟨a0,…,ak⟩ = ∑_{i=0}^{k} (-1)^i ⟨a0,…,ak with ai omitted⟩.

The following result is crucial for the development of homology theory.

Video (Proposition 10.16 to Definition 10.21)

Proposition 10.16.

For all u ∈ Ck(X), we have ∂²(u) = ∂(∂(u)) = 0 in Ck-2(X). Thus, the composite

Ck(X) --∂--> Ck-1(X) --∂--> Ck-2(X)

is zero.

Example 10.17.

Recall that we defined Cj(X) = 0 for j < 0, and any homomorphism to the zero group is automatically the zero homomorphism. Thus, the proposition has no content for k < 2. For the first nontrivial case, suppose that X ⊆ ℝN, and consider a linear 2-simplex u = ⟨a0,a1,a2⟩. We then have

∂(u) = ⟨a1,a2⟩ - ⟨a0,a2⟩ + ⟨a0,a1⟩
∂²(u) = ∂⟨a1,a2⟩ - ∂⟨a0,a2⟩ + ∂⟨a0,a1⟩
      = (a2-a1) - (a2-a0) + (a1-a0) = 0.

We will often use abbreviated notation for this kind of calculation, writing ⟨012⟩ for ⟨a0,a1,a2⟩ and ⟨02⟩ for ⟨a0,a2⟩, for example. With this notation, the above calculation becomes

∂²⟨012⟩ = ∂⟨12⟩ - ∂⟨02⟩ + ∂⟨01⟩ = (2-1) - (2-0) + (1-0) = 0.

We now discuss ∂²(u) where u = ⟨a0,a1,a2,a3,a4⟩ ∈ C4(X), using the same kind of notation. First, we have

∂(u) = ⟨1234⟩ - ⟨0234⟩ + ⟨0134⟩ - ⟨0124⟩ + ⟨0123⟩.

We can write the terms of ∂²(u) in a square array, with ∂⟨1234⟩ in the first column, -∂⟨0234⟩ in the second column, and so on. The result is as follows:

We find that the terms above the wavy line cancel in the indicated groups with the terms below the wavy line, leaving ∂²(u) = 0 as claimed.
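In the abbreviated notation of this example, a linear simplex is just a tuple of vertex labels, and ∂ can be computed by deleting one label at a time with alternating signs. The sketch below (our own helper, applied only to simplices of dimension at least 1) verifies the cancellation ∂²(u) = 0 for ⟨01234⟩:

```python
def boundary(chain):
    """Boundary of a chain of simplices of dimension >= 1, where a simplex
    is a tuple of vertex labels and a chain is {simplex: coefficient}."""
    out = {}
    for simplex, n in chain.items():
        for i in range(len(simplex)):
            face = simplex[:i] + simplex[i + 1:]  # omit the i'th vertex
            out[face] = out.get(face, 0) + (-1) ** i * n
            if out[face] == 0:
                del out[face]
    return out

u = {(0, 1, 2, 3, 4): 1}  # the simplex <01234>
du = boundary(u)
print(du)             # <1234> - <0234> + <0134> - <0124> + <0123>
print(boundary(du))   # {}: all twenty terms of d(d(u)) cancel in pairs
```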

Lemma 10.18.

If 0 ≤ i < j ≤ k then δj∘δi = δi∘δj-1 : Δk-2 → Δk.

Proof.

Consider a point t = (t0,…,tk-2) ∈ Δk-2. To form δi(t), we insert a zero in position i. To form δj(δi(t)), we insert another zero in position j. Because j > i, inserting this second zero does not move the first zero, so we end up with zeros in positions i and j.

Similarly, to form δj-1(t), we insert a zero in position j-1. To form δi(δj-1(t)), we insert another zero in position i. As j-1 ≥ i, we see that the first zero is at or to the right of the point where we insert the second zero, so the first zero gets moved over by one space into position j. Thus, we again end up with zeros in positions i and j. In the remaining positions, we have the numbers t0,…,tk-2 in order. Thus, we have δj(δi(t)) = δi(δj-1(t)) as claimed. ∎

Example 10.19.

In the case where (i,j,k) = (2,4,6) the claim is that δ4∘δ2 = δ2∘δ3 : Δ4 → Δ6. Explicitly, for t = (t0,…,t4) ∈ Δ4 we have

δ4(δ2(t)) = δ2(δ3(t)) = (t0, t1, 0, t2, 0, t3, t4).
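The identity of Lemma 10.18 is easy to check by machine for any particular k. A quick sketch, with δi implemented as zero-insertion as in Definition 10.12:

```python
def delta(i, t):
    """The face map delta_i: insert a zero in position i of the tuple t."""
    t = list(t)
    return tuple(t[:i] + [0] + t[i:])

k = 6
t = (0.1, 0.2, 0.3, 0.25, 0.15)  # a point of Delta^(k-2) = Delta^4
for i in range(k):
    for j in range(i + 1, k + 1):  # all pairs with 0 <= i < j <= k
        assert delta(j, delta(i, t)) == delta(i, delta(j - 1, t))

# The case (i,j,k) = (2,4,6) from this example:
print(delta(4, delta(2, t)))  # (0.1, 0.2, 0, 0.3, 0, 0.25, 0.15)
```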

Example 10.20.

We will now prove Proposition 10.16 in the case k=4. Consider a continuous map u : Δ4 → X, or equivalently an element u ∈ S4(X) ⊆ C4(X). We have

∂(u) = u∘δ0 - u∘δ1 + u∘δ2 - u∘δ3 + u∘δ4.

We can write the terms of ∂²(u) in a square array, with ∂(u∘δ0) in the first column, -∂(u∘δ1) in the second column, and so on. The result is as follows:

Lemma 10.18 gives us the following identities:

δ1∘δ0 = δ0∘δ0   δ2∘δ0 = δ0∘δ1   δ3∘δ0 = δ0∘δ2   δ4∘δ0 = δ0∘δ3
δ2∘δ1 = δ1∘δ1   δ3∘δ1 = δ1∘δ2   δ4∘δ1 = δ1∘δ3
δ3∘δ2 = δ2∘δ2   δ4∘δ2 = δ2∘δ3
δ4∘δ3 = δ3∘δ3

Using this, we see that in the previous array, the terms above the wavy line cancel in the indicated groups with the terms below the wavy line, showing that ∂²(u) = 0 as claimed. This generalises the argument for linear simplices given in Example 10.17.

Proof of Proposition 10.16.

Consider a continuous map u : Δk → X, or equivalently an element u ∈ Sk(X) ⊆ Ck(X). We have

∂²(u) = ∂(∑_{j=0}^{k} (-1)^j (u∘δj)) = ∑_{i=0}^{k-1} ∑_{j=0}^{k} (-1)^{i+j} u∘δj∘δi.

We can write this as A + B, where

A = ∑_{0≤i<j≤k} (-1)^{i+j} u∘δj∘δi
B = ∑_{0≤j≤i≤k-1} (-1)^{i+j} u∘δj∘δi.

Here i and j are just dummy variables, so we can rewrite B as

B = ∑_{0≤q≤p≤k-1} (-1)^{p+q} u∘δq∘δp.

We now reindex again, taking q = i and p = j-1. The condition q ≤ p becomes i ≤ j-1, or equivalently i < j. The condition p ≤ k-1 becomes j-1 ≤ k-1, or equivalently j ≤ k. The sign (-1)^{p+q} becomes (-1)^{i+j-1} = -(-1)^{i+j}. This gives

B = -∑_{0≤i<j≤k} (-1)^{i+j} u∘δi∘δj-1.

However, Lemma 10.18 tells us that δi∘δj-1 = δj∘δi here, so B = -A, so ∂²(u) = A + B = 0 as claimed.

This proves that ∂²(u) = 0 whenever u is a singular k-simplex. More generally, any singular k-chain has the form u = a1u1 + ⋯ + arur for some integers ai and singular k-simplices ui : Δk → X. We then have ∂²(ui) = 0 for all i, and so ∂²(u) = ∑_i ai ∂²(ui) = 0. ∎

Definition 10.21.
  • (a)

    We say that an element u ∈ Ck(X) is a k-cycle if ∂(u) = 0. We write Zk(X) for the abelian group of k-cycles, so Zk(X) = ker(∂ : Ck(X) → Ck-1(X)).

  • (b)

    We say that an element u ∈ Ck(X) is a k-boundary if there exists v ∈ Ck+1(X) with ∂(v) = u. We write Bk(X) for the abelian group of k-boundaries, so Bk(X) = img(∂ : Ck+1(X) → Ck(X)).

  • (c)

    We note that if u ∈ Bk(X) then u = ∂(v) for some v, so ∂(u) = ∂²(v) = 0 by Proposition 10.16, so u ∈ Zk(X). This means that Bk(X) ≤ Zk(X), so we can form the quotient abelian group Hk(X) = Zk(X)/Bk(X). We call this the k'th homology group of X.

Remark 10.22.

The elements of Hk(X) are cosets z + Bk(X) with z ∈ Zk(X), so z ∈ Ck(X) with ∂(z) = 0. We will often write [z] for z + Bk(X). Before writing notation like [z] one must check that ∂(z) = 0; it is an error to use that notation in other cases. Note that [z] = [z′] iff z - z′ ∈ Bk(X) iff there exists w ∈ Ck+1(X) with ∂(w) = z - z′.

There is essentially only one example that we can calculate directly from the definition.

Proposition 10.23.

If X consists of a single point, then H0(X) ≅ ℤ and Hk(X) = 0 for k ≠ 0.

Proof.

There is only one possible map from Δk to X, sending all points of Δk to the unique point of X. We call this map sk, so Sk(X) = {sk} and Ck(X) = ℤ·sk for all k ≥ 0 (whereas Ck(X) = 0 for k < 0 by definition). For k > 0 we have ∂(sk) = ∑_{i=0}^{k} (-1)^i sk∘δi. Here sk∘δi is a map from Δk-1 to X, so it can only be equal to sk-1. This gives

∂(s1) = s0 - s0 = 0
∂(s2) = s1 - s1 + s1 = s1
∂(s3) = s2 - s2 + s2 - s2 = 0

and so on. In general, we have ∂(s2n+1) = 0 and ∂(s2n+2) = s2n+1. It follows that B2n+1(X) = Z2n+1(X) = ℤ·s2n+1 and B2n+2(X) = Z2n+2(X) = 0. In particular, for all k > 0 we have Zk(X) = Bk(X), so the quotient group Hk(X) = Zk(X)/Bk(X) is trivial. On the other hand, Z0(X) = ℤ·s0 and B0(X) = 0, so H0(X) = (ℤ·s0)/0 ≅ ℤ. All this can be tabulated as follows:

k            Ck(X)    Zk(X)    Bk(X)    Hk(X)
k < 0        0        0        0        0
k = 0        ℤ·s0     ℤ·s0     0        ℤ
k odd        ℤ·sk     ℤ·sk     ℤ·sk     0
k even > 0   ℤ·sk     0        0        0
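The pattern in this proof can be replayed in code: the coefficient of s_{k-1} in ∂(s_k) is the alternating sum 1 - 1 + ⋯ ± 1, and the homology ranks follow from it. A small sketch (the helper names are ours):

```python
def d_coeff(k):
    """Coefficient of s_(k-1) in d(s_k); by definition d = 0 on C_0."""
    return sum((-1) ** i for i in range(k + 1)) if k > 0 else 0

def h_rank(k):
    """Rank of H_k(point) = Z_k / B_k, each C_k (k >= 0) being a copy of Z."""
    if k < 0:
        return 0
    rank_Z = 1 if d_coeff(k) == 0 else 0       # s_k is a cycle iff d(s_k) = 0
    rank_B = 1 if d_coeff(k + 1) != 0 else 0   # s_k bounds iff d(s_(k+1)) = s_k
    return rank_Z - rank_B

print([d_coeff(k) for k in range(1, 6)])  # [0, 1, 0, 1, 0]
print([h_rank(k) for k in range(6)])      # [1, 0, 0, 0, 0, 0]: only H_0 survives
```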

Remark 10.24.

We leave the following slight generalisation to the reader. Suppose that X is a finite, discrete set of points, so that every continuous map Δk → X is constant. Then H0(X) = C0(X) = ℤ{X}, and Hk(X) = 0 for k ≠ 0.

We can also calculate H0(X) for all X.

Proposition 10.25.

There is a canonical isomorphism H0(X) ≅ ℤ{π0(X)} for all topological spaces X. Thus, if |π0(X)| = r then H0(X) ≅ ℤ^r.

This should not be a surprise. Both H0(X) and ℤ{π0(X)} are ways of constructing an abelian group from X, in such a way that points connected by a path give the same element of the group. We just need to check that the technical differences between these two constructions do not affect the final answer.

Proof.

First note that C-1(X) is zero by definition, so the map ∂ : C0(X) → C-1(X) sends everything to zero, so Z0(X) = C0(X). This means that the quotient group H0(X) = Z0(X)/B0(X) is the same as C0(X)/B0(X).

Next, let π : X → π0(X) be the usual quotient map, which sends every point x ∈ X to the corresponding path component [x] ∈ π0(X). We can extend this linearly to give a homomorphism π : ℤ{X} = ℤ{S0(X)} = C0(X) → ℤ{π0(X)}, by the rule

π(n1x1 + ⋯ + npxp) = n1π(x1) + ⋯ + npπ(xp) = n1[x1] + ⋯ + np[xp] ∈ ℤ{π0(X)}.

We will show that π is surjective, with kernel B0(X). Assuming this, the First Isomorphism Theorem will give us an isomorphism from C0(X)/B0(X) = H0(X) to ℤ{π0(X)}, as required.

Next, for each path component c ∈ π0(X), we choose a point σ(c) ∈ c, so c = [σ(c)]. This means that the composite

π0(X) --σ--> X --π--> π0(X)

is the identity. We can also extend σ linearly to give a homomorphism σ : ℤ{π0(X)} → C0(X) by the rule σ(n1c1 + ⋯ + npcp) = n1σ(c1) + ⋯ + npσ(cp). In this context, we see that the composite

ℤ{π0(X)} --σ--> C0(X) --π--> ℤ{π0(X)}

is again the identity. In particular, any element u ∈ ℤ{π0(X)} is the same as π(σ(u)), so it is in the image of π; this proves that π is surjective.

Now suppose we have a path v ∈ S1(X). We then have ∂(v) = v(e1) - v(e0) ∈ C0(X), so π(∂(v)) = π(v(e1)) - π(v(e0)) = [v(e1)] - [v(e0)] ∈ ℤ{π0(X)}. However, we have a path v joining v(e0) to v(e1), so the corresponding path components are the same, so π(∂(v)) = 0. As everything is extended linearly, the rule π(∂(v)) = 0 remains valid for all v ∈ C1(X). The image of ∂ : C1(X) → C0(X) is B0(X), so this means that π(B0(X)) = 0, or equivalently B0(X) ≤ ker(π).

Next, consider a point x ∈ X and the corresponding path component c = [x] = π(x). The points x and σ(c) = σ(π(x)) both lie in the same path component c, so there must exist a path from σ(π(x)) to x in X. We choose such a path and call it γ(x). This defines a function γ from X to the set S1(X) of paths in X, which we extend linearly to get a homomorphism γ : C0(X) → C1(X). For any point x we know that γ(x) runs from σ(π(x)) to x, so ∂(γ(x)) = x - σ(π(x)). As everything is extended linearly, the rule ∂(γ(u)) = u - σ(π(u)) is valid for all u ∈ C0(X). In particular, if u ∈ ker(π) then π(u) = 0, so this simplifies to ∂(γ(u)) = u, proving that u is in the image of ∂, or in other words u ∈ B0(X).

We can now conclude that π is surjective with kernel B0(X). By the First Isomorphism Theorem, there is a well-defined homomorphism π̄ : H0(X) = C0(X)/B0(X) → ℤ{π0(X)} given by π̄(u + B0(X)) = π(u) for all u ∈ C0(X), and this is in fact an isomorphism. ∎

Example 10.26.

The above proof can be illustrated by the following diagram. It shows a space X with three path components A, B and C, so π0(X) = {A,B,C} and

ℤ{π0(X)} = {kA + nB + mC | k,n,m ∈ ℤ} ≅ ℤ^3.

We have chosen points σ(A) ∈ A and σ(B) ∈ B and σ(C) ∈ C, so A = [σ(A)] and B = [σ(B)] and C = [σ(C)]. To say the same thing in different notation, we have π(σ(A)) = A and π(σ(B)) = B and π(σ(C)) = C, so π∘σ = id. The points a1 and a2 also lie in A, so [a1] = [a2] = A, or equivalently π(a1) = π(a2) = A. The path γ(a1) runs from σ(A) = σ(π(a1)) to a1. Similarly, we have π(b1) = π(b2) = π(b3) = B, and we have labelled a path γ(b3) running from σ(π(b3)) = σ(B) to b3.

A typical example of an element of ker(π : C0(X) → ℤ{π0(X)}) is the element u = a1 - a2 + b1 + b3 - 2b2. This has γ(u) = γ(a1) - γ(a2) + γ(b1) + γ(b3) - 2γ(b2), so

∂(γ(u)) = (a1 - σ(A)) - (a2 - σ(A)) + (b1 - σ(B)) + (b3 - σ(B)) - 2(b2 - σ(B))
        = a1 - a2 + b1 + b3 - 2b2 = u,

so u = ∂(γ(u)) ∈ img(∂) = B0(X). This illustrates the fact that ker(π) = img(∂), which is a key step in our proof of Proposition 10.25.
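The cancellation in this example is mechanical enough to script. In the sketch below, the point and component names are taken from the example, while the helper names are our own: each point x contributes x - σ(π(x)) to ∂(γ(x)), extended linearly.

```python
component = {"a1": "A", "a2": "A", "b1": "B", "b2": "B", "b3": "B"}
sigma = {"A": "sA", "B": "sB", "C": "sC"}  # one chosen basepoint per component

def d_gamma(u):
    """Boundary of gamma(u) for a 0-chain u = {point: coefficient}:
    each point x contributes x - sigma(pi(x))."""
    out = {}
    for x, n in u.items():
        for point, m in ((x, n), (sigma[component[x]], -n)):
            out[point] = out.get(point, 0) + m
            if out[point] == 0:
                del out[point]
    return out

u = {"a1": 1, "a2": -1, "b1": 1, "b3": 1, "b2": -2}  # coefficients sum to 0 in each component
print(d_gamma(u) == u)  # True: u = d(gamma(u)), so u lies in B_0(X)
```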

We next discuss homology classes of paths, revisiting Remark 10.10.

Lemma 10.27.

Let X be a topological space.

  • (a)

    For any a ∈ X the constant path ca ∈ S1(X) ⊆ C1(X) actually lies in B1(X), so ca + B1(X) = 0 in the quotient group C1(X)/B1(X).

  • (b)

    For any path u : a → b in X with reversed path ū : b → a, we have u + ū ∈ B1(X), so ū + B1(X) = -u + B1(X) in C1(X)/B1(X).

  • (c)

    For any paths u : a → b and v : b → c we have (u*v) + B1(X) = (u + B1(X)) + (v + B1(X)) in C1(X)/B1(X).

Proof.

Exercise.

Video (Path homotopy, loop homotopy and homology)

Video (Definition 10.28 and Proposition 10.29)

Definition 10.28.

Let X be a topological space. A loop in X is a path u : Δ1 → X with u(e0) = u(e1), so that ∂(u) = 0, so we have a coset [u] = u + B1(X) ∈ H1(X). If u(e0) = u(e1) = a, we say that u is a loop based at a.

Proposition 10.29.

Let X be a path connected space, and let a be a point in X. Then for every h ∈ H1(X) there exists a loop u based at a with h = [u]. Moreover, if u and v are loops based at a then so are ca, ū and u*v, and we have [ca] = 0 and [ū] = -[u] and [u*v] = [u] + [v] in H1(X).

Proof.

Let L be the subset of H1(X) consisting of classes that can be expressed as [u] for some loop u based at a. We must show that this is all of H1(X).

It is clear that if u and v are loops based at a, then so are ca, ū and u*v. By specialising Lemma 10.27, we see that [ca] = 0 and [ū] = -[u] and [u*v] = [u] + [v] in H1(X). It follows from this that L is a subgroup of H1(X).

Now let v be a loop based at a point b ∈ X, which may be different from a. As X is path connected, we can choose a path m from a to b. The path u = (m*v)*m̄ is then a loop based at a, and using Lemma 10.27 again we see that

u + B1(X) = m + v - m + B1(X) = v + B1(X),

or in other words [u] = [v] in H1(X). This proves that L contains the class of every loop in X, irrespective of the basepoint.

Now let h be an arbitrary element of H1(X). We can write h as z + B1(X), where z is a ℤ-linear combination of paths in X. Any term with a negative coefficient, like -m.u, can be replaced by +m.ū without affecting the coset, so we can assume that all coefficients are positive. Then we can replace any term like m.u by u repeated m times; this gives an expression

h = u1 + ⋯ + un + B1(X)

for some list of paths ui. As h is a homology class, the representing chain must be a cycle, so we must have ∂(u1 + ⋯ + un) = 0 in C0(X). As ∂(ui) = ui(e1) - ui(e0), this means that

u1(e1) + ⋯ + un(e1) = u1(e0) + ⋯ + un(e0).

As this is happening in the free abelian group ℤ{X}, the terms on the left hand side must just be a permutation of those on the right hand side, so we have a permutation σ of {1,…,n} with ui(e1) = uσ(i)(e0) for all i. We can now write σ as a product of disjoint cycles. If one of these cycles is (i j k l), for example, then the paths ui, uj, uk and ul meet end-to-end and so can be joined together to form a loop ((ui*uj)*uk)*ul which is congruent to ui + uj + uk + ul modulo B1(X). By doing this for all cycles, we see that h can be expressed as a sum of classes of loops (possibly with different basepoints). Our earlier discussion shows that each of these classes lies in L, and then that the sum lies in L, so h ∈ L as claimed. ∎
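The combinatorial step at the end of this proof, writing σ as a product of disjoint cycles and grouping the paths accordingly, can be sketched as follows. The path endpoints and helper names are our own illustration; we assume for simplicity that each point occurs once as a start and once as an end.

```python
def cycles(sigma):
    """Decompose a permutation {i: sigma(i)} into disjoint cycles."""
    seen, out = set(), []
    for start in sigma:
        if start in seen:
            continue
        cyc, i = [], start
        while i not in seen:
            seen.add(i)
            cyc.append(i)
            i = sigma[i]
        out.append(cyc)
    return out

# Four paths recorded by endpoints: u1: p->q, u2: q->p, u3: r->s, u4: s->r.
ends = {1: ("p", "q"), 2: ("q", "p"), 3: ("r", "s"), 4: ("s", "r")}
# sigma matches the end of u_i to the start of u_sigma(i).
sigma = {i: next(j for j in ends if ends[j][0] == ends[i][1]) for i in ends}
print(cycles(sigma))  # [[1, 2], [3, 4]]: u1+u2+u3+u4 splits into two loops
```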

Definition 10.30.

Let u : Δ1 → X be a loop based at a. A filling in of u is a map v : Δ2 → X with v∘δ0 = u and v∘δ1 = v∘δ2 = ca.

Lemma 10.31.

If u can be filled in, then [u]=0 in H1(X).

Proof.

Let v be a filling in of u. Then

∂(v) = v∘δ0 - v∘δ1 + v∘δ2 = u - ca + ca = u,

so u ∈ B1(X), so [u] = u + B1(X) = 0. ∎