We now consider matching problems again, but from a rather different point of view. Before, we were given a matching problem and we tried to solve it, or count the number of possible solutions. Here instead we will try to find matching problems that have certain special properties, which in particular make them highly symmetrical. Highly symmetrical combinatorial objects are always interesting and often have applications. In particular, the material in this chapter can be used for efficient design of experiments where one wants to test multiple interacting factors without performing more tests than necessary. It can also be used to design computer communication systems that can detect and correct some transmission errors.
Before, we had a set of people and a set of jobs, and for each job we had a subset of people who are qualified to do that job. For each person we also considered the set of jobs that they are qualified to do. This can be expressed in symbols as .
The framework in this chapter will be mathematically equivalent, but we will follow tradition in using slightly different terminology. We will have a set $B$ of “blocks” and a set $V$ of “varieties”. For each block $\beta \in B$ we have a corresponding subset $V_\beta \subseteq V$. For any variety $x \in V$ we again define $B_x = \{\beta \in B : x \in V_\beta\}$.
Consider positive integers $v$, $b$, $r$, $k$ and $\lambda$ with $1 < k$ and $k < v$. A block design with parameters $(v, b, r, k, \lambda)$ is a matching problem as above, with the following properties:
$|V| = v$ and $|B_x| = r$ for all $x \in V$
$|B| = b$ and $|V_\beta| = k$ for all $\beta \in B$
$|B_x \cap B_y| = \lambda$ for all $x, y \in V$ with $x \neq y$.
In words: there are $v$ varieties and $b$ blocks, every variety is in precisely $r$ blocks, every block contains precisely $k$ varieties, and every pair of distinct varieties is in precisely $\lambda$ blocks.
As $V_\beta \subseteq V$ with $|V_\beta| = k$ and $|V| = v$, it is automatic that $k \leq v$. If $k$ were equal to $v$ then that would mean that $V_\beta = V$ for all $\beta \in B$, which is like a job allocation problem in which every person is qualified to do every job. However, we specified as part of the definition that $k < v$, so as to exclude this uninteresting case. The condition $\lambda < r$ also has the same effect.
Put and and
The corresponding sets are
It is now visible that and and for all and for all . We also have
In fact, we have for all , as we can see by a long but easy check of cases. Thus, the above sets give a block design.
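The defining conditions are easy to verify by computer for any explicitly given family of blocks. The following Python sketch is our own illustration (the function name and the particular test case are not from the text): it checks the conditions and reports the parameters, using the seven translates of $\{1, 2, 4\}$ in $\mathbb{Z}/7$, which form a $(7, 7, 3, 3, 1)$-block design of the kind constructed later in this chapter.

```python
from itertools import combinations

def design_parameters(varieties, blocks):
    """Return (v, b, r, k, lam) if the sets form a block design, else None.

    `varieties` is a collection of varieties; `blocks` is a list of subsets.
    We check that every variety lies in the same number r of blocks, every
    block has the same size k, and every pair of distinct varieties lies in
    the same number lam of blocks.
    """
    varieties = set(varieties)
    v, b = len(varieties), len(blocks)
    r_values = {sum(x in blk for blk in blocks) for x in varieties}
    k_values = {len(blk) for blk in blocks}
    lam_values = {sum(x in blk and y in blk for blk in blocks)
                  for x, y in combinations(varieties, 2)}
    if len(r_values) == len(k_values) == len(lam_values) == 1:
        return v, b, r_values.pop(), k_values.pop(), lam_values.pop()
    return None

# Illustration: translates of {1, 2, 4} modulo 7 give a (7, 7, 3, 3, 1) design.
blocks = [{(i + q) % 7 for q in (1, 2, 4)} for i in range(7)]
print(design_parameters(range(7), blocks))  # (7, 7, 3, 3, 1)
```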
If there is a $(v, b, r, k, \lambda)$-block design, then $bk = vr$ and $r(k - 1) = \lambda(v - 1)$ and $\lambda < r$ and $r < b$.
Put $N = \{(x, \beta) : x \in V,\ \beta \in B_x\} = \{(x, \beta) : \beta \in B,\ x \in V_\beta\}$.
We can use the first description to find $|N|$: there are $v$ ways to choose $x$, and then $r$ ways to choose $\beta \in B_x$, so $|N| = vr$. Alternatively, we can use the second description. There are $b$ ways to choose $\beta$, and then $k$ ways to choose $x \in V_\beta$, so $|N| = bk$. By comparing these, we see that $vr = bk$. Now put $M = \{(x, y, \beta) : x, y \in V,\ x \neq y,\ \beta \in B_x \cap B_y\} = \{(x, y, \beta) : \beta \in B,\ x, y \in V_\beta,\ x \neq y\}$.
We can again use the first description to find $|M|$: there are $v$ ways to choose $x$, then $r$ ways to choose $\beta \in B_x$, then $k - 1$ ways to choose a different element $y \in V_\beta$, giving $|M| = vr(k - 1)$. Alternatively, we can use the second description: there are $v$ ways to choose $x$, then $v - 1$ ways to choose a different element $y$, then $\lambda$ ways to choose $\beta \in B_x \cap B_y$, giving $|M| = v(v - 1)\lambda$. By comparing these, we get $r(k - 1) = \lambda(v - 1)$. As one of our axioms we assumed that $k < v$, so $k - 1 < v - 1$, so $\lambda(v - 1) = r(k - 1) < r(v - 1)$, so $\lambda < r$. Similarly, our first equation gives $vr = bk < bv$, so $r < b$. ∎
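As a quick numerical sanity check (our own, using a few illustrative parameter tuples), both identities can be confirmed directly:

```python
# Check bk = vr and r(k-1) = lam(v-1) for some sample parameter tuples
# (v, b, r, k, lam); these particular tuples are our own examples.
for v, b, r, k, lam in [(7, 7, 3, 3, 1), (11, 11, 5, 5, 2), (9, 12, 4, 3, 1)]:
    assert b * k == v * r
    assert r * (k - 1) == lam * (v - 1)
print("both identities hold for all sample parameter tuples")
```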
In any block design, we have $v \leq b$.
It will be harmless to assume that $V = \{1, \dots, v\}$ and $B = \{1, \dots, b\}$.
For $1 \leq i \leq v$ we let $u_i$ be the $i$'th row of the incidence matrix, so $u_i \in \mathbb{R}^b$ with $(u_i)_\beta = 1$ if $i \in V_\beta$ and $(u_i)_\beta = 0$ otherwise.
We also put $u_0 = (1, 1, \dots, 1) \in \mathbb{R}^b$. We claim that $u_i \cdot u_0 = r$ and $u_i \cdot u_i = r$ for all $i$, that $u_i \cdot u_j = \lambda$ whenever $i \neq j$, and hence that $u_i \cdot (r u_j - \lambda u_0)$ is $0$ when $i \neq j$ and is $r(r - \lambda)$ when $i = j$.
Indeed, as $u_0$ has all entries equal to one, the dot product $u_i \cdot u_0$ is just the sum of the entries in $u_i$, which is $|B_i| = r$. Similarly, the $\beta$'th term in $u_i \cdot u_i$ is $1$ if $i \in V_\beta$ and $0$ if $i \notin V_\beta$, so $u_i \cdot u_i = r$ again. On the other hand, if $i \neq j$ then the $\beta$'th term in $u_i \cdot u_j$ is $1$ if $i, j \in V_\beta$ and zero otherwise, so $u_i \cdot u_j = |B_i \cap B_j| = \lambda$. If we multiply this relation by $r$ and multiply the relation $u_i \cdot u_0 = r$ by $\lambda$ and subtract, we get $u_i \cdot (r u_j - \lambda u_0) = 0$ in the case $i \neq j$. A similar argument gives $u_i \cdot (r u_i - \lambda u_0) = r(r - \lambda)$, as claimed. Note also that Proposition 15.4 gives $\lambda < r$, so $r(r - \lambda) \neq 0$.
We next claim that the vectors $u_1, \dots, u_v$ are linearly independent. Indeed, suppose we have a linear relation $c_1 u_1 + \dots + c_v u_v = 0$ with $c_1, \dots, c_v \in \mathbb{R}$.
For any $j$, we can take the dot product with the vector $r u_j - \lambda u_0$, giving $\sum_{i=1}^{v} c_i \, u_i \cdot (r u_j - \lambda u_0) = 0$. The dot product relations proved above show that all the terms on the left are zero apart from the term where $i = j$; we therefore get $c_j \, r(r - \lambda) = 0$. As $r(r - \lambda) \neq 0$ this gives $c_j = 0$. This works for all $j$, so $c_1 = \dots = c_v = 0$. This proves linear independence.
It is a basic fact of linear algebra that the maximum possible length of a linearly independent list in $\mathbb{R}^b$ is the dimension $b$. Thus, we must have $v \leq b$. ∎
Note that the conclusion is a purely combinatorial fact, so it is interesting that we have had to make a detour into linear algebra to prove it.
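The detour through linear algebra can also be explored computationally. The sketch below is our own (it reuses the translated-block example from earlier): it builds the $v \times b$ incidence matrix of a design and confirms that its rows are linearly independent, which is exactly what forces $v \leq b$.

```python
from fractions import Fraction

def incidence_matrix(varieties, blocks):
    """Row i has a 1 in column beta exactly when variety i lies in block beta."""
    return [[1 if x in blk else 0 for blk in blocks] for x in varieties]

def rank(matrix):
    """Rank over the rationals, by straightforward Gaussian elimination."""
    m = [[Fraction(entry) for entry in row] for row in matrix]
    rows, cols = len(m), len(m[0])
    rk, col = 0, 0
    while rk < rows and col < cols:
        pivot = next((i for i in range(rk, rows) if m[i][col] != 0), None)
        if pivot is None:
            col += 1
            continue
        m[rk], m[pivot] = m[pivot], m[rk]
        for i in range(rows):
            if i != rk and m[i][col] != 0:
                factor = m[i][col] / m[rk][col]
                m[i] = [a - factor * b for a, b in zip(m[i], m[rk])]
        rk += 1
        col += 1
    return rk

# Illustration: the (7, 7, 3, 3, 1) design given by translates of {1, 2, 4} mod 7.
blocks = [{(i + q) % 7 for q in (1, 2, 4)} for i in range(7)]
A = incidence_matrix(range(7), blocks)
print(rank(A))  # prints 7, so the 7 rows are independent and v <= b
```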
A symmetric design is one in which $v = b$.
We next discuss an interesting construction that uses some number theory to produce a symmetric block design.
Let $p$ be a prime number of the form $p = 4n + 3$, so $p \in \{3, 7, 11, 19, 23, 31, 43, 47, \dots\}$.
We put $Q = \{x^2 : x \in (\mathbb{Z}/p) \setminus \{0\}\}$ and call this the set of quadratic residues. We then have a matching problem with $V = B = \mathbb{Z}/p$ and $V_\beta = \beta + Q = \{\beta + q : q \in Q\}$ for all $\beta \in B$.
We have $\beta \in B_x$ iff $x \in V_\beta$ iff $x - \beta \in Q$ iff $\beta \in x - Q$, so $B_x = x - Q$.
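For explicit primes the set $Q$ and the blocks $V_\beta = \beta + Q$ are easy to generate by computer; the following short Python sketch (the helper names are our own) does this and reproduces the sets appearing in the next two examples.

```python
def quadratic_residues(p):
    """The set Q of nonzero squares modulo p."""
    return {(x * x) % p for x in range(1, p)}

def qr_blocks(p):
    """The blocks V_beta = beta + Q for beta = 0, 1, ..., p - 1."""
    Q = quadratic_residues(p)
    return [{(beta + q) % p for q in Q} for beta in range(p)]

print(sorted(quadratic_residues(7)))            # [1, 2, 4]
print(sorted(quadratic_residues(11)))           # [1, 3, 4, 5, 9]
print([sorted(blk) for blk in qr_blocks(7)])    # the seven blocks in the next example
```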
Take $p = 7$, so $p = 4n + 3$ with $n = 1$ and $2n + 1 = 3$. We have $(\pm 1)^2 = 1$ and $(\pm 2)^2 = 4$ and $(\pm 3)^2 = 2$, so $Q = \{1, 2, 4\}$. This gives $V_0 = \{1, 2, 4\}$, $V_1 = \{2, 3, 5\}$, $V_2 = \{3, 4, 6\}$, $V_3 = \{0, 4, 5\}$, $V_4 = \{1, 5, 6\}$, $V_5 = \{0, 2, 6\}$ and $V_6 = \{0, 1, 3\}$.
One can check that $|B_x \cap B_y| = 1$ whenever $x \neq y$, so this is a $(7, 7, 3, 3, 1)$-block design.
Take $p = 11$, so $p = 4n + 3$ with $n = 2$ and $2n + 1 = 5$. We have $(\pm 1)^2 = 1$ and $(\pm 2)^2 = 4$ and $(\pm 3)^2 = 9$ and $(\pm 4)^2 = 5$ and $(\pm 5)^2 = 3$, so $Q = \{1, 3, 4, 5, 9\}$.
In particular, we have $|Q| = 5$ and so $|V_\beta| = 5$ for all $\beta$ and $|B_x| = 5$ for all $x$. We also have $B_0 = -Q = \{2, 6, 7, 8, 10\}$ and $B_1 = 1 - Q = \{0, 3, 7, 8, 9\}$,
so $|B_0 \cap B_1| = |\{7, 8\}| = 2$. In fact we have $|B_x \cap B_y| = 2$ for all $x \neq y$, so we have an $(11, 11, 5, 5, 2)$-block design. This will follow from Theorem 15.16, which we will prove below.
Recall from Proposition 14.35 that the set $(\mathbb{Z}/p)^\times = (\mathbb{Z}/p) \setminus \{0\}$ is a group under multiplication, with order $p - 1 = 4n + 2$.
The set $Q$ is a subgroup of $(\mathbb{Z}/p)^\times$ and has $|Q| = (p - 1)/2 = 2n + 1$. Moreover, for each $x \in (\mathbb{Z}/p)^\times$, precisely one of $x$ and $-x$ is in $Q$.
This last claim is clearly visible in the cases $p = 7$ (where $Q = \{1, 2, 4\}$ and $-Q = \{3, 5, 6\}$) and $p = 11$ (where $Q = \{1, 3, 4, 5, 9\}$ and $-Q = \{2, 6, 7, 8, 10\}$).
Put $G = (\mathbb{Z}/p)^\times$ for brevity. We can define a homomorphism $\sigma \colon G \to G$ by $\sigma(x) = x^2$, and then $Q$ is the image of $\sigma$ (which is one way to see that it is a subgroup). The First Isomorphism Theorem shows that $Q \simeq G / \ker(\sigma)$ and so $|Q| = |G| / |\ker(\sigma)|$. Here $\ker(\sigma) = \{x \in G : x^2 = 1\}$.
As $\mathbb{Z}/p$ is a field, the product of two terms can only be zero if one of the terms is zero, so the equation $x^2 - 1 = (x - 1)(x + 1) = 0$ can only hold if $x = 1$ or $x = -1$. This shows that $\ker(\sigma) = \{1, -1\}$, so $|Q| = |G|/2 = (4n + 2)/2 = 2n + 1$. We next claim that $-1 \notin Q$. Indeed, if we had $-1 = y^2$ then $y$ would be an element of order $4$ in $G$, but that is impossible (by Lagrange’s Theorem) because the order $|G| = 4n + 2$ is not divisible by $4$. Next, if $x$ and $-x$ were both in $Q$ then the element $(-x)x^{-1} = -1$ would also be in $Q$, which is false. Thus, each of the sets $\{x, -x\}$ contains at most one element of $Q$. As $|Q| = 2n + 1$ and there are only $2n + 1$ such sets, we see that each of these sets must contain precisely one element of $Q$. ∎
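Both statements of the lemma are easy to confirm numerically for small primes $p$ of the form $4n + 3$; a quick check of our own:

```python
# For each prime p = 4n + 3, check that |Q| = (p - 1)/2 and that exactly one
# of x and -x is a quadratic residue for every nonzero x.
for p in [3, 7, 11, 19, 23, 31]:
    Q = {(x * x) % p for x in range(1, p)}
    assert len(Q) == (p - 1) // 2
    assert all((x in Q) != ((p - x) in Q) for x in range(1, p))
print("Lemma 15.12 confirmed for the listed primes")
```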
From Lemma 15.12 it is clear that $|V_\beta| = |Q| = 2n + 1$ for all $\beta$, and that $|B_x| = |x - Q| = 2n + 1$ for all $x$. However, it is not yet clear what we can say about $|B_x \cap B_y|$ when $x \neq y$. For this we need some more definitions.
We put $D = \{(a, b) \in Q \times Q : a \neq b\}$, so $|D| = |Q|(|Q| - 1)$. As $|Q| = 2n + 1$, this gives $|D| = (2n + 1)(2n) = n(4n + 2)$. Also, for $t \in \mathbb{Z}/p$ with $t \neq 0$ we put $D_t = \{(a, b) \in D : a - b = t\}$. We note that $D$ is the disjoint union of the subsets $D_t$, so $\sum_{t \neq 0} |D_t| = n(4n + 2)$.
$|D_t| = n$ for all $t \neq 0$.
Recall from Lemma 15.12 that either $t$ or $-t$ is a square. Suppose for the moment that $t$ is a square. Suppose that $(a, b) \in D_1$, so $a$ and $b$ are squares with $a - b = 1$. It is clear that the product of two squares is a square, so $ta$ and $tb$ are squares with $ta - tb = t$, so $(ta, tb) \in D_t$. Conversely, if $(c, d) \in D_t$ then $(t^{-1}c, t^{-1}d) \in D_1$. From this it is clear that $|D_t| = |D_1|$.
Now suppose instead that $-t$ is a square. If $(a, b) \in D_1$ then $-tb$ and $-ta$ are squares with $(-tb) - (-ta) = t(a - b) = t$, so $(-tb, -ta) \in D_t$. Conversely, if $(c, d) \in D_t$ then $((-t)^{-1}d, (-t)^{-1}c) \in D_1$. From this it is again clear that $|D_t| = |D_1|$.
We now see that $|D_t| = |D_1|$ in all cases, and the number of possibilities for $t$ is $p - 1 = 4n + 2$. The equation $|D| = \sum_{t \neq 0} |D_t|$ now becomes $|D| = (4n + 2)|D_1|$. However, we saw previously that $|D| = n(4n + 2)$, so $|D_1| = n$, so $|D_t| = n$ for all $t \neq 0$. ∎
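The same counting can be confirmed by brute force for several primes $p = 4n + 3$; a short check of our own:

```python
# For each prime p = 4n + 3, verify that every nonzero difference t is realised
# by exactly n ordered pairs of distinct quadratic residues.
for p in [7, 11, 19, 23]:
    n = (p - 3) // 4
    Q = {(x * x) % p for x in range(1, p)}
    sizes = [sum(1 for a in Q for b in Q if a != b and (a - b) % p == t)
             for t in range(1, p)]
    assert set(sizes) == {n}
print("Lemma 15.14 confirmed for the listed primes")
```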
We will show how the above lemma works out in the case where $p = 11$, so that $n = 2$ and $Q = \{1, 3, 4, 5, 9\}$. The table below shows the differences $a - b$ (modulo $11$) for $a, b \in Q$ with $a \neq b$; the rows are indexed by $a$ and the columns by $b$.

a \ b :   1    3    4    5    9
  1   :   -    9    8    7    3
  3   :   2    -   10    9    5
  4   :   3    1    -   10    6
  5   :   4    2    1    -    7
  9   :   8    6    5    4    -
We can read off the sets $D_t$ from this. For example, to find $D_1$ we look in the table and see that $1$ appears in the position where $a = 4$ and $b = 3$, and also in the position where $a = 5$ and $b = 4$. We therefore have $D_1 = \{(4, 3), (5, 4)\}$. The complete list of sets is as follows:
$D_1 = \{(4, 3), (5, 4)\}$, $D_2 = \{(3, 1), (5, 3)\}$, $D_3 = \{(1, 9), (4, 1)\}$, $D_4 = \{(5, 1), (9, 5)\}$, $D_5 = \{(3, 9), (9, 4)\}$,
$D_6 = \{(4, 9), (9, 3)\}$, $D_7 = \{(1, 5), (5, 9)\}$, $D_8 = \{(1, 4), (9, 1)\}$, $D_9 = \{(1, 3), (3, 5)\}$, $D_{10} = \{(3, 4), (4, 5)\}$.
We find that $|D_t| = 2 = n$ in every case, as predicted by the lemma.
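The table and the sets $D_t$ can also be produced mechanically; the following sketch of ours prints each $D_t$ for $p = 11$ and checks that every one has size $n = 2$.

```python
p, n = 11, 2
Q = sorted({(x * x) % p for x in range(1, p)})   # [1, 3, 4, 5, 9]
D = {t: [] for t in range(1, p)}
for a in Q:
    for b in Q:
        if a != b:
            D[(a - b) % p].append((a, b))
for t in range(1, p):
    print(f"D_{t} = {sorted(D[t])}")
assert all(len(D[t]) == n for t in range(1, p))  # |D_t| = n for every t != 0
```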
The matching problem in Definition 15.8 is a $(4n + 3,\ 4n + 3,\ 2n + 1,\ 2n + 1,\ n)$-block design.
All that is left is to show that $|B_x \cap B_y| = n$ for all $x, y \in \mathbb{Z}/p$ with $x \neq y$. Recall that $V_\beta = \beta + Q$, so $\beta \in B_x$ iff $x - \beta \in Q$. Thus, if $\beta \in B_x \cap B_y$ we see that $x - \beta, y - \beta \in Q$, and of course $(x - \beta) - (y - \beta) = x - y$, so $(x - \beta, y - \beta) \in D_{x - y}$. We can therefore define a map $\phi \colon B_x \cap B_y \to D_{x - y}$ by $\phi(\beta) = (x - \beta, y - \beta)$. In the opposite direction, suppose that $(a, b) \in D_{x - y}$, so $a, b \in Q$ with $a - b = x - y$, or equivalently $x - a = y - b$. If we put $\beta = x - a$ then we find that $\beta \in B_x$ (because $x - \beta = a$ and $a \in Q$) and also $\beta \in B_y$ (because $y - \beta = b$ and $b \in Q$), so $\beta \in B_x \cap B_y$. Using this we see that $\phi$ is a bijection, so $|B_x \cap B_y| = |D_{x - y}|$. We also know from Lemma 15.14 that $|D_{x - y}| = n$, so $|B_x \cap B_y| = n$ as required. ∎
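Finally, Theorem 15.16 can be verified directly for small primes by combining the pieces above; the sketch below (the function name is our own) computes the parameters of the quadratic-residue construction and compares them with $(4n + 3, 4n + 3, 2n + 1, 2n + 1, n)$.

```python
from itertools import combinations

def qr_design_parameters(p):
    """Parameters (v, b, r, k, lam) of the quadratic-residue design modulo p."""
    Q = {(x * x) % p for x in range(1, p)}
    blocks = [{(beta + q) % p for q in Q} for beta in range(p)]
    # B[x] is the set of (indices of) blocks containing the variety x.
    B = {x: {i for i, blk in enumerate(blocks) if x in blk} for x in range(p)}
    r = {len(B[x]) for x in range(p)}
    k = {len(blk) for blk in blocks}
    lam = {len(B[x] & B[y]) for x, y in combinations(range(p), 2)}
    assert len(r) == len(k) == len(lam) == 1   # the counts really are constant
    return p, p, r.pop(), k.pop(), lam.pop()

for p in [7, 11, 19, 23]:
    n = (p - 3) // 4
    assert qr_design_parameters(p) == (p, p, 2 * n + 1, 2 * n + 1, n)
print("Theorem 15.16 confirmed for p = 7, 11, 19, 23")
```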