Uses and Applications
The pigeonhole principle arises in computer science. For example, collisions are inevitable in a hash table when the number of possible keys exceeds the number of indices in the array: no hashing algorithm, however clever, can avoid them. The principle can also be used to prove that any lossless compression algorithm that makes some inputs smaller (as the name compression suggests) must also make some other inputs larger. Otherwise, the set of all input sequences up to a given length l could be mapped without collisions (because the compression is lossless) to the much smaller set of all sequences of length less than l, a possibility the pigeonhole principle excludes.
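A minimal sketch in Python (the bucket count, key names, and length bound below are arbitrary choices for illustration, not part of any real hash-table or compression library) makes both counting arguments concrete: hashing M + 1 distinct keys into M buckets forces a collision, and binary strings of length at most l strictly outnumber those of length less than l, so no injective (lossless) mapping can shorten every input.

def first_collision(keys, num_buckets):
    """Return two distinct keys that land in the same bucket, if any."""
    seen = {}  # bucket index -> first key that landed there
    for key in keys:
        bucket = hash(key) % num_buckets
        if bucket in seen:
            return seen[bucket], key
        seen[bucket] = key
    return None

M = 8
keys = ["key-%d" % i for i in range(M + 1)]   # M + 1 pigeons, M holes
print(first_collision(keys, M))               # some pair always collides

# Counting argument for lossless compression: strings of length <= l
# outnumber strings of length < l, so no injection between them exists.
l = 10
inputs  = 2 ** (l + 1) - 1    # binary strings of length 0..l
outputs = 2 ** l - 1          # binary strings of length 0..l-1
print(inputs > outputs)       # True: some input cannot be made shorter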
A notable problem in mathematical analysis is, for a fixed irrational number a, to show that the set {[na] : n is an integer} of fractional parts is dense in [0, 1]; here [x] denotes the fractional part of x. One finds that it is not easy to explicitly find integers n, m such that |na − m| < e, where e > 0 is a small positive number and a is some arbitrary irrational number. But if one takes M such that 1/M < e, by the pigeonhole principle there must be n1, n2 ∈ {1, 2, ..., M + 1} such that n1a and n2a are in the same integer subdivision of size 1/M (there are only M such subdivisions between consecutive integers). In particular, we can find n1, n2 such that n1a is in (p + k/M, p + (k + 1)/M) and n2a is in (q + k/M, q + (k + 1)/M), for some integers p, q and some k in {0, 1, ..., M − 1}. We can then easily verify that (n2 − n1)a is in (q − p − 1/M, q − p + 1/M). This implies that [na] < 1/M < e, where n = n2 − n1 or n = n1 − n2. This shows that 0 is a limit point of {[na]}. We can then use this fact to prove the case for any p in (0, 1]: find n such that [na] < 1/M < e; then if p ∈ (0, 1/M], we are done. Otherwise p ∈ (j/M, (j + 1)/M] for some j, and by setting k = sup{r ∈ N : r[na] < j/M}, one obtains |[(k + 1)na] − p| < 1/M < e.
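A numerical sketch of this argument in Python (the choices a = √2, e = 10^−3 and the target p below are arbitrary illustrations, not part of the proof): the pigeonhole search finds two multiples sharing a subdivision of width 1/M, their difference gives a multiple of a lying within 1/M of an integer, and stepping along multiples of that difference lands within 1/M of the target.

import math

a = math.sqrt(2)            # a fixed irrational number (arbitrary choice)
e = 1e-3                    # the small positive tolerance (arbitrary choice)
M = int(1 / e) + 1          # chosen so that 1/M < e

def frac(x):
    return x - math.floor(x)   # fractional part [x]

# Pigeonhole step: among n = 1, ..., M + 1, two multiples n*a must fall in
# the same subdivision of width 1/M between consecutive integers.
seen = {}                                  # subdivision index -> n
for n in range(1, M + 2):
    cell = int(frac(n * a) * M)
    if cell in seen:
        n1, n2 = seen[cell], n
        break
    seen[cell] = n

d = abs(n2 - n1)
step = min(frac(d * a), 1 - frac(d * a))   # distance of d*a to the nearest
                                           # integer, i.e. [na] for n = +d or -d
print(step < 1 / M)                        # True: 0 is a limit point of {[na]}

# Density step: fractional parts of multiples of d*a move in increments of
# size `step` < 1/M, so one of them comes within 1/M of any p in (0, 1].
p = 0.7315                                 # arbitrary target in (0, 1]
k = next(k for k in range(1, math.ceil(1 / step) + 2)
         if abs(frac(k * d * a) - p) < 1 / M)
print(abs(frac(k * d * a) - p) < e)        # True: |[k*d*a] - p| < 1/M < e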