Heaps' Law

In linguistics, Heaps' law is an empirical law which describes the portion of a vocabulary that is represented by an instance document (or set of instance documents) consisting of words chosen from the vocabulary. This can be formulated as

    V_R(n) = K n^β

where V_R(n) is the size of the subset of the vocabulary V represented by an instance text of n words, and K and β are free parameters determined empirically.

With English text corpora, K is typically between 10 and 100, and β is between 0.4 and 0.6. The law is attributed to Harold Stanley Heaps, but was originally discovered by Gustav Herdan (1960) and is also known as Herdan's law. Under mild assumptions, the Herdan-Heaps law is asymptotically equivalent to Zipf's law (Kornai 1999, Baeza-Yates and Navarro 2000, van Leijenhorst and van der Weide 2003).
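The parameters can be estimated directly from a corpus by tracking how many distinct words have been seen after each token and fitting a line in log-log space. The following is a minimal Python sketch of that procedure; the whitespace tokenization, lowercasing, and the file name corpus.txt are illustrative assumptions, not part of the law itself.

    import math

    def vocab_growth(tokens):
        """Return (n, V_R(n)) pairs: distinct types seen after each token."""
        seen = set()
        curve = []
        for i, tok in enumerate(tokens, start=1):
            seen.add(tok)
            curve.append((i, len(seen)))
        return curve

    def fit_heaps(curve):
        """Fit V_R(n) = K * n**beta by least squares in log-log space."""
        xs = [math.log(n) for n, _ in curve]
        ys = [math.log(v) for _, v in curve]
        mean_x = sum(xs) / len(xs)
        mean_y = sum(ys) / len(ys)
        beta = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
               sum((x - mean_x) ** 2 for x in xs)
        K = math.exp(mean_y - beta * mean_x)
        return K, beta

    if __name__ == "__main__":
        # Hypothetical input file; tokenization is a simplifying assumption.
        tokens = open("corpus.txt").read().lower().split()
        K, beta = fit_heaps(vocab_growth(tokens))
        print(f"K = {K:.1f}, beta = {beta:.2f}")

For a large English corpus, such a fit would be expected to land roughly in the ranges quoted above, though the exact values depend on the tokenization chosen.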


Figure: A typical Heaps' law plot. The x-axis shows the text size, and the y-axis shows the number of distinct vocabulary elements present in the text; note the difference in scale between the two axes.

Heaps' law implies that as more instance text is gathered, there are diminishing returns in discovering the full vocabulary from which the distinct terms are drawn.
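As a worked example, assuming a typical exponent of β = 0.5, doubling the size of the text grows the observed vocabulary by only a factor of about 1.41:

    V_R(2n) / V_R(n) = K(2n)^β / (Kn^β) = 2^β = 2^0.5 ≈ 1.41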


Heaps' law also applies in the general case where the "vocabulary" is simply a set of distinct types that are attributes of some collection of objects. For example, the objects could be people, and the types could be countries of origin. If persons are selected randomly (that is, not selected based on country of origin), then Heaps' law says we will quickly have representatives from most countries (in proportion to their population), but it will become increasingly difficult to cover the entire set of countries by continuing this method of sampling.
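The following Python sketch illustrates this sampling behaviour. The "populations" are a hypothetical Zipf-like weight profile over 200 countries, chosen only to make the skew visible; with real data the early samples would likewise add new countries quickly while later samples rarely do.

    import random

    def distinct_countries_seen(weights, sample_sizes, seed=0):
        """Sample people with probability proportional to `weights` (one
        weight per country) and report how many distinct countries have
        appeared after each requested sample size."""
        rng = random.Random(seed)
        countries = list(range(len(weights)))
        seen = set()
        drawn = 0
        results = {}
        for target in sorted(sample_sizes):
            while drawn < target:
                seen.add(rng.choices(countries, weights=weights, k=1)[0])
                drawn += 1
            results[target] = len(seen)
        return results

    if __name__ == "__main__":
        # Hypothetical skewed populations: a Zipf-like profile over 200 countries.
        weights = [1.0 / rank for rank in range(1, 201)]
        for n, v in distinct_countries_seen(weights, [100, 1000, 10000]).items():
            print(f"after {n:>6} samples: {v} distinct countries")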

