Sorites paradoxes pose fundamental questions about the semantics of natural languages. The sorites paradox for numbers runs as follows: One is a small number. If n is a small number, then n+1 is a small number, since there is nothing fundamentally different between n and n+1. Therefore, every number is small.
The problem in the argument lies in the induction step. Although (for large n) n and n+1 are similar quantities, they are not the same, and in some cases the difference is decisive. Imagine, for example, placing coins one by one into a large bag that cannot withstand a heavy weight. A single coin ordinarily does not make a significant difference; the weight of 137 coins is almost the same as that of 138. Yet after, say, the 12345th coin is placed, the bag snaps. Clearly, in this context the difference between 12344 and 12345 is crucial: we may say that 12345 coins are too heavy while 12344 are not. The difference between 12344 and 12345 acquires significance from context.
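The coin example can be sketched in code. The following is a minimal, hypothetical model (the predicate name and the capacity constant are illustrative, taken from the numbers above) in which the context, namely the bag's strength, fixes a sharp cutoff for the otherwise vague predicate “too heavy”:

```python
# Hypothetical sketch of the coin-and-bag example: the context (the bag's
# strength) induces a sharp boundary for the vague predicate "too heavy",
# even though adjacent counts differ by a single coin.

BAG_CAPACITY = 12344  # the largest number of coins this bag can hold

def too_heavy(n_coins: int) -> bool:
    """In this context, "too heavy" has an exact boundary: n > 12344."""
    return n_coins > BAG_CAPACITY

# 137 vs. 138 coins: no significant difference.
assert too_heavy(137) == too_heavy(138)
# 12344 vs. 12345 coins: the difference is decisive.
assert not too_heavy(12344)
assert too_heavy(12345)
```

The point of the sketch is only that once the context is fixed, the boundary is exact, even though nothing in the word “heavy” itself announces it.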
Hidden sharp differences between adjacent numbers occur in many other contexts. For example, if a typical modern computer counts events and records the count as an unsigned 32-bit integer, then after the count reaches 4294967295, the next event resets the count to zero. In a sense, 2^32 = 4294967296 is the least large number, since it is the least natural number that cannot be stored in the standard 32-bit word size.
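The wraparound can be demonstrated with a short sketch; the helper `increment_u32` below is hypothetical, emulating an unsigned 32-bit counter via modular arithmetic:

```python
# Sketch of the 32-bit event counter: an unsigned 32-bit word can hold
# counts 0 .. 2**32 - 1; incrementing past the maximum wraps to zero.

UINT32_MAX = 2**32 - 1  # 4294967295, the largest storable count

def increment_u32(count: int) -> int:
    """Emulate one increment of an unsigned 32-bit counter."""
    return (count + 1) % 2**32

assert increment_u32(137) == 138       # ordinary increments behave as expected
assert increment_u32(UINT32_MAX) == 0  # the hidden sharp boundary at 2**32
```

Here again, nothing about the counting process hints at the boundary until it is crossed; the boundary is supplied by the representation, not by the numbers themselves.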
Our ordinary usage of the word “small” is an amalgamation of its usage and meaning in particular cases. In general, as a number increases, the case for saying “not small” gradually strengthens until, at some tipping point, it is better to say “not small” than “small”. In many cases the exact boundary is more hidden than in the particular cases described above. However, some boundary is necessary for correct logic to work, and hence must exist. The boundary between small and not small must exist in every formal language (that uses classical logic and semantics) that has the predicate small.
It is unclear, however, whether such a boundary is a natural one or merely exists formally. One view is that the general boundaries of vague terms exist only relative to formal languages. On that view, utterances are true or false only relative to formal languages. Much as the sentence “the number of apples in that bin is small” can be meaningless in itself for failing to specify the bin, on the formal view it is also meaningless without a specification of what counts as an apple and what counts as small. In ordinary conversation and thought, we assume the existence of such a formal language but do not specify it.
Unfortunately, the formal view appears to be nihilist in nature, for it appears to allow the semantics of natural languages only as a formal fiction. For example, a proponent of the formal view may say, “Formally, laws do not exist, but since the appearance of laws is necessary, judges should pretend that laws exist.”
One can attempt to avoid the dilemma by saying that in clear cases human language has natural semantics, while in borderline cases it has semantics only relative to formal languages. Unfortunately, the attempt does not work, because one can ask: what is the smallest natural number n such that the sentence “Generally speaking, n is a small number” does not have a natural truth value? The border between a vague statement being naturally true and being true only relative to a formal language is as problematic as the border between truth and falsehood.
The problem of vagueness also arises in morality, for one can ask about the exact boundary between right and wrong. One is forced either to conclude that morality is merely a matter of convention and of formal choice of the definition of wrong (which is unacceptable), or to hold that some actions are wrong in themselves, irrespective of the formal language. By the argument in the paragraph above, the latter implies a sharp boundary between wrong and not wrong, and hence contradicts the spirit of the formal approach to semantics.
For these reasons, the author is inclined to believe that human languages have natural semantics. The question “what is the largest small number?” has a single best answer in the general case (though, since human language is context dependent, context can override the general answer), and the United States legal code has a single best interpretation.
The argument against the natural view is that we do not see how we could determine the boundary. However, human civilization has made enormous intellectual progress, and incomparably greater progress is still to come. That progress is sometimes accompanied by a conceptual shift, which allows us to see what we previously thought could not be seen. For example, in a primitive society one could maintain that matter has no structure at the micrometer level, since there is no way to distinguish such structure by sight, touch, or any other means then conceivable. Yet modern inventions such as microscopes allow us to see the sub-micrometer structure of matter.
In another example, the theory of relativity is (wrongly) claimed by some to imply that there is no such thing as the frame of rest, since the results of physical experiments are believed to be independent of the frame of reference. However, scientists have since discovered a field permeating the universe: the cosmic background radiation. By measuring its Doppler (frequency) shift in different directions, one can find our velocity relative to the source, which arguably is our actual velocity.
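The measurement rests on the standard first-order Doppler relation (a textbook formula, not stated in the original): radiation observed while moving with speed v at angle θ to the direction of motion is shifted as

```latex
% First-order Doppler shift of the observed frequency:
\nu_{\mathrm{obs}} \;\approx\; \nu_{\mathrm{emit}}\left(1 + \frac{v}{c}\cos\theta\right)
```

so the shift traces out a dipole pattern across the sky, from which both the speed v and the direction of motion relative to the radiation field can be read off.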
Perhaps future advances will open our minds to a new and fundamentally different way of understanding the semantics of the English language. Alternatively, if we take polls asking people “what is the largest small number?”, the distribution of answers will grow progressively narrower as human civilization advances in knowledge and intelligence, until eventually most people give the same, presumably correct, answer.