The n-ary (base-n) system, where n > 1, like the binary or decimal number systems, represents a “free lunch” for computing systems

10-11-2021

Why does the n-ary (base-n) system, where n > 1, like the binary or decimal number systems, represent a “free lunch” for computing systems? And what is the ‘free lunch’ in the “there is no free lunch” theory?

The no-free-lunch theory says that there is no way to compress every element of a computational representational system so as to produce a more efficient system, when every possible string is equally likely to show up. This theory is believed to hold even for a subset of such strings, namely when all strings shorter than some length c are equally likely to be used.
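The usual justification is a counting (pigeonhole) argument, and a few lines of Python make it concrete. This is a minimal sketch of my own, assuming binary strings and a lossless (injective) encoder; the helper name count_strings_up_to is mine, not a standard library function.

```python
# Minimal sketch of the counting (pigeonhole) argument behind the
# no-free-lunch claim for lossless compression of binary strings.

def count_strings_up_to(length):
    """Number of distinct binary strings of length <= `length` (including the empty string)."""
    return sum(2 ** n for n in range(length + 1))

c = 8
inputs = count_strings_up_to(c)             # every string we would like to compress
shorter_codes = count_strings_up_to(c - 1)  # every strictly shorter string available as a code

print(f"strings of length <= {c}: {inputs}")
print(f"strings of length <= {c - 1}: {shorter_codes}")

# There are more inputs than strictly shorter codes, so no lossless (injective)
# scheme can map every input to something shorter: some string must fail to compress.
assert inputs > shorter_codes
```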

A “compression” in computing science refers to a way of using less data space and less algorithm time to represent values (like the values of a number system) and to use them in algorithms (like those used to make arithmetic computations). The advantage of the binary or decimal number systems is that they are very efficient for arithmetic. Imagine instead that you had to make a mark for every item to be added together. If you add 1 million things to 1 million things, you would have to count the 2 million things – making 2,000,000 marks – in order to do the calculation. This seems ridiculous or nonsensical to some people simply because they can use the laws of arithmetic for decimal numbers to conclude immediately that the sum will be 2 million things. I’ve talked to computer programmers who concluded that my statement is stupid because it seems irrelevant and incomprehensible.
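To see the sense in which positional notation is a compression, here is a small Python comparison I wrote for illustration (the function names are my own, nothing standard): it counts how many symbols it takes to write a value in unary tally marks versus base 10 or base 2.

```python
# Compare the number of symbols needed to write a value as tally marks
# (one mark per item) versus as digits in a positional base.

def unary_length(n):
    return n  # one mark per item counted

def digits_needed(n, base=10):
    """Number of digits needed to write n (n >= 1) in the given base."""
    count = 0
    while n > 0:
        n //= base
        count += 1
    return count

n = 1_000_000
print(f"unary marks:    {unary_length(n)}")        # 1000000 marks
print(f"decimal digits: {digits_needed(n, 10)}")   # 7 digits
print(f"binary digits:  {digits_needed(n, 2)}")    # 20 bits
```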

It takes 14 marks (each drawn from 10 possible digit symbols) to write the numbers in 1000000 + 1000000, one mark to represent the addition, and then 7 single-digit additions (of 2 digits each) with 7 possible carries (no carries arise in this simple example). How does that compare to making and counting 2 million marks? It is clearly much more efficient.
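For concreteness, here is a short Python sketch (my own illustration, not anyone’s library) of the schoolbook column-by-column addition being counted above: adding two 7-digit numbers takes 7 single-digit additions plus possible carries, not millions of marks.

```python
# Schoolbook (digit-by-digit) base-10 addition: one single-digit addition per
# column, right to left, plus a possible carry into the next column.

def add_decimal(a: str, b: str) -> str:
    """Add two non-negative integers given as decimal digit strings."""
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)
    carry, result = 0, []
    for da, db in zip(reversed(a), reversed(b)):  # column by column
        total = int(da) + int(db) + carry         # one single-digit addition (plus carry)
        result.append(str(total % 10))
        carry = total // 10
    if carry:
        result.append(str(carry))
    return "".join(reversed(result))

print(add_decimal("1000000", "1000000"))  # 2000000, after only 7 column additions
```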

Our n-ary (base-n, n > 1) system of numerical notation, along with the rules of arithmetic, is an amazing invention that stands behind the power of computational systems. It is a free lunch that we are, for the most part, benefiting from, and one that has given many of us a great deal of power.

So if this seems like a reasonable argument, then how could the theory that there is no free lunch (even if said with some good-natured amusement and even if it is usually a good rule of thumb) possibly be universally true?

There may be no ‘free lunch’ in computing systems that are based exclusively on a binary (or some other n-ary) system. However, given the reasonableness of my argument, that is not a certainty.