Micah Zarin's Blog

Thoughts, Opinions, & Interesting Things


You Can’t Escape The Naturals!

You cannot escape the natural numbers. You might start with geometry, or logic, or even the most abstract, structureless category you can imagine. And yet, the moment you introduce anything iterative, anything countable, anything that builds step by step – you’ve let the natural numbers in. Try as you might, you cannot escape them.

Some say that mathematical objects are arbitrary, that the axioms we choose are merely conventions. And while we do choose our axioms, we do not choose what emerges from them. The natural numbers are not an arbitrary human invention. They are an inevitability, arising whenever we try to describe processes, sequences, or structure itself.

But before we get too carried away, let’s take a step back. What exactly does it mean to say that the natural numbers are “inevitable”?

Abstract Nonsense

Mathematics has many dialects, but category theory is something like its universal grammar. It is less concerned with particular objects and more concerned with how objects relate to one another. You can think of a category as a world of mathematical objects where we care more about their relationships than their internal details. Every category consists of:

1. Objects – the things we are studying.

2. Morphisms – the relationships (functions, transformations, maps) between those objects.

For example, in the category Set, the objects are sets, and the morphisms are functions between sets. In the category Grp, the objects are groups, and the morphisms are group homomorphisms. In Top, the objects are topological spaces, and the morphisms are continuous functions.

But category theory is especially useful because it can answer some very “fundamental” questions about the objects we use. For example: What is the most fundamental object of a certain kind? What is the simplest, most universal structure that everything else builds upon?

ℕ is the Seed of Arithmetic

The natural numbers ℕ – the set {0, 1, 2, 3, 4, …} – can be defined in many ways. But category theory tells us something special: they are the most fundamental counting structure, the initial object in the category of commutative semirings.

What does this mean? A commutative semiring is a structure that has:

• Addition (with a zero)

• Multiplication (with a one)

• The distributive property (a · (b + c) = a · b + a · c)

• Commutativity

• Associativity

When we say a commutative semiring has addition with a zero, we mean that there is a way to combine numbers (or elements) where adding a special element, called zero, does nothing. If you take any number and add zero, you still have the same number. Same thing with multiplying a number by one.
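These axioms are small enough to check by brute force. As an illustration (a Python sketch, with names of my own choosing), here is the two-element Boolean semiring, where "or" plays the role of addition and "and" the role of multiplication, tested against every axiom above:

```python
from itertools import product

# The two-element Boolean semiring: "or" is addition (zero = False),
# "and" is multiplication (one = True).
ZERO, ONE = False, True

def add(a, b):
    return a or b

def mul(a, b):
    return a and b

elements = [False, True]
for a, b, c in product(elements, repeat=3):
    assert add(a, ZERO) == a                                # adding zero does nothing
    assert mul(a, ONE) == a                                 # multiplying by one does nothing
    assert add(a, b) == add(b, a)                           # addition commutes
    assert mul(a, b) == mul(b, a)                           # multiplication commutes
    assert add(add(a, b), c) == add(a, add(b, c))           # addition associates
    assert mul(mul(a, b), c) == mul(a, mul(b, c))           # multiplication associates
    assert mul(a, add(b, c)) == add(mul(a, b), mul(a, c))   # distributivity
print("all commutative semiring axioms hold")
```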

Many things are commutative semirings: the natural numbers ℕ, the integers ℤ, the real numbers ℝ, even polynomials. (Square matrices form a semiring too, though not a commutative one, since matrix multiplication does not commute.) But ℕ is the initial object in the category of commutative semirings. This means that for any other commutative semiring R, there is a unique function f: ℕ → R that respects addition and multiplication, sending 0 and 1 to the zero and one of R.

In other words, you cannot define a commutative semiring without implicitly defining a copy of ℕ inside it. This is why, even in mathematical universes where you try to avoid the natural numbers, they sneak in through the back door. They are the foundation upon which all counting, all algebra, all structured arithmetic rests.
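One way to see that copy of ℕ concretely is to pick a small R and build it by hand. In the Python sketch below (purely illustrative; R is the Boolean semiring, with "or" as addition and "and" as multiplication), f sends n to 1_R added to itself n times:

```python
# An illustrative target semiring R: the Boolean semiring,
# with "or" as addition (zero = False) and "and" as multiplication (one = True).
ZERO_R, ONE_R = False, True

def f(n):
    """The canonical map ℕ → R: add 1_R to itself n times."""
    result = ZERO_R
    for _ in range(n):
        result = result or ONE_R      # result + 1_R, computed in R
    return result

# The copy of ℕ inside this R collapses to just two elements:
print([f(n) for n in range(4)])      # [False, True, True, True]

# f respects both operations:
for m in range(6):
    for n in range(6):
        assert f(m + n) == (f(m) or f(n))      # addition preserved
        assert f(m * n) == (f(m) and f(n))     # multiplication preserved
```

Notice that the copy of ℕ need not be faithful: in this R it folds down to "zero" and "everything else", but it is still there, and the map is still forced.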

But what does it mean for a function to “respect” addition and multiplication?

Proof

[SKIP THIS SECTION IF YOU DON’T CARE FOR THE PROOF]

It means that the function must preserve how numbers interact with each other under these operations. If we take two natural numbers m and n, their sum m + n should map to the same sum in R: f(m + n) = f(m) + f(n). Likewise, their product should also be preserved: f(mn) = f(m)f(n). This is a crucial requirement because semirings, like ℕ, are defined by these arithmetic operations, and any meaningful structure-preserving function must ensure that the results of addition and multiplication remain consistent after mapping.

To see why ℕ is the initial object, consider how we would construct such a function. As stated previously, to be a semiring homomorphism, f must map sums in ℕ to sums in R and products in ℕ to products in R. First, let’s consider the role of 0. In ℕ, 0 is the additive identity: for any number n, adding 0 does nothing. This must also hold in R under the function f, meaning f(0) must be the additive identity of R, that is, f(0) = 0_R. (Strictly speaking, semirings lack subtraction, so we cannot cancel to deduce this from additivity alone; sending zero to zero is built into the definition of a semiring homomorphism.)

Similarly, 1 in ℕ acts as the multiplicative identity, and a semiring homomorphism is required to preserve it: f(1) = 1_R, the multiplicative identity in R. Without this requirement, the equation f(1)f(n) = f(n) would only tell us that f(1) acts as an identity on the image of f, not on all of R.

Now, the rest of the natural numbers are built inductively from 1. The definition of addition in ℕ ensures that each number is the sum of 1 and the number before it: 2 = 1 + 1, 3 = 2 + 1, 4 = 3 + 1, and so on. If f is to preserve addition, then f(2) must be f(1) + f(1), f(3) must be f(2) + f(1), and in general, f(n+1) = f(n) + f(1). This recursive structure means that once f(0) and f(1) are fixed (and we have just seen they are forced to be 0_R and 1_R), every value f(n) for n > 1 is completely determined. There is no freedom in choosing f at all.
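This recursion is rigid enough to write down as a short program. In the sketch below (Python; `hom_into` is a made-up name, not standard terminology), the only candidate for f is assembled from nothing but R's zero, one, and addition, and then run against the example R = ℤ/4ℤ:

```python
def hom_into(add_R, zero_R, one_R):
    """Build the only possible candidate for f: ℕ → R.
    f(0) and f(1) are forced, and f(n+1) = f(n) + f(1) does the rest."""
    def f(n):
        result = zero_R                      # f(0) = 0_R
        for _ in range(n):
            result = add_R(result, one_R)    # f(k+1) = f(k) + 1_R
        return result
    return f

# Example target: R = ℤ/4ℤ, with addition mod 4.
f = hom_into(lambda a, b: (a + b) % 4, 0, 1)
print([f(n) for n in range(8)])   # [0, 1, 2, 3, 0, 1, 2, 3]
```

The caller supplies R's data; nothing about f itself is left to choose. That is what "uniquely determined" means in code.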

The same logic applies to multiplication.

In ℕ, multiplication is defined recursively as repeated addition: 2 × 3 is just 2 + 2 + 2, 4 × 5 is just 4 added to itself 5 times, and in general, for any m, n ∈ ℕ, we define mn as adding m to itself n times. If f is a semiring homomorphism, then it must preserve this structure in R. That means for any two natural numbers m and n,

f(mn) = f(m)f(n).

This follows naturally from the recursive definition of multiplication in ℕ:

f(2 × 3) = f(2 + 2 + 2) = f(2) + f(2) + f(2),

and on the other side, since f(3) = f(1) + f(1) + f(1) = 1_R + 1_R + 1_R, distributivity in R gives f(2)f(3) = f(2)(1_R + 1_R + 1_R) = f(2) + f(2) + f(2). The two sides agree, so preservation of multiplication comes for free once addition is preserved and f(1) = 1_R. So just like with addition, once f(1) is set to 1_R, the values of f(n) for all n are fully determined.
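The distributivity argument can also be checked numerically. A quick Python sketch, with R = ℤ/4ℤ as an illustrative target, where the forced map is f(n) = n mod 4:

```python
# In R = ℤ/4ℤ the forced map is f(n) = n mod 4; check that it
# preserves multiplication as well, exactly as distributivity predicts.
def f(n):
    return n % 4

def mul_R(a, b):
    return (a * b) % 4

for m in range(12):
    for n in range(12):
        assert f(m * n) == mul_R(f(m), f(n))   # f(mn) = f(m)f(n) in R
print("multiplication is preserved")
```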

At this point, we have shown existence: for any semiring R, there is at least one function from ℕ to R that respects addition and multiplication. But now we must prove uniqueness—that there is no other function satisfying these properties.

Suppose g: ℕ → R is another function that preserves addition and multiplication. Then, by the same reasoning, we must have:

g(0) = 0_R,

g(1) = 1_R,

g(n+1) = g(n) + g(1) for all n,

g(mn) = g(m)g(n) for all m, n.

But these conditions are exactly the same as those that define f, meaning g(n) must be the same as f(n) for all n. Since any function satisfying these properties must follow this recursive structure, no other function can exist, proving uniqueness.

Since for every commutative semiring R there exists a unique semiring homomorphism from ℕ to R, ℕ satisfies the definition of an initial object in the category of commutative semirings. Any other initial object would have to map uniquely into every commutative semiring in exactly the same way, and a standard argument shows that any two initial objects are isomorphic, via a unique isomorphism. ℕ is, in this precise sense, the most fundamental commutative semiring, encoding the purest form of addition and multiplication.

But Is There an Escape?

You might still be skeptical. Couldn’t we build mathematics without them, simply by avoiding numbers altogether?

Let’s try.

Suppose you want to do logic without numbers. You might start with propositional calculus, where you only have true and false. So far, so good. You might even get away with first-order logic, where you quantify over objects but avoid assuming a built-in number system.

But what happens when you want to prove things? When you want to say, “For every proof step, there is a next proof step”? When you want to say, “This sequence of deductions leads to this conclusion”? You are now working with an inductive process—one step follows another, then another, then another. And that is precisely what the natural numbers are: the abstract essence of iteration.

Or take geometry. Classical Euclidean geometry seems to have nothing to do with numbers at first. It’s about points, lines, and circles, not about counting. But as soon as you start measuring, as soon as you start saying “divide this segment into n equal parts” or “construct the midpoint,” you are invoking a hidden arithmetic. Birkhoff’s axiom system even reduces all of Euclidean geometry to arithmetic in the real numbers. You can try to separate geometry from arithmetic, but you will find that arithmetic grows back like a weed.

You might decide to work with rings, fields, vector spaces, or groups, thinking that these are abstract enough to avoid the tyranny of counting. But as soon as you start discussing finitely generated structures, as soon as you define a basis, as soon as you talk about a dimension, you have smuggled ℕ into your system. Even if you try to work with only finite structures, you will find yourself needing arbitrarily large finite numbers, which is just ℕ in disguise.

We now face an even deeper question. If the natural numbers are so fundamental, why this set {0,1,2,3,…} and not something else? Could we replace it with another structure?

The answer is no. There are many ways to define ℕ—Peano axioms, von Neumann ordinals, categorical constructions—but all of them describe the same fundamental structure:

• A base element (zero).

• A successor operation that gives the next element.

• Induction, the principle that if something holds for 0 and holds for n+1 whenever it holds for n, then it holds for all n.

Any system that satisfies these rules is, up to relabeling, ℕ: the constructions may look different on the surface, but they are all isomorphic. There is no alternative.
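You can even write the skeleton down in executable form. The Python sketch below (illustrative names, nothing canonical) has exactly the three ingredients: a base element, a successor operation, and addition defined by the same recursion that induction justifies:

```python
class Zero:
    """The base element."""
    pass

class Succ:
    """The successor operation: Succ(n) is 'the element after n'."""
    def __init__(self, pred):
        self.pred = pred

def add(m, n):
    """Addition by recursion (induction in executable form):
    m + 0 = m, and m + S(n) = S(m + n)."""
    return m if isinstance(n, Zero) else Succ(add(m, n.pred))

def to_int(n):
    """Read a Peano numeral back as an ordinary integer."""
    return 0 if isinstance(n, Zero) else 1 + to_int(n.pred)

two = Succ(Succ(Zero()))
three = Succ(two)
print(to_int(add(two, three)))   # 5
```

Peano axioms, von Neumann ordinals, categorical constructions: each is a dressed-up version of this same shape.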

And this is why even in the most abstract settings, even in the most foundationally radical mathematics, the natural numbers appear.
