# What is Pi? Non-Geometrically?

14-Mar-2022

This article was written to celebrate Pi Day, the 14th of March.

In this story, I want to take a step back and try to rediscover Pi. Pi, as any math student knows, is synonymous with the circle. This is because, both historically and in the standard curriculum, we are taught about basic geometric shapes first. Pi can be defined as the ratio between the area of a circle and the square of its radius. Or, it can be defined as the ratio between a circle's circumference and its diameter.

However, later on throughout my studies, I found Pi everywhere, from the trigonometric functions to oscillator equations. At some point, I began to question: did Pi come out of the circle, or was it the other way around? Is Pi a fundamental rule that dictates the properties of the circle?

# Pi as the constant in Euler’s formula

Since we have many, many things named after Euler, let me describe what I mean by Euler's formula.

It is this one:

$e^{i\theta} = \cos \theta + i \sin \theta$

It is a direct consequence that if we put $\theta=\pi$, we get the following beautiful identity (famously known as Euler's identity):

$e^{i\pi} + 1 = 0$

So, Pi is special after all! However, this begs the question. We knew about Pi first; that's why we can deduce that if we put $\theta=\pi$, the right-hand side will be equal to $-1$. What if we never knew about Pi?

To find Pi, we need to ask the correct question first. In order to unlearn my knowledge of Pi, let's forget about circles and trigonometry. We start by finding the core problem, then ask the fundamental question.

# Exponential function and the imaginary unit

Let’s say that we found a special function called the exponential function, $\exp (x)=e^x$. We can argue that we found this function by reasoning totally unrelated to circles. We found it by asking ourselves, “What is a function that is its own derivative?” That means, if we have a transformation called the “derivative”, we want to find a function that is invariant under that transformation. Which is a fancy way of saying: the function doesn’t change under the derivative operation.

From this question, we will naturally find the function using a Taylor series, or power series. We get this expression:

$e^x = \sum^\infty_{n=0} \frac{x^n}{n!}=1+x+\frac{x^2}{2!}+\frac{x^3}{3!}+\cdots$

Because the derivative operator is linear, it is easy to see that each term in the sum shifts one place to the left when we take the derivative.
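As a quick sanity check (a minimal Python sketch, not part of the derivation), we can sum the first terms of this series and compare against the known value of $e$:

```python
import math

def exp_series(x, terms=30):
    """Partial sum of the Taylor series for e^x."""
    return sum(x**n / math.factorial(n) for n in range(terms))

# The series converges very fast: 30 terms already match math.e
# to full double precision.
approx_e = exp_series(1.0)
print(approx_e)

# Differentiating term by term, d/dx (x^n / n!) = x^(n-1) / (n-1)!,
# shifts every term one slot to the left, so the series maps to itself.
```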

If we look at this expression, it is an infinite polynomial. Naturally, we want to know if this polynomial has zeroes, or roots. But at first glance, we see that every term is positive, and there is the constant 1. This means the function can only approach 0 asymptotically to the left.

We can then conclude that over the real numbers, $e^x$ has no roots.

We can then ask a new question. If the input of the function is a complex number, will it have roots?

This is a tricky question, because most complex-analysis explanations use the geometrical fact that the complex plane is closely related to the circle. This defeats the whole purpose of our journey (finding Pi without using assumptions about circles).

So, our first attempt is to use the imaginary unit, because it will make some of the terms negative.

Like any other complex number, we start by defining a number $z=x+iy$ and use it as the input of our exponential function. To make our life simpler, we set $x=0$ for now; after all, we only need the imaginary part. The expression becomes:

$e^{i y}=\sum^\infty_{n=0} \frac{(i y)^n}{n!}=1+{i y}-\frac{y^2}{2!}-i\frac{y^3}{3!}+\cdots$

As you can see, the signs alternate. More importantly, the terms can be separated into real and imaginary parts. The roots of the polynomial are then the numbers that make the real part and the imaginary part zero at the same time. Since the real and imaginary parts cannot cancel each other, we will just take one of them and see if it can have roots.

# The roots of the real part

The easiest way to guess whether a polynomial has roots is to check if its value changes sign.

Let’s call the real part of the function $f(y)$. It is:

$f(y)=\sum^\infty_{n=0} \frac{{(-1)}^n y^{2n}}{(2n)!}=1-\frac{y^2}{2!}+\frac{y^4}{4!}-\frac{y^6}{6!}+\cdots$

It is easy to see that when $y=0$, the sum’s value is 1. Hence its sign is positive.
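We can also check the sign-change idea numerically on the partial sums (a minimal Python sketch, not part of the argument; the sample point $y=2$ is an arbitrary choice):

```python
import math

def f(y, terms=40):
    """Partial Taylor sum of the real part of e^{iy}."""
    return sum((-1)**n * y**(2*n) / math.factorial(2*n) for n in range(terms))

# Positive at y=0, negative at y=2: the value does change sign,
# so there must be a root somewhere in between.
print(f(0.0), f(2.0))
```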

Now, if we take the second derivative of $f(y)$, we get:

$f''(y)=-\sum^\infty_{n=0} \frac{{(-1)}^n y^{2n}}{(2n)!}=-1+\frac{y^2}{2!}-\frac{y^4}{4!}+\cdots \\ f''(y)=-f(y)$

So, at $y=0$ the value of $f(y)$ is positive, but its second derivative is negative. From basic calculus, we know this means the curve of the function is curving down. Moreover, if we look closely at the sum’s terms, they are exactly the same, but with all the signs flipped. Hence we get $f''(y)=-f(y)$.

We arrive at a very interesting conclusion: the curve always curves in the direction opposite to its current sign. Thus, if we find just one root, we can conclude that the function has infinitely many roots, because there is no way it changes sign only once if the curvature always has the opposite sign of the current value.

Now, how do we find just this one important root?

There are two intuitive approaches:

First, we modify or scale the unit of $y$ in such a way that the roots are exactly integers. That way, we can easily know the value of the roots.

Second, we don’t scale the unit of $y$; instead, we just let the roots sit at the natural locations that fit the differential equation above.

If we go with the first approach, we can be sure that $f(y)$ **will** definitely have roots. If it doesn’t have roots, we can choose an arbitrary constant $C$ and construct a new function $f_1(y)$ in such a way that it will have roots. This is because adding a constant to a function translates it up or down. Because the second derivative is non-zero, this guarantees that for some $C$, it will have roots.

Following the first approach, we can choose a scalar $K$ in such a way that at least one of the roots is an integer multiple, or simply 1.

In summary:

$f_1(y) = f(Ky) + C$

However, looking at the second derivative, we get:

$f_1^{''}(y) = K^2f{''}(Ky)=-K^2f(Ky)=-K^2(f_1(y) - C) \\ f_1^{''}(y) = K^2(C - f_1(y))$

We now find a contradiction. $K$ has to be non-zero; otherwise the equation is meaningless, because the second derivative would always be zero. A zero $K$ is impossible because $f_1(y)$ and $f(y)$ have to be the same curve, or at least share the same curvature properties.

Assuming that $K$ is non-zero, at the assumed root $y=1$ (an integer) the value of $f_1(y)$ is zero, yet $f_1^{''}(y)=K^2C$ is non-zero unless $C=0$. That means we have proved by contradiction that in order for $f_1(y)$ to have roots with exactly the same curvature properties as $f(y)$, we must have $C=0$.

This means only the family of functions $f(y)$ with the Taylor series described above will have these properties. The generic equation then becomes:

$f_1^{''}(y)=-K^2f_1(y)$

From this first approach, we have two options again:

- Define a unique function whose roots are integer multiples of 1.
- Define a unique function in such a way that the coefficient $K^2$ in front of it in the differential equation is 1.

That sums up the first approach.

Meanwhile, the second approach is much more straightforward, because we just compute the function numerically and find its roots directly.

# Pi as the unique solution to the roots of the Taylor series above

It doesn’t matter which path you choose; even if we derive it by intuition alone (with nothing geometrical in it), **you will find** $\pi$.

Let’s suppose your function is the unique function whose roots are integer multiples. Its Taylor series is:

$f_1(y)=\sum^\infty_{n=0} \frac{{(-1)}^n (Ky)^{2n}}{(2n)!}=1-\frac{(Ky)^2}{2!}+\frac{(Ky)^4}{4!}-\frac{(Ky)^6}{6!}+\cdots$

You can find the value of $K$ easily as the square root of the absolute value of the ratio between its second derivative and itself at any given position. However, one could argue that this is a chicken-and-egg question:

“Dear author, how do we even find the ratio? To get the output of the function, we need $K$!”

Well, I guess you have a point there. Jeez. In that case, we have to find an approximate value of $K$ first.

Recall what we have learned about $f_1(y)$:

- The function is symmetric, because each term in the series has an even power
- $f_1(0)=1$ has to be a local maximum, because the function is left-right symmetric around input 0 and curving down
- The nearest roots are $-1$ and $1$, due to the constraint we defined above
- Due to the differential equation above, we will have infinitely many roots, each an odd integer. That means $\pm 1, \pm 3, \pm 5, \pm 7, \dots$ make the function zero

To reiterate: we are sure that we have infinitely many roots after finding just one. This is because the differential equation means the curve crosses the x-axis curving down, but will curve up again. Since the curve past the crossing is the same curve inverted, it will cross the x-axis again on the way up, and so have roots periodically.

You might wonder how I know that the other roots are odd integers. An easy way to guess is to calculate the distance between the roots $1$ and $-1$: the distance is 2. That means the next positive root is $1 + 2 = 3$, and so on. The spacing stays the same because the curve from one root to the next is the same curve, only inverted (as we see from the differential equation).
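We can check this odd-integer structure numerically on the unscaled series $f(y)$ (a Python sketch; the bracketing intervals are assumptions read off from the sign changes, and the roots themselves are found blindly by bisection):

```python
import math

def f(y, terms=40):
    """Partial Taylor sum of the real part of e^{iy}."""
    return sum((-1)**n * y**(2*n) / math.factorial(2*n) for n in range(terms))

def bisect(lo, hi, tol=1e-12):
    """Narrow down a sign change of f inside [lo, hi]."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

r1 = bisect(1.0, 2.0)  # first positive root of the unscaled series
r2 = bisect(4.0, 5.0)  # second positive root
# Whatever the unscaled root values are, their ratio is 3,
# matching the odd-integer pattern 1, 3, 5, ... after rescaling by K.
print(r1, r2, r2 / r1)
```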

Notice that we haven’t actually graphed anything, yet we have already found interesting information from the polynomial alone. It is algebraic intuition rather than geometrical.

Let us recall a simple basic fact about polynomials. If we know a polynomial has 3 roots $a$, $b$, $c$, we can guess its factorized form as:

$p(x)=(x-a)(x-b)(x-c) \\ p(x)=x^3-(a+b+c)x^2+(ab+bc+ac)x-abc$

As you can see, one side is a power series and the other is its factorized form. At this point, your “aha” moment might be kicking in.
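A tiny numeric check of this fact (a Python sketch; the sample roots 1, 2, 3 are an arbitrary choice):

```python
# Compare the factorized form against the expanded coefficients
# for the example roots a, b, c = 1, 2, 3.
a, b, c = 1, 2, 3

def factored(x):
    return (x - a) * (x - b) * (x - c)

def expanded(x):
    return x**3 - (a + b + c)*x**2 + (a*b + b*c + a*c)*x - a*b*c

# Both forms agree at every sample point, and each root gives zero.
for x in range(-5, 6):
    assert factored(x) == expanded(x)
print(factored(a), factored(b), factored(c))
```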

Can we express the Taylor series of our function as a product of factors?

Naturally, since our Taylor series has infinitely many terms, we imagine that the factored form is an infinite product.
The big question is, **is it indeed possible and logical to do that? Is it even provable, for an infinite polynomial?**

Well, actually, we don’t have to prove the general case. We just want to see if it is true for a special case: our function $f_1(y)$ above.

By the way, I should add that using the variable $y$ is confusing, so let’s just swap it for $x$.

A root $a$ can be expressed as the polynomial factor $(x-a)$. However, there are equally valid ways of expressing it, as long as it is satisfied when $x=a$. For example, consider the following expressions:

$x-a=0\\ a-x=0\\ 1-\frac{x}{a}=0\\ \frac{x}{a}-1=0\\$

Out of the 4 expressions above, the most appropriate for us is the third one. It ticks all these boxes:

- when $x=0$, the product of all the factors will be 1, which satisfies $f_1(0)=1$
- it helps the coefficient of each $x^n$ converge, because we divide by the roots, which grow very large

With that in mind, we just need to collect all the roots and combine them as factors. From $x=0$ towards positive infinity, we have the positive odd numbers; towards negative infinity, the negative odd numbers. It is surprisingly convenient that the expression is quite simple:

$f_1(x)=\cdots (1+\frac{x}{7}) (1+\frac{x}{5}) (1+\frac{x}{3}) (1+\frac{x}{1}) (1-\frac{x}{1}) (1-\frac{x}{3}) (1-\frac{x}{5}) (1-\frac{x}{7}) \cdots \\ f_1(x) = (1-\frac{x^2}{1^2}) (1-\frac{x^2}{3^2}) (1-\frac{x^2}{5^2}) (1-\frac{x^2}{7^2}) \cdots \\$

Do we have some sort of symbol like the sum symbol $\sum$, but for products?

Yes! Also, coincidentally, it is a **big Pi** symbol: $\prod$.
Using product notation, we can express it like this:

$f_1(x)=\prod_{m=0}^{\infty}(1-\frac{x^2}{(2m+1)^2})$

We are now interested to see whether the following equality between the infinite sum and the infinite product is true:

$f_1(x)=\sum^\infty_{n=0} \frac{{(-1)}^n (Kx)^{2n}}{(2n)!}=\prod_{m=0}^{\infty}(1-\frac{x^2}{(2m+1)^2})$

Remember that polynomials are equal only if their coefficients are equal too. Thus we need to compare all the $x^n$ coefficients. For $x^0$, the coefficient on both sides is 1, so they are indeed equal.

For $x^2$, we collect all the corresponding coefficients of the product on the right-hand side, and we get:

$-\frac{K^2x^2}{2!}=-x^2\sum_{m=0}^{\infty}\frac{1}{(2m+1)^2} \\ K^2=2\sum_{m=0}^{\infty}\frac{1}{(2m+1)^2}$

There you go! You can now approximate the value of $K$.
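Here is that approximation carried out (a Python sketch; the cutoff of one million terms is an arbitrary choice, and the tail beyond it contributes only about $1/(4 \cdot 10^6)$ to the sum):

```python
import math

# K^2 = 2 * sum over the odd integers of 1/(2m+1)^2
terms = 1_000_000  # arbitrary truncation of the infinite sum
K = math.sqrt(2 * sum(1 / (2*m + 1)**2 for m in range(terms)))
print(K)  # ≈ 1.57079...
```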

OK, but we are only halfway. We just found $K$, right? Is $K=\pi$? It is not, for a simple reason. Our ancient mathematical civilizations discovered geometry first, and defined the ratio of the circle before they understood Taylor series. That is why, instead of $K$, they chose to define the elementary functions using $\pi$. For instance, our unique function $f_1(x)$ is equivalent to $f_1(x)=\cos(\frac{\pi}{2}x)$.
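We can also check the infinite-product form against this cosine identity numerically (a Python sketch; truncating the product at a finite number of factors, and the sample point $x=0.5$, are assumptions):

```python
import math

def f1_product(x, factors=100_000):
    """Truncated infinite product with roots at the odd integers."""
    p = 1.0
    for m in range(factors):
        p *= 1 - x**2 / (2*m + 1)**2
    return p

x = 0.5
# The truncated product should closely match cos(pi*x/2).
print(f1_product(x), math.cos(math.pi / 2 * x))
```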

That being said, $K=\frac{\pi}{2}$, which would then imply:

$\pi^2=8\sum_{m=0}^{\infty}\frac{1}{(2m+1)^2}$

All the constants described here are closely related. The only difference is that our forefathers leaned towards defining the differential equation $f{''}(x)=-f(x)$ and found $\cos(x)$, with the consequence that the roots of the function are not integer multiples but rather ratios of $\pi$. Meanwhile, our approach leans towards defining simpler integer roots, with the consequence that the differential equation has a non-unitary ratio: $f{''}(x)=-K^2f(x)$.

Both are equivalent, but I think our approach is more intuitive from a non-geometrical perspective.

# Can we still find the same conclusion if we use the imaginary part to find the roots?

Yes. But it would take a while to re-explain the approach, so I will just list the steps quickly.

The imaginary part of the $e^{ix}$ function has the terms: $g(x)=\sum_{n=0}^\infty \frac{(-1)^n x^{2n+1}}{(2n+1)!}$

Define $g_1(x)=g(Lx)$

The differential equation still has the same form: $g_1{''}(x)=-L^2 g_1(x)$

Suppose we again want integer roots; this time they are exactly the integers: 0, 1, 2, 3, 4, etc.

This time, $g_1(0)=0$ instead of being a local maximum.

The tricky part: because $x=0$ is one of the roots, we need to include it in the product form. So $x$ is the only factor that is not quadratic (because it appears only once). The other factors are quadratic, since each root exists on both the positive and negative side of the axis. We also need to scale the total product by $L$, in order for the $x^1$ coefficient to have the same value on both sides.

$g_1(x)=Lx\prod_{m=1}^\infty (1-\frac{x^2}{m^2}) \\ \sum_{n=0}^\infty \frac{(-1)^n (Lx)^{2n+1}}{(2n+1)!}=Lx\prod_{m=1}^\infty (1-\frac{x^2}{m^2})$

If we expand the product for the $x^3$ coefficient, we get:

$-\frac{(Lx)^3}{3!}=-Lx^3\sum_{m=1}^\infty \frac{1}{m^2} \\ \frac{L^2}{6}=\sum_{m=1}^\infty \frac{1}{m^2} \\ L^2=6\sum_{m=1}^\infty \frac{1}{m^2}$

Here we can immediately conclude, from the $\sin(x)$ function, that $L=\pi$.
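The same kind of numerical approximation works for $L$ (a Python sketch; the one-million-term cutoff is arbitrary, and the tail beyond it contributes about $10^{-6}$ to the sum):

```python
import math

# L^2 = 6 * sum of 1/m^2 over the positive integers
terms = 1_000_000  # arbitrary truncation of the infinite sum
L = math.sqrt(6 * sum(1 / m**2 for m in range(1, terms + 1)))
print(L)  # ≈ 3.14159...
```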

# Final remark

Overall, it is quite fun to imagine a “what-if” scenario in which our civilization had found Pi through a non-geometrical approach.

The steps above are far from rigorous. For example, how do you even justify that an infinite product will actually converge and have its own Taylor expansion?

However, let’s just imagine for a moment. If we had never found 3.14… and defined it as Pi, would we still be able to find that numerical value? In our approach above, we found two distinct numbers, $K$ and $L$, and then discovered that $2K=L$. Depending on which functions or numbers we choose to declare as fundamental, or elementary, we might have different symbols. We might even have a different understanding of math.

For example, as we can see, both $K$ and $L$ are solutions of the same form of differential equation, $f{''}(x)=-A^2f(x)$. The difference comes only from the initial value and the first derivative. From that, we can conclude that the two different Taylor series of $f_1(x)$ and $g_1(x)$ arise from a single, more fundamental function. Since the differential equation is linear, that more fundamental function is in fact the exponential function, which is a linear combination of both.

This brings us to the following question. Can we find a more fundamental constant by finding the roots of $e^{i\theta}$?

Well, perhaps not today. I haven’t found it yet, because there is no value of $\theta$ that makes both the real and imaginary parts of $e^{i\theta}$ zero at the same time, so we can’t use the same approach.