Fourier Series: part 1
Taylor Series
You are probably already familiar with the Taylor series. A Taylor series is an approximate expansion of a function around a point $a$ using polynomials. It is usually written as:
$$f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!}\,(x-a)^n$$
The defining property of a Taylor series is that it is a local approximation: the function can be replaced by the series as long as the series converges and the function has derivatives of every order. Each coefficient of the series, $\frac{f^{(n)}(a)}{n!}$, corresponds to the value of the $n$-th derivative at the point $a$.
In other words, the Taylor series expansion can be understood as a set of derivative values that defines the function.
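To make this concrete, here is a minimal Python sketch that evaluates a truncated Taylor series from a list of derivative values at the expansion point (the helper name taylor_partial_sum is just for illustration):

```python
import math

def taylor_partial_sum(derivs_at_a, a, x):
    """Evaluate the truncated Taylor series sum_n f^(n)(a)/n! * (x-a)^n,
    given a list of derivative values f(a), f'(a), f''(a), ... at the point a."""
    return sum(d / math.factorial(n) * (x - a) ** n
               for n, d in enumerate(derivs_at_a))

# Example: sin(x) expanded around a = 0; its derivatives at 0 cycle 0, 1, 0, -1, ...
derivs = [0.0, 1.0, 0.0, -1.0] * 3            # the first 12 derivative values
print(taylor_partial_sum(derivs, 0.0, 0.5))   # ~0.479426, close to the next line
print(math.sin(0.5))                          # 0.479425...
```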
But what if the $n$-th derivatives can be inferred from the function itself? For example, suppose that there exists a function that is its own derivative.
A function that is its own derivative
Making the statement "a function that is its own derivative" is equivalent to defining a function that is invariant under the derivative operator: $f'(x) = f(x)$.
Suppose that it has a Taylor expansion around $0$, say $f(x) = \sum_{n=0}^{\infty} a_n x^n$. Since the derivative operator is linear, we can operate on the series term by term. The derivative of the $n$-th term is:
$$\frac{d}{dx}\, a_n x^n = n\, a_n\, x^{n-1}$$
Matching the coefficients of $f'(x)$ with those of $f(x)$, we now have a recurrence relation:
$$a_{n-1} = n\, a_n \quad\Longleftrightarrow\quad a_n = \frac{a_{n-1}}{n}$$
However, at $n = 0$ we have $a_0 = f(0)$. This is because the value at $0$ must be the same for every $n$-th derivative (the function equals all of its derivatives), and $n = 0$ is the smallest index possible. Consequently:
$$a_n = \frac{f(0)}{n!}$$
Thus, taking $f(0) = 1$, we found our special function:
$$f(x) = \sum_{n=0}^{\infty} \frac{x^n}{n!}$$
This function is so special that we call it the "exponential" function, since it also exhibits the property of exponential form:
$$f(x + y) = f(x)\, f(y)$$
We will elaborate on this in a separate article. For now, we have established that a function like this exists.
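As a small sanity check, here is a rough Python sketch of this result: the partial sums of $\sum_n x^n/n!$ reproduce the exponential function, and a numerical derivative of the partial sum gives (approximately) the same value back:

```python
import math

def exp_series(x, terms=30):
    """Partial sum of the series sum_{n>=0} x^n / n!."""
    return sum(x ** n / math.factorial(n) for n in range(terms))

# The truncated series reproduces the exponential function...
print(exp_series(1.0), math.e)             # both ~2.718281828

# ...and its derivative is (numerically) the same function again,
# since d/dx [x^n / n!] = x^(n-1) / (n-1)!.
h = 1e-6
numeric_derivative = (exp_series(1.0 + h) - exp_series(1.0 - h)) / (2 * h)
print(numeric_derivative)                  # ~2.718281828 as well
```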
Now, what if we replaced the input $x$ with something else? What function would we get? For some substitutions the series would simply blow up to infinity, because the new input becomes a very big number when $x$ is positive. But for other substitutions the series will converge really fast.
A function with alternating derivatives
Note that if we swap $x$ with $-x$, we will have the function $e^{-x}$.
Its Taylor series will have terms alternating between positive and negative. The $n$-th derivative will be:
$$\frac{d^n}{dx^n}\, e^{-x} = (-1)^n\, e^{-x}$$
As you can see, the sign alternates for every multiple of 2.
If we swap $x$ with $ix$ or $-ix$, with $i$ the imaginary unit, we will have alternating derivatives with every multiple of 4:
$$\frac{d^n}{dx^n}\, e^{ix} = i^n\, e^{ix}, \qquad i^n \in \{1,\, i,\, -1,\, -i\}$$
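If you want to see these cycles explicitly, a short sympy sketch along these lines (assuming sympy is available) reproduces them:

```python
import sympy as sp

x = sp.symbols('x', real=True)

# Derivatives of e^{-x}: the sign flips with a cycle of 2.
print([sp.diff(sp.exp(-x), x, n) for n in range(5)])
# [exp(-x), -exp(-x), exp(-x), -exp(-x), exp(-x)]

# Derivatives of e^{ix}: the prefactor i^n cycles through 1, i, -1, -i (cycle of 4).
print([sp.simplify(sp.diff(sp.exp(sp.I * x), x, n)) for n in range(5)])
# [exp(I*x), I*exp(I*x), -exp(I*x), -I*exp(I*x), exp(I*x)]
```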
Fourier Series
We will now begin to extend our ideas into something known as the "Fourier series". Instead of swapping $x$ with $\frac{1}{x}$ in place and confusing ourselves, we rename the parameter to perform the substitution.
Suppose that we have a well-behaved Taylor series of a function around $0$, $f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(0)}{n!}\, x^n$. Let us apply the swap and see what happens. We will have:
$$f\!\left(\frac{1}{x}\right) = \sum_{n=0}^{\infty} \frac{f^{(n)}(0)}{n!}\, x^{-n}$$
If we use $x = e^{it}$, conveniently we have:
$$f\!\left(e^{-it}\right) = \sum_{n=0}^{\infty} \frac{f^{(n)}(0)}{n!}\, e^{-int}$$
Now, we might be tempted to perform the substitution $n \to -n$. The resulting series would be similar to the direct expansion in $e^{int}$, except that the indices come from the negative integers. The only reason we cannot immediately do that is because $(-n)!$ doesn't make sense.
So let's use the properties of the derivative to look at this Taylor expansion. One nice property of this function is that it is differentiable everywhere. Keeping in mind that the derivatives are taken around the expansion point, we try to simplify the derivative expression; if we used the full chain rule, it would be very long.
However, the $n$-th derivative around the expansion point, divided by $n!$, is exactly the coefficient of that Taylor series. If we directly match the coefficients, each one takes the form $\frac{f^{(n)}(0)}{n!}$.
Now, if we want to make some kind of replacement for the negative factorial by doing the substitution $n \to -n$, then the left-hand side must also make sense.
Some of the terms can still make sense and remain well defined.
But the interesting part is $f^{(-n)}$. What does it mean to take a negative $n$-th derivative? You might guess intuitively that it is an anti-derivative. Suppose that $f^{(-n)}$ is defined when $n = 1$. If $F = f^{(-1)}$ is the anti-derivative of $f$, then $F'$ (the first derivative of $F$) has to be $f$, by definition. So, in order for a function to have an $n$-th derivative with negative $n$, it must have an anti-derivative. You can get the anti-derivative using an integral.
In summary, the notion of this negative factorial can be replaced completely if the function itself can be anti-differentiated indefinitely. In other words, you can integrate the function infinitely many times, and its value at the expansion point remains defined.
Here comes the good part. If a function has alternating derivatives, like $e^{-x}$, then consequently we can also integrate it infinitely. The functions $e^{ix}$ and $e^{-ix}$ are a little bit more special, because both of them have a cycle of 4. This relation is different from $e^{x}$ (cycle of 1) and $e^{-x}$ (cycle of 2).
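A quick sympy sketch can illustrate this claim about repeated integration; the same cycle of 4 shows up in the successive anti-derivatives of $e^{ix}$ (integration constants are taken as zero here):

```python
import sympy as sp

x = sp.symbols('x', real=True)

# Repeated anti-derivatives of e^{ix}: each integration just brings down
# a factor of 1/i = -i, so the cycle of 4 appears again, only in reverse.
f = sp.exp(sp.I * x)
for k in range(4):
    f = sp.integrate(f, x)        # one more anti-derivative
    print(k + 1, sp.simplify(f))

# Compare with e^{x} (cycle of 1): every anti-derivative is e^{x} again.
print(sp.integrate(sp.exp(x), x))
```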
Now back to our previous series. We have justification to rewrite $\frac{f^{(n)}(0)}{n!}$ as $c_n$, just another constant, even when $n$ is a negative number.
We will then have this pair of series:
$$f\!\left(e^{it}\right) = \sum_{n=0}^{\infty} c_n\, e^{int}, \qquad f\!\left(e^{-it}\right) = \sum_{n=-\infty}^{0} c_n\, e^{int}$$
The rearrangement in the second series is possible by swapping $x$ with $\frac{1}{x}$ and then renaming $n$ to $-n$. We get a series with the same term representation, but with the index counting from negative infinity.
Adding those two together, we have a series that spans from $-\infty$ to $+\infty$, a much larger span than the Taylor series!
Now, let's ponder a bit. If the right side converges, then any function that is invariant under taking the reciprocal of its input can be represented like this. In a much more general sense, the coefficients can simply be combined and renamed as $c_n$.
Let's define a new function $f(t)$ based on this series:
$$f(t) = \sum_{n=-\infty}^{\infty} c_n\, e^{int}$$
We will call the right-hand side the Fourier series with complex coefficients. There are other equivalent representations of the Fourier series, but for now let's just use this one.
Example of Fourier Series
There is a very straightforward example of a function that is invariant under taking the reciprocal of its input.
Suppose that $f(x) = \frac{1}{2}\left(x + \frac{1}{x}\right)$, which clearly satisfies $f(x) = f\!\left(\frac{1}{x}\right)$. Now suppose $x = e^{it}$. We have the function
$$f(t) = \frac{e^{it} + e^{-it}}{2} = \cos(t)$$
This is actually the representation of the cosine function in terms of complex exponentials. If we express it using the Fourier series/sum, we have the coefficient set $c_1 = \frac{1}{2}$, $c_{-1} = \frac{1}{2}$, and $c_n = 0$ for any other $n$.
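As a quick check of this coefficient set, a small numpy sketch (the helper name fourier_sum is mine, chosen for illustration) rebuilds the function from its complex coefficients and compares it against $\cos(t)$:

```python
import numpy as np

def fourier_sum(coeffs, t):
    """Evaluate f(t) = sum_n c_n e^{i n t} for a finite dict of coefficients {n: c_n}."""
    t = np.asarray(t, dtype=float)
    return sum(c * np.exp(1j * n * t) for n, c in coeffs.items())

coeffs = {1: 0.5, -1: 0.5}                  # c_1 = c_{-1} = 1/2, all other c_n = 0
t = np.linspace(-np.pi, np.pi, 201)
print(np.allclose(fourier_sum(coeffs, t), np.cos(t)))   # True: this is exactly cos(t)
```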
From this example, we can intuitively understand that not every function can be locally expressed as a Taylor series, because by definition that series doesn't have negative-index terms.
However, a Fourier series has an index that spans from negative infinity to infinity. This makes it possible to express a function as a linear combination of circular complex exponentials.
Fourier Series as linear terms
From the form of the Fourier series (and the Taylor series as well), we can see that a function might be represented as an infinite sum, with each term of the form $c_n e^{int}$.
The terms conveniently use the same index $n$, as if they were components of a vector.
Let's say that we can represent $e^{int}$ as a basis vector. As a notational convenience, let's write it as:
$$e^{int} = \hat{e}_n$$
In this case, the right-hand side is meant to be the "basis" vector of index $n$; here $\hat{e}$ is a basis-vector symbol rather than $e$ as in the exponential function.
Then $c_n$ is also a component of the vector (the function itself, viewed as a vector).
So, it "might" behave like a vector, but with one important detail. Usually a vector has finitely many components, or a finite range of the index $n$. However, if we think of a Fourier series as a vector, then it has infinitely many components $c_n$.
Interestingly, when we look at the term $c_n e^{int}$, there is no real preference for whether we choose $c_n$ as the basis or $e^{int}$ as the basis. If we choose the complex numbers $c_n$ as the basis, then it is a basis that is fixed: it does not evolve over the parameter $t$ (which can conveniently be thought of as time). If we choose the complex numbers $e^{int}$ as the basis, then it is a basis that evolves over the input parameter (time) $t$.
All of the above notation can also be expressed in tensor notation. This is a much more advanced topic and needs more ground to cover, but it is worth mentioning that, if we use the Einstein summation convention, we can write the sum compactly as $c_n \hat{e}^n$ or $c^n \hat{e}_n$.
Inner product within Fourier Series basis
From the analogy with vectors, given terms such as $c_n \hat{e}_n$ or $c_n e^{int}$, it is possible to recover $c_n$ using a dot product (or an inner product, in more abstract terms).
Let's take a look at a simple example with 3D vectors, with basis $\hat{e}_1$, $\hat{e}_2$, $\hat{e}_3$. A vector $v$ can be written as
$$v = v_1 \hat{e}_1 + v_2 \hat{e}_2 + v_3 \hat{e}_3$$
The dot product of $v$ with one of the basis vectors, like $\hat{e}_1$, results in the corresponding component. This is because the dot product of orthonormal basis vectors is $1$ if the indices are the same, and $0$ otherwise. As an example:
$$\hat{e}_1 \cdot \hat{e}_1 = 1, \qquad \hat{e}_1 \cdot \hat{e}_2 = 0$$
So, expanding the dot product of $v$ with $\hat{e}_1$ results in something like this:
$$v \cdot \hat{e}_1 = v_1 (\hat{e}_1 \cdot \hat{e}_1) + v_2 (\hat{e}_2 \cdot \hat{e}_1) + v_3 (\hat{e}_3 \cdot \hat{e}_1) = v_1$$
We get the corresponding component of the basis $\hat{e}_1$, which is $v_1$.
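The same component-extraction idea in a few lines of numpy, for the 3D case:

```python
import numpy as np

e1, e2, e3 = np.eye(3)             # an orthonormal basis of 3D space
v = 2.0 * e1 - 1.0 * e2 + 4.0 * e3

# Dotting v with a basis vector picks out the matching component,
# because e_i . e_j is 1 when i == j and 0 otherwise.
print(np.dot(v, e1), np.dot(v, e2), np.dot(v, e3))    # 2.0 -1.0 4.0
```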
Now we want to know whether the Fourier series has similar properties. We have two problems at the moment. First, a Fourier series has infinitely many basis vectors, as we saw before. Second, the basis evolves over time. Let's tackle these problems one by one.
Assume we look at the position and direction of the basis at a specific time $t$, from the function $f(t)$. The term $c_n e^{int}$ has component $c_n$ and basis $e^{int}$. Suppose we want to recover only the component $c_n$ using a multiplicative rule. Since the basis is a complex number, we must multiply by its reciprocal, $e^{-int}$, so that we get $c_n e^{int} e^{-int} = c_n$.
From this, we get a general rule: every Fourier basis $e^{int}$ has an inverse basis $e^{-int}$ such that their product is $1$.
So, if we want to retrieve the component $c_n$ of $f(t)$, we can start by multiplying the function by $e^{-int}$.
The problem now is that the basis still exists for the rest of the components! Say we target the index $n$; then the basis for another index $m$ becomes $e^{i(m-n)t}$. This is not zero, like it was in the vector case above; it is only shifted.
Now, since the basis evolves over time, we might want to evaluate it over a span of time $T$. We want to see whether there is a specific transform over this time interval/period that causes these shifted basis terms to vanish for every other index.
If we transform the equation using a derivative, the component we just isolated would vanish, because it is a constant term. So, rather than using a derivative/differential, we will try to use an integral transform.
As a proposition, let's say there exists an interval $T$ over which the terms we are trying to eliminate integrate to zero. Let's take a look at the basis term adjacent to $e^{int}$, which is $e^{i(n+1)t}$. The integral, when we are trying to evaluate the component $c_n$ by multiplying with $e^{-int}$, is:
$$\int_T c_{n+1}\, e^{i(n+1)t}\, e^{-int}\, dt = c_{n+1} \int_T e^{it}\, dt$$
We can be sure, from the definition of the Fourier series we established before, that $c_{n+1}$, or in general any $c_m$, is a coefficient that does not depend on the parameter $t$. So the only way for the left-hand side to become $0$ is if the right-hand side uses an interval over which $\int_T e^{it}\, dt = 0$. This interval can be $0$ to $2\pi$, or $-\pi$ to $\pi$. To attain symmetry around $0$, let's use the interval $-\pi$ to $\pi$.
Now note that the integral above evaluates to zero because
$$\int_{-\pi}^{\pi} e^{ikt}\, dt = \frac{e^{ik\pi} - e^{-ik\pi}}{ik} = 0 \quad \text{for any nonzero integer } k,$$
and every possible difference of indices $k = m - n$ will be an integer. It is just really convenient that $e^{ik\pi} = e^{-ik\pi}$ for any integer $k$. Except when $m = n$: then something special happens with the integral.
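A numerical illustration of this orthogonality property (a simple Riemann sum over one full period is enough here):

```python
import numpy as np

N = 200000
t = np.linspace(-np.pi, np.pi, N, endpoint=False)
dt = 2 * np.pi / N

def basis_integral(k):
    """Riemann-sum approximation of the integral of e^{i k t} over [-pi, pi]."""
    return np.sum(np.exp(1j * k * t)) * dt

for k in range(-2, 3):
    print(k, np.round(basis_integral(k), 6))
# every nonzero integer k gives ~0; only k = 0 gives 2*pi ~ 6.283185
```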
For our target component, which is $c_n$, the integral becomes:
$$\int_{-\pi}^{\pi} c_n\, e^{int}\, e^{-int}\, dt = \int_{-\pi}^{\pi} c_n\, dt = 2\pi\, c_n$$
With this integral transform, the result is scaled by $2\pi$. So, if we want the result to be $c_n$ itself, the integral must be normalized by the span/interval $2\pi$.
From this result, we conclude that there exists a specific integral transform with which we can extract the component $c_n$ from the function $f(t)$, just like an inner product. To summarize:
$$\int_{-\pi}^{\pi} f(t)\, e^{-int}\, dt = 2\pi\, c_n$$
With some rearrangement (and some more fundamental justification needed, regarding how we may swap the sum and the integral around), we obtain the kind of transform needed to extract the component/coefficient $c_n$:
$$c_n = \frac{1}{2\pi} \int_{-\pi}^{\pi} f(t)\, e^{-int}\, dt$$
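Before the analytic test below, here is a small numerical sketch of this extraction formula; applied to $\cos(t)$, it recovers the coefficients we expect:

```python
import numpy as np

N = 200000
t = np.linspace(-np.pi, np.pi, N, endpoint=False)
dt = 2 * np.pi / N

def fourier_coefficient(f_values, n):
    """c_n ~ (1 / 2pi) * integral of f(t) e^{-i n t} over [-pi, pi], via a Riemann sum."""
    return np.sum(f_values * np.exp(-1j * n * t)) * dt / (2 * np.pi)

f = np.cos(t)
for n in range(-2, 3):
    print(n, np.round(fourier_coefficient(f, n), 6))
# c_1 and c_{-1} come out as ~0.5, every other coefficient as ~0
```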
As a test, let's use it against the function $\cos(t)$, which we happen to know has components only for $n = 1$ and $n = -1$.
First, we must solve the integral. If we use integration by parts, the parts will cycle, so we cannot easily represent it in closed form that way. We will slightly modify the integral by introducing a continuous parameter in place of the integer $n$.
If we evaluate the resulting expression by substituting the integer value back in, we get the indeterminate form $0/0$. We will use L'Hôpital's rule: take the derivative of the numerator and the denominator with respect to the parameter, then substitute again.
Plugging this back into the previous expression, the coefficient comes out as $\frac{1}{2}$.
The result is the same as what we had in the previous example.
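As a cross-check of this computation, a short sympy sketch reproduces the limit step; the closed form of the integral used here is my own restatement for a real parameter $s$ in place of the integer $n$:

```python
import sympy as sp

s = sp.symbols('s', real=True)

# Closed form of the integral of cos(t) e^{-i s t} over [-pi, pi] for non-integer s;
# the imaginary part vanishes by symmetry, leaving only this real expression.
integral = sp.sin((s - 1) * sp.pi) / (s - 1) + sp.sin((s + 1) * sp.pi) / (s + 1)

# Substituting s = 1 directly gives 0/0 in the first term, so take the limit
# instead (this is exactly where L'Hopital's rule enters in the text above).
c1 = sp.limit(integral, s, 1) / (2 * sp.pi)
print(c1)   # 1/2
```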
Recap
This concludes the first part of our discussion of the Fourier series. Stepping up from the ideas of the Taylor series/expansion, we wanted to find a similar series by substituting $x \to \frac{1}{x}$. Although it is not (yet) guaranteed that such a substitution preserves the function, we rediscovered the concept of the Fourier series.
For now, the construction is rather bottom-up. By combining multiple periodic terms built on the basis $e^{int}$, we can represent them as a single function with a nice property.
The construction seems to go in the opposite direction from the Taylor series.
For a Taylor series/expansion to work, we evaluate a function locally at a point, and then take the coefficients from its successive derivatives at that point.
The Fourier series, on the other hand, works the other way around. Given a periodic function over the span $-\pi$ to $\pi$, if we integrate it globally over $t$ within that interval (once for each index $n$), we can get the coefficients needed to construct the series.