Fourier Series: Part 6: Transforming Equations Using the Fourier Transform
I saw a funny math joke posted on Twitter:
today’s fun math fact pic.twitter.com/aV0XTofVhx
— guille (@GuilleAngeris), April 29, 2024
Basically, given the equation
$$ \frac{df(x)}{dx} = f(x), $$
find the function $f$ that solves it.
Of course, since we already know this is the very definition of the exponential function, we already have the solution. But the point of the joke is that it uses invalid integral operations and algebra, and still gets the correct result.
If we think about it backwards, how can this happen? Why do we still get the correct answer?
The key in the post is that it treats the integral as a variable, and then applies algebra to it (which can be argued to be an invalid operation. LMAO). This is actually what Operator Theory is all about.
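For reference, here is a sketch of how that kind of manipulation goes (my reconstruction of the spirit of the joke, not necessarily the exact steps in the tweet). Write the problem with its initial condition in integral form, $f(x) = 1 + \int_0^x f(t)\,dt$, and then treat the integral sign as an ordinary symbol:
$$ \left(1 - {\textstyle\int}\right) f = 1 \;\Longrightarrow\; f = \frac{1}{1 - \int}\, 1 = \left(1 + {\textstyle\int} + {\textstyle\int}^2 + \cdots\right) 1 = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots = e^x $$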
Operator Theory… for dummies
I guess I don't have that many credentials to talk about the formalism. But what I perceive operator theory to be is the study of objects called "operators" acting on "function" spaces.
Compare this with the usual approach to algebra that we are used to seeing. A variable is something that can be replaced with "numbers". The idea of a number is then extended from the integers to the complex numbers. These variables can be combined with "algebraic operations", such as addition, subtraction, multiplication, etc.
At some point, people developed compositions of numbers, such as vectors or matrices. These objects can be "named" or "referred to" using variables. Because they can be referred to by "variables", people naturally thought that the usual rules of algebra could still be applied to them.
Now, operator theory (in my humble opinion) is kind of similar. We use variables to refer to or represent "functions", instead of the actual numbers themselves. So the $f$ in the equation above can be thought of as a function itself, or as a variable that can be replaced by a "function" from a "function space".
I mean, we know that $\frac{df}{dx} = f$ still holds whether $f(x) = e^x$ or, say, $f(x) = 2e^x$. So there can be more than one function that satisfies the above criteria.
Transforming operators and changing spaces and domains
A derivative can be thought of as an operator, because it is actually a composition of algebra/calculus operations (subtraction, division, and limits). Suppose that the variable $f$ above was meant to be a function, because the derivative operator works on a function instead of on a particular number. However, notice that the equation does not contain any "parameters". By parameters, I mean the inputs of the function itself.
As an example, a function written with an explicit argument, such as $f(x)$, has $x$ as the parameter of the function $f$. Both $f$ and $x$ are variables, but $f$ represents a function in a function space (it can be replaced by functions), while $x$ represents a number in a number space (it can be replaced by numbers, such as 1, 2, 3, etc.).
Meanwhile, the equation $\frac{df}{dx} = f$ is an equation with free parameters. In fact, it doesn't even say how many parameters the function $f$ has. If it is written as $\frac{df(x)}{dx} = f(x)$, then we can be sure that there is only one parameter, $x$.
Now, in my previous articles, we already discussed the Fourier Transform. Note that we covered the following concepts:
- The Fourier Transform changes a variable in both its function representation and the domain of its parameters
- The derivative operator can be "transformed" by the Fourier Transform into a multiplication (written out explicitly below)
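As a reminder, with one common (non-unitary) convention for the transform pair, and assuming $f$ decays well enough, these two facts read:
$$ \hat f(\omega) = \int_{-\infty}^{\infty} f(x)\, e^{-i\omega x}\, dx, \qquad \widehat{\frac{df}{dx}}(\omega) = i\omega\, \hat f(\omega) $$
(The exact constants depend on the convention chosen in the earlier parts; only the "derivative becomes multiplication by $i\omega$" structure matters here.)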
Linking these two concepts drastically changes our perspective on how to solve problems or equations.
Previously we only applied the FT to a function or a variable. What if we transform the whole equation? What will happen?
Suppose that, for an equation to be "transformed", you need to transform the left and the right side the same way.
The function $f$ then becomes its dual function $\hat f$.
The function $\frac{df}{dx}$ then becomes its dual function $\widehat{\frac{df}{dx}}$.
However, using the property of the Fourier Transform, for any applicable function $f$ it follows that: $\widehat{\frac{df}{dx}}(\omega) = i\omega\,\hat f(\omega)$.
So we have two equations now. The original equation:
$$ \frac{df(x)}{dx} = f(x) $$
… And the "transformed" equation (we replace $\hat f$ with a plain $f$ for my convenience):
$$ i\omega\, f(\omega) = f(\omega) $$
Both refer to the same thing; they are just two different forms. This means that solving one also means solving the other. If we find the solution of the second equation, we can get the solution of the first equation by applying the Inverse Transform! It is such a powerful concept.
Now, the important point to note is that our second equation has a very similar form to the joke post above. It's just that the integral sign is replaced/represented by $\frac{1}{i\omega}$ (and the derivative by $i\omega$). This is not just a coincidence; it is an inevitable property. Our derivative, which previously "acted" as an operator, is now a number. These are two different kinds of objects, but now we can "operate" on the operator as if it were just a number!
It means algebra can be used here. Note that this time the equation contains both the "functional" $f$ and the parameter $\omega$ of the function. So $f(\omega)$ represents any kind of function that fits the above equation and parameters.
Since algebra works now:
$$ i\omega\, f(\omega) - f(\omega) = 0 \quad\Longrightarrow\quad (i\omega - 1)\, f(\omega) = 0 $$
Notice that this is a kind of algebra that works on "functionals" $f(\omega)$, which means the symbol $0$ on the right-hand side is the function that always maps to $0$, given any parameter $\omega$. In order not to confuse it with zero as a number, let's give it a variable name as well; call it $Z(\omega)$, so $(i\omega - 1)\, f(\omega) = Z(\omega)$. Our solution then becomes:
$$ f(\omega) = \frac{Z(\omega)}{i\omega - 1} $$
We now have the solution for $f(\omega)$.
From here, there are several ways to “solve” the issue.
Suppose that we start from $(i\omega - 1)\, f(\omega) = 0$. Using the factorization rule, there are only two possible outcomes.
The first one is simply $i\omega - 1 = 0$, that is, $\omega = -i$. It means that whatever form the function $f$ takes, the equation is always true if the value of $\omega$ is fixed to this single value.
From there, we construct the function using a Fourier series:
$$ f(x) = \sum_k c_k\, e^{i\omega_k x} = C\, e^{i(-i)x} = C\, e^{x} $$
The above solution was particularly straightforward to construct, since the sum only has one term, at $\omega = -i$. We also know that $f(\omega)$ has to be in the form of a delta function, because it needs to be zero everywhere except at $\omega = -i$; this is to accommodate the previous factorization rule. Lastly, since the value of $f(\omega)$ at that specific $\omega$ is just a constant, we can replace it with the constant $C$.
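(As a reminder of the standard fact being stretched here: for a real frequency $\omega_0$ and the convention above, $\mathcal{F}^{-1}[\delta(\omega - \omega_0)](x) = \frac{1}{2\pi}\, e^{i\omega_0 x}$. Pushing this to the complex value $\omega_0 = -i$ is exactly the kind of "invalid but it works" move the joke is built on.)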
We can see the difference here. When we are solving the equation $(i\omega - 1)\, f(\omega) = 0$, we treat $f(\omega)$ as a function; it is not fixed to a specific number. However, if we treat $f(\omega)$ as a number, then it immediately has the value $0$ (the trivial second outcome of the factorization).
Now, let's try to solve this using the equation $f(\omega) = \frac{Z(\omega)}{i\omega - 1}$.
In that equation, we already have a closed-form solution for $f(\omega)$. But notice that this is a "functional", and $Z(\omega)$ is a function as well.
Now, here's the funny part. Suppose we expand it as a geometric series (just like the Twitter post above):
$$ \frac{1}{i\omega - 1} = -\frac{1}{1 - i\omega} = -\sum_{n=0}^{\infty} (i\omega)^n $$
Then the expression becomes:
$$ f(\omega) = -\sum_{n=0}^{\infty} (i\omega)^n\, Z(\omega) $$
The functional $Z(\omega)$ needs to be an expression that is zero everywhere except at $\omega = -i$. An immediate idea is $Z(\omega) = A\,\delta(\omega + i)$, with $A$ being a constant. Then, for each term of index $n$, we have this expression:
$$ (i\omega)^n\, A\,\delta(\omega + i) $$
If we take the inverse Fourier Transform on both sides, notice that each factor of $i\omega$ becomes a derivative $\frac{d}{dx}$. To show you the trick: suppose that $\hat g(\omega) = i\omega\, \hat h(\omega)$ for some function $h$. Then $g(x) = \frac{dh(x)}{dx}$.
For $n = 1$, applying the IFT on both sides gives the derivative of the inverse transform of $A\,\delta(\omega + i)$, that is, $\frac{d}{dx}\!\left(\frac{A}{2\pi}\, e^{x}\right) = \frac{A}{2\pi}\, e^{x}$.
For the next $n$, you can already guess what happens: every term returns the same $\frac{A}{2\pi}\, e^{x}$. If we summarize:
$$ f(x) = -\sum_{n=0}^{\infty} \frac{A}{2\pi}\, e^{x} = C\, e^{x} $$
Funnily enough, it is circular, because we use the fact that the derivative of $e^x$ is also $e^x$, even though that was actually the solution we wanted to find here. Hahaha. Lastly, absorbing the summation into an arbitrary constant is possible because the power-series manipulation already implied that the sum has to converge to a bounded value.
Several examples of equations that are explained clearly by the Fourier Transform
In physics, one of the fundamental principles is that "a physical law must be true in all inertial reference frames". In the context of function transforms, it basically means that once you have a physical law, the same law applies whether or not the equation is transformed. This has a dramatic impact, because some equations are harder to solve as they are, but easier to solve in the transformed domain. We can then get the original solution by transforming back.
To illustrate the idea, let's look at several examples.
Harmonic Oscillator
You may already know the equation of motion for a simple harmonic oscillator with displacement $x(t)$ and time parameter $t$:
$$ m\,\frac{d^2 x(t)}{dt^2} = -k\, x(t) $$
In most high school textbooks, it is shown using some elaborate integration technique that the solution of this differential equation has the form $x(t) = A\cos(\omega_0 t + \phi)$.
The Fourier transform can explain why the solution has to be this way.
The first assumption to note is that the equation is an equation of motion: the displacement $x(t)$ has to be bounded. This justifies doing a Fourier Transform.
The second question is how we do the transform. Since the function has only one parameter, the obvious choice is to transform $x$ over the variable $t$. Let's say the dual is $\hat x(\omega)$, meaning $\hat x(\omega) = \int_{-\infty}^{\infty} x(t)\, e^{-i\omega t}\, dt$.
Applying the Fourier Transform to both sides (each time derivative becomes a factor of $i\omega$):
$$ -m\,\omega^2\, \hat x(\omega) = -k\, \hat x(\omega) \quad\Longrightarrow\quad \left(m\,\omega^2 - k\right) \hat x(\omega) = 0 $$
Similar to what we did previously, we notice that there are two values, $\omega = \pm\omega_0$ with $m\,\omega_0^2 = k$, that make the equation true for every function $\hat x$. If we use this to construct the Fourier series, we will have:
$$ x(t) = A\, e^{i\omega_0 t} + B\, e^{-i\omega_0 t} $$
With a little bit more algebra and the initial conditions, we can further simplify the solution above into the more familiar form $x(t) = C\cos(\omega_0 t + \phi)$ or $C\sin(\omega_0 t + \phi)$. From there we get the relation:
$$ \omega_0 = \sqrt{\frac{k}{m}} $$
There is a beautiful insight from this equation. The object oscillates because of the purely mathematical fact that the position function has two modes of frequency in the Fourier domain. The Fourier domain is an abstract concept, but here it corresponds to an actual physical behaviour: the oscillating frequency.
Another interesting thing to note is that the expression for $x(t)$ uses complex exponentials, but the number being evaluated for any $t$ is always a real number. This is made possible because analytic functions are continuous in the complex plane and the coefficients used here make sure that $x(t)$ is always real. You can use Euler's relation to derive this relationship.
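For instance, a quick check under the assumption that the two coefficients are complex conjugates: writing $A = \frac{C}{2} e^{i\phi}$ and $B = \bar{A}$,
$$ A\, e^{i\omega_0 t} + \bar{A}\, e^{-i\omega_0 t} = \frac{C}{2}\left(e^{i(\omega_0 t + \phi)} + e^{-i(\omega_0 t + \phi)}\right) = C\cos(\omega_0 t + \phi), $$
which is real for every $t$.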
Wave Equation
Another equation, probably one of the most important things in Quantum Mechanics, is the wave equation.
As a simple illustration, consider the classical wave equation in one-dimensional space. The equation has the form:
$$ \frac{\partial^2 u(x,t)}{\partial t^2} = c^2\,\frac{\partial^2 u(x,t)}{\partial x^2} $$
This time, we have a quantity $u$ with two parameters: $x$ (in space) and $t$ (in time). Because we have two parameters, we need to decide how we are going to do the transform. Suppose we have a dual $\hat u(\omega_x, t)$ that is the Fourier transform of $u$ over the variable $x$. That is: $\hat u(\omega_x, t) = \int_{-\infty}^{\infty} u(x,t)\, e^{-i\omega_x x}\, dx$. The transformed equation becomes:
$$ \frac{\partial^2 \hat u(\omega_x, t)}{\partial t^2} = -c^2\,\omega_x^2\,\hat u(\omega_x, t) $$
The last equation above is just the harmonic oscillator equation again! We can just use the same solution again, or use another Fourier transform, this time over the variable $t$ into the variable $\omega_t$.
We basically have four possible frequencies: two from $\pm\omega_x$ and two from $\pm\omega_t$, related by $\omega_t^2 = c^2\,\omega_x^2$. Since the difference is only in the signs inside the exponentials, we can write the general solution like this:
$$ u(x,t) = \sum_{s_1, s_2 = \pm 1} A_{s_1 s_2}\, e^{i\left(s_1\,\omega_x x + s_2\,\omega_t t\right)}, \qquad \omega_t = c\,\omega_x $$
Conventionally, in the terminology of waves, we often use $f$ for the frequency of the oscillation (the dual of the time $t$) and $\lambda$ for the wavelength (the dual of the space $x$). Thus $c$ is essentially the propagation speed, because $c = f\,\lambda$. In other common notation, we have $\omega = 2\pi f$ (the angular frequency) and $k = 2\pi/\lambda$ (the wave number).
With this, the solution above can be rearranged (omitting the amplitude constants) into terms of the form $e^{i(kx - \omega t)}$ and $e^{i(kx + \omega t)}$, that is, waves travelling to the right and to the left.
The interesting insight we get from here is that the solution is a linear combination of bases in each of the parameters. This means a travelling wave can be thought of as a sum of standing oscillations in both the space axis and the time axis. This is most commonly referred to as the wave superposition principle.
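To make that concrete, here is the standard trigonometric identity behind it (a general fact about cosines, not anything specific to this derivation):
$$ \cos(kx - \omega t) = \cos(kx)\cos(\omega t) + \sin(kx)\sin(\omega t), $$
so a single travelling mode is literally a sum of products of a purely spatial oscillation and a purely temporal one.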
Heat Equation
Historically, it is said that Joseph Fourier invented the Fourier Transform to solve the Heat Equation.
The form of the heat equation resembles the wave equation quite a lot, except that the time derivative has order 1 (only the first partial derivative). With $\alpha$ the diffusion constant:
$$ \frac{\partial u(x,t)}{\partial t} = \alpha\,\frac{\partial^2 u(x,t)}{\partial x^2} $$
Do the transform over $x$ and $t$ again, and we will have:
$$ i\omega_t\,\hat u(\omega_x, \omega_t) = -\alpha\,\omega_x^2\,\hat u(\omega_x, \omega_t) $$
This time, there is only one corresponding frequency, $\omega_t = i\,\alpha\,\omega_x^2$, but still two spatial frequencies $\pm\omega_x$.
The general solution takes the form:
$$ u(x,t) = \sum_{\omega_x} A_{\omega_x}\, e^{i\omega_x x}\, e^{-\alpha\,\omega_x^2\, t} $$
This doesn't look like the usual solution of the Heat Equation, but we can show that these are equivalent forms by providing the initial/boundary conditions.
Suppose that the object we want to observe is a one-dimensional bar (a rod) of length $L$. That means at $t = 0$, we have to define the initial temperature at every point of the bar. We call this $u(x, 0) = f(x)$.
For simplicity, of course, we could also set $f(x)$ to be a constant value, which would mean the bar is at thermal equilibrium at time $t = 0$. But let's just say we have an arbitrary function $f(x)$. Using the general solution above, we have:
$$ f(x) = u(x, 0) = \sum_{\omega_x} A_{\omega_x}\, e^{i\omega_x x} $$
If you look at the expression above, it is essentially a Fourier sum over specific frequencies only. For any possible function $f(x)$, we can rewrite it using the definition of the Fourier transform.
However, in practice, when solving the equation for a real object like a bar or a plate, we don't use the integral form out to infinity, simply because the rod itself has finite length, so $f(x)$ is not defined by measurement beyond the length $L$.
This is why, when solving the Heat Equation, we usually use a Fourier series (discrete frequencies, periodic function $f$):
$$ u(x,t) = \sum_{n} A_n\, e^{i k_n x}\, e^{-\alpha\, k_n^2\, t} $$
We can put further constraints on the solution using more initial value/boundary conditions. For example, if both ends of the bar are insulated, heat can't flow through them, so the heat flux (proportional to the spatial derivative of the temperature) vanishes at those ends, namely $\frac{\partial u}{\partial x}(0, t) = 0$ and $\frac{\partial u}{\partial x}(L, t) = 0$. From here, we recover the discrete frequencies $k_n = \frac{n\pi}{L}$ (with cosine modes).
Once we have these specific $k_n$ (and the corresponding coefficients $A_n$), the entire solution with respect to time can be rewritten as:
$$ u(x,t) = \sum_{n=0}^{\infty} A_n\,\cos\!\left(\frac{n\pi x}{L}\right) e^{-\alpha\left(\frac{n\pi}{L}\right)^2 t} $$
This has much more resemblance to the better-known heat equation solution.
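For completeness, the coefficients themselves come from projecting the initial profile onto the cosine basis, which is the standard Fourier-series formula (assuming the insulated-ends setup above):
$$ A_0 = \frac{1}{L}\int_0^L f(x)\, dx, \qquad A_n = \frac{2}{L}\int_0^L f(x)\cos\!\left(\frac{n\pi x}{L}\right) dx \quad (n \ge 1). $$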
One interesting aspect of the solution is that the heat equation solution looks like a special case of the wave equation solution. For a long time, physicists have wondered whether the fact that two different phenomena (waves and heat transfer) can be modeled with the same kind of mathematical description hints at the same physical origin.
In other words, does heat/temperature propagate as waves because individual atoms transfer energy through vibration (harmonic oscillators)?
The answer involves a distinct and subtle difference between the two. In the wave equation, the Laplacian operator (the operator that acts over all the spatial directions) is proportional to the second-order derivative over time. Since the second derivative in time refers to acceleration, this has the subtle effect that the speed of propagation is always constant in the case of the wave.
In the heat equation, the Laplacian is proportional to the first derivative over time. That means there is no speed limit on how heat propagates; it may even be instantaneous.
To see what I mean, notice that for both the heat and the wave equation, the argument of the exponent for the spatial frequency is an imaginary number. However, in the case of the heat equation, the time part ends up with a real (negative) argument instead of an imaginary one. This subtle difference causes the time component of the solution to decay instead of oscillating. In other words, in the heat equation only the spatial component oscillates; the time component does not.
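Concretely, plugging the heat equation's "frequency" $\omega_t = i\,\alpha\,\omega_x^2$ into the usual oscillating factor turns it into a decay:
$$ e^{i\omega_t t} = e^{i\,(i\alpha\omega_x^2)\,t} = e^{-\alpha\,\omega_x^2\, t}, $$
while the wave equation's real $\omega_t = \pm c\,\omega_x$ keeps $e^{i\omega_t t}$ oscillating forever.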
This has the causal effect that the heat equation makes the temperature even out, or average, over time, since it has an asymptote. You can't predict the past/initial condition from a future state of the heat equation. But you can, to some extent, do that with the wave equation, assuming there is no discontinuous jump in the propagation.
If we think about it in terms of information exchange, wave propagation is about as ideal as you can get for transferring information around, while the heat equation describes a theoretical limit on how information becomes noise over time.
Speed of a rocket
The Tsiolkovsky rocket equation (ideal rocket propulsion) is an excellent example of how to use Fourier analysis as an "overkill" way of solving another differential equation. I say overkill because usually one can just use separation of variables to find the solution. But we will use the Fourier transform, and then be surprised that the solution does not oscillate!
The common expression of this ideal rocket equation is like this (here $v$ is the rocket's speed, $m$ its total mass, and $v_e$ the effective exhaust velocity):
$$ m\,\frac{dv}{dt} = -v_e\,\frac{dm}{dt} $$
Notice that the time parameter $t$ can be completely eliminated from the equation, so this equation actually doesn't care about the time parameter:
$$ \frac{dv}{dm} = -\frac{v_e}{m} $$
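For reference, here is the separation-of-variables route that the Fourier approach has to reproduce (a standard calculation, with $v_0$ and $m_0$ the initial speed and mass):
$$ dv = -v_e\,\frac{dm}{m} \quad\Longrightarrow\quad v(m) = v_0 + v_e\,\ln\frac{m_0}{m} $$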
As we can see, the parameter is now $m$, so if we do a Fourier transform, the corresponding dual parameter is the frequency conjugate to $m$ (call it $\omega_m$).
It eventually yields the same solution, $v(m) = v_e\ln\frac{m_0}{m} + v_0$. In the last step we just fix the constant $C$ using the initial condition $v(m_0) = v_0$.
This derivation relies on knowing the Fourier transform of $\frac{1}{m}$ and the corresponding inverse transform. That relationship can be obtained by treating $\frac{1}{m}$ as a tempered distribution, which is out of scope for now.
However, since the solution has to be the same, we can also work the other way around: for example, finding the Fourier transform of $\ln m$ by using this exact rocket equation.
Fourier Transform and separation of variables
If you noticed, in both the Wave Equation and the Heat Equation, the solutions we described have an interesting property. They are separable.
By separable, I mean that both of the solutions $u(x,t)$ have two parameters, $x$ (spatial) and $t$ (temporal), and the general solution is always built from terms of the form $X(x)\,T(t)$, where $X$ and $T$ are functions that each depend on only one parameter. Basically, the parameters don't mix.
I find this idea pretty interesting. In the beginning, physicists used this approach as an "assumption", or hypothesis, in order to find special solutions.
The step-by-step approach mostly involves imagining that a solution exists, and asking: what if the solution is nice enough that it can be separated as a product of two functions $X(x)$ and $T(t)$? Because if it is, then we will have an easier time solving the partial differential equation, since the order of the operators can be swapped and each operator effectively acts on one variable at a time (the parameter that we are not currently operating on is treated as a constant).
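For example, plugging the guess $u(x,t) = X(x)\,T(t)$ into the wave equation splits it into two ordinary differential equations (a standard textbook step):
$$ X(x)\,T''(t) = c^2\, X''(x)\,T(t) \quad\Longrightarrow\quad \frac{T''(t)}{c^2\,T(t)} = \frac{X''(x)}{X(x)} = -k^2, $$
where the middle quantity must be a constant (written $-k^2$) because the left side depends only on $t$ and the right side only on $x$.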
So this is kind of a hit-and-miss guess. If it solves the equation, then it's good. If not, tough luck.
However, Fourier analysis offers a much more in-depth explanation of why the parameters have to separate.
It turns out that if an equation becomes "nicely" linear after being transformed with the Fourier transform, then the solution has to be separable, because in the Fourier domain each parameter can be seen as coming with its own orthonormal basis of exponentials in its dual variable. So if all the parameters are orthogonal in this sense, then the Inverse Fourier Transform pretty much guarantees that the original solution is separable (a product of independent functions).
In the Fourier domain, all frequencies commute. It doesn't matter whether a frequency came from a temporal parameter or a spatial parameter: if they combine as a product, then each one acts as a basis. Sometimes this makes us confused about what "frequency" really means, but it helps to simply think of "frequencies" as the parameters of the Fourier domain.
So, to rephrase: if the parameters in the Fourier domain (the duals of the original parameters) combine as products, then the original solution is separable.
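A minimal way to see this claim, assuming the two-dimensional transform pair used above: if the spectrum factorizes as $\hat u(\omega_x, \omega_t) = A(\omega_x)\,B(\omega_t)$, then
$$ u(x,t) = \frac{1}{(2\pi)^2}\iint A(\omega_x)\,B(\omega_t)\, e^{i\omega_x x}\, e^{i\omega_t t}\, d\omega_x\, d\omega_t = a(x)\, b(t), $$
where $a$ and $b$ are the individual inverse transforms of $A$ and $B$. A factorized spectrum gives a separable solution.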
To conclude this article, we have covered multiple different concepts and perspectives:
- Operator theory
- Fourier analysis
- Separation of variables
- Spectral theorem
All of these turn out to be the same thing, or at least equivalent. Solving a problem using one of the methods is equivalent to solving it in the corresponding domain of knowledge. This is a mathematically powerful tool, because it allows us to convert an existing class of problems into a different class and solve it from a different perspective that is potentially easier than solving it directly in the original domain of knowledge.
It is also why Quantum Mechanics is heavily influenced by both linear algebra (the use of matrices) and wave analysis (the use of wave operators). It turns out that both are essentially the same thing, even though the two approaches were developed by two different people at around the same time.