From step (1) to (2), we use integration by parts.
From step (2) to (3), we rely on the assumption that if g(t) is periodic, then the boundary term vanishes.
If it is not periodic, we instead assume that the function itself vanishes at infinity: lim_{t→∞} g(t) = 0.
Finally, we obtain result (4).
So, interestingly, the FT of a derivative with respect to t is the same as the FT of the original function times iτf, with f being the dual variable of t (its frequency-domain counterpart).
In other words, the FT turns differentiation into multiplication in the frequency domain.
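We can sanity-check the derivative rule numerically. The Python sketch below is my own illustrative choice (a sampled periodic signal and NumPy's FFT, which uses the same e^{−iτft} convention): differentiating the signal matches multiplying its transform by iτf and inverting.

```python
import numpy as np

# Spectral derivative sketch: multiply the FFT by i*tau*f (tau = 2*pi),
# then invert, and compare against the analytic derivative.
N = 256
t = np.arange(N) / N                     # one period, sampled on [0, 1)
g = np.sin(2 * np.pi * t)                # g(t) = sin(tau * t)
f = np.fft.fftfreq(N, d=1 / N)           # frequency bins, dual to t
G = np.fft.fft(g)
dg = np.fft.ifft(1j * 2 * np.pi * f * G).real   # multiply by i*tau*f, invert
expected = 2 * np.pi * np.cos(2 * np.pi * t)    # analytic derivative
assert np.allclose(dg, expected)
```

The two agree to numerical precision, which is exactly the statement that differentiation in the time domain becomes multiplication by iτf in the frequency domain.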
The other interesting aspect is that you will get the same result even if you derive it using the inverse FT.
Steps (1) and (2) use the definition of the inverse Fourier Transform.
Step (3) to (4) assumes the integrals converge; then, by linearity, we can swap the order of integration
and differentiation, because the derivative is taken with respect to a different variable.
In step (4), we take the derivative of e^{iτft} because it is the only term that contains the variable t.
Step (5) uses the definition of the inverse Fourier Transform, but backwards. The term in the bracket must be the Fourier Transform of g′(t).
The last row follows directly if we differentiate g(t) up to n times, each differentiation contributing another factor of iτf.
Fourier Transform of a constant function
The next interesting thing to observe, after figuring out derivatives, is how to model the Fourier Transform of a constant function.
We can take two different approaches.
Approach A: use the fact that a constant function can be thought of as a periodic function with a finite, smallest non-zero period T.
Then, we can use the Fourier Series approximation to reconstruct the original function.
Approach B: use the fact that a constant function can also be thought of as a non-periodic function with an infinite period T.
Then, we can use the Inverse Fourier Transform integral to reconstruct the original function.
Constant function, interpreted from periodic function
A constant function is tricky. It extends to both ∞ and −∞, so the integral (the area under the curve) is definitely ∞ as well.
In terms of periodicity, we can think of a constant function as a function that has an "arbitrarily small" fundamental period.
We can set the period as small as we like, and the function can still be considered periodic, since the output is constant.
This makes us wonder. Suppose that there exists a smallest possible (but still non-zero) interval that can't be divided further.
Can we use this as the fundamental period T?
Let's challenge this idea. But first, let us use a constant h instead of T to represent this smallest interval. Also, remember, it can't be zero.
In this interval, the value of g(t) is decidedly 1 (a constant).
We are going to use the periodic approximation from article 2-T3.
From step (1) to (2), we evaluate the value of g(t)=1 in this interval.
From step (2) to (3), notice that for a constant value, we can shift the interval so that t is the center.
The range is shifted to t ± h/2 because the length of the interval needs to remain h.
From step (7) to (8), we use the identity sin(ϕ) = (e^{iϕ} − e^{−iϕ}) / (2i).
From step (8) to (9), we use the definition of the sine cardinal function: sinc(ϕ) = sin(ϕ)/ϕ.
Looking at the last expression, we know that the whole span of the domain t is approximately Nh.
This is because the area under the curve has to match (meaning the frequency f = 0).
For a constant function g(t), we can pick any t and it should behave the same way as t = 0, the center of the function.
So, evaluating at t = 0 gives e^{−iτft} = 1. We are then left with the final expression:
G(f, N, h) = Nh · sinc(τf·h/2)
But it's not over yet. Ideally, the function should be independent of any variable other than f. So, we need to figure out how
to eliminate N and h. We will use the Fourier Series form to do that, taken from article 2-T5.
Notice that, since g_h(t) should be a constant function, its derivative with respect to t is zero.
However, this can only mean that the term inside the sum (the right-hand side) evaluates to zero.
In step (2) to (3) of the derivation above, you will notice that the factor iτf cancels with the derivative of e^{iτft}.
This allows a very neat equation under the summation signs. Also, a peculiar one at that.
Suppose that we want to check the equality before integration; then for any t and any f, it seems that:
1 = e^{iτfh}
This would contradict our assumption in a strange way. The consequence: one of the following needs to be true:
1. h is actually dependent on the value of f that is currently being evaluated in the Fourier Transform.
2. h can be the smallest non-zero value possible (it doesn't matter what it is), but fh needs to be related to an integer n such that fh = n,
so that e^{iτn} = e^{i2πn} = (e^{i2π})^n = 1.
The condition in point 1 would be covered automatically if we were dealing with signal scaling.
But that is not the case for a constant function, since stretching the function horizontally has no effect.
So, we have to accept point 2.
But if we accept point 2, then sin(τf·h/2) = sin(nπ), which is zero for all n.
Then the function G(f, N, h) is zero for all n, which means the sum is also zero.
There is one weird catch, though. When f = 0, G(f, N, h) = Nh, as we concluded before (because sinc(0) = 1).
Concluding what we have found so far, we know that G(f, N, h) is one peculiar "function" that is zero everywhere, except when f = 0.
Now, we need to figure out the value of Nh and check whether it has a limit of some sort.
From the Fourier Series representation, taking t = 0 (the center), we evaluate the value of g_h(t = 0), which must be 1.
We also substitute f = n/h.
The last sum is easy to solve. Since we already decided that n is an integer, sinc(nπ) is zero everywhere, except when n = 0, where sinc(0) = 1.
It immediately follows that g_h(t) = h = 1.
Whaaaat? So, in fact h=1 is the smallest interval that we can use.
Our G(f, N, h) = G(f, N) = N·sinc(πf).
To make it consistent with our convention that an integer frequency should be named k,
we use G(k, N) = N·sinc(kπ).
In our particular case here, N really depends on the choice of the span of the domain t.
So if t ranges from −∞ to ∞, then N → ∞, and the series has a very tall spike at k = 0 but is zero everywhere else.
G(k, N) = N·sinc(kπ)    (P8)
Constant function, interpreted from non-periodic function
As we have discussed before, we can also think of a constant function as a non-periodic function with an infinite period.
This line of thinking makes the concept easier to generalize, but it is a much more challenging idea to construct.
Our starting line is the same. Take a segment of a constant function g(t) with span T. Its Fourier Transform is a function in the frequency domain that can represent the function, up to this span T. This step is similar to Approach A, where we thought of periodic slices with interval h.
However, this time, instead of multiplying by N (the count of the intervals),
we enlarge the slice by taking the limit of T to infinity.
This is analogous to stretching the function horizontally in both the positive and negative directions.
Our G(f) is a representation of the interval T, now. We are going to use
the Fourier Transform definition in article 2-T1.
Step (1) to (2) is just a notational convenience. G_T(f) means G(f) before the limit T → ∞ has been taken.
This way, the end result G_T(f) is a function that keeps the form of the expression whose limit hasn't been taken yet.
Let's evaluate it for a moment. From this form, it is clear that when f = 0, the function G_T(0) becomes really huge, approaching ∞. This is because sinc(0) = 1 and the factor 2T approaches a really huge number.
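Before going further, we can check the closed form of this finite-span transform numerically. In the Python sketch below (T, f, and the midpoint grid are my own arbitrary test choices), the integral of e^{−iτft} over [−T, T] is compared against 2T·sinc(τfT):

```python
import numpy as np

# Numerically integrate e^{-i*tau*f*t} over [-T, T] with a midpoint rule
# and compare with the closed form 2T * sinc(tau*f*T). T, f are test values.
T, f = 5.0, 0.17
M = 200_000
t = -T + (np.arange(M) + 0.5) * (2 * T / M)       # midpoint grid on [-T, T]
integral = np.sum(np.exp(-2j * np.pi * f * t)) * (2 * T / M)
closed_form = 2 * T * np.sinc(2 * f * T)          # np.sinc(x) = sin(pi x)/(pi x)
assert np.isclose(integral.real, closed_form, atol=1e-6)
assert abs(integral.imag) < 1e-8                  # imaginary part cancels by symmetry
```

Note that NumPy's `np.sinc(x)` is the normalized sin(πx)/(πx), so sinc(τfT) in the article's notation is `np.sinc(2*f*T)` here.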
What we are not sure about yet is how this function behaves for continuous f.
Suppose that 2fT = n, an integer, just like what we concluded in Approach A.
It means that the function evaluates to zero.
But what about when 2fT is not equal to an integer?
Especially when we are going to set T to a really huge number.
Let's calculate the Inverse Fourier Transform and see from there.
Just like before, it would imply that for any t and any T, it seems that:
1 = e^{iτ·2fT} = e^{i4πfT}
We arrive at the same contradiction. But the situation is different now.
Previously, the only resolution that made sense was statement 2, where h needed to be the smallest non-zero value possible, with fh = n an integer.
This time, f is continuous, and T is in the limit of some large number approaching infinity (not a small number). A specific relation needs to hold such that f and T depend on each other in such a way that fT = n/2.
This implies a natural constraint: we can set f as small as possible, as long as fT = n/2. If we stretch the signal so that T approaches infinity, as in the case of our constant function, then its Fourier domain f is so localized that G(f) only has non-zero values around f = 0.
It has similar (if not the same) consequences as Approach A, albeit with entirely different reasoning.
In conclusion, G_T(f) behaves the same way as its Fourier Series counterpart.
It is zero wherever fT = n/2, except at f = 0, where its value spikes to infinity.
Next, from the FT to IFT relationship, we will try to find another condition such that
the transform is invertible. Suppose that at t=0, the value g(t)=1 is a constant.
We also substitute f = n/(2T).
Since there is no variable left on the right-hand side, we can guess that it evaluates to 1, in order for the limit to agree with the left-hand side g(t) = 1. But we have to make sure.
Previously we evaluated a sum, which was easy because for any value of n the function evaluates to zero, except when n = 0. This time n represents a continuous variable, and the last integral is a little bit tricky to solve. We will use several steps to change its form.
In order to evaluate the integral, notice that sinc(ϕ) is an even symmetric function. So we can change the integral into this:
∫_{−∞}^{∞} sinc(nπ) dn = 2 ∫_{0}^{∞} sinc(nπ) dn
We expand sinc(nπ) = sin(nπ)/(nπ):
2 ∫_{0}^{∞} sinc(nπ) dn = 2 ∫_{0}^{∞} sin(nπ)/(nπ) dn
We are going to decompose 1/n as an integral: 1/n = ∫_{0}^{∞} e^{−ns} ds
2 ∫_{0}^{∞} sin(nπ)/(nπ) dn = (2/π) ∫_{0}^{∞} ∫_{0}^{∞} sin(nπ) e^{−ns} ds dn
Using Fubini’s Theorem, assuming the integral converges, we swap the order of the integration.
That last integral is just a trigonometric integral, which can be solved by choosing s = π·tan(ϕ), so that ds/dϕ = π·sec²(ϕ),
with the integration boundaries becoming ϕ ∈ [0, π/2].
With this, we conclude that indeed g(t) = g_T(t) = 1. In the limit of large T, it turns out the integral converges, even without us having to require that n be an integer.
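Both steps of this evaluation can also be checked numerically. The Python sketch below uses a plain midpoint rule with cutoffs I picked for illustration: first the inner Laplace-transform step, whose closed form is π/(s² + π²), then the outer integral that the arctangent substitution evaluates to 1.

```python
import numpy as np

# Step 1: the inner integral of sin(n*pi) * e^{-n*s} over n in [0, inf)
# should equal pi / (s^2 + pi^2). Checked at one sample point s0.
s0 = 2.0
M = 1_000_000
n = (np.arange(M) + 0.5) * (200 / M)              # midpoint grid on [0, 200]
inner = np.sum(np.sin(np.pi * n) * np.exp(-n * s0)) * (200 / M)
assert abs(inner - np.pi / (s0**2 + np.pi**2)) < 1e-5

# Step 2: the outer integral (2/pi) * integral of pi/(s^2 + pi^2) over
# s in [0, inf) should equal 1, matching the arctangent substitution.
s = (np.arange(M) + 0.5) * (10_000 / M)           # midpoint grid on [0, 10000]
outer = (2 / np.pi) * np.sum(np.pi / (s**2 + np.pi**2)) * (10_000 / M)
assert abs(outer - 1) < 1e-3
```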
So the Fourier Transform of a constant signal with arbitrarily large span T is:
Note that, although the concept is different, when T = 1/2 we recover the Fourier Series representation of a constant function: G(k) = sinc(kπ).
Constant function, interpreted as distribution
Originally, we had two approaches, A and B.
We now realize that the two are similar but not equal.
It appears that one approach is best suited when the time interval is very small.
The other is best suited when the time interval is very large.
Can’t we have both?
For instance, both representations of the FT G(f) seem to behave like this:
The value of G(f) at f = 0 is either very large (dependent on the span of the domain t) or just ∞.
The value of G(f) at f ≠ 0 is zero everywhere.
The integral of G(f) over the whole domain f is 1.
This seems to be a very weird function. How come it has the value ∞, but the area under the curve is normalized at 1? Doesn't look like a function to me.
In the case of Fourier Transform, this looks like an identity basis.
In the previous two approaches, we started from a constant function and tried to find its FT. However, if we go the other way around and ask "what function has an FT that is a constant function?", then we might understand something.
Suppose that there exists a function (or property, whatever) called the delta function δ(t). The delta function has the property that its FT D(f) is the constant 1.
Can this function exist?
The reason we call it the "delta function" is probably special in its own way: it is called "delta" because it represents a small slice of an interval.
Note that this is the inverse/reverse direction of approaches A and B, where we assumed the constant function lives in the time domain.
Now we assume the constant function is in the frequency domain.
The one benefit we have is that we can immediately construct a Fourier Series for δ(t) just by composing the frequencies.
From the Fourier Series definition, if a function δ(t) has the Fourier Transform D(f) = 1, then it immediately follows:
δ(t) = (1/N) Σ_{f=−∞}^{∞} 1·e^{iτft} = (1/N) Σ_{f=−∞}^{∞} e^{iτft}
Using this representation, we can figure out three criteria for δ(t) to match the function G(f) we were trying to find before.
When t = 0, the value of δ(t) approaches infinity. This is because the sum adds up 1 from negative infinity to infinity, so it becomes a huge number.
When t ≠ 0, there are corresponding indices f such that τft = nπ, where n is an integer, and the terms cancel out.
The end result only makes sense if the equality is true, which means the normalization factor of this delta function
depends on the span of the integration, the domain of t.
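These criteria can be seen directly by truncating the sum. The Python sketch below (the truncation limits F are my own choices, and the 1/N normalization is left out) shows the spike of height 2F + 1 at t = 0, while at other points the terms cancel and the sum stays bounded:

```python
import numpy as np

# Partial sum over integer f in [-F, F] of e^{i*tau*f*t}
# (the 1/N normalization factor is omitted here).
def partial_sum(t, F):
    fs = np.arange(-F, F + 1)
    return np.exp(2j * np.pi * fs[:, None] * np.atleast_1d(t)[None, :]).sum(axis=0).real

for F in (5, 50, 500):
    print(F, partial_sum(0.0, F)[0], partial_sum(0.25, F)[0])

# The spike at t = 0 is exactly 2F + 1 and grows without bound,
# while at t = 0.25 the oscillating terms cancel and the sum stays bounded.
assert np.isclose(partial_sum(0.0, 500)[0], 1001.0)
assert abs(partial_sum(0.25, 500)[0]) < 2.0
```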
Now, here’s the connection we are trying to make.
Since δ(t) and D(f) = 1 are a Fourier dual, while g(t) = 1 and G(f) are a Fourier dual…
it would imply that δ(t) = G(f), as functions of their respective variables.
These symbols must have some correspondence, since all the values for x ≠ 0 are the same.
We only need to care about the limit T → ∞: both should grow the same way at x = 0.
Now, since both N and 2T depend on the span of the integration (the domain of x), we should see some relationship.
We can see that 2T is affected by the normalization factor. So, in a sense, δ(x) is some sort of distribution.
This distribution can be approximated by sinc(ax) for any value of a.
Suppose that this distribution exists; then it will be easier for us to define the Fourier Transform pair as a function.
Let's explore the idea further.
For non-periodic functions, suppose there exists a distribution called δ(t).
Because it is a distribution, its integral over its domain is normalized:
∫_{−∞}^{∞} δ(t) dt = 1    (E1)
Suppose that its Fourier Transform is 1, meaning:
F{δ(t)} = D(f) = 1    (E2)
Now, in parallel, suppose there exists a Fourier Transform dual pair g(t) and G(f).
Meaning, we can write g(t) like this, using the inverse Fourier Transform:
g(t) = ∫_{−∞}^{∞} G(f) e^{iτft} df    (E3)
Using the same trick we used to find the Fourier Transform of a derivative,
we now try to find the Fourier Transform of a total integral over the whole domain.
Integrate the left side and the right side over all values of t.
Now compare the last right-hand side with the following definition from probability theory.
Suppose we have a probability distribution p(x) and a function f(x).
Then the expectation of f(X), given a random variable X, is defined by:
E[f(X)] = ∫_{−∞}^{∞} p(x) f(x) dx
With this analogy, we can think of δ(f) as some probability distribution over f.
Meaning, the value G(0) is the expectation value of the function G(f).
This is quite surprising, because we just found a connection between the Fourier Transform and probability distributions.
The reason G(0) is the expectation value is simply that δ(f) is some kind of distribution
with a sudden spike at the zeroth frequency f = 0. So, in a probability sense, the probability mass
around f = 0 is a near certainty. That is why the expectation ends up being G(0), the value of G(f) at f = 0.
Using this kind of definition, we start to understand that δ(x) is not a function, but rather
some kind of distribution that allows Fourier Transforms to be computed in succession.
For any function g(t) that is a Fourier Dual, there exists a δ(t) function such that:
g(0) = ∫_{−∞}^{∞} δ(t) g(t) dt = ∫_T δ(t) g(t) dt    (P10)
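This sifting property (P10) is easy to test numerically if we stand in for δ(t) with a narrow normalized Gaussian. The Gaussian stand-in, the test function g, and the grid below are all my own illustrative choices, not something derived above:

```python
import numpy as np

# Sifting check for (P10): integrate a narrow normalized Gaussian
# (a stand-in for delta) against a test function g; as the width eps
# shrinks, the result should approach g(0).
def delta_approx(t, eps):
    return np.exp(-t**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))

def g(t):
    return np.cos(t) + t**2                   # test function, g(0) = 1

M = 400_000
t = -10 + (np.arange(M) + 0.5) * (20 / M)     # midpoint grid on [-10, 10]
for eps in (0.5, 0.1, 0.01):
    print(eps, np.sum(delta_approx(t, eps) * g(t)) * (20 / M))

sifted = np.sum(delta_approx(t, 0.01) * g(t)) * (20 / M)
assert abs(sifted - 1.0) < 1e-3
```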
With the delta function defined as an "intermediary" density function, we can define a constant function as either the FT or the inverse FT of the delta function.
Applying previous Fourier Transform properties will yield interesting properties of this delta function.
Time shifting the delta function
Shifting the time argument of the delta function multiplies its Fourier Transform by a phase factor.
F{δ(t − t₀)} = e^{−iτft₀} F{δ(t)} = e^{−iτft₀}    (P13)
Frequency shifting the delta function
Shifting the frequency argument of the delta function multiplies its inverse transform by a faster circle factor.
F⁻¹{δ(f − f₀)} = e^{iτf₀t} F⁻¹{δ(f)} = e^{iτf₀t}    (P14)
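Both shift properties have an exact discrete analogue that we can check with a DFT. In the Python sketch below (the signal and the shift amount are arbitrary choices of mine), shifting a sampled signal in time multiplies its DFT bins by exactly the phase factor e^{−iτf t₀}:

```python
import numpy as np

# DFT shift-theorem check: rolling a periodic sampled signal by m samples
# (t0 = m/N) multiplies each DFT bin by e^{-i*tau*f*t0}.
N = 128
t = np.arange(N) / N
g = np.exp(-100 * (t - 0.5) ** 2)         # a smooth bump, treated as periodic
m = 10                                    # shift by m samples, t0 = m/N
G = np.fft.fft(g)
G_shifted = np.fft.fft(np.roll(g, m))     # g(t - t0) on the grid
f = np.fft.fftfreq(N, d=1 / N)            # integer frequency bins
phase = np.exp(-2j * np.pi * f * m / N)   # e^{-i*tau*f*t0}
assert np.allclose(G_shifted, phase * G)
```

The frequency-shift property (P14) is the mirror image: multiplying `g` by such a phase factor rolls `G` instead.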
Time scaling the delta function
Scaling the time argument of a delta function (making it wider) implies that the frequency gets smaller.
It also works the other way around.
So the relationship is reciprocal.
F{δ(at)} = (1/|a|) D(f/a) = 1/|a|    (P15)
This is a peculiar result, because it means that if the function is very localized, then the frequency domain is spread out.
Next, for an arbitrary constant function with constant value a, its Fourier Transform is a peak-scaled delta function.
F{a} = a·F{1} = a·δ(f)    (P16)
Notice that it doesn't matter whether a on the left-hand side above is positive or negative:
the frequency still peaks at f = 0. The above relationship was derived using the linearity property of the Fourier Transform, so the constant a can be taken out of the transform,
leaving only the constant F{1}. However, we can derive a similar thing, in a different form, using property P15.
In step 4, notice that applying the FT to a function twice yields the same function with the domain flipped. However, the delta function is even-symmetric, so applying the FT to it twice yields the same function.
Combining this result with P16 will yield:
δ(at) = δ(t)/|a|    (P17)
This is a really interesting property, because it implies something when we zoom in on the function.
Suppose we zoom in to inspect values around t = 0. If we scale the argument up, the values get bigger in the vertical direction.
But it looks the same as if the original function were stretched horizontally by a factor of a.
Another way to prove this property is to integrate the left and right sides over t, and notice that both sides equal 1/|a|.
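Property P17 can also be sanity-checked with a nascent-delta stand-in. In the sketch below (the Gaussian width, the constant a, and the test function are illustrative assumptions of mine), integrating δ(at) against g(t) yields g(0)/|a|, as the scaling rule predicts:

```python
import numpy as np

# Scaling check for (P17): with a narrow Gaussian standing in for delta,
# the integral of delta(a*t) * g(t) should equal g(0) / |a|.
def delta_approx(t, eps=0.01):
    return np.exp(-t**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))

def g(t):
    return 2.0 + np.sin(t)                    # test function, g(0) = 2

a = -3.0
M = 400_000
t = -5 + (np.arange(M) + 0.5) * (10 / M)      # midpoint grid on [-5, 5]
lhs = np.sum(delta_approx(a * t) * g(t)) * (10 / M)
rhs = g(0) / abs(a)                           # delta(t)/|a| sifts to g(0)/|a|
assert abs(lhs - rhs) < 1e-3
```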
Delta function as a limit of sinc function
Now that we have relation P17, we can connect it to the sinc function.
The sinc function is what we get when we take the FT of the constant value 1 by integration, in property P9.
F{1} = lim_{T→∞} ∫_{−T}^{T} 1·e^{−iτft} dt = lim_{T→∞} 2T·sinc(τfT)
Substituting 2T = 1/h means we change the limit so that h becomes the smallest positive number possible.
Then we replace the Fourier Transform F{1} with δ(f).
By using P17, we have:
Here's what makes the above limit interesting.
The sine cardinal function is not flat-out zero when f ≠ 0; in fact, it oscillates vertically.
However, if the space between the zeroes of the sine cardinal gets really small, then the oscillation gets practically flat as well.
The key insight is to keep h as small as possible, but non-zero, so that the value on the right-hand side can be evaluated numerically.
For the peaks to align, the right-hand side has a peak at f = 0 with magnitude 1. That means, in this context, there must exist an h such that h·δ(f) = 1,
and the delta function has to be defined that way, in relation to the partition limit h.
Basically, you can make h as small as possible… but still not zero, otherwise it would contradict all these assumptions.
This is in line with our previous argument that a constant function can be thought of as a periodic function with the smallest non-zero period h.
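We can actually watch the sinc kernel act like a delta. The Python sketch below (the test function and the T values are arbitrary choices of mine) integrates 2T·sinc(τfT) against a smooth function; as T grows, the result approaches the function's value at 0, exactly the sifting behavior of δ(f):

```python
import numpy as np

# The kernel 2T * sinc(tau*f*T) behaves like delta(f) as T grows:
# integrating it against a smooth test function picks out the value at 0.
def kernel(f, T):
    return 2 * T * np.sinc(2 * T * f)        # np.sinc(x) = sin(pi x)/(pi x)

def g(f):
    return np.exp(-f**2)                     # test function with g(0) = 1

M = 400_000
f = -50 + (np.arange(M) + 0.5) * (100 / M)   # midpoint grid on [-50, 50]
for T in (0.1, 0.5, 2.0):
    print(T, np.sum(kernel(f, T) * g(f)) * (100 / M))   # tends to g(0) = 1

sifted = np.sum(kernel(f, 2.0) * g(f)) * (100 / M)
assert abs(sifted - 1.0) < 1e-2
```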
Remarks
In this article, we rediscovered the core properties of the Fourier Transform under differentiation and integration.
The FT of a derivative is just the FT of the original function multiplied by iτf.
The FT of the total integral of a function equals the value of the FT of the original function at the zeroth frequency, which is also the
expectation value of the FT under the delta distribution.
The delta "function" is an identity object that behaves like a distribution, needed for the FT of a constant signal to be defined.
The sinc function approximates the delta function in the limit of the smallest non-zero partition of periods.