# Fourier Series: part 4

23-Feb-2024

# Fourier Transform Identities and Properties

We covered the basics of FT properties in the previous article, part 3.

In this article, we are going to cover some more properties, in a more abstract way.

## Fourier Transform of a derivative

Suppose we have a function $g(t)$ and its derivative $g'(t)$. Then there is a relation between the Fourier transform of $g(t)$ and that of $g'(t)$.

Suppose that the FT of $g(t)$ is $G(f)=\widehat{g}(f)$.

Then the FT of the derivative $g'(t)$ with respect to $t$ is:
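The derivation itself is not reproduced here; a reconstruction of steps (1)–(4), consistent with the commentary below and this article's convention $\tau = 2\pi$, $G(f)=\int g(t)\,e^{-i\tau f t}\,dt$, might look like:

$$
\begin{aligned}
\widehat{g'}(f) &= \int_{-\infty}^{\infty} g'(t)\, e^{-i \tau f t}\, dt && (1) \\
&= \Big[ g(t)\, e^{-i \tau f t} \Big]_{-\infty}^{\infty} + i \tau f \int_{-\infty}^{\infty} g(t)\, e^{-i \tau f t}\, dt && (2) \\
&= i \tau f \int_{-\infty}^{\infty} g(t)\, e^{-i \tau f t}\, dt && (3) \\
&= i \tau f \, G(f) && (4)
\end{aligned}
$$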

From step (1) to (2), we use integration by parts. From step (2) to (3), we rely on the assumption that if $g(t)$ is periodic, the first term (the boundary term) vanishes. If it is not periodic, we instead assume that $g(t)$ decays, $\lim_{t\to\pm\infty}g(t) = 0$, so the boundary term vanishes as well. Finally, we obtain result (4).

So, interestingly, the FT of a derivative with respect to $t$ is the same as the FT of the original function, multiplied by $i \tau f$, with $f$ being the dual variable of $t$ (its frequency-domain counterpart).

Basically, the FT turns differentiation into multiplication in the frequency domain.
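As a quick sanity check (my own numerical sketch, not part of the original article), we can verify this multiplication property on a sampled Gaussian, whose derivative is known analytically:

```python
import numpy as np

# Sample a Gaussian g(t) = exp(-t^2) and its analytic derivative.
n = 1024
t, dt = np.linspace(-10, 10, n, endpoint=False, retstep=True)
g = np.exp(-t**2)
dg = -2 * t * np.exp(-t**2)      # g'(t), computed by hand

# DFT approximations of the Fourier transforms of g and g'.
f = np.fft.fftfreq(n, d=dt)      # dual variable f (cycles per unit of t)
G = np.fft.fft(g)
DG = np.fft.fft(dg)

# FT of the derivative == i * tau * f * FT of the original (tau = 2*pi),
# up to discretization error.
assert np.allclose(DG, 1j * 2 * np.pi * f * G, atol=1e-6 * np.abs(G).max())
```

The check passes because the Gaussian decays fast enough that the boundary-term assumption above also holds numerically.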

The other interesting aspect is that you get the same result even if you derive it using the inverse FT.

Steps (1) and (2) use the definition of the inverse Fourier Transform. From step (3) to (4), we assume the integrals converge; then, by linearity, we can swap the order of the integral and the derivative, because the derivative is taken with respect to a different variable. In step (4), we take the derivative of $e^{i\tau f t}$, because it is the only term that contains the variable $t$. Step (5) uses the definition of the inverse Fourier Transform, but backwards: the term in the brackets must be the Fourier Transform of $g'(t)$.
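A reconstruction of steps (1)–(5) consistent with that commentary:

$$
\begin{aligned}
g'(t) &= \frac{d}{dt}\, g(t) && (1) \\
&= \frac{d}{dt} \int_{-\infty}^{\infty} G(f)\, e^{i \tau f t}\, df && (2) \\
&= \int_{-\infty}^{\infty} G(f)\, \frac{d}{dt}\, e^{i \tau f t}\, df && (3) \\
&= \int_{-\infty}^{\infty} \big[\, i \tau f \, G(f) \,\big]\, e^{i \tau f t}\, df && (4) \\
&= \mathcal{F}^{-1}\!\left[\, i \tau f \, G(f) \,\right] && (5)
\end{aligned}
$$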

In summary, we conclude:

The last row follows directly if we differentiate $g(t)$ $n$ times.
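In symbols, the summary presumably reads (reconstructed from the surrounding text):

$$
\begin{aligned}
\widehat{g'}(f) &= i \tau f \, G(f) \\
\widehat{g''}(f) &= (i \tau f)^2 \, G(f) \\
\widehat{g^{(n)}}(f) &= (i \tau f)^n \, G(f)
\end{aligned}
$$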

## Fourier Transform of a constant function

The next interesting thing to observe, after figuring out the derivative, is how to model the Fourier Transform of a constant function.

We can take two different approaches.

Approach A: use the fact that a constant function can be thought of as a periodic function with a finite, smallest non-zero period $T$. Then we can use the Fourier Series approximation to reconstruct the original function.

Approach B: use the fact that a constant function can also be thought of as a non-periodic function with an infinite period $T$. Then we can use the Inverse Fourier Transform integral to reconstruct the original function.

### Constant function, interpreted from periodic function

A constant function is tricky. It extends to both $\infty$ and $-\infty$, so its integral (the area under the curve) is definitely $\infty$ as well.

In terms of periodicity, we can think of a constant function as a function with an “arbitrarily small” fundamental period. We can set the period as small as we like, and the function can still be considered periodic, since the output is constant.

This makes us wonder: suppose there exists a smallest possible (but still non-zero) interval that can’t be divided again. Can we use this as the fundamental period $T$?

Let’s challenge this idea. But first, let us use a constant $h$ instead of $T$ to represent this smallest interval. Remember, it can’t be zero. On this interval, the value of $g(t)$ is decidedly 1 (a constant).

We are going to use the periodic approximation from article 2-T3.

From step (1) to (2), we substitute the value $g(t)=1$ on this interval.

From step (2) to (3), notice that for a constant value we can shift the interval so that $t$ is its center. The range shifts to $t\pm\frac{h}{2}$ because the length needs to stay $h$.

From step (7) to (8), we use the definition of $\sin(\phi) = \frac{e^{i\phi} - e^{-i\phi}}{2i}$.

From step (8) to (9), we use the definition of the sine cardinal function: $\operatorname{sinc}(\phi) = \frac{\sin(\phi)}{\phi}$.

Looking at the last expression, we know that the whole span of the domain $t$ is approximately $Nh$. This is because the area under the curve has to match (which corresponds to the frequency $f=0$).

For a constant function $g(t)$, we can pick any $t$ and it should behave the same way as $t=0$, the center of the function. Evaluating at $t=0$ makes $e^{-i \tau f t} = 1$. We are then left with the final expression:
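A reconstruction of that expression, chosen to be consistent with the later steps (where $G(f,N,h)=Nh$ at $f=0$, and the zeros occur at $fh = n$):

$$
G(f, N, h) = N \int_{t-\frac{h}{2}}^{t+\frac{h}{2}} e^{-i \tau f u}\, du
= N h \, \operatorname{sinc}\!\left(\tau f \tfrac{h}{2}\right) e^{-i \tau f t}
\;\overset{t=0}{=}\; N h \, \operatorname{sinc}(\pi f h)
$$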

But it’s not over yet. Ideally the function should be independent of any variable other than $f$, so we need to figure out how to eliminate $N$ and $h$. We will use the Fourier Series form to do that, taken from article 2-T5.

Notice that, since $g_h(t)$ should be a constant function, its derivative with respect to $t$ is zero. However, this can only mean that the term inside the sum (the right-hand side) evaluates to zero.

In steps (2) to (3) of the derivation above, you will notice that the factor $i \tau f$ cancels out against the derivative of $e^{i\tau f t}$. This allows a very neat equation under the summation sign. Also, a peculiar one at that.

Suppose we want to check the equality before integrating. Then for any $t$, and any $f$, it seems that:

This would contradict our assumption in a strange way. The consequence: one of the following needs to be true:

- $h$ is actually dependent on the value of $f$ that is currently being evaluated in the Fourier Transform.
- $h$ can be the smallest non-zero value possible (it doesn’t matter what it is), but $fh$ needs to be related to an integer $n$ such that $f h = n$, so that $e^{i\tau n} = e^{i 2 \pi n} = 1$.

Condition 1 would be covered automatically if we were dealing with signal scaling. But this is not the case for a constant function, since stretching the function horizontally has no effect. So we have to accept point 2.

But if we accept point 2, then $\sin(\tau f \frac{h}{2}) = \sin(n \pi)$, which is zero for all $n$. Then the function $G(f,N,h)$ is zero for all $n$, which means the sum is also zero.

There is one weird catch, though. When $f=0$, $G(f,N,h)=Nh$ like what we concluded before (because $\operatorname{sinc}(0)=1$).

To conclude what we have found so far: $G(f,N,h)$ is one peculiar “function” that is zero everywhere except at $f=0$. Now we need to figure out the value of $Nh$ and check whether it has a limit of some sort.

From the Fourier Series representation, taking $t=0$ (the center), the value $g_h(0)$ must equal the constant value of the function. We also substitute $f=\frac{n}{h}$.

The last sum is easy to solve. Since we already decided that $n$ is an integer, $\operatorname{sinc}(n \pi)$ is zero everywhere except at $n=0$, where $\operatorname{sinc}(0)=1$.

It immediately follows that $g_h(t)=h=1$.

Whaaaat? So, in fact $h=1$ is the smallest interval that we can use.

Our $G(f,N,h)=G(f,N)=N \, \operatorname{sinc}(f \pi)$.

To make it consistent with our convention that integer frequencies should be named $k$, we use $G(k,N) = N \, \operatorname{sinc}(k \pi)$.

In our particular case, $N$ depends on the choice of the span of the domain $t$. So if $t$ ranges from $-\infty$ to $\infty$, then $N\to\infty$, and the series has a very tall spike at $k=0$ but is zero everywhere else.
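To see the spike concretely, here is a small numerical sketch (my addition; note that NumPy's `np.sinc` is the *normalized* sinc, `np.sinc(x) = sin(pi*x)/(pi*x)`, so the article's $\operatorname{sinc}(k\pi)$ is `np.sinc(k)`, and the value of $N$ below is an arbitrary illustration):

```python
import numpy as np

N = 1000                 # arbitrary span count, for illustration only
k = np.arange(-5, 6)     # a few integer frequencies around zero
G = N * np.sinc(k)       # G(k, N) = N * sinc(k * pi)
# G is (numerically) zero at every k != 0 and equals N at k = 0
```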

### Constant function, interpreted from non-periodic function

As we have discussed before, we can also think of a constant function as a non-periodic function with an infinite period.

This line of thinking makes it easy to generalize the concept, but it is a much more challenging idea to construct.

Our starting line is the same. Take a segment of a constant function $g(t)$ with span $T$. Its Fourier Transform is a function in the frequency domain that can represent the function up to this span $T$. This step is similar to Approach A, where we treated the function as periodic slices of interval $h$.

However, this time, instead of multiplying by $N$ (the count of the intervals), we want to **enlarge** the slice by taking the limit of $T$ to infinity. This is analogous to stretching the function horizontally in the positive and negative x-axis directions.

Our $G(f)$ is now a representation of the interval $T$. We are going to use the Fourier Transform definition from article 2-T1.

Step (1) to (2) is just a notational convenience: $G_T(f)$ means $G(f)$ as a function of $T$, before the limit $T\to\infty$ is taken.

This way, the end result $G_T(f)$ is a function whose limit hasn’t been taken **yet**.
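A reconstruction of that form, consistent with the observations that follow ($G_T(0)=2T$, zeros at $fT=\frac{n}{2}$):

$$
G_T(f) = \int_{-T}^{T} 1 \cdot e^{-i \tau f t}\, dt
= \frac{e^{i \tau f T} - e^{-i \tau f T}}{i \tau f}
= \frac{2 \sin(\tau f T)}{\tau f}
= 2T \operatorname{sinc}(\tau f T)
$$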

Let’s evaluate it for a moment. From this form, it is clear that when $f=0$, the function $G_T(0)$ grows without bound: $\operatorname{sinc}(0)=1$ and the value $2T$ approaches a really huge number.

What we are not sure about yet is how this function behaves for continuous $f$. Suppose that $fT=n$, an integer, just like what we concluded in Approach A. Then the function evaluates to zero. But what about when $fT$ is not an integer, especially when we set $T$ to be a really huge number?

Let’s calculate the Inverse Fourier Transform, and we will see from there.

Just like what we did before, we are going to take the derivative with respect to $t$. Since this is a constant function, the derivative should be zero.

Just like before, it would imply that for any $t$ and any $T$, it seems that:

We arrive at the same contradiction. But the situation is different now. Previously, the only resolution that made sense was statement 2, where $h$ needs to be the smallest non-zero value possible, with $fh=n$ an integer.

This time, $f$ is continuous, and $T$ is in the limit of some large number approaching infinity (not a small number). There is a specific relation that needs to hold: $f$ and $T$ must depend on each other in such a way that $fT=\frac{n}{2}$.

It implies a natural constraint: we can set $f$ as small as possible, as long as $fT=\frac{n}{2}$. If we stretch the signal so that $T$ approaches infinity, as in the case of our constant function, then its Fourier Transform domain $f$ is so localized that $G(f)$ only has non-zero values around $f=0$.

It has similar (if not the same) consequences as Approach A, albeit with entirely different reasoning.

In conclusion, $G_T(f)$ behaves the same way as its Fourier Series counterpart. It is zero wherever $fT=\frac{n}{2}$ with $n \ne 0$, and at $f=0$ its value spikes to infinity.

Next, from the FT-to-IFT relationship, we will try to find another condition such that the transform is invertible. Suppose that at $t=0$, the value $g(t)=1$, a constant. We also substitute $f=\frac{n}{2T}$.

Since there is no variable left on the right-hand side, we can **guess** that it evaluates to 1, in order for the limit to converge to the left-hand side $g(t)=1$. But we have to make sure.

Previously we evaluated a sum, and it was easy to do because for any value of $n$ the function evaluates to zero, except when $n=0$. This time $n$ represents a continuous variable, and the last integral is a little bit tricky to solve. We will use several steps to change its form.

In order to evaluate the integral, notice that $\operatorname{sinc}(\phi)$ is an even function. So we can change the integral into this:

We expand $\operatorname{sinc}(n \pi)=\frac{\sin(n \pi) }{n \pi}$

We are going to decompose $\frac{1}{n}$ as an integral: $\frac{1}{n} = \int_0^\infty e^{-n s} \, ds$

Using Fubini’s Theorem, and assuming the integral converges, we swap the order of integration.

Let’s break it apart into smaller terms. The inner integral is over $n$, so it is a function of $s$: suppose that $I(s)=\int_0^\infty \sin(n\pi) \, e^{-ns} \, dn$; then we solve it.

Substituting this result back, we have:

That last integral is just a trigonometric integral that can be solved by choosing $s=\pi \tan(\phi)$, so that $\frac{ds}{d\phi}=\pi \sec^2(\phi)$, with the integration boundaries becoming $\phi \in [0,\frac{\pi}{2}]$.
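Putting the steps together (a reconstruction consistent with the substitutions described, using the standard Laplace-transform result $\int_0^\infty \sin(\pi n)\,e^{-ns}\,dn = \frac{\pi}{s^2+\pi^2}$):

$$
\begin{aligned}
\int_{-\infty}^{\infty} \operatorname{sinc}(n\pi)\, dn
&= \frac{2}{\pi} \int_0^\infty \frac{\sin(n\pi)}{n}\, dn
= \frac{2}{\pi} \int_0^\infty \sin(n\pi) \int_0^\infty e^{-ns}\, ds\, dn \\
&= \frac{2}{\pi} \int_0^\infty \underbrace{\int_0^\infty \sin(n\pi)\, e^{-ns}\, dn}_{I(s) \,=\, \frac{\pi}{s^2 + \pi^2}}\; ds
= \frac{2}{\pi} \int_0^{\pi/2} d\phi
= \frac{2}{\pi} \cdot \frac{\pi}{2} = 1
\end{aligned}
$$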

With this, we conclude that indeed $g(t)=g_T(t)=1$. In the limit of large $T$, it turns out the integral converges, even without us having to specify that $n$ has to be an integer.
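We can also confirm the value of the integral numerically (a midpoint Riemann sum over a large but finite span; my own check, not from the article):

```python
import numpy as np

# Approximate the integral of sinc(n*pi) = sin(n*pi)/(n*pi) over the real line.
dx = 0.005
n = np.arange(-2000, 2000, dx) + dx / 2   # midpoints; also avoids n = 0 exactly
integral = np.sum(np.sin(np.pi * n) / (np.pi * n)) * dx
# Truncation and discretization errors are small, so this lands close to 1.
assert abs(integral - 1.0) < 1e-3
```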

So the Fourier Transform of a constant signal with arbitrarily large span $T$ is:

Note that, although the concept is different, when $T=\frac{1}{2}$ we recover the Fourier Series representation of a constant function: $G(k)= \operatorname{sinc}(k \pi)$.

### Constant function, interpreted as distribution

Originally, we had two approaches, A and B. We now realize that the two are similar but not equal. One approach is best suited when the time interval is very small; the other is best suited when the time interval is very large.

Can’t we have both?

For instance, both representations of the FT, $G(f)$, seem to behave like this:

- The value of $G(f)$ at $f=0$ is either very large (dependent on the span of the domain $t$) or just $\infty$.
- The value $G(f)$ at $f\ne0$, is zero everywhere.
- The integral over all domain $f$ for $G(f)$ is 1.

This seems to be a very weird function. How come it has value $\infty$, yet the area under the curve is normalized to $1$? It doesn’t look like a function to me.

In the case of the Fourier Transform, this looks like an identity basis.

In the previous two approaches, we started from a constant function and then tried to find its FT. However, if we go the other way around and ask “what is the function whose FT is a constant function?”, then we might understand something.

Suppose that there exists a function (or property, whatever) called the delta function $\delta(t)$. The delta function has the property that its FT $D(f)$ is the constant 1. Can this function exist?

The name “delta function” is probably special in its own way: it is called “delta” because it represents a small slice of an interval.

Note that this is the inverse/reverse direction of approaches A & B, where we assumed that the constant function is in the time domain. Now we assume the constant function is in the frequency domain. The one benefit we have is that we can immediately construct a Fourier Series for $\delta(t)$ just by composing the frequencies.

From the Fourier Series definition, if a function $\delta(t)$ has a Fourier Transform $D(f)=1$, then it immediately follows:
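With the convention that the constant spectrum is sampled at integer frequencies $k$ (an assumption based on the series form used in this article), the composition would read:

$$
\delta(t) = \sum_{k=-\infty}^{\infty} D(k)\, e^{i \tau k t} = \sum_{k=-\infty}^{\infty} e^{i \tau k t}
$$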

Using this kind of representation, we can figure out three criteria for $\delta(t)$ to match the function $G(f)$ we were trying to find before.

- When $t=0$, the value of $\delta(t)$ approaches infinity. This is because the sum adds 1 for every index from negative infinity to infinity, so it becomes a huge number.
- When $t\ne0$, there are corresponding indices $f$ such that $\tau f t = n\pi$, where $n$ is an integer.
- The integral over the domain $t$ must equal to 1.

This last property can be proven using an integral.
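One way to sketch the proof (my reconstruction, using the series form above, integrating over a span $[-T,T]$ and reusing the earlier result that only the $k=0$ term survives when $T=\frac{1}{2}$):

$$
\int_{-T}^{T} \delta(t)\, dt
= \sum_{k=-\infty}^{\infty} \int_{-T}^{T} e^{i \tau k t}\, dt
= \sum_{k=-\infty}^{\infty} 2T \operatorname{sinc}(\tau k T)
\;\overset{T=\frac{1}{2}}{=}\; \sum_{k=-\infty}^{\infty} \operatorname{sinc}(k \pi) = 1
$$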

The end result only makes sense if the equality is true, which means the normalization factor of this delta function depends on the span of the integration (the domain of $t$).

Now, here’s the connection we are trying to make.

Since $\delta(t)$ and $D(f)=1$ are a Fourier dual pair, and $g(t)=1$ and $G(f)$ are a Fourier dual pair… it would imply that $\delta(t)=G(f)$ (up to renaming the variable).

But…, the representation is different:

When $x=0$:

These symbols must have some correspondence. Since all the values for $x\ne 0$ are the same, we only care about the limit $T\to \infty$: both should increase the same way at $x=0$. Now, since $T$ depends on the span of the integration (the domain of $x$), we should see some relationship.

We can see that $2T$ is affected by the normalization factor. So, in a sense, $\delta(x)$ is some sort of distribution. This distribution can be approximated by $\operatorname{sinc}(ax)$ for any value of $a$.

Suppose that this distribution exists; then it will be easier for us to define the Fourier Transform pair as functions.

Let’s explore the ideas further.

For non-periodic functions, suppose there exists a distribution called $\delta(t)$. Because it is a distribution, its integral over its domain is normalized:

Suppose that its Fourier Transform is $1$, meaning:

Now, in parallel, suppose there exists a Fourier Transform dual pair $g(t)$ and $G(f)$. That means we can write $g(t)$ like this, using the inverse Fourier Transform:

Using the same trick we used when finding the Fourier Transform of a derivative, we now try to find the Fourier Transform of the total integral over the whole domain. Integrate the left side and the right side over all values of $t$.
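The article breaks off here; under the conventions above (where $\int_{-\infty}^{\infty} e^{i\tau f t}\, dt = \delta(f)$, by symmetry with the series for $\delta(t)$), the integration it describes would presumably continue as:

$$
\int_{-\infty}^{\infty} g(t)\, dt
= \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} G(f)\, e^{i \tau f t}\, df\, dt
= \int_{-\infty}^{\infty} G(f) \left[ \int_{-\infty}^{\infty} e^{i \tau f t}\, dt \right] df
= \int_{-\infty}^{\infty} G(f)\, \delta(f)\, df
= G(0)
$$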