
Fourier Series: part 7: Convolution Theorem


Definition

The final piece in our study of Fourier Series is an important result called the Convolution Theorem.

We are going to derive it and show why it is important, using the common notation.

Suppose that you have two arbitrary functions u(t) and v(t).

These functions have corresponding duals in the Fourier domain: U(f) and V(f). Let us construct a third function UV(f), which is the product of the two.

\begin{align*}
UV(f) &= U(f) V(f)
\end{align*}

Suppose we apply the inverse Fourier transform to both sides.

\begin{align*}
UV(f) &= U(f) V(f) \\
\mathcal{F}^{-1}\left\{ UV(f) \right\} &= \mathcal{F}^{-1}\left\{ U(f) V(f) \right\} \\
&= \int_{f=-\infty}^{\infty} \left[ U(f) V(f) \right] e^{i\tau f t} \, df \\
&= \int_{f=-\infty}^{\infty} \left[ \left[ \int_{x=-\infty}^{\infty} u(x) \, e^{-i\tau f x} \, dx \right] V(f) \right] e^{i\tau f t} \, df \\
&= \int_{x=-\infty}^{\infty} u(x) \left[ \int_{f=-\infty}^{\infty} V(f) \, e^{-i\tau f x} \, e^{i\tau f t} \, df \right] dx \\
&= \int_{x=-\infty}^{\infty} u(x) \left[ \int_{f=-\infty}^{\infty} V(f) \, e^{i\tau f (t-x)} \, df \right] dx \\
&= \int_{x=-\infty}^{\infty} u(x) \, v(t-x) \, dx
\end{align*}

Let's ponder the last line a bit. The integral sums over the variable x the product of u(x) and a shifted, flipped copy of v. The result is a new function of the parameter t. Because it blends the two functions u and v into another function with its own parameter t, we call this operation a convolution.

Notice that since the product U(f) V(f) commutes, it does not matter in which order the convolution is taken.

Let us give this operation a new symbol: r(t) = {u(x) * v(x)}(t).

A convolution is defined as:

\begin{align*}
r(t) &= \left\{ u(x) * v(x) \right\}(t) \\
&= \int_{x=-\infty}^{\infty} u(x) \, v(t-x) \, dx \\
&= \int_{x=-\infty}^{\infty} u(t-x) \, v(x) \, dx
\end{align*}
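To make the definition concrete, here is a small discrete sketch (our own example, not from the post): the same sum computed directly from the definition, compared against NumPy's built-in routine, and checked for commutativity.

```python
import numpy as np

# Discrete analogue of the integral above: the sum over m plays
# the role of the integral over x.
u = np.array([1.0, 2.0, 3.0])
v = np.array([0.5, 0.25])

def conv(a, b):
    # r[n] = sum_m a[m] * b[n - m]
    r = np.zeros(len(a) + len(b) - 1)
    for n in range(len(r)):
        for m in range(len(a)):
            if 0 <= n - m < len(b):
                r[n] += a[m] * b[n - m]
    return r

print(conv(u, v))                                   # [0.5  1.25 2.   0.75]
print(np.allclose(conv(u, v), conv(v, u)))          # True: the order does not matter
print(np.allclose(conv(u, v), np.convolve(u, v)))   # True
```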

The Fourier transform then converts a convolution into a product of the individual Fourier transforms (and vice versa):

\begin{align*}
\mathcal{F}\left\{ u * v \right\}(f) &= U(f) \, V(f) \\
\mathcal{F}\left\{ u(t) \, v(t) \right\}(f) &= \left\{ U * V \right\}(f)
\end{align*}

The more popular form of the Convolution Theorem (as seen on Wikipedia) actually uses the inverse Fourier transform, like this:

\begin{align*}
r(t) = \left\{ u * v \right\}(t) &= \mathcal{F}^{-1}\left\{ U(f) \, V(f) \right\}
\end{align*}
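Here is a quick numerical check of this form of the theorem (an illustration of ours, using NumPy's FFT and the circular/periodic convolution that the DFT implements):

```python
import numpy as np

# Verify r = F^{-1}{ U(f) V(f) } against the convolution sum itself.
rng = np.random.default_rng(1)
N = 256
u = rng.standard_normal(N)
v = rng.standard_normal(N)

# Route 1: circular convolution straight from the definition
r_direct = np.array([np.sum(u * v[(n - np.arange(N)) % N]) for n in range(N)])

# Route 2: multiply the spectra, then invert
r_theorem = np.real(np.fft.ifft(np.fft.fft(u) * np.fft.fft(v)))

print(np.max(np.abs(r_direct - r_theorem)))  # ~1e-12
```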

Usage

This may seem like a random finding, but consider its usage in signal processing, where convolution is used heavily to modify signals. Using this theorem, it is apparent that if we multiply the spectra of two signals (in the Fourier domain), the resulting time-domain signal is their convolution. This is particularly useful because a signal is often designed in terms of its frequency content rather than its time-domain shape.

Delta function filters

As a simple base case, consider any signal u(t). Its Fourier transform is of course U(f). Notice that U(f) = U(f) · 1.

By the convolution theorem:

\begin{align*}
r(t) = \left\{ u * v \right\}(t) &= \mathcal{F}^{-1}\left\{ U(f) \, V(f) \right\} \\
&= \mathcal{F}^{-1}\left\{ U(f) \cdot 1 \right\} \\
&= u(x) * \delta(x) \\
u(t) &= \int_{x=-\infty}^{\infty} u(x) \, \delta(t-x) \, dx
\end{align*}

We now have some idea of how the delta function behaves under convolution. Under the integral sign, the delta function acts like a filter: it zeroes out u(x) everywhere except where t - x = 0, so the only value that comes out is u(t).
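A discrete analogue (our own tiny example): the unit impulse, whose spectrum is identically 1, is the identity element of convolution, mirroring u(t) = ∫ u(x) δ(t - x) dx.

```python
import numpy as np

u = np.array([3.0, -1.0, 4.0, 1.0, 5.0])
delta = np.zeros(5)
delta[0] = 1.0                       # discrete unit impulse

print(np.fft.fft(delta))             # all ones: V(f) = 1
r = np.real(np.fft.ifft(np.fft.fft(u) * np.fft.fft(delta)))
print(np.allclose(r, u))             # True: the delta picks out u itself
```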

Derivative filters

Remember that the Fourier transform of a derivative is just the Fourier transform of the function times iτf?

Yes, it is also a convolution.

Remember that if V(f) = iτf then v(t) = δ'(t), using the Fourier transform of polynomials.

\begin{align*}
r(t) = \left\{ u * v \right\}(t) &= \mathcal{F}^{-1}\left\{ U(f) \, V(f) \right\} \\
&= \mathcal{F}^{-1}\left\{ U(f) \cdot i\tau f \right\} \\
u'(t) &= u(x) * \delta'(x) \\
u'(t) &= \int_{x=-\infty}^{\infty} u(x) \, \delta'(t-x) \, dx
\end{align*}

We now see that δ'(t-x) behaves like δ(t-x) in a convolution, except that instead of picking out the value of u, it picks out the value of the derivative of u.
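A numerical sketch of the derivative filter (our own example; the signal and grid are arbitrary choices): multiplying the spectrum by iτf and inverting reproduces the derivative of a smooth periodic signal.

```python
import numpy as np

N = 256
t = np.linspace(0.0, 1.0, N, endpoint=False)
u = np.sin(2 * np.pi * 3 * t)              # known derivative: 6*pi*cos(2*pi*3*t)

tau = 2 * np.pi
f = np.fft.fftfreq(N, d=1.0 / N)           # frequency of each FFT bin
du = np.real(np.fft.ifft(1j * tau * f * np.fft.fft(u)))  # multiply by i*tau*f, invert

print(np.max(np.abs(du - 6 * np.pi * np.cos(tau * 3 * t))))  # ~1e-12
```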

Integration filters

Same logic as the derivative filters, but this time we divide by iτf.

Remember that if V(f) = 1/(iτf) then v(t) = sgn(t)/2.

\begin{align*}
r(t) = \left\{ u * v \right\}(t) &= \mathcal{F}^{-1}\left\{ U(f) \, V(f) \right\} \\
\mathcal{F}^{-1}\left\{ \frac{U(f)}{i\tau f} \right\} &= \mathcal{F}^{-1}\left\{ U(f) \cdot \frac{1}{i\tau f} \right\} \\
\int_{-\infty}^{t} u(x) \, dx + C &= u(x) * \frac{\operatorname{sgn}(x)}{2} \\
\int_{-\infty}^{t} u(x) \, dx + C &= \int_{x=-\infty}^{\infty} u(x) \, \frac{\operatorname{sgn}(t-x)}{2} \, dx
\end{align*}

The above expression is quite remarkable, because the sign function acts as a summation filter! It only sums up u(x) up to the input threshold t. It is a little difficult to picture because the constant term C needs to be adjusted. However, if we shift the kernel to the unit step (sgn(t-x) + 1)/2, the constant disappears (for signals whose total integral is finite):

\begin{align*}
\int_{-\infty}^{t} u(x) \, dx &= \int_{x=-\infty}^{\infty} u(x) \, \frac{\operatorname{sgn}(t-x) + 1}{2} \, dx
\end{align*}
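A discrete analogue of the integration filter (our own example): convolving with a unit-step kernel, the discrete version of (sgn(t-x) + 1)/2, produces the running sum.

```python
import numpy as np

u = np.array([1.0, 2.0, -1.0, 0.5, 3.0])
step = np.ones(len(u))                     # H[n] = 1 for n >= 0

running = np.convolve(u, step)[:len(u)]    # keep the first N samples
print(running)                             # [1.  3.  2.  2.5 5.5]
print(np.allclose(running, np.cumsum(u)))  # True: convolution = running sum
```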

Averaging

One of the most useful applications of convolution is to reshape a signal according to a specific pattern, modifying its spectrum without losing the information it carries.

Suppose that your signal u(t) has a spectrum U(f) in which all of its frequencies are encoded. Suppose you create a filter V(f) specifically so that multiplying the spectrum U(f) by V(f) enhances a particular frequency (or even deletes frequencies that happen to be noise).

V(f) can be an arbitrary function, so the product UV does whatever you want to any frequency you wish to control.

We then construct the convolved signal r(t) by taking the inverse transform of the product UV.

We then send it somewhere.

At the destination, we perform the forward transform, then divide out V(f) (wherever it is nonzero) to recover U(f), from which we can recover the original signal u(t).

Simple operations such as Gaussian blur or sharpening can be done either via convolution or via multiplication in the frequency domain.

This is often called "averaging" because each entry of the original signal is replaced by a weighted average of its neighbours, with the weights given by the second function v(t).
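Here is a small sketch of that idea (our own example; the noisy sine and the kernel width are arbitrary): a normalized Gaussian kernel is a weighted average, applied once by direct (circular) convolution and once by multiplication in the frequency domain.

```python
import numpy as np

N = 512
n = np.arange(N)
rng = np.random.default_rng(0)
u = np.sin(2 * np.pi * n / 64) + 0.3 * rng.standard_normal(N)   # noisy signal

sigma = 4.0
k = np.exp(-0.5 * (np.minimum(n, N - n) / sigma) ** 2)  # Gaussian, wrapped to index 0
k /= k.sum()                                            # weights sum to 1: an average

# Route 1: circular convolution in the time domain
blur_time = np.array([np.sum(u * k[(m - np.arange(N)) % N]) for m in range(N)])
# Route 2: multiplication in the frequency domain
blur_freq = np.real(np.fft.ifft(np.fft.fft(u) * np.fft.fft(k)))

print(np.max(np.abs(blur_time - blur_freq)))  # ~1e-15: the two routes agree
```

For long signals the frequency-domain route is usually the cheaper one, which is exactly why the theorem matters in practice.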

Taylor series

Convolution also gives us deep insight into operations in the frequency domain.

Imagine a convolution result r(t) = δ(t). Its Fourier transform is of course R(f) = 1.

Notice that we can express 1 = f · 1/f = f² · 1/f² = fⁿ · 1/fⁿ.

Suppose that U(f) = fⁿ, meaning u(t) = δ⁽ⁿ⁾(t) / (iτ)ⁿ.

Suppose that V(f) = 1/fⁿ, meaning v(t) = (sgn(t) / 2t) · (iτt)ⁿ / (n-1)!.

\begin{align*}
r(t) = \left\{ u * v \right\}(t) &= \mathcal{F}^{-1}\left\{ U(f) \, V(f) \right\} \\
\delta(t) &= \mathcal{F}^{-1}\left\{ f^n \cdot \frac{1}{f^n} \right\} \\
&= \frac{\delta^{(n)}(x)}{(i\tau)^n} * \frac{\operatorname{sgn}(x)}{2x} \frac{(i\tau x)^n}{(n-1)!} \\
&= \int_{x=-\infty}^{\infty} \frac{x^{n-1}}{2\,(n-1)!} \, \operatorname{sgn}(x) \, \delta^{(n)}(t-x) \, dx
\end{align*}

The above expression looks crazy at first, because both the signum and the delta function are hard to define around t - x = 0. But if we do not evaluate it and simply manipulate the expression formally, we get an interesting relationship.

For example, suppose that n = 1. We get:

\begin{align*}
\delta(t) &= \int_{x=-\infty}^{\infty} \frac{x^{n-1}}{2\,(n-1)!} \, \operatorname{sgn}(x) \, \delta^{(n)}(t-x) \, dx \\
&= \int_{x=-\infty}^{\infty} \frac{1}{2} \, \operatorname{sgn}(x) \, \delta'(t-x) \, dx \\
&= \int_{x=-\infty}^{\infty} \frac{1}{2} \, \operatorname{sgn}(x) \, \frac{d\,\delta(t-x)}{dt} \, dx \\
&= \frac{d}{dt} \int_{x=-\infty}^{\infty} \frac{1}{2} \, \operatorname{sgn}(x) \, \delta(t-x) \, dx \\
&= \frac{d}{dt} \frac{\operatorname{sgn}(t)}{2} \\
&= \frac{\operatorname{sgn}'(t)}{2}
\end{align*}

Notice that in the derivation above we are able to pull the derivative operator out of δ'(t-x), because the derivative can be taken with respect to the parameter t instead of x. Since t does not depend on the integration variable, the operator can be brought outside the integral.

From there, we evaluate the integral using the delta filter.

So it allows us to represent the derivative of a special function that cannot be calculated directly.

Notice that we are able to recover Taylor series coefficients from this process. Suppose that n = 2; we get:

\begin{align*}
\delta(t) &= \int_{x=-\infty}^{\infty} \frac{x^{n-1}}{2\,(n-1)!} \, \operatorname{sgn}(x) \, \delta^{(n)}(t-x) \, dx \\
&= \int_{x=-\infty}^{\infty} \frac{x}{2 \cdot 1!} \, \operatorname{sgn}(x) \, \delta''(t-x) \, dx \\
&= \frac{d^2}{dt^2} \left( \frac{t}{2 \cdot 1!} \, \operatorname{sgn}(t) \right)
\end{align*}

We can even perform the operation further:

\begin{align*}
\delta(t) &= \frac{d^2}{dt^2} \left( \frac{t}{2 \cdot 1!} \, \operatorname{sgn}(t) \right) \\
&= \frac{d}{dt} \left( \frac{1}{2 \cdot 1!} \, \operatorname{sgn}(t) + \frac{t}{2 \cdot 1!} \, \operatorname{sgn}'(t) \right) \\
&= \frac{2}{2 \cdot 1!} \, \operatorname{sgn}'(t) + \frac{t}{2 \cdot 1!} \, \operatorname{sgn}''(t) \\
&= 2\,\delta(t) + t\,\delta'(t) \\
t\,\delta'(t) &= -\delta(t)
\end{align*}

We have this weird relationship between the derivative of the delta function and the delta function itself.
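We can sanity-check this identity numerically (our own check, not from the post) by pairing both sides with a smooth test function and standing in a narrow Gaussian for the delta function:

```python
import numpy as np

# Check that <t*delta', phi> equals <-delta, phi> for a smooth test function.
eps = 1e-3
t = np.linspace(-1.0, 1.0, 200001)
dt = t[1] - t[0]
delta = np.exp(-0.5 * (t / eps) ** 2) / (eps * np.sqrt(2 * np.pi))
ddelta = -t / eps**2 * delta               # derivative of the nascent delta

phi = np.cos(t) + t**3                     # an arbitrary smooth test function

lhs = np.sum(t * ddelta * phi) * dt        # <t*delta', phi>
rhs = np.sum(-delta * phi) * dt            # <-delta, phi>
print(lhs, rhs)                            # both ~ -phi(0) = -1
```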

Now here is the connection with Taylor series.

Since a function g(t) locally has a Taylor series representation, i.e. near t = 0 it behaves like a polynomial, we can suggest a relationship like this (valid whenever g(0) = 0):

\begin{align*}
g(t) \, \delta'(t) &= - g'(t) \, \delta(t)
\end{align*}

Which is what the derivative filter does to the Taylor coefficients. A quick test-function check, shown below, explains where the condition g(0) = 0 comes from.
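As a check (this short derivation is ours, not from the original post), pair both sides with a smooth test function φ(t) and use the defining property of δ':

\begin{align*}
\int_{-\infty}^{\infty} g(t) \, \delta'(t) \, \varphi(t) \, dt
&= -\left[ \frac{d}{dt} \big( g(t) \, \varphi(t) \big) \right]_{t=0} \\
&= -g'(0) \, \varphi(0) - g(0) \, \varphi'(0) \\
&= \int_{-\infty}^{\infty} \left[ -g'(0) \, \delta(t) + g(0) \, \delta'(t) \right] \varphi(t) \, dt
\end{align*}

So in general g(t) δ'(t) = g(0) δ'(t) - g'(0) δ(t), which collapses to -g'(t) δ(t) exactly when g(0) = 0, as in the case g(t) = t above.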
