In machine learning and deep learning, models are designed to follow a mathematical function. From data analysis to predictive modelling, there is always some mathematics behind it. For example, in clustering we use the Euclidean distance to find the clusters. The Fourier transform is a well-known mathematical technique for transforming a function from one domain to another, and it can also be used in deep learning. In this article, we will discuss the Fourier transform and understand how it can be applied in the field of deep learning. The major points to be covered in this article are listed below.

**Table of Contents**


- What is a Fourier Transform?
- Mathematics Behind Fourier Transform
- Fourier Transform using Python
- How are Neural Networks Related to Fourier Transforms?
- Fourier Transform in Convolutional Neural Network
- How to use Fourier Transforms in Deep Learning?

**What is a Fourier Transform?**

In mathematics, a transformation technique maps a function from its original function space into a different function space. The Fourier transform is such a technique: it transforms a function that depends on the time or space domain into a function that depends on the temporal or spatial frequency domain. We can take audio waves as an example, where the Fourier transform allows us to represent a sound in terms of the volumes and frequencies of its notes.

We can say that the Fourier transform of any function is itself a function of frequency, where the magnitude of the resulting function represents the frequencies contained in the original function.

Let’s take an example of a signal whose time-domain function looks as follows:

Let’s take a portion of another signal in the same time frame.

Let the names of the signals be A(n) and B(n), where n is the time domain. If we add these signals, the resulting signal looks as follows:

C(n) = A(n) + B(n)

Here we can see that the sum of the two functions is itself a signal of some other form, and extracting signal A or B from the summed signal C becomes a problem, because the addition happened only in the amplitudes, not in time: the signals share the same time axis but have different magnitudes.

Here the Fourier transform allows us to separate the functions from their sum, using its conversion between the time domain and the frequency domain. We find that we can extract the individual frequencies from the signal, whereas separating them in the time domain is not possible. The separation of signals using the Fourier transform will look like the image below.

In the above image, we can see that we can now easily mark the difference between the signals. If we want to get these signals back in the time domain, we can convert them using the inverse Fourier transform.
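To make this round trip concrete, here is a minimal NumPy sketch (the two tones, their frequencies, and their amplitudes are illustrative stand-ins, not the signals from the figures): we sum two signals A(n) and B(n), move the sum to the frequency domain with the FFT, and recover the original time-domain signal with the inverse FFT.

```python
import numpy as np

# two illustrative tones sampled over one second
t = np.linspace(0.0, 1.0, 1000, endpoint=False)
A = np.sin(2.0 * np.pi * 5.0 * t)          # 5 Hz component
B = 0.5 * np.sin(2.0 * np.pi * 12.0 * t)   # 12 Hz component
C = A + B                                  # the summed signal C(n) = A(n) + B(n)

# forward FFT: time domain -> frequency domain
spectrum = np.fft.fft(C)

# inverse FFT: frequency domain -> time domain, recovering C
C_recovered = np.fft.ifft(spectrum).real
print(np.allclose(C, C_recovered))  # True
```

The forward and inverse transforms are exact inverses of each other (up to floating-point error), which is why moving to the frequency domain loses no information.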

In the next section of the article, we will discuss the mathematics behind the Fourier transform, to make its workings clearer.

**Mathematics Behind Fourier Transform**

Before going into the mathematics of the Fourier transform, let us note that a signal in the time domain can be represented by a series of sinusoids. So if the function consists of a continuous signal, it can be represented by a function f(t):

f(t) = a_0/2 + Σ (n = 1 to ∞) [ a_n cos(nω_0 t) + b_n sin(nω_0 t) ]

where ω_0 is the fundamental angular frequency of the signal. We can see that the function is made up of an infinite sum of sinusoids, which can be considered the representation of the signal. We can also see that two sets of coefficients, a_n and b_n, are needed to define the structure of the output signal.

These coefficients can be obtained by solving the Fourier transform integral, which is basically a function of frequency. The result of the Fourier transform can be considered as a set of coefficients. Mathematically, it can be written as given below:

**Fourier Transform**

F(ω) = ∫ f(t) e^(-iωt) dt

And the inverse of this function is a function of time, which we use for converting a frequency-domain function back into a time-domain function:

**Inverse Fourier Transform**

f(t) = (1/2π) ∫ F(ω) e^(iωt) dω

Solving the above integrals gives the values of a and b, but the case we have been discussing is one where the signals are continuous. In real life, most problems arise from discretely sampled signals, and to find the coefficients for such a signal's transformation we need to perform a discrete Fourier transform (DFT). Using the DFT we can get a same-length sequence of equally spaced samples of a function that is itself made up of a sequence of equally spaced samples. For a signal of N samples f[n], the values of a and b will be:

a_k = (2/N) Σ (n = 0 to N-1) f[n] cos(2πkn/N)

b_k = (2/N) Σ (n = 0 to N-1) f[n] sin(2πkn/N)

Using these terms a and b in the function f(t), we can represent the signal in the frequency domain. In this section we have seen the mathematics behind the Fourier transform; now let's see how we can perform it on sine waves using Python.
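As a quick check of these discrete coefficients, here is a minimal sketch. The 3 Hz and 7 Hz test components and their amplitudes are illustrative choices, and the sums assume the standard real-DFT form a_k = (2/N) Σ f[n] cos(2πkn/N) and b_k = (2/N) Σ f[n] sin(2πkn/N):

```python
import numpy as np

# discretely sampled test signal: a cosine at bin 3 and a sine at bin 7
N = 64
n = np.arange(N)
f = 2.0 * np.cos(2.0 * np.pi * 3 * n / N) + 1.5 * np.sin(2.0 * np.pi * 7 * n / N)

# coefficient sums a_k and b_k for each frequency bin k
k = np.arange(N // 2 + 1)
a = np.array([(2.0 / N) * np.sum(f * np.cos(2.0 * np.pi * kk * n / N)) for kk in k])
b = np.array([(2.0 / N) * np.sum(f * np.sin(2.0 * np.pi * kk * n / N)) for kk in k])

# the sums recover the amplitudes we put in: a_3 is about 2.0 and b_7 about 1.5
print(round(a[3], 6), round(b[7], 6))
```

Up to sign and scaling conventions, a_k and b_k are just the real and imaginary parts of the complex DFT, so `scipy.fft.fft` computes the same information in one call.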

**Fourier Transform Using Python**

In this section of the article, we are going to transform the sum of two sine waves into its frequency domain. For this purpose, we can use the scipy library of Python, which provides us with most of the transformation techniques used in mathematics.

**Importing the libraries**

```
import numpy as np
import matplotlib.pyplot as plt
from scipy.fft import fft, fftfreq
```

**Making the sine waves and adding them**

```
# sample points
N = 1200
# sample spacing
T = 1.0 / 1600.0
x = np.linspace(0.0, N*T, N, endpoint=False)
# sum of a 50 Hz sine and a weaker 80 Hz sine
sum_wave = np.sin(50.0 * 2.0*np.pi*x) + 0.5*np.sin(80.0 * 2.0*np.pi*x)
plt.plot(x, sum_wave)
plt.title('Sine wave')
plt.xlabel('Time')
plt.ylabel('Amplitude')
plt.grid(True, which='both')
plt.show()
```

Output:

Here in the above output, we can see the sum of the sine waves we generated using NumPy. Now we can transform it using the fft module of the scipy library.

```
sum_f = fft(sum_wave)
xf = fftfreq(N, T)[:N//2]
plt.plot(xf, 2.0/N * np.abs(sum_f[0:N//2]))
plt.title("FFT of sum of two sines")
plt.xlabel('Frequency')
plt.ylabel('Amplitude')
plt.show()
```

Output:

Here we can clearly see the frequencies of the individual waves, which were not visible in the sum because the summed wave was generated as a function of the time domain. So far we have seen various aspects of the Fourier transform, but the question now is how it relates to neural networks. From the above intuition, we can say that the Fourier transform is also a technique for approximating functions in the frequency domain, which is what relates it to neural networks. In the next section of this article, we will see how neural networks and the Fourier transform are related.
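As a sanity check, the two tone frequencies can be read straight off the spectrum. The sketch below uses a one-second window (N = 1600 samples at 1600 Hz, slightly different from the values above) so that 50 Hz and 80 Hz fall exactly on frequency bins:

```python
import numpy as np
from scipy.fft import fft, fftfreq

# one-second window: 1600 samples at 1600 Hz, so bins are exactly 1 Hz apart
N = 1600
T = 1.0 / 1600.0
x = np.linspace(0.0, N * T, N, endpoint=False)
sig = np.sin(50.0 * 2.0 * np.pi * x) + 0.5 * np.sin(80.0 * 2.0 * np.pi * x)

spectrum = 2.0 / N * np.abs(fft(sig)[:N // 2])
freqs = fftfreq(N, T)[:N // 2]

# the two largest spectral peaks sit at the frequencies of the two sines
peaks = np.sort(freqs[np.argsort(spectrum)[-2:]])
print(peaks)  # [50. 80.]
```

When the tone frequencies do not land exactly on a bin, the energy leaks into neighbouring bins, which is why the window length was chosen to match the sampling rate here.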

**How are Neural Networks Related to Fourier Transforms?**

So far, we have seen the Fourier transform as a function that can help in approximating other functions, and we also know that neural networks can be considered universal function approximators. Let's have a look at the image below.

This image is a representation of a neural network that uses the Fourier transform. A basic neural network aims to approximate an unknown function and its value at certain points. Most neural networks learn the overall function, or the function at the points given in the data, by iteratively updating their parameters, whereas a Fourier network finds its parameters by evaluating the given function: the formula used in the actual function helps in computing the parameters.

Let’s take the example of a convolutional neural network to understand this relationship in more depth. In the next section of the article, we will discuss the relationship between the Fourier transform and neural networks in the context of a convolutional neural network.

**Fourier Transform in Convolutional Neural Network**

As we know, convolutional layers are the main building block of a convolutional neural network, and the main job of a convolutional layer is to apply filters to the input data or feature maps, convolving the output from the previous layer. The task of the layer is to learn the weights of the filters. In a complex convolutional neural network, both the number of layers and the number of filters per layer are high, which makes the computational cost very high.

Using the Fourier transform in a convolutional neural network, we can turn the layer's computation into element-wise products in the frequency domain while the task of the network stays the same, saving computation. In most cases, we see the fast Fourier transform applied in convolutional networks.

From the above, we can say that the convolutional layer, or the process of the convolutional layer, is related to the Fourier transform: convolution in the time domain can be treated as multiplication in the frequency domain. We can easily understand convolution through polynomial multiplication.

Let’s say we have two functions y and g of a value x, as below:

y(x) = ax + b

g(x) = cx + d

And the polynomial multiplication of these functions can be written as a function h:

h(x) = y(x).g(x)

= (ax + b)(cx + d)

= ac x^2 + (ad+bc) x + bd

From the above, we can say that the convolutional layer process can be defined as the multiplication of the above functions. The vector forms of the functions can be written as follows:

y[n] = ax[n] + b

g[n] = cx[n] + d

And the convolution of the vector forms is:

h[n] = y[n] X g[n]

H[w] = F(y[n]) ‧ F(g[n]) = Y[w] ‧ G[w]

h[n] = F^-1(H[w])

The notation ‘‧’ represents element-wise multiplication and ‘X’ represents convolution. F and F^-1 are the Fourier transform and inverse Fourier transform respectively, and ‘n’ and ‘w’ denote the time domain and frequency domain respectively.
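The relationship h[n] = F^-1(F(y[n]) ‧ F(g[n])) can be checked numerically. The sketch below multiplies the two example polynomials (with illustrative coefficients a = 2, b = 3, c = 4, d = 5) once by direct convolution and once via the FFT:

```python
import numpy as np

# coefficient vectors of y(x) = 2x + 3 and g(x) = 4x + 5 (illustrative a, b, c, d)
y = np.array([2.0, 3.0])
g = np.array([4.0, 5.0])

# time-domain convolution h[n] = y[n] X g[n]
direct = np.convolve(y, g)

# frequency-domain route: pad to the full linear-convolution length,
# multiply the spectra, and transform back
L = len(y) + len(g) - 1
H = np.fft.fft(y, L) * np.fft.fft(g, L)   # H[w] = Y[w] . G[w]
via_fft = np.fft.ifft(H).real             # h[n] = F^-1(H[w])

# both routes give the coefficients of 8x^2 + 22x + 15
print(direct, via_fft)
```

The zero padding to length L matters: without it, the FFT route computes a circular convolution rather than the linear one that polynomial multiplication requires.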

From the above, we have shown that if the functions are in the time domain, the convolutional layer ultimately amounts to a Fourier transform, an element-wise multiplication, and an inverse Fourier transform. We have seen in this section how the Fourier transform takes part in the convolution process. In the next section of the article, we will learn how we can use the Fourier transform in deep learning algorithms.

**How to use Fourier Transforms in Deep Learning?**

In the above section, we have seen that the convolution process in the time domain can simply be considered as multiplication in the frequency domain, which shows that it can be used in various deep learning algorithms, and even in various statistical predictive modelling algorithms. We have also seen how it helps in convolutional neural networks by replacing the convolution step of the convolutional layer. Let's continue with the convolutional neural network example so that we do not stray from the main subject of the article.

As we have discussed, the mathematics behind convolution involves sliding multiplications in the time domain, while the Fourier transform lets us perform a single element-wise multiplication in the frequency domain. So to apply the Fourier transform in a convolutional neural network, we can make some changes to the input and the filter.

If the matrices of the inputs and filters in the CNN are converted into the frequency domain to perform the multiplication, and the resulting matrices are then converted back into the time domain, the accuracy of the model is not harmed. The conversion of matrices from the time domain to the frequency domain can be done by the Fourier transform or fast Fourier transform, and the conversion from the frequency domain back to the time domain can be done by the inverse Fourier transform or inverse fast Fourier transform.

The below image is a representation of how we can use Fast Fourier Transform in place of the convolution.

As we have discussed, the number of filters and layers in any complex network is very high, and because of these high numbers the calculation process using direct convolution becomes very slow. Using the Fourier transform, we can reduce the complexity of this calculation and make the model work faster.
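As a rough illustration of this swap, the sketch below filters a random "feature map" with a small kernel once by direct sliding-window convolution and once by FFT-based convolution using scipy's fftconvolve (the sizes and random data are illustrative stand-ins); the two outputs agree to floating-point precision:

```python
import numpy as np
from scipy.signal import convolve2d, fftconvolve

# illustrative stand-ins for a feature map and a learned filter
rng = np.random.default_rng(0)
feature_map = rng.standard_normal((64, 64))
kernel = rng.standard_normal((5, 5))

# direct sliding-window convolution, as in a plain convolutional layer
direct = convolve2d(feature_map, kernel, mode='same')

# FFT-based convolution: transform, multiply element-wise, transform back
via_fft = fftconvolve(feature_map, kernel, mode='same')

print(np.allclose(direct, via_fft))  # True
```

For small kernels the direct method can still win, but the FFT route pays off as the kernel and input sizes grow, since its cost scales with the FFT rather than with the kernel area.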

**Final Words**

In this article, we have seen a basic definition of the Fourier transform, the mathematics behind it, and how we can perform it using Python. With all this, we got to know how neural networks and the Fourier transform are related. To understand this relation, we took the example of a convolutional neural network and saw how we can use the Fourier transform in deep learning models such as convolutional neural networks.


Yugesh is a graduate in automobile engineering and worked as a data analyst intern. He has completed several data science projects and has a strong interest in deep learning and in writing blogs on data science and machine learning.