Before defining the Fourier Transform, it is worth studying its origins and how it came about.
Eigenfunctions, Shift Operators, and Periodic Functions
We have seen the shift operator in the Euler finite difference methods and the heat equation approximation.
$$Ef(x) = f(x+h)$$
We also saw in the introduction to Stability and Instability of Numerical Methods how the second central difference operator
$$f(x+h) - 2f(x) + f(x-h) = h^2 f^{(2)}(x) + \frac{2}{4!}h^4 f^{(4)}(x) + \cdots$$
applied to $f(x) = \sin cx$ gives
$$\sin(c(x+h)) - 2\sin(cx) + \sin(c(x-h)) = 2(\cos(ch) - 1)\sin(cx) = \lambda \sin cx$$
that is, it spits out a multiple of the original function $f(x)$, and $\sin cx$ is known as an eigenfunction of the operator. Similarly for
$$f(x) = e^{cx}$$
as
$$Ef(x) = e^{c(x + h)} = e^{ch}e^{cx} = e^{ch}f(x)$$
the shift operator results in a multiple of $e^{cx}$, so $e^{cx}$ is an eigenfunction of the shift operator $E$. The concept of eigenfunctions is just as important as the concept of eigenvalues and eigenvectors for matrices.
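The shift-multiplier property is easy to verify numerically. The following sketch (the constants $c$ and $h$ and the grid are arbitrary illustrative choices) checks that shifting $f(x) = e^{cx}$ by $h$ agrees with multiplying it by the eigenvalue $e^{ch}$:

```python
import numpy as np

# Illustrative check: apply the shift operator E f(x) = f(x + h)
# to f(x) = e^{cx} and compare against e^{ch} * f(x).
c, h = 0.7, 0.1
x = np.linspace(-2.0, 2.0, 101)

f = np.exp(c * x)
shifted = np.exp(c * (x + h))        # E f(x) = f(x + h)
eigen_multiple = np.exp(c * h) * f   # eigenvalue e^{ch} times f(x)

max_err = np.max(np.abs(shifted - eigen_multiple))
print(max_err)  # agreement up to floating-point rounding
```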
Sometimes the shift operator is referred to as a translation operator. The translate of $f$ by $a$ is
$$T_af(x) = f(x+a) \quad x \in \mathbb{R}^n$$
Of course we already know that for differentiation of
$$f(x) = e^{cx}$$
as
$$f'(x) = ce^{cx}$$
successive derivatives all result in a multiple of $e^{cx}$, so $e^{cx}$ is an eigenfunction of differentiation too, acting like a multiplier. For both shifting and differentiation, $e^{cx}$ behaves far more neatly than, say, a general polynomial in $x$. At the time of its discovery this must have been a big deal, and indeed it was.
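The multiplier property of differentiation can be checked the same way. This sketch (constants again illustrative) compares a central-difference approximation of $f'(x)$ against $c\,e^{cx}$:

```python
import numpy as np

# Illustrative check: numerically differentiate f(x) = e^{cx} with a
# central difference and compare against the multiplier property
# f'(x) = c * f(x).
c, h = 0.5, 1e-5
x = np.linspace(-1.0, 1.0, 51)

f = lambda t: np.exp(c * t)
deriv = (f(x + h) - f(x - h)) / (2 * h)   # central difference approximation
max_err = np.max(np.abs(deriv - c * f(x)))
print(max_err)  # small: truncation plus rounding error
```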
In the late 18th and early 19th centuries, Fourier and others realised that although not all functions are exponentials, you could represent general functions $s(x)$ as sums or integrals of exponentials and reap the benefits of their nice properties. For instance, if
\begin{align}
s(x) = \sum_{k = 1}^n a_k e^{c_k x} \quad \text{ then} \quad s'(x) = \sum_{k = 1}^n a_k c_k e^{c_k x}
\end{align}
we can also shift the original function
\begin{align}
s(x) = \sum_{k = 1}^n a_k e^{c_k x} \quad \text{ then} \quad s(x + h) = \sum_{k = 1}^n a_k e^{c_k h} \cdot e^{c_k x}
\end{align}
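As a quick sanity check of the term-wise shift, the following sketch uses a small illustrative set of coefficients $a_k$ and exponents $c_k$, and confirms that shifting the sum directly matches multiplying each term by $e^{c_k h}$:

```python
import numpy as np

# Illustrative coefficients a_k and exponents c_k for a three-term sum
# s(x) = sum_k a_k e^{c_k x}.
a = np.array([1.0, -0.5, 2.0])
cks = np.array([0.3, -0.2, 0.1])
h = 0.25
x = np.linspace(-1.0, 1.0, 41)

s = lambda t: np.sum(a[:, None] * np.exp(cks[:, None] * t[None, :]), axis=0)

shifted_directly = s(x + h)                      # s(x + h)
shifted_termwise = np.sum(                       # sum_k (a_k e^{c_k h}) e^{c_k x}
    (a * np.exp(cks * h))[:, None] * np.exp(cks[:, None] * x[None, :]), axis=0
)
max_err = np.max(np.abs(shifted_directly - shifted_termwise))
print(max_err)
```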
The sum need not be finite. We can have an infinite series, with $c_k$ no longer a general number but restricted to $ik$ for integer $k$.
\begin{align}
s(x) &= \sum_{k =- \infty }^{\infty } a_k e^{ik x} \tag{1}
\end{align}
This could be rewritten as a series involving cosine and sine using Euler's formula
$$e^{ik x} = \cos {k x} + i\sin {kx}$$
\begin{align}
s(x) &= \sum_{k =- \infty }^{\infty } a_k e^{ik x} = \sum_{k =- \infty}^{\infty } a_k( \cos {k x} + i\sin {kx} )
\end{align}
This representation shows why the Fourier series are natural from the point of view of music. If you double the length of an organ pipe, you get a note which is an octave lower. If you halve the length of an organ pipe, you get a note which is an octave higher. The same applies to a taut piece of string. So Fourier observed that the functions that can be represented in this way are all $2\pi$ periodic.
$$e^{i2\pi k}=1 \quad \text{for any integer } k$$
which implies that
$$e^{ik(x + 2\pi) } = e^{ikx}e^{i2\pi k} = e^{ikx}\cdot 1 = e^{ikx}$$
Meaning that with every successive $2\pi$ interval you get the original function back. Now let’s explore what happens to $(1)$ each time we differentiate
\begin{align}
s'(x) &= \sum_{k =- \infty }^{\infty } ik \, a_k e^{ik x} \tag{2}
\end{align}
differentiating twice
\begin{align}
s^{(2)}(x) &= \; -\sum_{k =- \infty }^{\infty } k^2 a_k e^{ik x} \tag{3}
\end{align}
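Both differentiation rules can be verified numerically on a truncated series. In this sketch the coefficients are illustrative, and small finite differences stand in for the exact derivatives: each differentiation should multiply the $k$-th coefficient by $ik$, so two differentiations multiply it by $-k^2$.

```python
import numpy as np

# Truncated series s(x) = sum_k a_k e^{ikx} with illustrative coefficients.
ks = np.arange(-3, 4)                          # k = -3, ..., 3
a = np.array([0.5, -1.0, 2.0, 1.0, 0.3, -0.7, 0.2], dtype=complex)
x = np.linspace(-np.pi, np.pi, 201)

E = np.exp(1j * np.outer(ks, x))               # row k holds e^{ikx} on the grid
d1 = (1j * ks * a) @ E                         # claimed s'(x):  coefficients * ik
d2 = (-(ks ** 2) * a) @ E                      # claimed s''(x): coefficients * (-k^2)

# compare against finite-difference derivatives of s(x)
h = 1e-4
s_at = lambda t: a @ np.exp(1j * np.outer(ks, t))
num_d1 = (s_at(x + h) - s_at(x - h)) / (2 * h)
num_d2 = (s_at(x + h) - 2 * s_at(x) + s_at(x - h)) / h ** 2

err1 = np.max(np.abs(num_d1 - d1))
err2 = np.max(np.abs(num_d2 - d2))
print(err1, err2)
```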
Fourier found a rather pretty formula for the $a_k$ coefficients
$$a_k = \dfrac{1}{2 \pi} \int_{- \pi}^{\pi} s(x) e^{-ik x} \, dx$$
Recall that
$$\dfrac{1}{2 \pi} \int_{- \pi}^{\pi} e^{ij x} \, dx = \delta_{0j} \quad \text{(Kronecker delta)}$$
which takes the value $1$ if $j = 0$ and $0$ if $j \neq 0$. Multiplying the series $(1)$ by $e^{-ikx}$ and integrating therefore kills every term except the $k$-th, leaving exactly $a_k$.
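A uniform-grid average over one period makes both the orthogonality relation and the coefficient formula $a_k = \frac{1}{2\pi}\int_{-\pi}^{\pi} s(x)e^{-ikx}\,dx$ easy to check. This sketch uses an illustrative two-term series and recovers one of its coefficients:

```python
import numpy as np

# Uniform grid over one period; the grid average plays the role of
# (1/2pi) * integral over [-pi, pi].
N = 4096
x = -np.pi + 2 * np.pi * np.arange(N) / N

avg = lambda vals: vals.mean()

# orthogonality: average of e^{ijx} is 1 for j = 0 and 0 otherwise
d0 = avg(np.exp(1j * 0 * x))          # j = 0 -> 1
d2 = avg(np.exp(1j * 2 * x))          # j = 2 -> 0

# recover a_1 from the illustrative series s(x) = 3 e^{ix} - 2 e^{-2ix}
s = 3 * np.exp(1j * x) - 2 * np.exp(-2j * x)
a1 = avg(s * np.exp(-1j * x))         # note the e^{-ikx} in the formula
print(d0, d2, a1)
```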