Heat Equation – Fourier Series

Fourier was interested in solving the heat equation (also called the diffusion equation). Heat flow had been studied in various forms since Newton's day (Newton's Law of Cooling), but in Fourier's time the heat equation itself was an open problem: no one knew how to solve it.

$$\dfrac{\partial{u}}{\partial{t}} = \sigma^2 \dfrac{\partial^2{u}}{\partial{x^2}}$$

where we shall (unrealistically) assume the heat conduction constant $\sigma^2 = 1$, with boundary conditions
$$u(0,t) = u(\pi,t) = 0$$

We can make this an odd function by defining
$$u(-x,t) = -u(x,t)$$

which mirrors the function beneath the $x$-axis on $[-\pi, 0]$. Repeating the function, now defined on $[-\pi, \pi]$, over the entire real line makes it $2\pi$-periodic; picture the infinitely long rod from the earlier discussion of the heat equation.
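To make the extension concrete, here is a minimal Python sketch of the odd $2\pi$-periodic extension. The helper name `odd_periodic_extension` and the sample temperature profile are illustrative choices, not part of the original discussion:

```python
import math

def odd_periodic_extension(f):
    """Extend f, defined on [0, pi], to an odd 2*pi-periodic function on all of R."""
    def g(x):
        # Reduce x to the fundamental interval [-pi, pi)
        x = (x + math.pi) % (2 * math.pi) - math.pi
        return f(x) if x >= 0 else -f(-x)
    return g

# Illustrative initial temperature profile on the rod [0, pi]
u0 = lambda x: math.sin(x) + 0.5 * math.sin(2 * x)
g = odd_periodic_extension(u0)

x = 1.3
assert abs(g(-x) + g(x)) < 1e-12               # odd: g(-x) = -g(x)
assert abs(g(x + 2 * math.pi) - g(x)) < 1e-9   # 2*pi-periodic
```

The modular reduction is what "repeats the function over the entire real line"; the sign flip for negative arguments is the mirroring beneath the $x$-axis.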

Fourier's next significant step was to expand the solution as a Fourier series (see Origins of the Fourier Transform) while making the coefficients functions of time $t$:

\begin{align}
u(x,t) &= \sum_{k =- \infty }^{\infty } a_k( t)e^{ik x} \tag{1}
\end{align}
Differentiating with respect to $t$,
$$\dfrac{\partial{u}}{\partial{t}} = \sum_{k =- \infty }^{\infty } a_k'( t)e^{ik x}$$

Differentiating twice with respect to $x$,
$$\dfrac{\partial^2{u}}{\partial{x^2}} = -\sum_{k =- \infty }^{\infty } a_k( t)k^2 e^{ik x}$$

At this point Fourier realised he had obtained a solution to the heat equation: a Fourier series in $x$ whose coefficients $a_k$ depend on time $t$. In particular, since the heat equation holds (it had already been established, even though unsolved), equating the two series above term by term gives

$$a_k'( t) = -a_k( t)k^2 $$

This is an ordinary differential equation for each $k$, with solution

$$a_k( t) = a_k(0)e^{-k^2 t} $$

Substituting back into $(1)$, the solution becomes

$$u(x,t) = \sum_{k =- \infty }^{\infty } a_k(0)e^{-k^2 t} e^{ik x}$$
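The formula can be checked numerically. Here is a small Python sketch, assuming the simple initial condition $u(x,0) = \sin(3x)$ (an illustrative choice, for which only $a_{\pm 3}(0)$ are nonzero and the closed-form solution is $e^{-9t}\sin(3x)$):

```python
import math, cmath

# Initial condition u(x,0) = sin(3x): only a_3(0) = 1/(2i) and a_{-3}(0) = -1/(2i) are nonzero
a0 = {3: 1 / 2j, -3: -1 / 2j}

def u(x, t):
    """Evaluate the series u(x,t) = sum_k a_k(0) e^{-k^2 t} e^{ikx}."""
    return sum(a * math.exp(-k * k * t) * cmath.exp(1j * k * x)
               for k, a in a0.items()).real

# For this initial condition the closed form is e^{-9t} sin(3x)
x, t = 0.8, 0.25
assert abs(u(x, t) - math.exp(-9 * t) * math.sin(3 * x)) < 1e-12
# The boundary conditions u(0,t) = u(pi,t) = 0 also hold
assert abs(u(0.0, t)) < 1e-12 and abs(u(math.pi, t)) < 1e-9
```

The same evaluation loop works for any finite set of starting coefficients $a_k(0)$; only the dictionary at the top changes.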

Observe that as $|k|$ increases, the $k$-th term is damped by the factor $e^{-k^2 t}$, which decays extremely rapidly. For any $t > 0$ this makes the solution a smooth function: smoothness is linked directly to the rate of decay of the Fourier coefficients.

Conditions for the convergence of the series have not been discussed specifically here, but the convergence of Fourier series has occupied a large part of mathematics for hundreds of years. One useful fact: if $s(x)$ is infinitely differentiable, then $a_k \rightarrow 0$ faster than $\dfrac{1}{|k|^n}$ for every $n > 0$.

On the other hand, a function
\begin{align}
s(x) =
\begin{cases}
1, \quad 0 \leqslant x \leqslant \pi\\
-1, \quad -\pi < x < 0
\end{cases}
\end{align}
has a slowly converging Fourier series, because it is not smooth. This leads to the problem of how to deal with non-smooth functions, and with functions that are not $2\pi$-periodic at all, such as arise in finance: for derivative assets the relevant function from $\mathbb{R} \rightarrow \mathbb{C}$ is no longer periodic. This is what led to the Fourier Transform.
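The contrast in decay rates can be seen numerically. Below is a Python sketch that approximates the coefficients $a_k$ by midpoint-rule quadrature; the helper `fourier_coeff` and the particular smooth comparison function are illustrative choices:

```python
import math, cmath

def fourier_coeff(f, k, n=20000):
    """Approximate a_k = (1/2pi) * integral of f(x) e^{-ikx} over [-pi, pi] (midpoint rule)."""
    h = 2 * math.pi / n
    total = 0
    for j in range(n):
        x = -math.pi + (j + 0.5) * h
        total += f(x) * cmath.exp(-1j * k * x)
    return total * h / (2 * math.pi)

square = lambda x: 1.0 if x >= 0 else -1.0   # the jump function s(x) above
smooth = lambda x: math.exp(math.cos(x))     # an illustrative infinitely differentiable 2*pi-periodic function

for k in (1, 5, 25):
    print(k, abs(fourier_coeff(square, k)), abs(fourier_coeff(smooth, k)))
# The square wave's |a_k| shrinks only like 1/k (it is 2/(pi*k) for odd k, 0 for even k),
# while the smooth function's coefficients fall off faster than any power of k.
```

Already at $k = 25$ the smooth function's coefficient is at the level of rounding error, while the square wave's is still of order $10^{-2}$.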

 
The idea of keeping only some of the Fourier coefficients has been used extensively throughout the second half of the twentieth century, both in solving equations and in applications reliant on advances in computing technology. Audio and video compression algorithms (as well as compression for photos) have used Fourier coefficients substantially: compression essentially jettisons various Fourier coefficients. There are also applications predating advances in computing, such as electronic organs, which use many components called oscillators. Each oscillator produces the pure tone $e^{ikt}$, which the human ear hears as a whistle at a frequency depending on $k$; several oscillators are then used together to simulate various instruments. These analog organs are increasingly rare nowadays. For flute-like instruments this application worked rather well, since they require a relatively small number of terms; but for instruments such as a piano, which require a larger number of terms, the results were awful. Then digital sampling came and changed everything.
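The compression idea, jettisoning all but the largest Fourier coefficients, can be sketched in a few lines of Python with a naive discrete Fourier transform. The toy signal and the number of retained coefficients are illustrative choices:

```python
import cmath, math

def dft(x):
    """Naive discrete Fourier transform, normalised by 1/n."""
    n = len(x)
    return [sum(x[j] * cmath.exp(-2j * math.pi * k * j / n) for j in range(n)) / n
            for k in range(n)]

def idft(c):
    """Inverse transform; returns the real part of the reconstruction."""
    n = len(c)
    return [sum(c[k] * cmath.exp(2j * math.pi * k * j / n) for k in range(n)).real
            for j in range(n)]

# Toy signal: one strong harmonic plus one weak one
n = 64
signal = [math.sin(2 * math.pi * 3 * j / n) + 0.05 * math.sin(2 * math.pi * 11 * j / n)
          for j in range(n)]

coeffs = dft(signal)
# "Compress" by jettisoning all but the 2 largest coefficients
keep = sorted(range(n), key=lambda k: -abs(coeffs[k]))[:2]
compressed = [c if k in keep else 0 for k, c in enumerate(coeffs)]
reconstructed = idft(compressed)

err = max(abs(a - b) for a, b in zip(signal, reconstructed))
print(f"max error keeping 2 of {n} coefficients: {err:.3f}")
```

Keeping only the dominant pair of coefficients loses the weak harmonic, so the reconstruction error is about that harmonic's amplitude; keeping four coefficients would reconstruct this toy signal almost exactly. Real codecs work on the same principle, with perceptually weighted choices of which coefficients to discard.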