In this chapter we introduce the reader to the Fast Fourier Transform, to its history
and its development, to prepare the background against which we will present some
of the latest and most widely used versions of this extremely important algorithm. We will
also look at a C++ implementation of the Cooley-Tukey algorithm, so that
comparisons will be easy to draw when we discuss the Sparse FFT.

2.1 What is the FFT?

The Fast Fourier Transform is an algorithm, as we said, to compute the
DFT and its inverse. Its importance derives from the fact that it made the Fourier
analysis of digital signals an affordable task: an algorithm which naively implements
the DFT definition takes O(N^2) arithmetical operations to compute the set of N
coefficients, while an FFT is much faster, and can compute the same DFT in only
O(N log N) operations. This time complexity is often called linearithmic. The
difference in speed can be enormous, especially for long data sets, where N could
be on the order of 10^6 or more. Moreover, many FFTs have an interesting feature that grants an
increase in accuracy at the expense of increased computation. We say many
because there is not a single FFT algorithm; instead, there are many
different algorithms which involve a wide range of mathematics to attain high
speed. We cannot cover all of them, but the reader is referred to [3] for those we leave
out. One of the most used is the Cooley-Tukey algorithm (CT), which was
developed in 1965 and which we will review later on in this chapter. It is a divide-and-conquer
algorithm that recursively breaks down a DFT of any composite size
N = N1 N2 into many smaller DFTs of sizes N1 and N2, along with O(N)
multiplications by complex roots of unity, traditionally called twiddle factors. Another
widespread algorithm handles the case in which N1 and N2 are coprime, and
is known as the Prime-Factor (or Good-Thomas) algorithm.
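To make the two complexity classes concrete, the following is a minimal C++ sketch, not the implementation reviewed later in this chapter, contrasting a naive O(N^2) DFT with a recursive radix-2 Cooley-Tukey FFT. The function names naive_dft and fft are our own, and this radix-2 variant assumes the input length N is a power of two (the special case N1 = 2, N2 = N/2 of the general composite splitting).

```cpp
#include <complex>
#include <cmath>
#include <vector>

using cd = std::complex<double>;
const double PI = std::acos(-1.0);

// Naive DFT straight from the definition: O(N^2) operations,
// since each of the N outputs sums over all N inputs.
std::vector<cd> naive_dft(const std::vector<cd>& x) {
    const std::size_t N = x.size();
    std::vector<cd> X(N);
    for (std::size_t k = 0; k < N; ++k)
        for (std::size_t n = 0; n < N; ++n)
            X[k] += x[n] * std::polar(1.0, -2.0 * PI * k * n / N);
    return X;
}

// Recursive radix-2 Cooley-Tukey FFT: O(N log N) operations.
// Splits the size-N DFT (N a power of two) into the DFTs of the
// even- and odd-indexed samples, then recombines the halves with
// the twiddle factors e^{-2*pi*i*k/N}.
std::vector<cd> fft(const std::vector<cd>& x) {
    const std::size_t N = x.size();
    if (N == 1) return x;
    std::vector<cd> even(N / 2), odd(N / 2);
    for (std::size_t i = 0; i < N / 2; ++i) {
        even[i] = x[2 * i];
        odd[i] = x[2 * i + 1];
    }
    std::vector<cd> E = fft(even), O = fft(odd);
    std::vector<cd> X(N);
    for (std::size_t k = 0; k < N / 2; ++k) {
        cd t = std::polar(1.0, -2.0 * PI * k / N) * O[k]; // twiddle factor
        X[k] = E[k] + t;
        X[k + N / 2] = E[k] - t;
    }
    return X;
}
```

Both functions return the same N coefficients; only the operation count differs, which is exactly the gap the text describes.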