We now have a way of computing the spectrum for an arbitrary
signal: the Discrete Fourier Transform (DFT) computes the spectrum at
$N$ equally spaced frequencies from a length-$N$ sequence. An issue
that never arises in analog "computation," like that
performed by a circuit, is how much work it takes to perform a
signal processing operation such as filtering. In computation,
this consideration translates to the number of basic
computational steps required to perform the needed
processing. The number of steps, known as the
complexity, becomes equivalent to how long the
computation takes (how long we must wait for an
answer). Complexity is not so much tied to specific computers or
programming languages but to how many steps are required on any
computer. Thus, a procedure's stated complexity says that
the time taken will be proportional to some
function of the amount of data used in the computation and the
amount of output demanded.
For example, consider the formula for the discrete Fourier
transform. For each frequency we choose, we must multiply each signal
value by a complex number and add together the results. For a
real-valued signal, each real-times-complex multiplication requires
two real multiplications, meaning we have $2N$ multiplications to
perform. To add the results together, we must keep the real and
imaginary parts separate. Adding $N$ numbers requires $N-1$
additions. Consequently, each frequency requires
$2N + 2(N-1) = 4N - 2$ basic computational steps. As we have
$N$ frequencies, the total number of computations is $N(4N-2)$.
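A minimal Python sketch of this count (the function name and test signal are illustrative, not from the text): it evaluates the DFT directly, tallies the real multiplications and additions described above, and confirms the $N(4N-2)$ total.

```python
import cmath

def dft_with_op_count(signal):
    """Directly evaluate the DFT of a real-valued signal and tally real
    operations following the counting argument above: 2N real
    multiplications and 2(N-1) real additions per frequency."""
    N = len(signal)
    mults, adds = 0, 0
    spectrum = []
    for k in range(N):                       # one pass per frequency
        acc = 0 + 0j
        for n, s in enumerate(signal):
            w = cmath.exp(-2j * cmath.pi * k * n / N)
            acc += s * w                     # real-times-complex product
            mults += 2                       # 2 real multiplications
        adds += 2 * (N - 1)                  # N-1 adds for the real parts,
                                             # N-1 for the imaginary parts
        spectrum.append(acc)
    return spectrum, mults + adds

if __name__ == "__main__":
    N = 16
    signal = [float(n % 4) for n in range(N)]    # arbitrary real-valued data
    _, steps = dft_with_op_count(signal)
    print(steps, N * (4 * N - 2))                # both equal 992
```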
In complexity calculations, we only worry about what happens as
the data lengths increase, and take the dominant term (here the
$4N^2$ term) as reflecting how much work is involved in
making the computation. As multiplicative constants don't matter
since we are making a "proportional to" evaluation, we find the
DFT is an $O(N^2)$ computational procedure. This notation is read
"order $N$-squared". Thus, if we double the length of the data,
we would expect the computation time to approximately quadruple.
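As a quick check on that scaling claim, the exact count $N(4N-2)$ can be compared for a length and its double; the ratio approaches 4 as $N$ grows (the helper name below is my own):

```python
def dft_step_count(N):
    # exact operation count derived above for a real-valued length-N DFT
    return N * (4 * N - 2)

for N in (8, 64, 1024):
    ratio = dft_step_count(2 * N) / dft_step_count(N)
    print(N, round(ratio, 4))    # ratios: 4.1333, 4.0157, 4.001
```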
In making the complexity evaluation for the DFT, we assumed
the data to be real. Three questions emerge. First of all,
the spectra of such signals have conjugate symmetry, meaning
that the negative frequency components
($k = \frac{N}{2}+1, \dots, N-1$ in the DFT) can be computed from the
corresponding positive frequency components. Does this
symmetry change the DFT's complexity? Secondly, suppose the
data are complex-valued; what is the DFT's complexity now?
Finally, a less important but interesting question: suppose
we want $K$ frequency values instead of $N$; now what is the
complexity?
When the signal is real-valued, we may only need half the
spectral values, but the complexity remains unchanged. If
the data are complex-valued, which demands retaining all
frequency values, the complexity is again the same: each
complex-times-complex multiplication costs more real operations,
but only the multiplicative constant grows, so the procedure
remains $O(N^2)$. When only $K$ frequencies are needed,
the complexity is $O(KN)$.
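These answers can be checked numerically. The sketch below (function and signal names are my own, not from the text) evaluates the DFT only at requested bins, confirms the conjugate-symmetry relation $S(N-k) = S^*(k)$ for a real-valued signal, and shows that asking for $K$ bins costs work proportional to $KN$ rather than $N^2$.

```python
import cmath

def dft_bins(signal, bins):
    """Evaluate the DFT only at the requested frequency indices.
    Each bin costs O(N) work, so computing K bins costs O(KN)."""
    N = len(signal)
    return [sum(s * cmath.exp(-2j * cmath.pi * k * n / N)
                for n, s in enumerate(signal))
            for k in bins]

if __name__ == "__main__":
    N = 8
    x = [1.0, 2.0, 0.0, -1.0, 3.0, 0.5, -2.0, 1.0]   # real-valued test signal
    S = dft_bins(x, range(N))                        # all N bins
    # conjugate symmetry of a real signal's spectrum: S[N-k] = conj(S[k])
    for k in range(1, N // 2):
        assert abs(S[N - k] - S[k].conjugate()) < 1e-9
    # only K = 2 bins: the work is proportional to K*N, not N*N
    print(dft_bins(x, [0, 3]))
```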
"Electrical Engineering Digital Processing Systems in Braille."